The Vicious Spiral of AI Slop

Uncurated machine-generated content threatens research integrity and trust in science, ultimately harming all of us.

From the March-April 2026 issue: Volume 114, Number 2, page 86. DOI: 10.1511/2026.114.2.86

Artificial intelligence is contributing to significant advances in many academic and technical fields, ranging from protein chemistry, drug discovery, and materials science to ecology, epidemiology, and diagnostic medicine. Although the incorporation of AI tools into research workflows can benefit science and society, it also poses risks that must be managed responsibly to protect the integrity and trustworthiness of research and to prevent harm to individuals and the greater public.

QUICK TAKE
  • Generative AI is flooding scientific research with quick and easy text, figures, and citations that are either partially inaccurate or wholly false—and very difficult to detect.
  • Science’s overtaxed academic publishing system and its publish-or-perish work culture have long provided fertile ground for content mills and predatory presses.
  • AI systematizes intellectual dishonesty, transforming it from a collection of individual misdeeds into a thriving industrial pipeline, threatening both science and society.

