A new study has revealed that large language models (LLMs) like ChatGPT are increasingly shaping the language of scientific literature, raising red flags about research authenticity and scholarly integrity. Researchers from the U.S. and Germany analyzed over 15 million biomedical abstracts from PubMed and found clear linguistic shifts in 2024 tied to the use of AI writing tools.
Published in Science Advances, the study found that at least 13.5% of scientific papers published in 2024 show signs of LLM assistance. Instead of detecting AI use by comparing writing samples, a method prone to bias, the researchers tracked "excess" word frequencies: words whose usage after ChatGPT's release rose well above the level expected from pre-LLM trends, revealing consistent linguistic patterns.
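The excess-vocabulary idea described above can be sketched in a few lines: compare how often each word appears in abstracts from before and after ChatGPT's release, and flag words whose frequency jumped. This is a minimal illustration, not the study's actual pipeline; the thresholds and the toy abstracts are invented for the example.

```python
from collections import Counter

def word_freqs(abstracts):
    """Fraction of abstracts in which each word appears (document frequency)."""
    counts = Counter()
    for text in abstracts:
        # Count each word at most once per abstract.
        for word in set(text.lower().split()):
            counts[word] += 1
    return {w: c / len(abstracts) for w, c in counts.items()}

def excess_words(pre, post, min_gap=0.01, min_ratio=2.0):
    """Flag words whose post-LLM frequency far exceeds the pre-LLM baseline.

    min_gap and min_ratio are illustrative cutoffs, not the study's values.
    """
    pre_f, post_f = word_freqs(pre), word_freqs(post)
    flagged = {}
    for word, p in post_f.items():
        q = pre_f.get(word, 0.0)
        gap = p - q                            # absolute excess
        ratio = p / q if q else float("inf")   # relative excess
        if gap >= min_gap and ratio >= min_ratio:
            flagged[word] = (gap, ratio)
    return flagged

# Toy corpora standing in for pre-2023 and 2024 abstracts.
pre_2023 = ["we measured enzyme activity in samples",
            "results show enzyme levels were stable"]
post_2024 = ["our pivotal findings delve into enzyme activity",
             "we delve into pivotal mechanisms underpinning disease"]
print(excess_words(pre_2023, post_2024))
```

On the toy data, stylistic newcomers like "delve" and "pivotal" are flagged, while "enzyme", common in both periods, is not.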
They observed a significant shift from content-heavy noun usage to more stylistic and flowery wording, such as "showcasing," "pivotal," and "grappling." Before 2024, nearly 80% of excess words were nouns. In 2024, that dropped to just over 20%, with 66% being verbs and 14% adjectives, a pattern characteristic of LLM-written text.
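The part-of-speech shares quoted above can be computed by tagging each excess word and tallying the tags. The sketch below uses a tiny hand-made lexicon purely for illustration; the mini-lexicon and word list are assumptions, and a real analysis would run a trained POS tagger over the full excess-word list.

```python
from collections import Counter

# Hypothetical mini-lexicon for illustration only.
POS_TAGS = {
    "showcasing": "verb", "grappling": "verb", "delve": "verb",
    "pivotal": "adjective", "crucial": "adjective",
    "insights": "noun", "landscape": "noun",
}

def pos_shares(words):
    """Return the fraction of words carrying each part-of-speech tag."""
    tags = Counter(POS_TAGS.get(w, "unknown") for w in words)
    total = sum(tags.values())
    return {tag: n / total for tag, n in tags.items()}

# Toy 2024-style excess-word list: dominated by verbs, as the study reports.
shares = pos_shares(["showcasing", "grappling", "delve", "pivotal"])
print(shares)
```

Here the verb share is 0.75 and the adjective share 0.25, mirroring in miniature the verb-heavy profile the study found for 2024.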
This shift, the authors argue, amounts to a stylistic fingerprint of AI-generated text infiltrating peer-reviewed journals. The trend raises growing concerns over how much of today's scientific output may be shaped, or even authored, by AI without disclosure.
The study also found that AI usage varied across research fields, countries, and journals, pointing to uneven adoption and possible gaps in editorial standards. With AI tools now easily accessible, the academic community is being urged to develop clearer guidelines and detection methods to safeguard research credibility.
The growing influence of AI language models in academic publishing has sparked calls for increased transparency from authors and stricter disclosure requirements from journals. Experts warn that unchecked AI use may blur the lines between original thought and machine-generated output, potentially undermining peer review standards and academic trust. As institutions grapple with the rapid adoption of LLMs, the need for robust policies on AI usage in scholarly work is becoming more urgent than ever.