A new academic analysis of hundreds of thousands of recent newspaper stories in the United States shows how artificial intelligence has begun shaping everyday journalism. The research team studied 186,000 articles from print organizations large and small, applying an automated detection system to flag when parts of a story were machine-generated. The study states that “approximately 9% of newly-published articles are either partially or fully AI-generated.”
AI-generated text is most common where the business side of news keeps shrinking. Smaller local newsrooms run lean operations with fewer hands on deck, yet communities depend on these outlets to keep them informed about basic civic life. The paper notes that “AI use in American newspapers is widespread, uneven, and rarely disclosed.” Readers get little indication when a story that reads as a journalist’s work was instead shaped by software.
Local News Sees More AI in the Byline
Regional gaps appear throughout the data. The study reports that automation rises where circulation drops; in larger city papers, AI does not play the same role. The authors observe that “AI use is significantly higher on pages of newspapers without high circulation than at nationally-circulated papers.” In smaller newsrooms, the volume of daily reporting can overwhelm the available staff, and publishing tools help fill space with quick briefs and standardized information.
Certain topics suit this approach. Articles about the weather depend on formal forecasts. Science updates sometimes follow press announcements and public databases. Health stories often summarize new research. These are areas with clear numerical inputs that feed directly into templated writing. For that reason, the paper points out that “topic distributions show elevated AI-detected content for weather, science and tech, [and] health.”
Spanish-language coverage published by U.S. newspapers also sees more automated text than English versions. The study suggests that translation systems and generative rewriting may be working behind the scenes to support bilingual news production.
Opinion Pages Shift Without Warning
The researchers also analyzed opinion articles from three nationally recognized papers: The New York Times, The Washington Post, and The Wall Street Journal. These pages shape how the country thinks about large issues, and here the data reveals a pattern different from general news reporting. The paper states that “opinion content is 6.4 times more likely to contain AI-generated content than news articles from the same publications.”[1]
The shift accelerates after the end of 2022, when generative writing tools became widely available. Guest contributors, who do not work permanently in those newsrooms, rely on writing aids more than regular columnists do. In total, hundreds of commentary pieces contain at least some detectable machine-written text. The responsibility of shaping public dialogue makes this category important: these pages do not just report, they argue.
Mixed and Hidden Text Blurs Trust
The study’s detection method shows that articles written entirely by AI remain a small fraction. The more common case is a mix: a human interview plus automated rewriting, or a reporter’s outline turned into paragraphs by a software tool. The authors note that many flagged articles still include quoted statements from real people.
Yet readers do not know when this blending happens. The paper examines disclosure practices and finds that clear labeling is rare. Even where newsroom rules promise transparency, execution falls short. Trust in journalism depends on knowing who, or what, is talking; hidden authorship makes that judgment difficult.
Pressures Behind the Quiet Shift
Newsroom budgets continue to shrink. Many local publishers operate with few reporters, and some communities have no traditional paper at all. The study points to industry conditions in which “news deserts” grow and automated content appears as a survival mechanism. Tools ease workloads, but they change the nature of the work. When software handles simple local updates, it builds habits that can spread into more complicated stories.
The researchers do not claim that AI harms accuracy every time it appears. They do emphasize that people deserve to understand the source of information they rely on. Without that clarity, an audience cannot evaluate context or credibility.
A Developing Landscape
Automation has already become part of the reporting process in this country. Most readers have likely seen stories produced with help from a machine, even if they never noticed it. The findings suggest a future where more articles contain hidden layers of automated writing. Whether those layers support journalists or replace them depends on decisions that news organizations must face soon.
For now, the rise of AI text remains quiet. It is hardest to see in the places where communities can least afford missing facts. Transparency offers a way to protect trust before the trend becomes too familiar to question.
Note: This post was edited/created using GenAI tools.
References
- ^ The researchers (arxiv.org)