Learning through large language models such as ChatGPT can leave people with a shallower grasp of a topic than traditional web search, according to new research published in PNAS Nexus by scholars at the Wharton School and New Mexico State University.
The paper, led by Shiri Melumad and Jin Ho Yun, tested how people acquire and apply information when using AI-generated summaries compared with searching and reading online sources directly. Across seven controlled experiments with more than 10,000 participants, the study found that while AI tools make information gathering faster, they also tend to limit how deeply people learn and how effectively they later use that knowledge.
How the Experiments Were Run
Participants were assigned to learn about practical topics such as planting a vegetable garden, leading a healthier lifestyle, or handling financial scams. Some used ChatGPT or Google’s “AI Overview” to get synthesized answers, while others relied on standard Google search links. Each person was then asked to write advice for a friend based on what they had learned.
In the first experiment, involving 1,104 participants, those who used ChatGPT spent about 585 seconds on the learning task compared with 743 seconds for Google users. Although the number of searches was similar, participants who used the chatbot reported learning less overall and feeling lower ownership of what they learned. Their advice contained roughly ten fewer words on average and fewer factual references.
When independent raters later compared both sets of advice, they found content derived from ChatGPT to be less original, more repetitive, and shorter. The researchers confirmed this pattern through automated text analysis, which showed that chatbot-based responses clustered more closely in topic and vocabulary, indicating reduced creativity and variety.
Holding Facts Constant
A second experiment used simulated versions of both interfaces to ensure identical information was presented in both cases. Even when the underlying facts were the same, the differences persisted. People who saw a ChatGPT-style summary spent less time reading, felt they learned less, and produced briefer, less factual advice than those who browsed articles through web links.
Participants who read the web articles used about 74 words per response compared with 64 among the AI group. Their writing also contained more factual entities and greater linguistic diversity, suggesting a deeper mental engagement during learning.
A third test, conducted in a university laboratory with Google’s “AI Overview,” found the same effect. When the only difference was whether the AI summary or standard search results were shown, users who relied on the AI overview reported shallower learning and wrote shorter, less detailed advice.
Testing How Others Perceive the Advice
To measure whether these textual differences mattered, another group of 1,493 participants reviewed the advice produced in the earlier experiments. Without knowing its source, readers consistently rated advice drawn from AI searches as less helpful, less trustworthy, and less likely to be adopted. On a five-point scale, helpfulness scores averaged 3.55 for AI-based advice versus 3.82 for advice based on traditional searches.
The pattern held across all measures, including informativeness, perceived effort, and credibility. When comparing the two directly, only a quarter of readers preferred the AI-generated advice, while half favored the advice formed through web links.
Implications for Learning and Research
The findings[1] suggest that while language models reduce the friction of searching, that convenience can come at a cost. Traditional search requires users to explore, cross-check, and interpret information on their own, which encourages deeper understanding. The study found that the mental effort involved in navigating multiple sources helps build more robust and original knowledge structures.
Melumad and Yun caution that AI tools may still perform well for factual lookups or quick explanations, but they appear less effective when the goal is to develop procedural or conceptual understanding. Relying heavily on summarizing systems could, over time, erode active learning skills by turning exploration into passive reading.
The researchers note that even when AI summaries included real-time web links, only about one in four users clicked them, reinforcing the shift from active inquiry to effortless consumption.
Their conclusion is not an argument against AI but a warning that its efficiency can obscure a quiet trade-off. For tasks that depend on reasoning, originality, or deep comprehension, people may still learn more by doing the searching themselves.
References
- [1] The findings (academic.oup.com)
- [2] Video Calls May Reveal Where You Are Even When Your Camera Stays Off (www.digitalinformationworld.com)