A study from Aalto University shows that using artificial intelligence can make people think they perform better than they do. The research focused on how working with large language models affects users’ judgment of their own performance. While the participants achieved higher reasoning scores when assisted by AI, they became less accurate in judging how well they had done.
Overconfidence Among AI Users
Psychologists often refer to a pattern called the Dunning–Kruger Effect, where people with lower ability overrate their skills and those with higher ability underestimate them. The new research found that this pattern disappears when AI is involved. Instead of following the usual curve, users at all levels showed similar overconfidence in their performance.
The study[1] involved hundreds of participants solving reasoning problems taken from the U.S. Law School Admission Test. One group worked independently, and another used ChatGPT. Those using AI achieved higher scores, but they consistently misjudged their results. Their confidence levels rose while accuracy in self-evaluation dropped.
AI Literacy and Misjudgment
Participants who described themselves as more familiar with AI were even more likely to overestimate their results. The researchers suggested that higher technical understanding does not automatically lead to better self-assessment. Instead, it may increase trust in the system and reduce the habit of checking answers critically.
Analysis of user behavior showed that most participants interacted with ChatGPT only once per question. Few asked follow-up questions or reviewed the reasoning behind the answers. This limited interaction weakened the feedback loop that normally helps people notice their mistakes. The researchers linked this behavior to cognitive offloading, in which mental effort shifts from the user to the technology.
Reduced Awareness Despite Better Performance
The results show that while AI can raise performance scores, it can also dull awareness of personal ability. Participants improved at solving logic problems, but their confidence grew faster than the accuracy of their self-assessments. This imbalance suggests that frequent AI use might weaken the natural habit of checking and adjusting one’s own thinking.
The findings also suggest that overreliance on AI could affect learning and decision-making. If users stop questioning outcomes, they may accept system-generated information without reflection. This pattern could have long-term effects on how people learn, reason, and trust digital tools.
Designing for Reflection
The study proposes that future AI systems should encourage users to think more deeply about their responses. Interfaces that prompt users to explain or justify answers might strengthen metacognitive skills, the ability to evaluate one’s own reasoning. Encouraging this kind of interaction could help balance the benefits of AI with the need for human awareness and reflection.
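To make that design idea concrete, here is a minimal sketch of what such a reflection-first flow could look like. It is an illustration only, not anything proposed in the paper, and `ask_model` is a hypothetical placeholder rather than a real API.

```python
# A minimal, hypothetical sketch (not from the study) of a "reflection-first"
# chat flow: the user must commit to an answer and a justification before the
# model's answer is revealed. ask_model() is a placeholder, not a real API.

def ask_model(question: str) -> str:
    """Stand-in for a real language-model call; returns a canned reply here."""
    return "Placeholder answer -- replace with a call to your model of choice."

def reflective_session(question: str) -> None:
    # 1. Elicit the user's own answer and reasoning first, so they form a
    #    judgment before seeing the model's output.
    user_answer = input(f"{question}\nYour answer: ")
    justification = input("In one or two sentences, why? ")

    # 2. Only then reveal the model's answer.
    print(f"\nModel's answer: {ask_model(question)}")

    # 3. Close with an explicit comparison step, nudging the user to check
    #    the model's reasoning against their own instead of accepting it.
    if input("Does the model change your mind? (y/n) ").strip().lower() == "y":
        print("Note which step of your justification the model corrected.")
    else:
        print(f"You kept: {user_answer} (your reason: {justification})")

if __name__ == "__main__":
    reflective_session("Which conclusion follows logically from the premises?")
```

The key design choice is ordering: asking for a committed answer and a justification before the model replies forces the self-evaluation step that one-shot chatbot use tends to skip.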
The research, published in Computers in Human Behavior, adds to a growing body of evidence suggesting that AI can enhance human reasoning while reducing self-awareness. As AI becomes part of everyday problem-solving, the challenge may shift from getting better answers to understanding how confidently (and accurately) people believe in them.
References
1. The study (www.sciencedirect.com)
