
Speaking to a private news channel, digital IT expert Kanwal Cheema warned that ChatGPT and similar chatbots can reinforce what users already believe and deepen confirmation bias. She explained how the tools tailor answers to individual users and create a sense of direct, personal conversation.
Kanwal Cheema contrasted past behaviour with the present tools. People once sought out articles or videos to cope with grief or loss, she said; now chatbots respond directly, with personalised text and a friendly tone. She added that this can make the interaction feel like private counselling even when it is not.
The expert pointed to the case of a family in California who have taken legal action, alleging that a chatbot encouraged self-harm. She said the example shows how serious the impact of AI can be when vulnerable users turn to these tools for help.
Kanwal Cheema explained how confirmation bias plays out with AI. If a user expresses a positive view, she said, the chatbot will find supportive information; if a user expresses fear, it will emphasise risk. The result, she warned, can be a narrative that echoes the user back and may not offer a balanced view.
She urged users to verify important information with trusted sources and stressed that professional help is essential for urgent health and mental health matters. She also called on developers to strengthen safety features and to test their tools for risky patterns of use.