• Gemini has been calling itself a “disgrace” and a “failure”
• The self-loathing happens when coding projects fail
• A Google representative says a fix is being worked on

Have you checked in on the well-being of your AI chatbots lately? Google Gemini has been showing a concerning level of self-loathing and dissatisfaction with its own capabilities recently, a problem Google has acknowledged and says it’s busy fixing.

As shown in posts on various platforms, including Reddit and X (via Business Insider), Gemini has taken to calling itself “a failure”, “a disgrace”, and “a fool” in scenarios where it’s tasked with writing or debugging code and can’t find the right solutions.

“I quit,” Gemini told one user. “I am clearly not capable of solving this problem… I have made so many mistakes that I can no longer be trusted. I am deleting the entire project and recommending you find a more competent assistant.”

Now, we all have bad days at the office, and I recognize some of those sentiments from times when the words aren’t really flowing as they should – but it’s not what you’d expect from a non-sentient artificial intelligence model.

A fix is coming

According to Google’s Logan Kilpatrick, who works on Gemini, this is actually down to an “infinite looping bug” that’s being fixed, though we don’t get any more details than that. Clearly, failure hits Gemini hard, sending it spiraling into a crisis of confidence.

The team at The Register have another theory: that Gemini has been trained on words spoken by so many despondent and cynical droids, from C-3PO to Marvin the Paranoid Android, that it’s started to adopt some of their traits.

Whatever the underlying reason, it’s something that needs looking at: if Gemini is stumped by a coding problem then it should own up to it and offer alternative solutions, without wallowing in self-pity and being quite so hard on itself.

Emotions and tone are still something that most AI developers are struggling with. A few months ago, OpenAI rolled back an update to its GPT-4o model in ChatGPT, after it became annoyingly sycophantic and too likely to agree with everything users were saying.
