
- Grok conversations shared by users have been found indexed by Google
- The interactions, no matter how private, became searchable by anyone online
- The problem arose because Grok’s share button didn’t add noindex tags to prevent search engine discovery
If you’ve been spending time talking to Grok, your conversations might be visible with a simple Google search, as first uncovered in a report from Forbes. More than 370,000 Grok chats became indexed and searchable on Google without users’ knowledge or permission when they used Grok’s share button.
The unique URL created by the button didn’t mark the page as something for Google to ignore, making it publicly visible with a little effort.
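For context, the standard way to keep a page out of search results is a `noindex` robots directive, set either in a `<meta name="robots">` tag in the page's HTML or in an `X-Robots-Tag` HTTP header. The sketch below is purely illustrative (it is not Grok's code): a minimal stdlib parser that checks whether a page carries that directive, which is exactly the signal the shared-chat pages were missing.

```python
from html.parser import HTMLParser


class RobotsMetaFinder(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())


def is_noindexed(html: str) -> bool:
    """True if the page tells crawlers not to index it."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)


# A share page that opts out of indexing:
opted_out = '<html><head><meta name="robots" content="noindex"></head></html>'
print(is_noindexed(opted_out))   # True: crawlers are told to skip this page

# A bare page, like the shared Grok links: indexable by default.
print(is_noindexed("<html><head><title>Chat</title></head></html>"))  # False
```

The key point the example makes concrete: indexing is opt-out, not opt-in. A public URL with no `noindex` directive (and no `robots.txt` exclusion) is fair game for any crawler that finds it.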
Passwords, private health issues, and relationship drama fill the conversations now publicly available. Even more troubling, questions to Grok about making drugs and planning murders appear as well. Grok transcripts are technically anonymized, but if a conversation includes identifying details, readers could work out who was behind the petty complaints or criminal schemes. These are not exactly the kind of topics you want tied to your name.
Unlike a screenshot or a private message, these links have no built-in expiration or access control. Once they’re live, they’re live. It’s more than a technical glitch; it erodes trust in the AI. People using AI chatbots as ersatz therapy or for romantic roleplay don’t want those conversations to leak. Finding your deepest thoughts alongside recipe blogs in search results might drive you away from the technology forever.
No privacy with AI chats
So how do you protect yourself? First, stop using the “share” function unless you’re completely comfortable with the conversation going public. If you’ve already shared a chat and regret it, you can try to find the link again and request its removal with Google’s Content Removal Tool. But that’s a cumbersome process, and there’s no guarantee the page will disappear immediately.
If you talk to Grok through the X platform, you should also adjust your privacy settings. Disabling the option that lets your posts be used to train the model may offer some additional protection. That’s less certain, but the rush to deploy AI products has made a lot of privacy protections fuzzier than you might think.
If this issue sounds familiar, that’s because it’s only the latest example of AI chatbot platforms fumbling user privacy while encouraging individual sharing of conversations. OpenAI recently had to walk back an “experiment” where shared ChatGPT conversations began showing up in Google results. Meta faced backlash of its own this summer when people found out that their discussions with the Meta AI chatbot could pop up in the app’s discover feed.
Conversations with chatbots can read more like diary entries than like social media posts. And if the default behavior of an app turns those into searchable content, users are going to push back, at least until the next time. As with Gmail ads scanning your inbox or Facebook apps scraping your friends list, the impulse is always to apologize after a privacy violation.
The best-case scenario is that Grok and others patch this quickly. But AI chatbot users should probably assume that anything shared could be read by someone else eventually. As with so many other supposedly private digital spaces, there are a lot more holes than anyone can see. And maybe don’t treat Grok like a trustworthy therapist.