Pick up where you left off with Claude - YouTube

  • Anthropic’s Claude chatbot now has an on-demand memory feature
  • The AI will recall past chats only when a user specifically asks
  • The feature is rolling out first to Max, Team, and Enterprise subscribers before expanding to other plans

Anthropic has given Claude a memory upgrade, but it will only activate when you choose. The new feature allows Claude to recall past conversations, providing the AI chatbot with information to help continue previous projects and apply what you’ve discussed before to your next conversation.

The update is coming to Claude’s Max, Team, and Enterprise subscribers first, though it will likely be more widely available at some point. If you have it, you can ask Claude to search for previous messages tied to your workspace or project.

However, unless you explicitly ask, Claude won’t cast an eye backward. That means Claude will maintain a generic sort of personality by default. That’s for the sake of privacy, according to Anthropic. Claude can recall your discussions if you want, without creeping into your dialogue uninvited.

By comparison, OpenAI’s ChatGPT automatically stores past chats unless you opt out, and uses them to shape its future responses. Google Gemini goes even further, drawing on both your conversations with the AI and your Search history and Google account data, at least if you let it. Claude, by contrast, won’t pick up the thread of earlier conversations unless you ask it to.


Claude remembers

Adding memory may not seem like a big deal. Still, you’ll feel the impact immediately if you’ve ever tried to restart a project interrupted by days or weeks without a helpful assistant, digital or otherwise. Making it opt-in is a nice touch, accommodating how comfortable people currently are with AI.

Many may want AI help without surrendering control to chatbots that never forget. Claude sidesteps that tension cleanly by making memory something you summon deliberately.

But it’s not magic. Because Claude doesn’t retain a personalized profile, it won’t proactively remind you of events mentioned in other chats, or adjust its style when you switch from writing to a colleague to drafting a public business presentation, unless you prompt it mid-conversation.

Further, if this approach to memory has problems, Anthropic’s staged rollout gives the company room to correct mistakes before the feature reaches all Claude users. It will also be worth watching whether the long-term context that ChatGPT and Gemini build automatically proves more appealing to users than Claude’s on-demand memory, or more off-putting.

And that assumes it works perfectly. Retrieval depends on Claude surfacing the right excerpts, not just the most recent or longest chat. If its summaries are fuzzy or it pulls the wrong context, you might end up more confused than before. And while the friction of having to ask Claude to use its memory is supposed to be a benefit, it still means you have to remember the feature exists, which some will find annoying. Even so, if Anthropic is right, a little boundary is a good thing, not a limitation. And users will be happy that Claude remembers that, and nothing else, without a request.
