The decision comes from testing in which Claude resisted requests for harmful material. When prompted for sexual content involving minors, information that could enable large-scale violence, or instructions that could facilitate terrorism, the model showed signs of strain and favored ending the exchange when given the option.
Anthropic describes these interactions as rare. Most users will not see the measure triggered, even in sensitive or controversial discussions. The company has also set limits to prevent Claude from closing a chat if someone appears to be in crisis or at risk of harming themselves or others. To shape its approach in those situations, Anthropic works with ThroughLine, an online crisis support provider.
Once Claude shuts down a conversation, users cannot continue in the same thread. They can, however, open a new chat immediately. To avoid losing context, they can also edit and resend messages from the closed thread. Because the feature is still being tested, Anthropic is collecting feedback on cases where it may be applied in error.
The change follows an update to the company’s usage policy last week, which bars the use of Claude to help create nuclear, chemical, biological, or radiological weapons, or to develop malicious software and network exploits.
Anthropic says these steps are part of its broader effort to limit misuse while keeping its systems useful for everyday work.

Note: This post was edited/created using GenAI tools.