Anthropic users must choose by September 28 whether to allow their conversations to be used to train the company's models. Chats and coding sessions from users who do not opt out will now be used to improve its models. This marks a sharp change from the earlier policy, under which consumer data was deleted within thirty days by default.

New users will select a preference during signup. Existing users see an in-app notice asking them to accept the updated terms or opt out. The interface shows a large accept button with a smaller training-permission toggle below it, set to on by default. Privacy experts warn that many users may click accept without noticing what they are agreeing to.

Anthropic says that data from consenting users will be used to improve safety systems and model skills such as coding and reasoning. The company said “Enterprise products are not affected,” and business and government customers will remain protected. For users who allow data use, new chats will be retained for up to five years.

The move comes as AI firms compete for real-world conversational data while also facing legal challenges. Major disputes over data use have hit the sector, and platforms face growing scrutiny of their training practices. The debate over retention and consent is now front and center for consumer AI services.

Anthropic users who do not want their chats used can opt out now and change that decision later in settings. However, data that has already been used for training cannot be removed retroactively. Experts advise users with concerns to check their privacy settings and act before the deadline.

The change will test the balance between innovation and privacy, and the trust of both consumers and regulators. The choices users make in the coming weeks will shape how businesses gather and use conversational data.

By admin