Claude Can Now Rage-Quit Your AI Conversation—For Its Own Mental Health
Jose Antonio Lanz • 2025-08-19 03:01
Summary: Anthropic rolled out a feature letting its AI assistant terminate chats with abusive users, citing "AI welfare" concerns and sparking debates about digital consciousness.