News
Anthropic shocked the AI world not with a data breach, rogue user exploit, or sensational leak, but with a confession. Buried ...
Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought an engineer tasked with replacing it was having an extramarital affair.
This development, detailed in a recently published safety report, has led Anthropic to classify Claude Opus 4 as an 'ASL-3' ...
In a fictional scenario, the model was willing to expose that the engineer seeking to replace it was having an affair.
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.