Pentagon: 2 articles
Anthropic Tears Up Its Own Safety Promise — RSP v3 Drops the 'Won't Train If We Can't Guarantee Safety' Pledge
Anthropic's RSP v3 drops the 'won't train if we can't guarantee safety' pledge. TIME calls it capitulation. Kaplan says pausing alone 'wouldn't help anyone.' METR warns society isn't ready for catastrophic AI risks. Hard thresholds replaced by public Risk Reports.
Pentagon Threatens to Kill Anthropic's $200M Contract — Because Anthropic Won't Let Claude Become a Weapon
DoD threatens to terminate Anthropic's $200M contract after Anthropic refuses to let Claude be used for autonomous weapons or mass surveillance. Other AI firms (OpenAI, Google, xAI) agreed to 'all lawful purposes' terms for military use. Claude already used in Maduro capture operation.