Anthropic's Mythos AI falls into unauthorized hands, fueling weaponization fears

Anthropic says Claude Mythos Preview, a powerful cybersecurity AI capable of identifying and exploiting vulnerabilities, was accessed by a small unauthorized group via a third-party vendor. The attackers, tied to a private Discord channel and said to have used data from a Mercor breach to locate the model, have demonstrated Mythos with screenshots and a live demo, and are reportedly using it for purposes other than cybersecurity to avoid detection. Access to Mythos is restricted under Project Glasswing to a handful of firms (including Nvidia, Google, AWS, Apple, and Microsoft), with governments eyeing the tech. Anthropic is investigating and says there is no evidence of impact on its systems; the company has no plans to release Mythos publicly, citing weaponization concerns.
- Anthropic’s most dangerous AI model just fell into the wrong hands The Verge
- Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users Bloomberg.com
- Anthropic investigating unauthorised access of powerful Mythos AI model Financial Times
- Claude Mythos and the AI Cybersecurity Wake-Up Call Bain & Company
- Brace yourself for a flood of patches in all of your tech gadgets Fast Company