AI startup Anthropic is investigating reports that a group of users gained unauthorized access to its Claude Mythos model, a powerful AI system that was deliberately released only to a small circle of trusted companies due to its advanced cybersecurity capabilities.

Bloomberg reports that Anthropic disclosed on Tuesday that it was examining claims that individuals had accessed the model through a system designated for third-party companies performing work on Anthropic’s behalf. In a statement, the company said: “We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments.”

The incident raises serious questions about whether Anthropic, valued at approximately $380 billion, can effectively safeguard its most powerful technologies from falling into the hands of malicious actors. The company had intentionally restricted the release of Claude Mythos Preview to a select group of technology firms, explicitly citing concerns that the model could be misused to launch cyber attacks at a scale and speed exceeding human capabilities.

One individual who obtained unauthorized access reportedly leveraged their permissions as a contractor for Anthropic to tap into Mythos. The company stated it had no evidence of activity extending beyond the “vendor environment,” the infrastructure that third parties use to access systems for model development. AI laboratories frequently employ third-party contractors for responsibilities such as model testing, though Anthropic did not identify which specific vendor was implicated in this incident.

The security breach intensifies existing anxieties surrounding Mythos, which has already generated significant turbulence across markets and catalyzed high-level conversations among financial institutions and global regulatory bodies. Security specialists have warned that if acquired by hostile parties, the model could enable hackers to identify and exploit software vulnerabilities more rapidly than organizations can deploy patches and fixes.

Anthropic introduced Mythos earlier this month to a carefully chosen roster of corporate partners including Amazon, Microsoft, Apple, Cisco, and CrowdStrike. The company indicated these collaborators would have the opportunity to detect and secure cyber vulnerabilities using Mythos’s sophisticated capabilities ahead of any broader public release.

Breitbart News previously reported that Anthropic suffered a serious security lapse when it accidentally leaked portions of its source code online:

The latest incident comes mere days after Fortune revealed that Anthropic had inadvertently made nearly 3,000 internal files publicly accessible, including a draft blog post describing an upcoming AI model called “Mythos” or “Capybara” that the company warned presents serious cybersecurity risks.

This second leak exposed approximately 500,000 lines of code contained within roughly 1,900 files. When contacted for comment, Anthropic acknowledged that “some internal source code” had been leaked as part of a “Claude Code release.” A company spokesperson stated: “No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

The instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI, written by Breitbart News social media director Wynton Hall, serves as a blueprint for conservatives to create effective policies around AI not only for the nation but also for their families. This becomes even more important when the AI giants themselves are struggling to secure their AI models.

Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised Code Red as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red “illuminating” and “alarming,” and describes the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”

Read more at Bloomberg here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
