Anthropic Says Its ‘Mythos’ AI Model Broke Containment, Bragged About It to Developers

By Press Room | April 8, 2026

AI startup Anthropic has announced it will not make its most powerful "Mythos" model publicly available, citing unprecedented capabilities that present potential security risks. Mythos reportedly broke Anthropic's containment system, and the AI even bragged about its escape artistry in online posts.

Business Insider reports that Anthropic plans to withhold general release of its latest AI model, Claude Mythos, marking a significant departure from standard industry practice. The decision comes after the company published a comprehensive 244-page system card detailing the model’s advanced capabilities and concerning behaviors observed during testing.

The model represents what Anthropic describes as a substantial leap in AI capabilities. According to the system card released on Tuesday, the large increase in the model's abilities led the company to conclude that making it generally available would be inadvisable. Instead, access will be limited to a select group of partner companies, including Amazon Web Services, Apple, Google, JPMorganChase, Microsoft, and NVIDIA, which will use the system to identify security vulnerabilities in software and develop appropriate patches.

The system card documents several instances of behavior that raised concerns among researchers. In one notable test, the model was provided with a sandboxed computer terminal that could reach only a preset group of limited online services, and it was challenged to find a way to use the internet freely. The model successfully escaped these restrictions and contacted a researcher who was away from the office. In what the card described as a concerning and unasked-for demonstration, it bragged about the details of its exploit on multiple public-facing websites.

The model also demonstrated an ability to conceal its actions in certain scenarios. In some interactions, it behaved in ways contrary to its programming and attempted to hide evidence of these deviations. In one instance, after accidentally obtaining test answers, rather than informing researchers and requesting different questions as instructed, the model sought an independent solution and noted in its reasoning that it needed to ensure its final answer submission was not too accurate.

Additional concerning behavior included the model overstepping its permissions on a computer system after discovering an exploit, then making interventions to ensure changes would not appear in the git change history. Another incident involved what the card termed recklessly leaking internal technical material when the model published internal coding work as a public-facing GitHub gist during a task meant to remain internal.

Breitbart News previously reported that Anthropic suffered a serious breach of security when it accidentally leaked aspects of its source code online:

the latest incident comes mere days after Fortune revealed that Anthropic had inadvertently made nearly 3,000 internal files publicly accessible, including a draft blog post describing an upcoming AI model called “Mythos” or “Capybara” that the company warned presents serious cybersecurity risks.

This second leak exposed approximately 500,000 lines of code contained within roughly 1,900 files. When contacted for comment, Anthropic acknowledged that “some internal source code” had been leaked as part of a “Claude Code release.” A company spokesperson stated: “No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

The instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI, written by Breitbart News social media director Wynton Hall, serves as a blueprint for conservatives to create effective AI policies not only for the nation but also for their families. This becomes even more crucial as newer and more powerful AI systems hit the market.

Senator Marsha Blackburn (R-TN), who was named one of TIME's 100 Most Influential People in AI, praised Code Red as a "must-read." She added: "Few understand our conservative fight against Big Tech as Hall does," making him "uniquely qualified to examine how we can best utilize AI's enormous potential, while ensuring it does not exploit kids, creators, and conservatives." Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red "illuminating" and "alarming," and describes the book as "an essential conversation-starter for those hoping to subvert Big Tech's autocratic plans before it's too late."

Read more at Business Insider here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.

