Sam Altman’s OpenAI reached an agreement with the Department of War to deploy its AI models on classified networks, just hours after the Trump administration ordered the federal government to “immediately cease” using Anthropic AI following a dispute over how the military could use the company’s models.

CNBC reports that OpenAI CEO Sam Altman announced late Friday that his company has finalized terms with the Department of War for the use of its AI models on classified government networks. The announcement came at the end of a turbulent day that saw rival AI company Anthropic shut out from federal contracts by the Trump administration.

“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote in a post on the social media platform X. “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”

The agreement marks a significant development in the increasingly politicized landscape of AI and national security. Earlier the same day, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” an unprecedented label typically reserved for foreign adversaries such as China or Russia. The designation would require all Department of Defense vendors and contractors to certify they do not use Anthropic’s AI models in their operations.

President Trump also issued a directive instructing every federal agency in the United States to immediately cease all use of Anthropic’s technology, effectively banning the company from government work across the entire federal apparatus.

The dramatic actions against Anthropic followed weeks of contentious negotiations between the AI company and the Defense Department. Anthropic had been the first AI company to successfully deploy its models across the Department of War’s classified network infrastructure. However, discussions over the ongoing terms of its contract ultimately collapsed due to disagreements over acceptable use cases.

According to sources familiar with the negotiations, Anthropic sought specific assurances that its AI models would not be deployed for fully autonomous weapons systems or for conducting mass surveillance operations targeting American citizens. The Defense Department, conversely, wanted Anthropic to agree to permit military use of the models across all lawful applications without such restrictions.

As Breitbart News previously reported:

Chief Pentagon Spokesman Sean Parnell addressed the controversy Thursday, stating that the Department of War has no interest in using Anthropic’s models for fully autonomous weapons or conducting mass surveillance of Americans, activities he noted are illegal. Despite these assurances, Parnell emphasized the agency’s position that it requires agreement for all lawful uses of the technology.

“This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk,” Parnell wrote in a post on X. “We will not let ANY company dictate the terms regarding how we make operational decisions.”

In a memo to OpenAI employees on Thursday, Altman had indicated that his company shared similar safety concerns, referring to them as “red lines” comparable to those advocated by Anthropic. However, in his Friday announcement, Altman stated that the Department of War had agreed to accommodate OpenAI’s safety requirements.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

As part of the agreement, Altman said OpenAI will implement technical safeguards designed to ensure its models operate as intended within the classified environment. The company also plans to deploy personnel to assist with model operations and maintain safety oversight.

In an apparent gesture toward industry-wide standards, Altman called for the War Department to extend similar terms to all AI companies. “We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” Altman wrote. “We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”

Anthropic responded to its designation as a supply chain risk with a statement expressing the company’s disappointment. The firm said it was “deeply saddened” by the Pentagon’s decision and announced its intention to challenge the designation through legal channels.

AI is having a dramatic impact on the government, our military, and the American economy. It is crucial for conservatives to develop a game plan for how to address this revolutionary technology without sacrificing our rights or giving control of the government to Silicon Valley leftists who hate the MAGA movement. Developing this game plan is the topic of the forthcoming book Code Red: The Left, the Right, China, and the Race to Control AI, written by Breitbart News social media director Wynton Hall.

Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised Code Red as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red “illuminating” and “alarming,” and describes the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”

Read more at CNBC here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
