Artificial intelligence company Anthropic is preparing to fight a Pentagon designation labeling it as a supply-chain security risk, even as CEO Dario Amodei issued a groveling apology for whiny complaints about the Trump Administration made in a leaked internal communication.

The Wall Street Journal reports that the War Department formally notified Anthropic’s leadership on Thursday that the company and its AI tools present security threats, according to a senior Pentagon official. This designation, typically reserved for businesses from foreign adversaries, represents the latest development in a weeks-long dispute between the AI startup and the military establishment. The conflict centers on disagreements over how Anthropic’s technology can be used by the War Department. The Pentagon is seeking the ability to use AI in all lawful-use cases, while Anthropic is demanding explicit guarantees that its technology will not be deployed in autonomous weapons systems or for mass domestic surveillance purposes.

The Pentagon official stated that the military would not permit a vendor to insert itself into the chain of command and potentially endanger armed services members by restricting the lawful use of critical capabilities.

War Secretary Pete Hegseth had announced last week his intention to issue the supply-chain risk notice, though both parties had continued discussions in hopes of reaching an agreement, according to people familiar with the situation. Those negotiations appear to have ended. Emil Michael, the undersecretary of War for research and engineering, wrote on X on Thursday: “I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI.”

The designation could have significant implications for Anthropic’s partners and investors, which include major corporations such as Lockheed Martin, Amazon.com, and Alphabet’s Google. This marks one of the first instances where the supply-chain risk designation has been applied to an American company, raising concerns among some observers about potential chilling effects on other businesses considering government contracts.

The designation can be implemented broadly or in a narrowly tailored manner. In a post last week, Hegseth indicated that any company working with the military cannot conduct business with Anthropic. Some analysts have suggested this directive exceeds the Pentagon’s legal authority, arguing that the War Department can only prohibit companies from using Anthropic’s Claude AI system specifically in their Pentagon-related work.

The Pentagon’s move followed a report by The Information revealing that Amodei had sent Anthropic staff an internal memo suggesting the administration was targeting the company for not providing “dictator-style praise” to President Trump and for resisting the administration’s AI agenda.

The memo, which was reviewed by the Journal, also criticized OpenAI, which recently accepted an agreement with the Defense Department allowing its systems to be used in classified environments. Amodei wrote that OpenAI CEO Sam Altman was engaging in “spin/gaslighting” by claiming to support Anthropic’s position while quickly signing its own agreement with the Pentagon.

Regarding OpenAI employees, Amodei wrote: “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees. Due to selection effects, they’re sort of a gullible bunch.”

In a groveling apology for the memo, Amodei explained he had written it immediately after Trump and Hegseth announced measures that would significantly damage Anthropic’s business relationships and after OpenAI revealed its Pentagon deal. “It was a difficult day for the company, and I apologize for the tone of the post,” he wrote. “It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.”

Despite the apology, Amodei maintained that Anthropic would challenge the designation legally. “We do not believe this action is legally sound,” he stated. He noted that the letter informing the company of the designation was written with narrow scope, supporting Anthropic’s interpretation that it “applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” contrary to Hegseth’s earlier suggestion.

A Microsoft spokesperson addressed the designation’s impact, saying: “Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers—other than the Department of War—through platforms such as M365, GitHub, and Microsoft’s AI Foundry and that we can continue to work with Anthropic on non-defense related projects.”

Anthropic was founded by former OpenAI executives and previously held the distinction of being the only company whose AI was approved for use in classified government settings. Elon Musk’s xAI recently received similar approval. Anthropic’s Claude system has been utilized in military operations in Iran and was deployed in Venezuela during the operation to capture Venezuelan leader Nicolás Maduro. The company maintains a partnership with data-mining firm Palantir Technologies, which provides software to the Defense Department.

As AI companies, led and built by leftists with a history of attacking Donald Trump and the MAGA movement, continue to fight among themselves to win the AI wars, the fallout could impact everything from America’s defense posture and elections to the economy and the mental health of our children and grandchildren. Breitbart News social media director Wynton Hall has written his forthcoming book, Code Red: The Left, the Right, China, and the Race to Control AI, to serve as the definitive guide on how the MAGA movement can create positions on AI that benefit humanity without handing control of our nation to the leftists of Silicon Valley or allowing the Chinese to take over the world.

Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised Code Red as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red “illuminating” and “alarming,” and describes the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”

Read more at the Wall Street Journal here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.


