The United States government has described AI startup Anthropic as an unacceptable national security risk in a court filing, citing concerns about the company’s reliability as a wartime partner.
The New York Times reports that in a 40-page legal filing submitted to the U.S. District Court for the Northern District of California on Tuesday, government attorneys outlined their reasoning for revoking Anthropic’s status as a trusted partner for national defense purposes. The filing represents the government’s first formal response to lawsuits initiated by the San Francisco-based AI company, which produces the Claude chatbot system formerly used by the Pentagon.
According to the filing, the core of the government’s concern centers on the possibility that Anthropic could disable or modify its technology to align with corporate interests rather than national priorities during wartime scenarios. Government lawyers emphasized the particular vulnerability of AI systems to manipulation, arguing that granting Anthropic access to the Department of War’s infrastructure would introduce unacceptable risks into supply chains.
The conflict between Anthropic and the Pentagon originated from negotiations over a $200 million contract for AI implementation in classified government systems. During these discussions, Anthropic established specific boundaries for its technology usage, explicitly stating it did not want its AI employed for mass surveillance of American citizens or integrated with autonomous lethal weapons systems. Pentagon officials countered that private companies should not dictate terms regarding how the military utilizes acquired technology, adding that its use of AI is limited to legal applications.
When the parties failed to reach an agreement, War Secretary Pete Hegseth designated Anthropic as a supply chain risk in February. This classification effectively prohibits the company from conducting business with the U.S. government. The label had previously been reserved exclusively for foreign companies that presented national security threats, making Anthropic’s designation unprecedented for a domestic firm.
In response to the Pentagon’s action, Anthropic filed two separate lawsuits on March 9. One was submitted to the U.S. District Court for the Northern District of California, while the other was filed with the U.S. Court of Appeals for the District of Columbia Circuit. In these legal challenges, Anthropic accused the Pentagon of weaponizing the supply chain risk designation as punishment for ideological reasons and claimed the action violated the company’s First Amendment rights.
The company has requested judicial intervention to block the government’s designation, warning that more than 100 business customers might terminate their relationships with Anthropic due to the risk label. This potential exodus could result in billions of dollars in lost revenue for the company. A hearing on Anthropic’s preliminary injunction request is scheduled for next week.
In Tuesday’s filing, government lawyers clarified that their dispute with Anthropic stemmed from the company’s behavior during contract negotiations rather than the specific limitations the startup proposed regarding mass surveillance and autonomous weapons. The attorneys asserted that the Pentagon was merely exercising its legitimate authority to select appropriate vendors for defense contracts.
Addressing Anthropic’s First Amendment arguments, government lawyers stated that constitutional protections do not grant companies the right to unilaterally impose contract terms on the government. They argued that Anthropic had provided no legal precedent supporting what they characterized as a radical interpretation of First Amendment protections.
Dario Amodei, Anthropic’s chief executive, defended his company’s position in a February 26 statement, emphasizing that the military, not the company, ultimately determines how its technologies are deployed. “We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei said. He later retracted several of his attacks on the Trump administration and OpenAI in a groveling apology.
Microsoft submitted a friend-of-the-court brief urging federal courts to temporarily block the Pentagon’s supply chain risk designation. Additionally, thirty-seven engineers and researchers from OpenAI and Google, including Google’s chief scientist Jeff Dean, filed a brief supporting Anthropic’s position in the legal dispute.
The dispute between the Pentagon and Anthropic underscores the cultural tensions between the defense establishment and Silicon Valley. While the technology sector has historical roots in military innovation, many companies have grown increasingly uncomfortable with their technologies being applied to warfare. The standoff continues to send shockwaves through the AI industry, as seen in the recent departure of OpenAI’s robotics head over concerns about that company’s Pentagon deal.
Breitbart News social media director Wynton Hall lays out the dangers of AI technology being controlled by Silicon Valley leftists hostile not only to the MAGA movement but to America in general, and how conservatives can protect their family members and the country at large from this menace, in the newly released book Code Red: The Left, the Right, China, and the Race to Control AI.
Topics covered in CODE RED include:
- Why AI is wired for woke indoctrination—and how to resist it.
- How elites plan to weaponize fears over AI job losses to push dependency.
- How America can beat China without becoming China.
- How to prepare your kids for the blinding speed of AI disruption.
- The new national security threats AI unleashes—and how we defend against them.
- Why “AI girlfriends” are luring millions—and what it will take to preserve authentic human connection.
- How AI will test faith and meaning—and why spiritual renewal may be its most surprising outcome.
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.