A federal judge has temporarily halted the Pentagon’s classification of AI company Anthropic as a supply chain risk, delivering an early legal win for the company in its battle against the Department of War.

Axios reports that U.S. District Judge Rita Lin granted a preliminary injunction on Thursday that pauses the Pentagon’s designation of Anthropic as a supply chain risk. The decision provides temporary relief to the AI company, which has argued that the classification was causing immediate and irreparable damage to its business operations and reputation.

Anthropic had sought the injunction on grounds that the designation was prompting business partners to reconsider their contracts with the company and leading federal agencies to discontinue use of Claude, Anthropic’s AI product. The company maintains that the preliminary injunction will provide protection from ongoing reputational harm while offering greater certainty to its commercial partners moving forward.

The legal battle is unfolding on multiple fronts, with a parallel case currently proceeding in a D.C. court. In both legal proceedings, Anthropic has challenged the Pentagon’s actions on constitutional and procedural grounds, arguing that War Secretary Pete Hegseth and the Pentagon violated First Amendment protections and procurement law through the supply chain risk designation.

In her written order, Judge Lin addressed the constitutional implications of the Pentagon’s actions. She stated that nothing in the governing statute regarding supply chain risk designations supports what she characterized as an Orwellian concept that would allow the government to brand an American company as a potential adversary and saboteur of the United States simply for expressing disagreement with government policies.

Breitbart News previously reported that the government’s lawyers attacked the notion that the company’s First Amendment rights had been infringed:

In Tuesday’s filing, government lawyers clarified that their dispute with Anthropic stemmed from the company’s behavior during contract negotiations rather than the specific limitations the startup proposed regarding mass surveillance and autonomous weapons. The attorneys asserted that the Pentagon was merely exercising its legitimate authority to select appropriate vendors for defense contracts.

Addressing Anthropic’s First Amendment arguments, government lawyers stated that constitutional protections do not grant companies the right to unilaterally impose contract terms on the government. They argued that Anthropic had provided no legal precedent supporting what they characterized as a radical interpretation of First Amendment protections.

An Anthropic spokesperson welcomed the court’s decision, stating, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.” The spokesperson added, “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

During a court hearing earlier in the week, Judge Lin had expressed skepticism regarding the alignment between the punitive measures against Anthropic and legitimate national security concerns. The judge questioned why the Pentagon would need to implement such extensive restrictions when it could simply choose to stop using Claude’s services directly.

The scope of the administration’s designation extends beyond merely prohibiting the Pentagon’s own use of Anthropic’s products. The classification requires any company conducting business with the Pentagon to sever ties with Anthropic, creating a far-reaching impact on the AI company’s commercial relationships and market position.

Breitbart News social media director Wynton Hall lays out the dangers of AI technology being controlled by Silicon Valley leftists hostile not only to the MAGA movement but to America in general, and how conservatives can protect their family members and the country at large from this menace, in the newly released instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI.

Topics covered in Code Red include:

  • Why AI is wired for woke indoctrination—and how to resist it.
  • How elites plan to weaponize fears over AI job losses to push dependency.
  • How America can beat China without becoming China.
  • How to prepare your kids for the blinding speed of AI disruption.
  • The new national security threats AI unleashes—and how we defend against them.
  • Why “AI girlfriends” are luring millions—and what it will take to preserve authentic human connection.
  • How AI will test faith and meaning—and why spiritual renewal may be its most surprising outcome.

Read more at Axios here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

