Pentagon Chief Technology Officer Emil Michael joined The Alex Marlow Show to explain the fall from grace of Anthropic, once the Pentagon’s only approved AI partner. According to Michael, Anthropic was a “chosen winner” of the Biden administration that “wanted to get in between the command structure and the warfighter.”
During Wednesday’s episode of The Alex Marlow Show, the Breitbart News editor-in-chief noted that the Pentagon had “become very dependent on Anthropic” before the company was blacklisted by President Donald Trump, who labeled it a “radical left, woke company.” Marlow then asked Michael to describe how dependent the Pentagon was on its single provider, and how the Department of War plans to address potential risks.
Michael explained that the Biden administration had an executive order that “was designed to create a small number of winners who are more tightly under the government’s control, and prevent, or at least make it difficult, for new startups to build AI companies.”
“Anthropic was one of those chosen winners because of their political philosophy,” Michael said.
The U.S. Under Secretary of Defense for Research and Engineering elaborated:
When I got here, and I got a hold of the contracts that were signed during that period, I had a ‘holy cow’ moment. The contracts basically prevented the Department of War from doing Department of War stuff — like doing weapons design, whether you’re doing physics or material science or aerospace dynamics — and I had to say, ‘Holy cow, we have to do something about this.’
So I went to all these companies, model companies, and said, ‘Number one, we can’t be dependent on one provider,’ and it’s not everywhere in the DoW, it’s just in a few sensitive areas. And number two, the terms have to let us do the things we do. If you’re a software provider, why sell to the Department of War if you can’t do Department of War things?
So I set out to just make that equivalent, like what we were buying and what they were selling was useful to us.
“Every other AI company, Grok, Google, even OpenAI, it took one or two weeks to come to terms,” Michael said. “I spent three months with Anthropic, trying to come to terms.”
“Ultimately, it was clear that they just wanted to get in between the command structure and the war fighter,” the Pentagon chief technology officer added. “They wanted to call the shots.”
“So they could basically be the de facto commander-in-chief of the military?” Marlow asked, adding, “That seems like the most egregious, over-the-top thing I’ve ever heard of, where a company could just dictate to the government how they would behave.”
Michael agreed, calling the scenario “Orwellian,” and added that “the 25 pages of terms and conditions” gave the AI company “the right” to “turn off the software in the middle of a battle” if the U.S. government were deemed to have “tripped over one of those 25 pages, which had 50 different prohibitions.”
“So they could decide the battle’s over,” the Pentagon chief technology officer said.
Michael went on to offer an example:
They could decide that you were using this tool to plan a Maduro raid in Venezuela and they didn’t like that. And the software guardrails could have automatically clicked in without even any human intervention, because the model kind of senses what you’re trying to do.
Whether it was a human or a software guardrail — the model, that they say is almost sentient, could sense it because, remember, Anthropic has its own constitution. Not corporate values, its own constitution. Its own soul. And if you trip over them, the software could shut off at the critical moment.
“If you’re an investor, how does this change things for Anthropic now that you guys have designated it as a supply chain risk?” Marlow asked.
Michael replied, “It’s pretty logical. We at the Department of War have to transition off of Anthropic, and we have to have more than one provider going forward. That’s the simple part.”
The Pentagon technology officer added that when it comes to the supply chain risk, the Trump administration doesn’t want companies that are selling critical components to the Department of War “using Anthropic to design the stuff that they provide us.”
“Because there’s an insider threat risk,” Michael said. “There’s what we call ‘model poisoning’ — you’re going to hear a lot about this in the coming year — where a model’s poisoned to act a certain way by an individual or by the way they teach the model to do certain things.”
Something like this, Michael said, “could cause a miscalculation on purpose that invades the software and then ends up in a war fighter’s hands.”
“And I can’t take that risk,” the Pentagon technology officer asserted. “We can’t take that risk.”
“If Boeing wants to use Anthropic for their commercial jets, that’s fine,” he added. “If they want to use it for fighter jets, they can’t. Because I’m buying those at the Department of War, and I don’t want anything corrupted in that supply chain.”
The Alex Marlow Show, hosted by Breitbart Editor-in-Chief Alex Marlow, broadcasts coast to coast on weekdays from noon to 1 p.m. Eastern on the Salem Radio Network stations. You can listen to the radio show online here. The show also airs at 9 p.m. Eastern on the Salem TV news channel. Marlow’s podcast, The Alex Marlow Show Presented by Breitbart News, is released weekdays at 9 p.m. Eastern. You can subscribe to the podcast on YouTube, Rumble, Apple Podcasts, and Spotify.
Alana Mastrangelo is a reporter for Breitbart News. You can follow her on Facebook and X at @ARmastrangelo, and on Instagram.