Anthropic has publicly come out against a proposed Illinois law supported by OpenAI that would protect AI companies from legal responsibility if their systems are used to inflict large-scale harm, such as mass casualties or property damage exceeding $1 billion.
Wired reports that the proposed legislation, known as SB 3444, is creating a sharp divide between two of America’s most prominent AI companies over how the technology should be governed. While policy analysts believe the bill faces long odds of passage, it has highlighted growing political tensions between Anthropic and OpenAI as both organizations expand their lobbying efforts nationwide.
Breitbart News reported earlier this week that OpenAI has come out in favor of the bill:
Jamie Radice, an OpenAI spokesperson, said in an emailed statement: “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and echoed the call for federal AI regulation. Her arguments aligned with the Trump administration’s opposition to inconsistent state-level AI safety laws. Niedermeyer emphasized the importance of avoiding what she called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” She also suggested that state laws can be valuable when they “reinforce a path toward harmonization with federal systems.”
According to sources familiar with the matter, Anthropic has been actively lobbying Illinois state senator Bill Cunningham, who sponsored SB 3444, along with other state lawmakers, urging them to substantially revise or reject the bill in its current form. An Anthropic spokesperson confirmed to Wired that the company opposes SB 3444 and noted that discussions with Cunningham about using the measure as a foundation for future AI legislation have been productive.
Cesar Fernandez, Anthropic’s head of US state and local government relations, said in a statement: “We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability. We know that Senator Cunningham cares deeply about AI safety, and we look forward to working with him on changes that would instead pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause.”
At the heart of the dispute between OpenAI and Anthropic is the question of who bears legal responsibility if an AI system contributes to a catastrophic event, an issue US legislators have only started to grapple with in recent years. Under SB 3444, an AI lab would avoid liability if a malicious actor employed its model to cause severe damage, provided that the lab had created its own safety framework and published it online.
Some experts warn that the legislation would undermine current legal protections designed to discourage corporate misconduct. Thomas Woodside, cofounder and senior policy adviser at the Secure AI Project, a nonprofit involved in shaping AI safety legislation in California and New York, said: “Liability already exists under common law and provides a powerful incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems. SB 3444 would take the extreme step of nearly eliminating liability for severe harms. But it’s a bad idea to weaken liability, which in most states is the most significant form of legal accountability for AI companies that’s already in place.”
Last week, Anthropic testified in support of a separate Illinois bill, SB 3261, which would rank among the strictest AI safety laws in the country if enacted. That measure would require frontier AI developers, including both OpenAI and Anthropic, to produce public safety and child protection plans and have them evaluated by independent third-party auditors.
Author Wynton Hall argues in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that AI isn’t just a tool but political power:
The conservative response, Hall argues, cannot be indifference. “Some dismiss AI as overhyped Silicon Valley PR,” he writes. “Others reduce it to a mere tool, a glorified spellchecker or a turbocharged Google search. A few shrug it off as sci-fi silliness or a ‘shiny object’ they’re too busy to learn or worry about. I respectfully, yet vehemently, disagree.” Hall contends that AI’s architects “are building systems capable of muzzling dissent, manipulating narratives, disrupting economies, displacing jobs, evangelizing leftist ideologies, unleashing new national security threats, warping human relationships, cementing educational indoctrination, maximizing surveillance capitalism, and controlling media and information on an unprecedented scale.”
Read more at Wired here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
