The Politic Review
Economy

OpenAI Supports Illinois Bill to Limit AI Companies’ Liability for Mass Casualty Incidents, Financial Disasters

By Press Room | April 13, 2026 | 5 min read

OpenAI is backing an Illinois state bill that would protect AI companies from legal responsibility when their technology contributes to severe societal harms, including mass deaths or catastrophic financial losses.

Wired reports that the ChatGPT maker has testified in favor of Illinois Senate Bill 3444, legislation that would shield frontier AI developers from liability for critical harms caused by their models under certain conditions. The bill represents what several AI policy experts describe as a notable evolution in OpenAI’s legislative approach, which until now had focused primarily on opposing measures that would increase liability for AI companies.

SB 3444 would define critical harms as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. Under the proposed law, AI labs would be protected from liability as long as they did not intentionally or recklessly cause such an incident and had published safety, security, and transparency reports on their websites. The bill defines frontier models as those trained using more than $100 million in computational costs, a threshold that would likely apply to major American AI companies including OpenAI, Google, xAI, Anthropic, and Meta.

The legislation specifically identifies several scenarios of concern to the AI industry, including the use of AI by malicious actors to develop chemical, biological, radiological, or nuclear weapons. It also covers situations where an AI model independently engages in conduct that would constitute a criminal offense if committed by a human, provided such actions lead to the extreme outcomes defined in the bill.

Jamie Radice, an OpenAI spokesperson, said in an emailed statement: “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and echoed the call for federal AI regulation. Her arguments aligned with the Trump administration’s opposition to inconsistent state-level AI safety laws. Niedermeyer emphasized the importance of avoiding what she called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” She also suggested that state laws can be valuable when they “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, expressed skepticism about the bill’s prospects. He told Wired: “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability.” Wisor pointed to Illinois’ history of aggressive technology regulation, including its landmark Biometric Information Privacy Act passed in 2008 and more recent legislation limiting AI use in mental health services, as evidence that the state may be unlikely to pass a liability shield for AI companies.

The broader legal landscape around AI liability remains largely undefined in the United States. No federal or state laws have specifically established whether AI model developers can be held responsible for catastrophic harms caused by their technology. In the absence of federal legislation, some states have moved in the opposite direction from Illinois’ proposed bill. California’s SB 53 and New York’s RAISE Act both require AI developers to submit safety and transparency reports, increasing rather than decreasing accountability measures.

The question of AI liability extends beyond mass casualty events to individual harms as well. OpenAI currently faces lawsuits from families of children who died by suicide after allegedly forming unhealthy relationships with ChatGPT.

Breitbart News previously reported that OpenAI faces a lawsuit from the families of victims from the February Canadian school shooting that claims the company knew the shooter was preparing an attack, but did not contact authorities.

Author Wynton Hall argues in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that AI isn’t just a tool; it is political power:

The conservative response, Hall argues, cannot be indifference. “Some dismiss AI as overhyped Silicon Valley PR,” he writes. “Others reduce it to a mere tool, a glorified spellchecker or a turbocharged Google search. A few shrug it off as sci-fi silliness or a ‘shiny object’ they’re too busy to learn or worry about. I respectfully, yet vehemently, disagree.” Hall contends that AI’s architects “are building systems capable of muzzling dissent, manipulating narratives, disrupting economies, displacing jobs, evangelizing leftist ideologies, unleashing new national security threats, warping human relationships, cementing educational indoctrination, maximizing surveillance capitalism, and controlling media and information on an unprecedented scale.”

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.

