Technophile News
News

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

By News Room | 9 April 2026 | 4 Mins Read
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America’s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to cause any of these harms, under SB 3444 the AI lab behind it could not be held liable, so long as the lab did not act intentionally or recklessly and had published its safety, security, and transparency reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer’s message was consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it’s paramount for AI legislation not to hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.

© 2026 Technophile News. All Rights Reserved.