Senators propose licensing for powerful or “high risk” AI models

September 10, 2023

AI regulation

Senators Richard Blumenthal (Democrat) and Josh Hawley (Republican) have put forward a bipartisan legislative framework that calls for stricter regulation of AI models.

Under their proposal, companies would need a government license to develop powerful or high-risk AI models, including those used in facial recognition.

In a statement published on X, the framework's exact wording covers "sophisticated general purpose AI models (e.g. GPT-4) or models used in high-risk situations (e.g. facial recognition)."

Blumenthal and Hawley's proposition. Source: X via Cointelegraph.

The legislative blueprint suggests creating a new federal body or a specialized group to oversee AI technologies. 

To obtain a license, companies would be required to test AI models for potential harm pre-launch, disclose any adverse incidents post-launch, and allow independent third-party audits.

The White House recently announced a voluntary framework under which leading AI companies would subject their models to testing "carried out in part by independent experts."

The bipartisan framework also proposes that companies publicly disclose the training data used for their AI models, and suggests that technology liability protections would not shield companies that inflict AI-related harm.

In some ways, this parallels the EU AI Act's "high risk" category, which subjects models meeting that definition to the most rigorous compliance and regulatory requirements.

Points of contention

The framework has its share of skeptics.

Critics from both the libertarian-leaning political group Americans for Prosperity and the digital rights nonprofit Electronic Frontier Foundation have expressed concerns that licensing could stifle innovation and undermine competitiveness.

In response, the senators’ framework recommends robust conflict-of-interest rules for staff overseeing AI regulation.

This legislative proposal comes at a time when the US government is intensifying its focus on AI regulation. 

Senators Blumenthal and Hawley are set to oversee a Senate subcommittee hearing on AI accountability next week, with testimonies expected from Microsoft president Brad Smith and Nvidia’s chief scientist, William Dally. 

The US appetite for AI regulation has heated up. Senator Chuck Schumer is holding a series of recently announced "AI Insight Forums" to discuss AI regulation, an effort he describes as "one of the most difficult things we've ever undertaken."

The first forum is set to take place on September 13 and involves Elon Musk, Nvidia CEO Jensen Huang, Meta CEO Mark Zuckerberg, and OpenAI CEO Sam Altman, to name but a few.

Broader implications

The proposal by Blumenthal and Hawley indicates a willingness within Congress to take a more rigorous approach to AI regulation than current voluntary frameworks. 

The White House has also signaled a move toward stricter regulations, with the special adviser for AI, Ben Buchanan, stating that keeping society safe from AI harms will “require legislation.”

With Schumer’s talks forthcoming, Senators Blumenthal and Hawley are accelerating debates on AI ethics and regulation, setting the stage for what could be a critical period in the governance of AI.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
