US ramps up regulatory discussions on AI – a timeline

June 21, 2023


In an announcement this Wednesday, Senate Majority Leader Chuck Schumer presented his highly anticipated legislative proposal for managing AI. He cautioned that “Congress must join the AI revolution” to harness and govern the technology. 

Schumer, a Democrat from New York, indicated his plan to commence a sequence of “AI Insight Forums” later in the year. These forums will assemble AI leaders, researchers, community leaders, national security specialists, and other experts.

The outputs from these meetings, he said, will serve as the foundation for policy recommendations for Congress. 

Speaking at the Center for Strategic and International Studies, a bipartisan nonprofit focusing on foreign policy and national security, Schumer stated, “Some experts predict that in just a few years the world could be wholly unrecognizable from the one we live in today. That is what AI is: world-altering.”

Acknowledging recent changes brought about by AI, Schumer warned, “We have no choice but to acknowledge that AI’s changes are coming, and in many cases are already here. We ignore them at our own peril. Many want to ignore AI because it’s so complex. But when it comes to AI, we cannot be ostriches sticking our heads in the sand.”

Balancing risks and benefits

Schumer also highlighted the positive potential of AI, from combating disease and famine to boosting productivity. At the same time, he echoed concerns about its potential dangers, including labor upheaval, misinformation, and fraud.

He warned of possible threats to election integrity, including fabricated and distorted statements and other forms of slander designed to damage candidates’ campaigns, or chatbots spreading misinformation to influence voters. “AI could be used to jaundice and even totally discredit our elections as early as next year,” Schumer said. “Without taking steps to make sure AI preserves our country’s foundations, then we risk the very survival of our democracy.”

Schumer said 6 months of discussions with over 100 AI stakeholders have culminated in a legislative blueprint called the “SAFE Innovation for AI” framework. It consists of 5 key elements:

  • Security: Strengthen national security by addressing AI threats while also ensuring economic security for workers, particularly those with low income.
  • Accountability: Foster the development of “responsible” systems to tackle misinformation and bias and safeguard creators by addressing copyright issues and preserving intellectual property rights.
  • Foundations: Ensure that AI systems align with democratic values, safeguard elections, accentuate AI’s societal benefits while minimizing potential harms, and prevent the Chinese Communist Party from setting global AI norms.
  • Explainability: Companies should document in straightforward, comprehensible terms how their AI systems reach a specific outcome, allowing users to better understand the reasoning behind a system’s response and where it comes from.
  • Innovation: Endorse US advancements in AI technologies and promote the US as a world leader in AI. 

“Innovation must be our north star,” Schumer continued. “But if people think AI innovation is not done safely, if there are not adequate guardrails in place – and the public feels that innovation will not be safe – that will stifle or even halt innovation altogether.”

This call to action aligns with urgent warnings from AI experts, including a statement from the Center for AI Safety (CAIS) signed by over 350 tech CEOs, academics, public figures, and other notable experts. 

During a visit to the Bay Area on Tuesday, President Joe Biden met with 8 AI experts, including Jim Steyer, CEO of Common Sense Media; Sal Khan, the founder and CEO of Khan Academy; and Tristan Harris, executive director and co-founder of the Center for Humane Technology and a former design ethicist at Google.

Biden acknowledged the rapid pace of technological change driven by AI, stating, “We’ll see more technological change in the next 10 years than we’ve seen in the last 50 years – and maybe even beyond that.”

Bipartisan cooperation encouraged on AI

Schumer is closely collaborating with the White House on AI strategy, hosting bipartisan briefings to educate senators on AI.

Highlighting the need for an innovative approach to navigating AI policy, Schumer emphasized that typical legislative processes may not be enough in this rapidly evolving tech landscape. His proposed “AI Insight Forums” will analyze copyright and intellectual property, national security, employment, and preparation for worst-case scenarios, among other topics.

Schumer stated, “AI is evolving so quickly – but also has such complexity – that a new approach is required. Hearings won’t be enough. We need an all-of-the-above approach because that’s what AI’s complexities and speed demands.”

Schumer’s speech shows the age of AI is dawning, and it’s time for bipartisan cooperation to navigate the technology’s uncharted territories.

“Like many great undertakings in our nation’s history, we must move ahead with bipartisanship and cooperation,” he concluded. “We must cast aside ideological hang-ups and political self-interest. That is the only way our efforts will succeed.”

AI regulation in the US: a timeline

Compared to the EU and China, the US’s AI regulatory landscape remains relatively undefined.

Sector-specific regulations 

While broad AI regulation is lacking in the US, there has been some sector-specific activity that provides clues about future governance.

Notably, the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the Food and Drug Administration (FDA) have all issued guidance on AI.

NIST on AI

On January 26, 2023, NIST released the Artificial Intelligence Risk Management Framework 1.0 (RMF).

This framework is a voluntary guide for technology companies designing, developing, deploying, or using AI systems. It aims to manage the risks associated with AI, emphasizing the trustworthy and responsible development of AI systems.

The RMF identifies 7 characteristics of trustworthy AI systems:

  1. Safe: Monitors, controls, or otherwise intervenes in the AI system to prevent harm or endangerment to human life, health, or property.
  2. Secure and resilient: Protects against attacks and manages potential major incidents. 
  3. Explainable and interpretable: Outputs are explainable and not a ‘black box.’ 
  4. Privacy-enhanced: Protects human autonomy through maintaining anonymity, confidentiality, and control.
  5. Fair, with managed bias: Promotes equity and equality and mitigates various biases.
  6. Accountable and transparent: Provides information about the AI system’s life cycle and maintains governance practices to reduce potential harms.
  7. Valid and reliable: Ensures the AI system functions as intended through ongoing testing and monitoring.

FTC on AI

The FTC has signaled increased scrutiny of businesses using AI, issuing repeated warnings against unfair or deceptive practices.

Officials have indicated that existing laws still apply to AI-related activities. For instance, FTC Chair Lina Khan has said enforcement agencies must be aware of how existing laws apply to AI-driven discrimination and bias.

FDA on AI

The FDA announced plans to regulate AI-powered clinical tools and devices for healthcare decision-making. 

Its regulatory framework, first published in 2019 and expanded in 2023, was among the earliest practical AI-focused regulatory guidance.

Looking ahead

On June 14, 2023, bipartisan legislation was proposed to make social media companies liable for harmful content generated by AI.

The bill, introduced by Senators Josh Hawley, a Republican, and Richard Blumenthal, a Democrat, could pave the way for legal action against companies for harm caused by AI-driven technology, such as deep fakes. 

Deep fakes are already being used for fraud and disinformation, which one security expert predicted could cost US taxpayers billions of dollars annually.

This proposed amendment to Section 230, a law that currently shields online platforms from liability for user-generated content, comes in the wake of two Supreme Court rulings last month that left the scope of Section 230 protections intact.

Section 230 is pivotal to potential libel cases involving AI tools like ChatGPT, as it currently renders legal cases over AI outputs largely intractable. Specifically, Section 230 of the Communications Decency Act of 1996 ensures that “interactive computer services” are not held responsible as the “publisher or speaker” of information provided by users.

Blumenthal referred to the bill as a “first step” toward implementing safeguards around AI.

The bipartisan backing for the bill illustrates an increasing appetite for AI regulation, reinforced by Schumer’s announcement.

There’s a broad consensus between Republicans and Democrats on the need to govern AI, but opinions diverge on practical action. 

The wheels are in motion for comprehensive AI regulation in the US, which could start appearing at a similar time to the EU AI Act, due to come into force in 2024. 

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
