The Biden administration is expected to unveil a comprehensive AI executive order on Monday the 30th of October.
According to a source close to VentureBeat, it’s set to be over 100 pages long, making it an extensive piece of AI regulation. The source said it’s the longest order he’s ever seen.
The executive order will address critical areas such as AI safety and bolstering the US AI industry, ensuring that advanced AI models undergo thorough assessments before deployment by federal workers.
In addition to safeguarding against AI’s risks, the US government has previously expressed its desire to consolidate its world-leading tech industry. The order is also expected to lower barriers to entry to the US for highly skilled workers, fostering a robust domestic tech ecosystem.
The order’s release is strategically timed, set to occur two days before the AI Safety Summit in Britain, where government leaders, top Silicon Valley executives, and civil society groups will convene to discuss the potential risks of AI to society.
The executive order demonstrates the US government’s proactive stance on AI regulation, aligning with global efforts to address the technology’s challenges and uncertainties. The UK recently ramped up discussions surrounding AI’s existential risks, aiming to inspire action ahead of their summit.
Federal government agencies, including the Defense Department, Energy Department, and intelligence agencies, will be mandated to conduct assessments on incorporating AI into their operations, emphasizing the enhancement of national cyber defenses.
Additionally, the executive order is expected to build upon voluntary commitments from major tech firms, including OpenAI, Adobe, Anthropic, Inflection, Meta, Google, and Nvidia, to develop technology that identifies AI-generated images and to share safety data with the government and academia.
The National Institute of Standards and Technology (NIST) is anticipated to lead the “red teaming” assessments of government-purchased large language models (LLMs) as part of the order’s evaluation process. Red teaming is the process of attempting to stress test, hack, and jailbreak digital systems, including machine learning models.
The US is now ready to unveil the first extensive piece of AI legislation in much of the Western world, with the EU’s AI Act still progressing through its lengthy process of legal ascent.