The 30th of October, 2023, marks an advancement in AI governance as President Joe Biden has announced a comprehensive executive order setting robust rules and guidelines for AI.
The order is expected to usher in a new era of transparency and standardization for AI companies, highlighting the necessity of clear content labeling and watermarking practices.
“To realize the promise of AI and avoid the risk, we need to govern this technology, there’s no way around it,” Biden spoke at the signing event at the White House.
At the event, Biden also spoke of AI deep fakes, announcing to humor among the audience, “I’ve watched one of me…I said, the hell did I say that?”
The order focuses on heightened transparency from AI developers and establishes a series of new standards, particularly for labeling AI-generated content.
The White House aims to enhance “AI safety and security” through the order. It includes a surprising requirement for developers to share safety test results for new AI models with the US government if the technology could potentially threaten national security.
This involves the Defense Production Act, typically reserved for national emergencies.
For those not familiar with US lawmaking, executive orders are not legislation – they do not create new laws or change existing laws. Instead, they instruct Congress on policy priorities.
The order, published here, does contain numerous deadlines specifying various actions that need to be taken to usher forth AI legislation.
Executive orders must be based on constitutional or statutory authority and cannot be used to circumvent Congress or create laws unilaterally.
As a result, many have highlighted that the executive order lacks enforcement mechanisms. It doesn’t carry the weight of congressional legislation on AI.
“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” observes Anu Bradford, a law professor at Columbia University.
The order comes days before the UK is set to the milestone AI Safety Summit, which will see politicians, researchers, tech executives, and members of civil society convene at Bletchley Park. Vice President Kamala Harris is attending. Notably, China will be represented at the summit, too.
“We intend that the actions we are taking domestically will serve as a model for international action,” Harris said at the White House event.
This statement draws attention to criticisms that the order could potentially undermine open international collaboration ahead of the AI Safety Summit.
The US was a slow starter in AI regulation, and the nature of executive orders means it remains one. It’s perhaps brazen to suggest other countries should follow its trajectory.
Harris continued that the US would “apply existing international rules and norms with a purpose to promote global order and stability, and where necessary to build support for additional rules and norms which meet this moment.”
What the White House has to say
The order introduces stringent standards for AI, safeguarding Americans’ privacy, fostering equity and civil rights, protecting consumers and workers, fueling innovation and competition, and reinforcing American leadership in AI.
It complements voluntary commitments from 15 leading companies to promote the safe and trustworthy development of AI.
One of the most notable elements of the order is that the President has stated developers of ‘powerful’ AI systems will need to share safety test results and critical information with the US government, aiming to ensure these systems are safe and trustworthy before public release.
The National Institute of Standards and Technology (NIST) will lead ‘red teaming’ efforts to test and analyze AI model safety.
Red team is the process of probing and stress testing AI model functionality and security.
Regarding privacy, the President’s call for bipartisan data privacy legislation reflects an understanding of the situation’s urgency.
However, as noted, the effectiveness of these measures will ultimately depend on the swift and decisive action of Congress, which has been historically slow in legislating on tech-related issues.
Additionally, the executive order takes a strong stance on advancing equity and combating algorithmic discrimination, with directives to ensure fairness in housing, criminal justice, and federal benefits programs.
Again, while these are positive steps, the success of these initiatives will hinge on rigorous enforcement and continuous oversight.
The order addresses eight key areas.
Here’s how President Biden’s landmark Executive Order on AI will ensure America leads the way in this period of technological change while keeping Americans safe. pic.twitter.com/SvBPxiZk3M
— The White House (@WhiteHouse) October 30, 2023
1. New standards for AI safety and security
- Developers of potent AI systems must share safety test results and crucial information with the U.S. government.
- The development of standards, tools, and tests to ensure AI systems’ safety and trustworthiness, led by the NIST.
- Protection against AI’s potential in engineering hazardous biological materials by establishing robust standards for biological synthesis screening.
- Establishing protocols to safeguard Americans from AI-enabled fraud and deception, including standards for detecting AI-generated content and authenticating official communications.
- Launching an advanced cybersecurity program to leverage AI in securing software and networks.
2. Protecting Americans’ privacy
- Advocating for federal support in the development and use of privacy-preserving techniques in AI.
- Strengthening research in privacy-preserving technologies.
- Enhancing federal agencies’ guidelines to ensure privacy in the collection and use of data, especially personally identifiable information.
3. Advancing equity and civil rights
- Providing guidance to mitigate AI’s potential to exacerbate discrimination in housing, justice, and employment.
- Promoting fairness across the criminal justice system through developing best practices in AI application.
4. Standing up for consumers, patients, and students
- Encouraging responsible AI use in healthcare for the development of affordable, life-saving medications and ensuring safety in AI-involved healthcare practices.
- Facilitating AI’s transformative role in education, supporting educators in deploying AI-enhanced learning tools.
5. Supporting workers
- Developing principles and best practices to balance the benefits and harms of AI in the workplace.
- Conducting comprehensive studies on AI’s impact on the labor market and fortifying federal support for workers facing labor disruptions due to AI.
6. Promoting innovation and competition
- Catalyzing AI research nationwide and ensuring a competitive AI ecosystem.
- Streamlining immigration processes for highly skilled individuals in critical AI sectors.
7. Advancing American leadership abroad
- Strengthening international collaborations and frameworks in AI.
- Promoting safe and responsible AI development and deployment worldwide.
8. Ensuring responsible and effective government use of AI
- Providing clear guidance for federal agencies on AI use, procurement, and deployment.
- Enhancing AI talent acquisition across the government and providing AI training to federal employees.
The Biden-Harris Administration is trying to strike a balance between retaining and enhancing the US’ world-leading AI industry while stunting obvious risks.
Deep fakes and misinformation are at the top of most people’s minds, seeing as we now have tangible evidence that they could influence election votes.
With the US general election next year, it’s perhaps unsurprising that the order increases pressure to watermark and highlight AI-generated content so users can easily determine real from fake.
Technically speaking, however, there are no robust solutions for achieving this in practice.
Industry reactions
Industry reactions – naturally – are mixed. Many praise the rapid progress toward signing the order, whereas others highlight that legislation and information about enforcement action are lacking.
Again, the order indicates that the White House seeks Congress to act on AI policy.
The only exception here is the Defense Production Act, which has been invoked to force AI companies to notify the government when developing models that interact with national security.
The official wording is an AI model that poses a “serious risk to national security, national economic security or national public health and safety.”
The AI Executive Order is a bit ridiculous and pretty hard to enforce.
Here are the issues –
1. Any foundation model that poses a serious risk to national security – How do you determine if something is a “serious risk to national security!”?
If this is about…
— Bindu Reddy (@bindureddy) October 30, 2023
Some highlighted that, compared to the EU AI Act, there’s no guidance on training data transparency, over which multiple AI developers are facing lawsuits.
Absent from new exec order:
AI companies must reveal their training set.
To develop safe AI, we need to know what the model is trained on.
Why aren’t the AI safety orgs advocating for this?https://t.co/yjr21bNIK4
— Damek Davis (@damekdavis) October 30, 2023
Adnan Masood, Chief AI Architect at UST, applauded the initiative, stating, “The order underscores a much-needed shift in global attention toward regulating AI, especially after the generative AI boom we have all witnessed this year.”
Avivah Litan, a Vice President at Gartner Research, noted that while the rules start off strong, there are still areas where the mandates fall short. She questioned the definition of ‘most powerful’ AI systems, the application to open source AI Models, and enforcing content authentication standards across social media platforms.
Bradley Tusk, CEO at Tusk Ventures, said AI developers aren’t likely to share proprietary data with the government, stating, “Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited.”
Randy Lariar, AI security leader at Optiv, said, “I worry that many open-source models, which are derived from the big foundational models, can be just as risky without the burden of red teaming — but this is a start.”
Ashley Leonard, chief executive officer of Syxsense, added that it will be very interesting to see how the order is implemented. “It takes real resources — budget, time, and staff — for even the most advanced companies to keep up with vulnerabilities and bug fixes,” said Leonard.
Max Tegmark, a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute, highlighted that the order needs to come equipped with a plan for creating and enforcing legislation, stating, “Policymakers, including those in Congress, need to look out for their citizens by enacting laws with teeth that tackle threats and safeguard progress.”
Jaysen Gillespie, Head of Analytics and Data Science at RTB House, viewed the executive order positively, stating that AI regulation is a topic where a bipartisan approach is truly possible.
Alejandro Mayorkas from Homeland Security said, “The unprecedented speed of AI’s development and adoption presents significant risks we must quickly mitigate, along with opportunities to advance and improve our work on behalf of the American people…It directs DHS to manage AI in critical infrastructure and cyberspace, promote the adoption of AI safety standards globally, reduce the risk of AI’s use to create weapons of mass destruction, combat AI-related intellectual property theft, and ensure our immigration system attracts talent to develop responsible AI in the United States.”
Casey Ellis, founder and CTO of Bugcrowd, said, “The directive mandates developers to share safety test results with the U.S. government, ensuring AI systems are extensively vetted before public release. It also highlights the importance of AI in bolstering cybersecurity, particularly in detecting AI-enabled fraud and enhancing software and network security. The order also champions the development of standards, tools, and tests for AI’s safety and security.”
A necessary step, but challenges ahead
President Biden’s executive order on AI attempts to be comprehensive, but Congress must follow up its urgency with legislative action. That is far from guaranteed.
While it establishes stringent standards and emphasizes transparency, accountability, and the prevention of discriminatory practices, the true effectiveness of this initiative will depend on its implementation and the ability to balance regulatory oversight with fostering innovation.
With the AI Safety Summit imminent, conversations surrounding AI’s risks and how to mitigate them are escalating.
Comprehensively controlling AI still relies on the speed and efficiency of lawmaking and the ability to enforce it.