OpenAI CEO Sam Altman ramps up discussion of AI regulation in anticipation of “superintelligence.”
In a new article published on the OpenAI blog, the company’s CEO Sam Altman and colleagues Ilya Sutskever and Greg Brockman discuss AI’s progression toward “superintelligence,” a level of capability beyond even artificial general intelligence (AGI).
This comes less than a week after Sam Altman testified before Congress, stating that AI requires “urgent” regulation to safeguard society while preserving its benefits.
Superintelligence describes AIs that surpass the cognitive abilities of humans, a milestone that Altman, Sutskever, and Brockman say is achievable in the next ten years.
Such AIs would be capable of productivity equivalent to some of the world’s largest companies, reshaping society to both positive and negative ends.
The article suggests three potential avenues for mitigating the risks of superintelligence:
1: Coordination among AI leaders
First, Altman, Sutskever, and Brockman call for close collaboration between AI leaders to maintain safety and smooth the integration of superintelligence into society.
This could be implemented through government projects or mutual agreements between companies, with AI leaders agreeing to limit the growth rate of their models’ intelligence.
2: International monitoring
Second, they suggest establishing an international agency for superintelligence, similar to the International Atomic Energy Agency (IAEA), which was founded in 1957 to mitigate the risks of emerging nuclear technology.
Regulatory frameworks created by the agency would be implemented voluntarily and through national governments.
3: Public oversight
Thirdly, OpenAI calls for public engagement in the rules surrounding superintelligence, suggesting that the public should oversee the “bounds and defaults” of AI through a democratic system.
Altman, Sutskever, and Brockman say, “We don’t yet know how to design such a mechanism, but we plan to experiment with its development.” OpenAI had previously described defaults as an AI’s “out-of-the-box” behavior and bounds as the limits placed on its capabilities.
OpenAI also argues that open-source projects developed below a set capability “threshold” should be encouraged.
They state, “We believe it would be unintuitively risky and difficult to stop the creation of superintelligence.”
Given the exponential growth of AI in the past decade, OpenAI’s visions for a regulated AI landscape might prove prescient.
However, the AI industry is sure to be tested by the company’s call for collaboration and its willingness to be regulated. So far, there is little evidence of cross-collaboration between AI leaders.
Balancing competitive goals with protecting the public will be challenging, and if OpenAI is to be believed, the time to act on AI superintelligence is now.