OpenAI CEO Sam Altman has visited India and South Korea on his tour of Asia, which also includes Israel, Jordan, Qatar, and the UAE. Next week, he’s set to visit Japan, Singapore, Indonesia, and Australia.
excited to visit israel, jordan, qatar, the uae, india, and south korea this week!
— Sam Altman (@sama) June 4, 2023
Altman in India
During his visit to India, Altman spoke with The Economic Times, where he referred to ChatGPT as something of a ‘Pandora’s Box,’ saying he worries he “did something really bad” by creating the AI.
“What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT,” Altman told Satyan Gajwani, the vice chairman of Times Internet. Altman said he hopes ChatGPT will lead the way in AI governance, emphasizing the technology’s potential to do good rather than its risks.
He said, “OpenAI is about a quest. This is one of the coolest things humanity has ever built. We want to make sure people get the benefit.”
Altman then addressed the sci-fi-esque AI risks highlighted in recent weeks, including Terminator-like scenarios where ‘the robots take over.’ He said, “I do think there are SciFi concerns which will turn out to be wrong,” but added that later versions of AI, or ‘superintelligent’ AI, “may be an extremely different thing.”
When asked about AI regulation, Altman reiterated the nuclear-style watchdog system he had previously proposed: “Let’s have a system in place so that we can audit people who are doing it, license it, have safety tests before deployment.”
Altman in South Korea
Speaking at an event during his visit to South Korea on Friday, Altman called for a united global approach to regulating AI: “As these systems get very, very powerful, that does require special concern, and it has global impact. So it also requires global cooperation.”
Altman said his concern was less about humanity’s ability to act on AI risk and more about the speed at which AI might evolve.
“If you study the history of technological revolutions, seems like roughly in two generations, we can adapt to almost any amount of labor market change. But if this all happens in 10 years, that’s a new challenge,” he elaborated.
At another event in South Korea, when questioned about AI’s impact on jobs, which for many is the most immediate concern, Altman replied, “I think what will really happen is not that none of us have jobs, but we have different kinds of jobs that may not look much like the jobs of today.”
He went on, “When people 100 years from now look back at us now, they’ll be like, ‘Wow, I can’t believe they lived like that.’”
When a member of the audience asked how students should prepare to “survive” AI, Altman pushed back: “You are about to enter, I think like, the greatest golden age of human possibility, technological development, economic growth.” Last month, Nvidia CEO Jensen Huang said now was probably the best time for students to graduate since the inception of the personal computer.
“[The] ability to learn new things fast and adapt to them and sort of evolve yourself into technology, those are the kinds of skills that I think are going to be very much rewarded,” Altman said.
With Altman arguably the most influential figure in AI development, at the helm of its most influential company, we have to hope his optimism about humanity controlling AI proves prescient.
However, he has gained something of a reputation for veering between optimism about AI’s future and paranoia, illustrated once again by his admission that he sometimes loses sleep over having created ChatGPT. Hyperbolic or not, worries about AI are clearly not confined to the public – they affect its creators, too.