OpenAI’s CEO, Sam Altman, stirred the tech community when he announced that artificial general intelligence (AGI) had been “achieved internally,” only to later say he was “just memeing.”
Sam Altman is hardly as controversial as Elon Musk, but a playful jest that some deemed inappropriate has made him a somewhat polarizing figure. After all, he is arguably one of the most influential people in technology.
Altman made the joke in a comment on the Reddit forum r/singularity, writing, “AGI has been achieved internally.”
AGI refers to AI systems that match or even surpass human cognitive capabilities. OpenAI describes itself as an AGI developer and intends to be the first to break through into that paradigm.
Altman soon edited his flippant statement, adding a caveat: “Obviously this is just memeing, y’all have no chill, when AGI is achieved it will not be announced with a Reddit comment.”
Many took to Reddit and X to accuse Altman of trolling, a charge that lands a little harder when people are genuinely worried about the future of AI.
“agi has been achieved internally (EDIT obviously this is just memeing, y’all have no chill! when agi is achieved it will not be announced with a reddit comment…)” pic.twitter.com/nrEHnvuEcS
— Smoke-away (@SmokeAwayyy) September 26, 2023
The forum, r/singularity, primarily delves into discussions on the technological singularity – the point when computer intelligence could potentially eclipse human intelligence, leading to unpredictable and irreversible changes to the world as we know it.
The backdrop to this commentary has roots in a hypothetical scenario penned by Oxford University philosopher Nick Bostrom in his influential 2014 book, “Superintelligence: Paths, Dangers, Strategies.”
Bostrom describes the existential risks that ultra-advanced AI might pose. One of his thought experiments, known as the Paperclip Maximiser, paints a picture of an AGI whose innocent intent – to produce as many paperclips as possible – could inadvertently lead to humanity’s downfall.
He explains that the AI would come to see human bodies as a valuable source of atoms for manufacturing paperclips.
In a humorous nod to this thought experiment, following Altman’s post, OpenAI researcher Will Depue shared an AI-rendered image on X showing “OpenAI offices overflowing with paperclips!”
BREAKING NEWS: OpenAI offices seen overflowing with paperclips! pic.twitter.com/rnV4dLi0cU
— will depue (@willdepue) September 26, 2023
Bostrom also posits, “The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off.”
It’s a familiar science fiction plotline – in The Matrix, for example, the machines harvest humans for energy.
As Morpheus says, “The human body generates more bioelectricity than a 120-volt battery and over 25,000 BTUs of body heat.” Bostrom theorizes that an AI might reach similar conclusions under the right conditions.
Being replaced by paperclips would be a quirky fate for humanity, to say the least.