ChatGPT seems to have glitched, spitting out responses ranging from quirky to nonsensical.
The buzz started on a Tuesday when perplexed users flocked to the r/ChatGPT subreddit, sharing screenshots of the AI’s bizarre antics.
One user summed up the confusion, saying, “It’s not just you, ChatGPT is having a stroke.”
The community was soon flooded with descriptions of ChatGPT’s erratic behavior, with users saying it was “going insane,” “off the rails,” and “rambling.”
Amidst growing Reddit chatter, a user named z3ldafitzgerald shared their eerie experience, stating, “It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia. It’s the first time anything AI-related sincerely gave me the creeps.”
As users delved deeper, the encounters grew stranger.
One user, puzzled by ChatGPT’s response to a simple question about computers, screenshotted the AI’s poetic but confusing answer: “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest.”
Speculation was rampant.
Some wondered whether the AI’s ‘temperature’, the sampling parameter that controls output randomness, had been cranked up too high, while others suspected recent updates or new features were to blame.
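To illustrate what that speculation refers to: temperature is a sampling parameter exposed by most LLM APIs, and higher values flatten the token probability distribution, so the model picks lower-probability tokens more often. Here is a minimal sketch using the OpenAI Python SDK that compares the same prompt at a low and a high setting; the model name and prompt are illustrative, and this says nothing about what actually happened inside OpenAI’s systems.

```python
from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Explain how a computer works in one sentence."

# Sample the same prompt at a low and a high temperature.
# The API accepts values from 0 to 2; higher values make
# less probable tokens more likely to be chosen.
for temperature in (0.2, 1.8):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=60,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

At 0.2 the replies tend to be nearly identical across runs; at 1.8 they drift, which is roughly the failure mode users were describing.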
The impact was mostly benign, but it left a lingering question: is there a ‘ghost in the machine’, a ‘shadow self’ waiting to be unleashed?
Reflecting on the incident, Dr. Sasha Luccioni from Hugging Face pointed out the vulnerabilities of relying on closed AI systems.
Dr. Luccioni shared, “Black box APIs can break in production when one of their underlying components gets updated. This becomes an issue when you build tools on top of these APIs, and these break down, too. That’s where open-source has a major advantage, allowing you to pinpoint and fix the problem!”
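In practice, that advantage looks something like the following minimal sketch using Hugging Face’s transformers library: by pinning an open model to an exact revision, no upstream update can silently change your application’s behavior. The model ID and revision here are illustrative placeholders; in production you would pin a specific commit hash from the model repository’s history.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pin the exact model revision (a branch, tag, or commit hash) so
# behavior cannot change when the upstream repository is updated.
# "gpt2" and "main" are illustrative; a real deployment would pin
# a specific commit hash instead of a moving branch.
MODEL_ID = "gpt2"
REVISION = "main"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=REVISION)

inputs = tokenizer("The computer does this as", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With a closed API, by contrast, the weights and serving stack behind the endpoint can change without notice, which is exactly the failure mode Luccioni describes.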
Cognitive scientist Dr. Gary Marcus highlighted that such hallucinations might be far less amusing if these models were hooked up to critical infrastructure or defense systems.
“The Great ChatGPT Meltdown has been fixed. Has OpenAI said anything about what caused it? With society’s increasing dependence on these tools, we should insist on transparency here, esp. if these tools wind up being used in defense, medicine, education, infrastructure, etc,” he wrote on X on February 21, 2024.
This isn’t the first time ChatGPT has exhibited such behaviors.
In 2023, GPT-4’s output quality seemed to mysteriously shift and degrade. OpenAI acknowledged the reports to a degree but gave little indication that it knew why it was happening.
Later, some even speculated whether ChatGPT suffered from seasonal affective disorder (SAD), with one researcher finding that ChatGPT behaves differently when it ‘thinks’ it’s December versus when it ‘thinks’ it’s May.
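The experiment behind that finding is easy to sketch, though the researcher’s exact setup isn’t reproduced here. The idea, roughly: inject a ‘current date’ into the system prompt, sample several completions of the same question, and compare average reply length. The model name, question, and sample size below are assumptions for illustration.

```python
from statistics import mean

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_lengths(date_string: str, n: int = 5) -> list[int]:
    """Ask the same question with a given 'current date' in the system
    prompt and record the length of each reply in characters."""
    lengths = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[
                {"role": "system", "content": f"The current date is {date_string}."},
                {"role": "user", "content": "Write a short guide to sorting a list in Python."},
            ],
        )
        lengths.append(len(response.choices[0].message.content))
    return lengths

# Compare mean reply length under a 'December' versus a 'May' date.
print("December:", mean(sample_lengths("2023-12-15")))
print("May:     ", mean(sample_lengths("2023-05-15")))
```

A real test would need far more samples and a significance check, but this is the shape of the comparison.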
ChatGPT offers periodic reminders of the unpredictable nature of AI. We shouldn’t take its ‘objectivity’ for granted.
A case of anthropomorphization?
ChatGPT’s erratic behavior also showed our tendency to anthropomorphize AI, attributing human-like characteristics, emotions, or intentions to the technology.
Descriptions of its wacky outputs, such as ChatGPT “having a stroke,” “going insane,” or “losing its mind,” immediately liken its behavior to our own.
Because ChatGPT expresses its glitches and unpredictable behavior in natural language, it invites us to read them through a human lens.
Of course, ChatGPT is not sentient and doesn’t ‘suffer’ from any form of ailment. There will be a rational explanation, albeit an exceptionally complex one.