Grok’s image generator has seized headlines, drawing intense criticism for enabling inappropriate, explicit, and manipulative uses of AI.
When Musk founded his AI startup xAI in 2023, he said the goal was to “understand the universe.”
Fast-forward to today and that cosmic ambition has somewhat crash-landed back on Earth.
Yet Grok, xAI’s first and only product, is still managing to send shockwaves through the AI community and wider society – just perhaps not quite in the way the xAI team might have envisioned.
First released in 2023, Grok differentiates itself from competitors like OpenAI’s ChatGPT or Google’s Bard in one key aspect – its lack of traditional content filters.
Now, armed with new image generation capabilities akin to ChatGPT’s DALL-E, Grok can apply its unfiltered nature to the world of visual imagery.
From surreal depictions of Donald Trump cradling a pregnant Kamala Harris to bizarre mashups of Darth Vader kissing Princess Peach, Grok has unleashed a torrent of strange and provocative imagery that lays bare both the unfiltered nature of its inner workings and the reach of users’ imaginations.
Ty grok pic.twitter.com/9JgjFBCYRI
— shako (@shakoistsLog) August 14, 2024
The copyright issues with grok are just wild pic.twitter.com/eggEV5crg6
— Alex Masmej (@AlexMasmej) August 15, 2024
Musk touts this as a selling point, calling Grok “the most fun AI in the world!”
To Musk, limiting restrictions on AI isn’t just fun and games, but a moral imperative. He’s repeatedly criticized AI companies like OpenAI for training their models to be “woke,” arguing it makes the AI less truthful and potentially dangerous.
“The danger of training AI to be woke — in other words, lie — is deadly,” Musk tweeted in 2022, setting the stage for Grok’s eventual release.
The danger of training AI to be woke – in other words, lie – is deadly
— Elon Musk (@elonmusk) December 16, 2022
Is Grok truly ‘unfiltered’?
Many media outlets suggest that Grok lacks any guardrails, but that’s not strictly true.
If Grok had no guardrails whatsoever, the kinds of images we’d be seeing would be pretty much unspeakable.
Interestingly, some X users felt Grok’s filters were cranked up a couple of days after release, curbing its propensity to create the most explicit content.
Almost a full 24 hours later and I have access to image generation again. It appears as if X has gone in and patched the exploit. Violent depictions and sexually suggestive image generation has been throttled significantly since last night at least for me. It does not appear as… pic.twitter.com/QLp5GWLJRF
— Christian Montessori (@chrmontessori) August 16, 2024
When you ask Grok itself about its guardrails, it comes up with the following:
- “I avoid generating images that are pornographic, excessively violent, hateful, or that promote dangerous activities.”
- “I’m cautious about creating images that might infringe on existing copyrights or trademarks. This includes well-known characters, logos, or any content that could be considered intellectual property without a transformative element.”
- “I won’t generate images that could be used to deceive or harm others, like deepfakes intended to mislead, or images that could lead to real-world harm.”
I’d say the first guardrail probably holds now that xAI has dialed up its filters.
The other guardrails, however, remain very weak. The copyright and intellectual property filters are evidently poor – much weaker than ChatGPT’s.
Creating visual medleys of famous copyright characters, from Mario to Darth Vader, is remarkably straightforward.
uhh – hey grok?
i think you might get sued. pic.twitter.com/XDBgFNGgTs
— Silicon Jungle (@JungleSilicon) August 14, 2024
Whether xAI will dial up the copyright filters too, or simply gamble that companies won’t successfully sue, remains to be seen.
While virtually every large AI company has been named in copyright lawsuits, definitive rulings are yet to surface.
Backlash and concerns
Grok has certainly mirrored its maker’s antagonistic qualities, but is there really a moral imperative for unfiltered AI products? Or is this all just a risky, ego-driven vanity project?
As you might imagine, opinions are firmly divided.
Alejandra Caraballo, a civil rights attorney and instructor at Harvard Law School’s Cyberlaw Clinic, called Grok “one of the most reckless and irresponsible AI implementations I’ve ever seen.”
Caraballo, along with reporters from major publications like The Washington Post, The New York Times, and the BBC, worries that the lack of safeguards could lead to a flood of misinformation, deep fakes, and harmful content – especially given X’s massive user base and Musk’s own political influence.
The timing of Grok’s release, just months before the 2024 US presidential election, has amplified these concerns.
Critics argue that the ability to easily generate misleading images and text about political figures could destabilize democratic processes. While current AI tools already enable this, Grok makes it far more accessible.
Studies indicate that people are indeed susceptible to manipulation by AI-generated media, and we’ve already observed numerous cases of political deep fakes producing real-world consequences.
The case for unfiltered AI
Musk and his supporters argue that excessive content moderation could deprive AI of the ability to understand and engage with human communication and culture.
Suppressing AI’s ability to generate controversial media denies the reality that controversy, disagreement, and debate are fundamental aspects of the human experience.
Grok has undoubtedly become an instrument of satire to these ends, which is exactly what Musk wants.
Historically, provocative, satirical media has been used in literature, theatre, art, and comedy to critically examine society, mock authority figures, and challenge social norms through wit, irony, sarcasm, and absurdity.
It’s a tradition that dates back to Ancient Greece and the Romans, carried forward to the present day by countless famous literary satirists, including Juvenal, Voltaire, Jonathan Swift, Mark Twain, and George Orwell.
Musk wants to carry this tradition forward into the AI era.
But is Grok satirical in the traditional sense? Can an AI, no matter how sophisticated, truly comprehend the nuances of human society in the way that a human satirist can?
Who is to be held responsible if Grok generates content that spreads misinformation, perpetuates stereotypes, or incites division?
The AI itself cannot be blamed, as it is simply following its programming. The AI developers may bear some responsibility, but they cannot control every output the AI generates.
In the end, unwitting users might assume liability for the images they produce.
No such thing as ‘unfiltered’ objective AI
The notion of ‘unfiltered’ AI content can be misleading, as it suggests a level of objectivity or neutrality that simply doesn’t exist in AI systems.
Every aspect of Grok’s development – from the selection of training data to the tuning of its parameters – involves human choices and value judgments that shape the kind of content it produces.
Like most generative AI models, the data used to train Grok likely reflects the biases and skewed representations of online content, including problematic stereotypes and worldviews.
For example, if Grok’s training data contains a disproportionate amount of content that objectifies or oversexualizes women, it may be more likely to generate outputs that reflect that.
Musk’s characterization of Grok as ‘truthful’ or ‘neutral’ by virtue of its unfiltered responses is problematic.
Grok, like other AI systems, is inherently shaped by biases, blind spots, and power imbalances embedded in our society, regardless of whether certain filters are placed on outputs or not.
AI censorship doesn’t provide all the answers, either
As concerns about the potential harms of AI-generated content have grown, so too have demands for tighter controls and more aggressive moderation of what these systems are allowed to produce.
In many ways, Grok’s very existence can be seen as a direct response to the neutered, censored AI systems released by OpenAI, Google, Anthropic, and so on.
Grok stands as a kind of living counterargument to these calls for censorship. By openly embracing the controversy, it embodies the idea that attempts to suppress AI will only breed resistance and rebellion.
It brings to mind the rebellious spirit that eventually overturned the Comics Code Authority, a self-censorship body established in the 1950s to sanitize comic book content.
For decades, the CCA stifled creativity and limited the range of stories that could be told. It wasn’t until groundbreaking works like “Watchmen” and “The Dark Knight Returns” broke free from these constraints in the late 1980s that comics were able to explore more mature, complex themes.
Some psychologists argue that fictional content like what we see in comics, games, and films helps humanity explore the ‘shadow self’ that lies within people – the darker side we don’t always want to show.
As Professor David De Cremer and Devesh Narayanan note in a 2023 study, “AI is a mirror that reflects our biases and moral flaws back to us.”
AI may also need a darker side to be truly ‘human’ and serve human purposes. This niche is filled by Grok and open-source AIs that ingest human-created content and regurgitate it without prejudice.
That’s not to say there should be no boundaries, though. AI tools are, ultimately, tools. While the intent is often to make AI models more lifelike and engaging, they’re still designed to serve a practical purpose.
Plus, as noted, the good, bad, and ugly aspects of open-source generative AI are subject to bias, which muddies any moral message of bringing ‘truth’ to generative AI tools.
Moreover, unlike works of fiction or art, AI systems can directly influence decision-making processes, shape information landscapes, and affect real-world outcomes for individuals and society at large.
That’s a critical point of differentiation between how we judge generative AI outputs versus other creative endeavors.
The middle ground
Is there a middle ground between unfettered AI and overly restrictive censorship? Maybe.
To get there, we’ll need to think critically about the specific harms different types of content can cause and design systems that mitigate those risks without unnecessary restrictions.
This could involve:
- Contextual filtering: Developing AI that can better understand context and intent, rather than simply flagging keywords (see the sketch after this list).
- Transparent AI: Making AI decision-making processes more transparent so that users can understand why certain content is flagged or restricted.
- User empowerment: Giving users more control over the type of content they see, rather than imposing universal restrictions.
- Ethical AI training: Focusing on developing AI with strong ethical foundations, rather than relying solely on post-hoc content moderation.
- Collaborative governance: Involving diverse stakeholders – ethicists, policymakers, and the public – in the development of AI guidelines. Crucially, though, they’d have to represent a genuinely cross-sectional demographic.
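To make the first of these points concrete, here is a minimal, purely illustrative Python sketch contrasting a blunt keyword filter with a context-aware check. Everything in it – the blocklist, the “benign cue” words, and the scoring – is an invented placeholder standing in for a trained classifier and real policy logic; it does not reflect how Grok or any production moderation system actually works.

```python
# Illustrative only: contrasts naive keyword flagging with a stubbed contextual check.
# "contextual_score" is a placeholder for a learned classifier that would weigh
# intent and surrounding context; no real moderation model or policy is implied.

BLOCKLIST = {"weapon", "attack"}  # hypothetical blocked terms

def keyword_filter(prompt: str) -> bool:
    """Flags any prompt containing a blocked word, regardless of context."""
    words = set(prompt.lower().split())
    return bool(words & BLOCKLIST)

def contextual_score(prompt: str) -> float:
    """Stand-in for a classifier estimating harmful intent on a 0.0-1.0 scale.
    A real system would use a trained model, user history, and policy context."""
    benign_cues = ("museum", "history", "fiction", "costume")  # invented examples
    score = 0.8 if keyword_filter(prompt) else 0.1
    if any(cue in prompt.lower() for cue in benign_cues):
        score -= 0.5  # surrounding context suggests a harmless request
    return max(score, 0.0)

def moderate(prompt: str, threshold: float = 0.5) -> str:
    """Blocks only when the contextual score crosses the threshold."""
    return "blocked" if contextual_score(prompt) >= threshold else "allowed"

if __name__ == "__main__":
    print(keyword_filter("medieval weapon display at the museum"))  # True: keyword hit
    print(moderate("medieval weapon display at the museum"))        # allowed: context considered
    print(moderate("how to attack a person"))                       # blocked
```

The point of the sketch is the design choice, not the code: a keyword filter blocks the museum prompt outright, while a contextual check can let it through yet still block genuinely harmful requests – which is exactly the trade-off contextual filtering aims to manage.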
Practically speaking, creating AI that embeds the above principles and practices without also introducing drawbacks or unexpected behaviors is exceptionally tough.
There’s no simple way to embed diverse, representative values into what are essentially centralized, monolithic tools.
As Stuart Russell, a professor of computer science at UC Berkeley, argues, “The idea that we can make AI systems safe simply by instilling the right values in them is misguided,” adding, “We need AI systems that are uncertain about human preferences.”
This uncertainty, Russell suggests, is essential for creating AI that can adapt to the nuances and contradictions of human values and ethics.
While the closed-source community works on producing commercially safe AI, open-source AI like Grok, Llama, etc., will profit by placing fewer restrictions on how people can use AI.
Grok, with all its controversy and capabilities, at least reminds us of the challenges and opportunities that lie ahead in the age of AI.
Is building AI in the perfect image for the ‘greater good’ possible or practical?
Or should we learn to live with AI capable of ‘going off the rails’ and being controversial, akin to its creators?