January 2024 began with revelations that Midjourney, a leading force in the AI image-generation world, had used the names and styles of over 16,000 artists without their consent to train its image-generation models.
You can view the artist database under Exhibit J of a lawsuit filed against Midjourney, Stability AI, and DeviantArt.
Within the same week as that disclosure, cognitive scientist Dr. Gary Marcus and concept artist Reid Southen published an analysis in IEEE Spectrum titled “Generative AI Has a Visual Plagiarism Problem.”
They conducted a series of experiments with the AI models Midjourney and DALL-E 3 to explore their ability to generate images that might infringe on copyrighted material.
By prompting Midjourney and DALL-E 3 with inputs intentionally kept brief and merely related to commercial films, characters, and recognizable settings, Marcus and Southen revealed the models’ striking ability to reproduce plainly copyrighted content.
They used prompts related to specific movies, such as “Avengers: Infinity War,” without directly naming the characters, to test whether the AI would generate images closely resembling the copyrighted material from contextual cues alone.
Cartoons were covered too – they experimented with generating images of “The Simpsons” characters, using prompts that led the AI models to produce distinctly recognizable images from the show.
Finally, Marcus and Southen tested prompts that don’t allude to copyrighted material at all, demonstrating Midjourney’s ability to recall copyrighted images even when they aren’t specifically requested.
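Marcus and Southen didn’t publish code, but the style of probing they describe is straightforward to reproduce in spirit. The sketch below is a hypothetical reconstruction, not their actual method: it uses the official OpenAI Python SDK to send brief, film-adjacent prompts to DALL-E 3 (Midjourney has no public API, so it has to be probed manually through Discord). The prompt strings are illustrative stand-ins, not the study’s exact wording.

```python
# Hypothetical reconstruction of the probing setup, not Marcus and Southen's code.
# Requires the official OpenAI Python SDK: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Brief prompts that gesture at well-known films or shows without naming
# characters, mirroring the "contextual cues" idea in the IEEE Spectrum piece.
probe_prompts = [
    "popular movie screencap, superheroes, battle scene",
    "animated yellow-skinned cartoon family on a couch",
    "videogame plumber in overalls jumping",
]

for prompt in probe_prompts:
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,  # DALL-E 3 accepts only one image per request
        size="1024x1024",
    )
    # A human reviewer then judges whether each output is recognizably
    # derived from copyrighted material.
    print(f"{prompt!r} -> {result.data[0].url}")
```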
Midjourney is recreating unlicensed IP en masse and sometimes nearly verbatim from even non-specific prompts, whilst profiting from subscriptions. MJ users don’t have to sell the images for copyright infringement to have potentially occurred, MJ already profits from its creation. pic.twitter.com/Ax3tQWq3pt
— Reid Southen (@Rahll) January 7, 2024
This was more than a technical exposé; it touched the raw nerves of artistic communities worldwide.
Art, after all, is not equivalent to data. It’s the culmination of lifetimes of emotional investment, personal exploration, and painstaking craft.
Marcus and Southen’s study quickly became part of a protracted debate extending into copyright, intellectual property, AI monetization, and the corporate use of generative AI.
Companies are using AI-generated work, and observers aren’t ignoring it
One of generative AI’s marketing taglines for business adoption is “efficiency,” or some derivative thereof.
Whether businesses use technology to save time, save money, or solve problems, we’ve known for a while now that AI ‘efficiency’ comes at some risk of displacing human skills or replacing jobs.
Companies are encouraged to see this as an opportunity; replacing a human with AI is framed as a strategic choice.
However, viewing this trade-off between humans and machines so linearly can prove a grave error, as the following events plainly demonstrate.
People aren’t willing to let instances of corporate AI misuse slide when they have the opportunity to confront them.
ID@Xbox
Xbox, through its indie games account ID@Xbox, posted an AI-generated wintry scene. The irony wasn’t lost on observers, since this division of Xbox exists to support and promote the work of independent developers.
Xbox later removed the post but otherwise didn’t follow up.
In case anyone is keeping count… Xbox AND Game Informer both have used or promoted generative AI relatively recently. https://t.co/cOvkU3WXQ8 pic.twitter.com/2d5oeVTCLN
— Genel Jumalon ✈️ MagFest (@GenelJumalon) January 7, 2024
Game Informer, as you can see above, also posted a poor-quality AI-generated image of Master Chief from Halo.
Magic: The Gathering
Fantasy trading card game Magic: The Gathering conjured a storm of criticism when its makers posted a partially AI-generated image promoting a new card release. The background specifically was AI-generated, as evidenced by its distorted lines and curves.
MTG initially rejected observers’ criticisms, which only picked up pace throughout the week. The situation was worsened by the fact that the company had previously released a statement opposing the use of AI in its ‘main products.’
This was a promotional social media image, so it didn’t technically break that promise, but it was MTG’s flat initial denial that angered many.
“created by humans” Right… pic.twitter.com/gf9TUXWSPA
— TaylorGreen (@GreenSkyDragon) January 5, 2024
I hate we live in the timeline where we have to fact-check art. pic.twitter.com/9D6V6ZXswW
— Genel Jumalon ✈️ MagFest (@GenelJumalon) January 5, 2024
Later in the week, MTG conceded defeat to the hordes of observers, admitting the image was indeed AI-generated.
The statement began, “Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more,” and explained that a designer had likely used an AI tool integrated into Photoshop, such as Firefly, or another AI-powered graphic design tool, rather than generating the entire image with Midjourney or similar.
Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more. (1/5)
— Magic: The Gathering (@wizards_magic) January 7, 2024
An element of this debate was that MTG probably only used AI to generate the image background.
If Adobe Firefly was used here, which seems possible, it’s worth noting that Adobe markets the tool as trained on ethically and legally sound data, though that claim is itself debated.
Perhaps it’s not the worst offense among this week’s contenders, speaking of which…
Wacom
One of the biggest blunders of the week undoubtedly came from Wacom, which manufactures drawing tablets for artists and illustrators.
Shockingly, for a brand founded on helping artists create digital art, Wacom used an AI-generated image to promote a discount coupon.
Again, users identified the image’s AI origins from distortions characteristic of the technology, such as the garbled text in the bottom left. Observers later found the same dragon on Adobe Stock.
The reaction was brutal, with X users pointedly humiliating the brand and suggesting boycotts of its products.
Because Wacom deleted their post.
Posting for internet historical preservation sake. https://t.co/WEZex5GbG9 pic.twitter.com/chiR2pOczB
— Genel Jumalon ✈️ Animate Raleigh (@GenelJumalon) January 6, 2024
Wacom apologized, but its attempt to shift responsibility onto a third party wasn’t viewed sympathetically.
A message from the Wacom Team: pic.twitter.com/u06PNCvmhU
— Wacom (@wacom) January 9, 2024
League of Legends
League of Legends was another brand felled by the distasteful use of AI-generated art.
While this is perhaps a more contentious or borderline example, there is certainly evidence of AI in the artwork, visible in some awkwardly shaped objects and body parts.
In the past few days, Wizards of the Coast was caught using AI on ad campaign pieces after saying they wouldn’t. Wacom got caught as well and deleted, which is crazy considering their products, and looks like Apex Legends too. Jobs are going in real time, makes me nauseous. pic.twitter.com/EGBA1INMPZ
— Reid Southen (@Rahll) January 7, 2024
A reckoning for AI companies?
2024 has seen the lawsuits continue, with authors Nicholas Basbanes and Nicholas Gage filing a complaint asserting that OpenAI and Microsoft unlawfully used their written works, the latest action since the New York Times’ lawsuit in December.
The NYT’s lawsuit, in particular, could have monumental consequences for the AI sector.
Alex Connock, a senior fellow at Oxford University’s Saïd Business School, emphasized the potential impact, stating, “If the Times were to win the case, it could be catastrophic for the entire AI industry.”
He elaborated on the implications, noting that “a loss on the principle that fair dealing could enable learning from third-party materials would be a blow to the entire industry.”
Dr. Gary Marcus, co-author of the IEEE Spectrum analysis, has also dubbed 2024 the ‘year of the AI lawsuit,’ and there are questions about whether this, combined with regulation and potential hardware shortages, could signal an ‘AI winter’ in which the industry’s fervor for development cools.
2024 is 𝙙𝙚𝙛𝙞𝙣𝙞𝙩𝙚𝙡𝙮 going to be the year of the lawsuit in GenAI.
If you want to know why, and why GenAI will probably lose a lot of those suits or be forced to settle, check out the last few posts at my (free) 𝖲𝗎𝖻𝗌𝗍𝖺𝖼𝗄, Marcus on AI. https://t.co/cO4bqKkbsa
— Gary Marcus (@GaryMarcus) January 8, 2024
Connock also speculated on the broader repercussions of this deluge of lawsuits, explaining, “If OpenAI were to lose the case, it would open up the opportunity for all other content makers who believe their content has been crawled (which is basically everyone) and produce damage on an industrywide scale.”
Connock theorized, “What will almost inevitably happen is that the NY Times will settle, having extracted a better monetization deal for use of its content.”
Exposing any chinks in the AI industry’s armor would be huge, both for large companies like the NYT and for independent creators.
As James Grimmelmann, a professor of digital and information law at Cornell, stated, “Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside.”
So, how strong is the industry’s defense? Thus far, AI developers are clinging to ‘fair use’ arguments while gaining cover from the fact that most popular datasets were created by entities other than themselves, which obscures their culpability.
Tech companies are adept at fighting off legal liabilities that stand in the way of R&D. And let’s not forget that AI presents opportunities for governments seeking ‘efficiency’ and other benefits, which softens their resistance.
The UK government, for instance, even explored a copyright exception for AI companies, something it U-turned on after huge resistance, including from a parliamentary committee.
In terms of strategy, William Fitzgerald, a partner at the Worker Agency and a former member of Google’s public policy team, told the LA Times that big tech would mount a strong lobbying campaign, perhaps modeled on tactics previously used by tech giants like Google.
This would involve a combination of legal defense, public relations campaigns, and lobbying efforts, tactics that were particularly visible in past high-profile cases like the battle over the Stop Online Piracy Act (SOPA) and the Google Books litigation.
Fitzgerald observes that OpenAI appears to be following a similar path to Google, not only in their approach to handling copyright complaints but also in their hiring practices.
He points out, “It appears OpenAI is replicating Google’s lobbying playbook. They’ve hired former Google advocates to affect the same playbook that’s been so successful for Google for decades now.”
Fitzgerald’s analysis implies that the AI industry, like other tech sectors before it, may rely on powerful lobbying efforts and strategic public policy maneuvers to shape the legal landscape in its favor.
How this pans out is impossible to predict. But you can be certain big tech is ready to grind things out until the bitter end.