Raw Story, AlterNet, and The Intercept sue OpenAI and Microsoft

March 1, 2024


What, another one?!

OpenAI and Microsoft have been hit with yet another lawsuit, this time from digital media outlets Raw Story, AlterNet, and The Intercept over allegations of copyright infringement.

The outlets have taken legal action against the pair for using copyrighted content without proper credit to train their AI technologies, demanding monetary compensation and the removal of their content from the models' training datasets.

It’s a familiar story and the second lawsuit OpenAI has faced within the last 24 hours, as Elon Musk is attempting to sue the company’s co-founders, Sam Altman and Greg Brockman, for allegedly breaching OpenAI’s founding agreement.

This new copyright lawsuit alleges that ChatGPT was trained on copyrighted journalism without the necessary credit or citation and demands at least $2,500 per infringement.

The submission explains, “Generative artificial intelligence (AI) systems and large language models (LLMs) are trained using works created by humans. AI systems and LLMs ingest massive amounts of human creativity and use it to mimic how humans write and speak. These training sets have included hundreds of thousands, if not millions, of works of journalism.”

It also cites a recent study from Copyleaks, stating, “According to the award-winning website Copyleaks, nearly 60% of the responses provided by Defendants’ GPT-3.5 product in a study conducted by Copyleaks contained some form of plagiarized content, and over 45% contained text that was identical to pre-existing content.”

In a bold statement, John Byrne, CEO and founder of Raw Story and owner of AlterNet, articulated the mounting frustration with Big Tech’s practices, saying, “It is time that news organizations fight back against Big Tech’s continued attempts to monetize other people’s work. Big Tech has decimated journalism. It’s time that publishers take a stand.” 

As in other lawsuits, the chief concern here is that AI companies like OpenAI trained their models on vast quantities of data they consider ‘open source,’ ‘in the public domain,’ or ‘fair use.’

The issue is that these concepts are highly ambiguous, and copyright law itself was not written with AI model training in mind.

OpenAI recently responded to the New York Times lawsuit, probably the highest-profile of the lot, alleging that the NYT paid someone to ‘hack’ its products.

OpenAI argued that the NYT used complex prompts to force its models to produce instances of copyright infringement.

However, the NYT hit back, saying that because AI companies don’t disclose their training data, people have no choice but to reverse-engineer the products to expose what they contain.

As pressure on generative AI companies mounts, the industry is approaching a crossroads.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
