Multiple organizations representing publishers of journalism, magazines, and other media have endorsed the newly released Global Principles for Artificial Intelligence.
The principles set out a framework for how media organizations expect AI developers to use published content.
While acknowledging the potential benefit of AI, the principles make it clear that media publishers aren’t happy with the status quo.
The introduction to the principles states, “The indiscriminate appropriation of our intellectual property by AI systems is unethical, harmful, and an infringement of our protected rights.”
The last part of that statement is still being decided in a number of courtrooms, with companies like OpenAI strongly disputing that it infringes on copyright when training its models.
Danielle Coffey, CEO of The News/Media Alliance, said, “AI systems are only as good as the content they use to train them, and therefore developers of generative AI technology must recognize and compensate publishers accordingly for the tremendous value their content contributes.”
Some of the Global Principles for AI include:
- Respect intellectual property rights protecting the organizations’ investments in original content.
- Leverage efficient licensing models that can facilitate innovation through training of trustworthy and high-quality AI systems.
- Provide granular transparency to allow publishers to enforce their rights where their content is included in training datasets.
- Clearly attribute content to the original publishers of the content.
- Recognize publishers’ invaluable role in generating high-quality content, both for training and for surfacing and synthesizing information.
- Comply with competition laws and principles and ensure that AI models are not used for anti-competitive purposes.
- Promote trusted and reliable sources of information and ensure that AI-generated content is accurate, correct, and complete.
- Not misrepresent original works.
- Respect the privacy of users that interact with them and fully disclose the use of their personal data in AI system design, training, and use.
- Align with human values and operate under global laws.
Global Publishing and Journalism Organisations Unite to Release Comprehensive Global Principles for Artificial Intelligence https://t.co/2jRBracbtX
— World Editors Forum (@WorldEditors) September 6, 2023
Are the principles workable?
On the face of it, the principles set out fair expectations of how published content should be used. A news publisher that pays its journalists to write reports understandably wouldn’t want someone taking that material for free.
Accommodating these principles in practice will be difficult for AI companies in some cases, and unpalatable or even impractical in others.
OpenAI, while currently defending itself against copyright lawsuits, has shown that it is open to licensing discussions. Earlier this year it struck a deal with The Associated Press to license the agency’s content dating back to 1985.
The requirement for granular transparency will be a harder sell. Almost all of the big tech companies engaged in AI research have declined to offer specifics on their training data.
Even clearly attributing content to the original publisher could be difficult in practice.
If you were to ask a chatbot like Claude or ChatGPT about an important event, it would have to rely on multiple published reports and aggregate them into an answer. Should it then add a list of URLs for each article it used to compile its response?
Maybe. That might satisfy the requirement laid out in the principles. But would you use an AI that spat out a long list of references when all you wanted was a short answer?
The requirement to “promote trusted and reliable sources” is also somewhat subjective. Should The Washington Post make the list? What about Fox News? It really depends on who you ask.
Good journalism is crucial in the fight against disinformation, so it’s promising to see a collective effort from the media to find a way to make AI work with them.
A similarly unified response from AI companies, addressing each of these principles, seems some way off.