OpenAI discusses methods of mitigating AI-generated political misinformation

  • An OpenAI blog post highlighted the company's commitment to reducing AI-driven election interference
  • Strategies include preventing AI abuse, enhancing transparency, and improving access to accurate voting information
  • The company is also working on incorporating provenance metadata into DALL-E 3 images

As the world approaches several high-profile elections this year, an OpenAI blog post describes methods to fortify the democratic process against the misuse of AI technology. 

OpenAI’s strategy revolves around three core areas: preventing abuse, enhancing transparency, and improving access to accurate voting information.

The company is actively working to thwart damaging deep fakes, curb AI-scaled operations to influence voter opinions, and clamp down on chatbots impersonating political candidates.

OpenAI also stated they’re actively refining their tools to enhance factual accuracy, reduce bias, and decline requests that could harm election integrity, for example refusing to generate images of real people, including candidates.

In their words, “We expect and aim for people to use our tools safely and responsibly, and elections are no different.” 

This includes rigorous red teaming of new systems, soliciting user and partner feedback, and building safety mitigations. 

Enhancing transparency

Deep fakes are public enemy number one for the industry right now. 

Recent incidents have illustrated AI’s potential for meddling in politics: deep fake advertisements impersonating UK Prime Minister Rishi Sunak on Facebook reached an estimated 400,000 people, and potential interference has been reported in the Slovakian and Bangladeshi elections.

AI has already been deployed ahead of the US election, with candidates using the technology to generate promotional imagery.

OpenAI is advancing efforts to increase transparency. They plan to implement the Coalition for Content Provenance and Authenticity’s digital credentials for images generated by DALL-E 3, which use cryptography to encode details about content provenance. 
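In practice, C2PA credentials in JPEG files are carried in APP11 marker segments as JUMBF boxes, with cryptographic signatures binding the manifest to the image. As a rough illustration only (real verification requires validating those signatures with a proper C2PA tool, which this does not do), a minimal sketch that checks whether a JPEG byte stream even carries such a segment might look like:

```python
import struct

def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Scan a JPEG byte stream and return payloads of APP11 (0xFFEB)
    marker segments, where C2PA embeds its JUMBF-boxed manifests."""
    segments = []
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with SOI marker
        return segments
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # not a marker: stream is malformed
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: stop scanning headers
            break
        # Segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        if marker == 0xEB:  # APP11
            segments.append(jpeg_bytes[i + 4 : i + 2 + length])
        i += 2 + length
    return segments

def looks_like_c2pa(jpeg_bytes: bytes) -> bool:
    """Crude heuristic: does any APP11 payload mention a JUMBF box?"""
    return any(b"jumb" in seg for seg in find_app11_segments(jpeg_bytes))
```

This only detects the presence of provenance-style metadata; deciding whether the manifest is authentic (or has been stripped or re-signed) is exactly the harder problem the C2PA signature scheme is designed to address.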

Additionally, a new provenance classifier is being developed for detecting DALL-E-generated images. “Our internal testing has shown promising early results,” OpenAI states, planning to release this tool to journalists, platforms, and researchers for feedback. 

OpenAI is also integrating ChatGPT with real-time news reporting globally, aiming to provide users with transparent sources for their queries. 

In the US, OpenAI is collaborating with the National Association of Secretaries of State (NASS) to direct ChatGPT users to CanIVote.org for authoritative US voting information. 

They intend to use the insights from this partnership to guide strategies in other countries.

The burning questions remain: is it enough? Can AI companies tackle these issues practically, or has the issue already gone beyond effective control? 

© 2023 Intelliquence Ltd. All Rights Reserved.
