GPT-4o system card highlights weird voice assistant risks

August 12, 2024

  • OpenAI released the GPT-4o system card and says the model’s audio capabilities present novel risks
  • The voice assistant displayed erratic behavior during testing, including imitating the user’s voice
  • OpenAI says it has put measures in place to prevent voice cloning and the generation of copyrighted audio

OpenAI has released the system card for its advanced GPT-4o model and explained the novel risks its audio capabilities present.

It’s been a few months since the impressive demos of GPT-4o’s voice assistant holding almost real-time conversations. OpenAI said the voice capability would require extensive testing before it could be safely deployed, and it has so far only given a small group of alpha testers access to the feature.

The newly released system card gives us an insight into some of the weird ways the voice assistant behaved during testing and what OpenAI has put in place to make it behave.

At one point during testing, the voice assistant shouted “No!” and then continued with its response, but this time it imitated the user’s voice. This wasn’t in response to a jailbreak attempt and seems to be related to the background noise in the input prompt audio.

OpenAI says it “observed rare instances where the model would unintentionally generate an output emulating the user’s voice.” GPT-4o can imitate any voice it hears, but giving users access to that capability would be a significant risk.

To mitigate this, the system prompt restricts the model to its preset voices. OpenAI also “built a standalone output classifier to detect if the GPT-4o output is using a voice that’s different from our approved list,” an approach sketched below.
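OpenAI hasn’t published any details of that classifier, so the following is a minimal, hypothetical sketch of how such a gate might work: embed each chunk of generated audio as a speaker vector, compare it against embeddings of the approved preset voices, and block the output if the best match falls below a similarity threshold. The `embed_voice` stand-in, the threshold value, and the preset list are all assumptions for illustration, not OpenAI’s implementation.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # hypothetical cutoff, would be tuned on labeled audio

def embed_voice(audio: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Toy stand-in for a real speaker-embedding model: pools the magnitude
    spectrum into n_bands and L2-normalizes the result. A production
    classifier would use a trained network (e.g. an x-vector model)."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    vec = np.array([band.mean() for band in bands])
    return vec / (np.linalg.norm(vec) + 1e-9)

def is_approved_voice(audio: np.ndarray, presets: list[np.ndarray]) -> bool:
    """True only if the audio's speaker embedding is close enough to one
    of the approved preset voices."""
    emb = embed_voice(audio)
    best = max(float(emb @ p) for p in presets)  # cosine sim of unit vectors
    return best >= SIMILARITY_THRESHOLD

# Usage: gate every generated chunk and cut the stream on a mismatch.
presets = [embed_voice(np.random.default_rng(i).standard_normal(16_000))
           for i in range(4)]  # stand-ins for the approved preset voices
chunk = np.random.default_rng(99).standard_normal(16_000)
if not is_approved_voice(chunk, presets):
    print("Blocked: output voice not on the approved list")
```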

OpenAI says it’s still working on a fix for the drop in safety robustness when the input audio is low quality, noisy, or full of echoes. We’re likely to see some creative audio jailbreaks.

For now, it doesn’t look like we’ll be able to trick GPT-4o into speaking in Scarlett Johansson’s voice. However, OpenAI says that “unintentional voice generation still exists as a weakness of the model.”

Powerful features shut down

OpenAI also shut down GPT-4o’s ability to identify a speaker based on audio input. The company says this is to protect the privacy of private individuals and to mitigate “potential surveillance risks.”

When we do eventually get access to the voice assistant, it won’t be able to sing, unfortunately. OpenAI closed that feature off as one of several measures to stay on the right side of any copyright issues.

It’s an open secret that OpenAI used copyrighted content to train its models and this risk mitigation seems to confirm it. OpenAI said, “We trained GPT-4o to refuse requests for copyrighted content, including audio, consistent with our broader practices.”

During testing, red teamers were also “able to compel the model to generate inaccurate information by prompting it to verbally repeat false information and produce conspiracy theories.”

This is a known issue with ChatGPT’s text output, but the testers were concerned that the model could be more persuasive or harmful if it delivered the conspiracy theories in an emotive voice.

Emotional risks

Some of the biggest risks associated with GPT-4o’s advanced Voice Mode might not be fixable at all.

Anthropomorphizing AI models or robots is a trap that’s easy to fall into. OpenAI says the risk of attributing human-like behaviors and characteristics to an AI model is heightened when it speaks using a voice that sounds human.

It noted that some early testers and red teamers used language indicating they had formed a connection with the model. When users form emotional attachments to an AI, it could affect their human-to-human interactions.

When a user interrupts GPT-4o mid-response, it happily lets them, rather than objecting to the rudeness. Getting used to that kind of behavior could spill over into human conversations, where it isn’t socially appropriate.

OpenAI says “Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships.”

The company is clearly putting a lot of work into making GPT-4o’s voice assistant safe, but some of these challenges may be insurmountable.
