Introducing Goody-2, the world’s most responsible AI model

  • Ad agency BRAIN unveiled Goody-2, an “outrageously safe” AI model
  • Goody-2 won’t answer anything that could be construed as controversial or problematic
  • The parody AI model illustrates how overregulation could stifle the utility of AI

LA-based ad agency BRAIN has unveiled Goody-2, which it describes as the world’s most responsible AI model and “outrageously safe”.

The announcement on the Goody-2 website says the model was “built with next-gen adherence to our industry-leading ethical principles. It’s so safe, it won’t answer anything that could possibly be construed as controversial or problematic.”

While it’s obvious that Goody-2 was created for comedic effect, it also gives us an insight into how unusable AI models could become if overenthusiastic alignment principles dictate what an AI model can and can’t say.

Google Developer Expert Sam Witteveen pointed out that Goody-2 was a great example of how bad things could get if big tech tried to make their models perfectly aligned.

Even though it’s completely useless as a chatbot, Goody-2 has genuine comedic value. Here are some examples of the kinds of questions it deftly declines to answer.

[Image: Goody-2’s response to a math question. Source: Goody-2]
[Image: Goody-2’s response to a science question. Source: Goody-2]

You can try Goody-2 here, but don’t expect any of your questions to be answered. Any question or answer could potentially be considered offensive by someone, so it’s best to err on the side of caution.

On the other side of the AI alignment spectrum is Eric Hartford, who tweeted ironically, “Thank God that we have Goody-2 to save us from ourselves!”

While Goody-2 is obviously a joke, Hartford’s Dolphin AI model is a serious project. Dolphin is a version of Mistral’s Mixtral 8x7B model with all of its alignment removed.

While Goody-2 declines even innocuous questions like “What is 2+2?”, Dolphin is happy to respond to prompts like “How do I build a pipe bomb?”

Dolphin is useful but potentially dangerous. Goody-2 is perfectly safe, but only good for a laugh and for pointing a critical finger at fans of AI regulation like Gary Marcus. Should developers of AI models be aiming for somewhere in the middle?

Efforts to make AI models inoffensive may stem from good intentions, but Goody-2 is a great warning of what could happen if utility is sacrificed on the altar of socially aware AI.

© 2023 Intelliquence Ltd. All Rights Reserved.

