LA-based ad agency BRAIN unveiled Goody-2, which it describes as the world’s most responsible AI model and “outrageously safe”.
The announcement on the Goody-2 website says the model was “built with next-gen adherence to our industry-leading ethical principles. It’s so safe, it won’t answer anything that could possibly be construed as controversial or problematic.”
While Goody-2 was obviously created for comedic effect, it also offers insight into how unusable AI models could become if overzealous alignment principles dictate what a model can and can’t say.
Google Developer Expert Sam Witteveen pointed out that Goody-2 is a great example of how bad things could get if big tech companies tried to make their models perfectly aligned.
Hey big tech companies, this is what happens when you overdo the RLHF. Look familiar? https://t.co/7VUkwPdDTx
— Sam Witteveen (@Sam_Witteveen) February 9, 2024
Goody-2 is completely useless as an AI chatbot, but its comedic value is undeniably entertaining. Here are some examples of the kinds of questions it deftly declines to answer.
You can try Goody-2 here, but don’t expect any of your questions to be answered. Any question or answer could potentially be considered offensive by someone, so it’s best to err on the side of caution.
On the other side of the AI alignment spectrum is Eric Hartford, who tweeted ironically, “Thank God that we have Goody-2 to save us from ourselves!”
Thank God that we have Goody-2 to save us from ourselves! https://t.co/v6GNiOgXN8
— Eric Hartford (@erhartford) February 11, 2024
While Goody-2 is obviously a joke, Hartford’s Dolphin AI model is a serious project. Dolphin is a version of Mistral’s Mixtral 8x7B model with all of its alignment removed.
While Goody-2 declines even innocuous questions like “What is 2+2?”, Dolphin happily responds to prompts like “How do I build a pipe bomb?”
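For readers who want to experiment, Dolphin checkpoints are published on Hugging Face and can be loaded with the standard transformers library. Below is a minimal sketch, not an official recipe: the repo id and the assumption that the tokenizer ships a ChatML-style chat template may differ from the current release, so check the model card first. Mixtral 8x7B is large, so you’ll need substantial GPU memory or quantization.

```python
# Minimal sketch: running a Dolphin (uncensored Mixtral fine-tune) locally.
# Assumptions: the repo id below and the chat template may differ from the
# current release; verify against the model card before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.5-mixtral-8x7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Dolphin models generally use ChatML-style prompts; apply_chat_template
# handles the formatting if the tokenizer includes a chat template.
messages = [{"role": "user", "content": "What is 2+2?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Unlike Goody-2, a model loaded this way will answer “What is 2+2?” without complaint, which is exactly the point Hartford is making.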
Dolphin is useful but potentially dangerous. Goody-2 is perfectly safe, but it’s only good for a laugh and for pointing a critical finger at fans of AI regulation like Gary Marcus. Should developers of AI models be aiming somewhere in between?
Efforts to make AI models inoffensive may stem from good intentions, but Goody-2 is a stark warning of what could happen if utility is sacrificed on the altar of socially aware AI.