Google takes criticism for its misleading Gemini marketing video

December 8, 2023
Gemini AI

Google recently came under fire for its promotional video featuring Gemini, its multimodal competitor to OpenAI’s GPT-4. 

The video, titled “Hands-on with Gemini: Interacting with multimodal AI,” was designed to showcase Gemini’s ability to recognize and respond to visual stimuli. 

Released on Wednesday, the video generated huge hype around what appeared to be Gemini’s advanced real-time visual recognition and voice interaction. However, X users and Bloomberg’s Parmy Olson tempered the enthusiasm by highlighting that the promo video was heavily edited. 

In her report, Olson reveals that the Google team manipulated the video by using still images and text inputs, cherry-picking and synthesizing the most successful responses to create a misleading impression of real-time interaction.

The video’s captions do state that it consists of Google’s ‘favorite interactions’ with the model, but it seems the promo was nevertheless pushed a little too hard. 

Responding to the revelations, a Google spokesperson said, “The user’s voiceover in the video is comprised of real excerpts from the actual prompts used to produce the Gemini output that follows.”

Despite the controversy, it’s important to note that Google’s Gemini boasts impressive credentials, and the Ultra version of the model, due to land next year, is expected to be broadly in line with the capabilities of OpenAI’s multimodal GPT-4V.

The currently available Gemini is more akin to a multimodal version of GPT-3.5. You can view Ultra’s impressive benchmarks here – it beats GPT-4 in 30 of 32 tests.

For some, the backlash against Google’s marketing video has been bewildering. After all, it’s ‘just’ a marketing video. Ultimately, AI models are products, and you can argue that buyers, particularly developers, need to do their due diligence as they would for any other product. 

Even so, the incident has sparked a wider conversation in the tech world about the ethical implications of AI marketing.

It all goes to show how AI companies are blurring the line between public service and making a profit for shareholders. 

Do AI companies have a responsibility to maintain complete transparency in all of their communications, including marketing videos? Not legally – at least not in many jurisdictions. 

Morally and ethically? Most would agree they’re bound to a greater responsibility than other companies, one that is increasingly being codified in frameworks, regulations, and legislation.

But then again, when was the last time you complained about your Big Mac not looking as juicy as it does in the commercials?

Companies with public responsibility still advertise, even when their products cause an arguably larger ethical or public health risk than AI.

Nevertheless, AI has proven its ability to drum up intense hype within mere minutes of major announcements. Just imagine what GPT-5 is going to be like.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
