Google engineers openly discuss the limitations of Bard

  • Google's Bard chatbot has gained little traction in the public generative AI market
  • Messages from a Bard-focused Discord group show that Google engineers doubt the model
  • Google recently began integrating generative AI into its Workspace family of tools

Google has maintained an exclusive Discord chat for its AI chatbot Bard – a space for Google’s engineers, product managers, and designers to candidly discuss and evaluate the AI tool’s performance and value.

Internal conversations show that Google researchers are not fully convinced of Bard's effectiveness or usefulness; the chatbot remains a peripheral choice compared with other AI chatbots like ChatGPT.

From July to October, Bloomberg reviewed discussions from Google's Bard Discord server, shared by two community participants. The conversations revealed internal debate about Bard's efficacy and uses.

Dominik Rabiej, a Senior Product Manager for Bard, expressed cautious skepticism in a July message on Discord.

He stated, “My rule of thumb is not to trust LLM output unless I can independently verify it,” shedding light on persistent uncertainties regarding Bard’s capabilities. 

Another prominent figure, Cathy Pearl, Bard's User Experience Lead, also voiced uncertainty about Bard's real-world usefulness.

Since its introduction in March as a contender to OpenAI’s ChatGPT, Bard has undergone a series of feature updates, such as photo analysis capability and multilingual response generation. While impressive, it didn’t cause nearly as much of a stir as OpenAI’s release of “GPT-4V,” which adds image recognition to the GPT-4 model. 

Last month, Google linked Bard to several of its products, including Gmail, Maps, Docs, and YouTube, with the integrations launching on September 19. According to the Discord chats, new issues have emerged as Bard becomes further entwined with Google's offerings.

Complaints have surfaced about the AI tool fabricating facts and providing potentially hazardous advice, prompting Google to introduce a search button for Bard on the same day it announced the app extensions.

This feature enables users to validate Bard’s AI-generated responses against Google’s search engine results for enhanced factuality.

Additional controversies have grown concerning those who assisted in training Bard – thousands of low-wage contractors. Google is not alone here – this is typical of the AI industry, which relies on extensive third-party assistance from data labeling companies.

Generative AI has challenged Google’s search dominance with tools from OpenAI and various startups, potentially undermining search and, by extension, ad revenue. 

The Discord group has also seen user discussions on various sensitive Google-related topics. One member was allegedly expelled from the group after raising ethical concerns regarding Project Nimbus, a $1.2 billion contract for Google and Amazon to supply AI tools to Israel’s military. 

Despite the breadth of views on resource costs and Bard's accuracy, product manager Warkentin confirmed the team's focused efforts to "reduce hallucination and increase factuality," noting noticeable improvements since Bard's release.

But Rabiej reminded users that, like all LLMs, Bard lacks a genuine understanding of the texts it processes, emphasizing its generative nature.

Bard may not be Google's flagship AI product for long, with discussions of a new multimodal model, Project Gemini, brewing. However, there is little concrete information about it yet.

© 2023 Intelliquence Ltd. All Rights Reserved.
