Google is throwing down the gauntlet with Gemini, its new large language model (LLM).
Developed primarily by Google DeepMind, the Gemini project signals an upcoming showdown with OpenAI’s ChatGPT.
The Information reported that Google has granted early access to Gemini for a select group of developers, implying that a beta release is imminent.
As Google combines its substantial resources with its research labs Google Brain and DeepMind, merged in 2023 to form Google DeepMind, Gemini’s potential impact on the AI industry could be massive.
While OpenAI burst onto the AI scene and captured the public’s attention with ChatGPT, Google is backed by decades of AI research and considerable proprietary datasets.
Google CEO Sundar Pichai unveiled Gemini during the Google I/O developer conference in May 2023. He stated that Gemini is designed “from the ground up to be multimodal,” combining the strengths of DeepMind’s AlphaGo system with powerful language modeling capabilities.
Demis Hassabis, CEO of DeepMind, added more context, stating that Gemini is not a single model but rather a “series of models” and will likely work with text, images, and possibly even speech and audio.
This is similar to the direction of Google Bard, which incorporates image functionality courtesy of Google Lens.
Future enhancements could include features like “memory and planning that could enable tasks requiring reasoning,” according to Pichai.
Google Chief Scientist Jeffrey Dean revealed that Gemini will utilize Google’s new AI infrastructure, Pathways, to scale up its training on diverse datasets.
Dean hinted that the system could exceed the size of OpenAI’s GPT-3, which contains 175 billion parameters, though measuring against GPT-3 would still leave Gemini a generation behind GPT-4.
However, parameter count isn’t everything, and Gemini could be distinguished from other LLMs in other ways.
For instance, in addition to working with multiple data types, Hassabis indicated that Gemini could cross-validate information with Google Search.
In a September interview with Time, Hassabis revealed that Gemini is showing “very promising early results.”
A report by SemiAnalysis states that Gemini exceeds 430 billion parameters, well above GPT-3’s 175 billion. GPT-4’s parameter count has not been disclosed, though a handful of analyses estimate it at around 1 trillion.
The SemiAnalysis post also claims Gemini will ‘smash’ GPT-4’s pre-training FLOPs by 5x, with plans to beat it by 20x. Though speculative, this would make Gemini considerably more computationally powerful than GPT-4.
The post says, “Whether Google has the stomach to put these models out publicly without neutering their creativity or their existing business model is a different discussion.”
As Sundar Pichai said, today’s chatbots will “look trivial” in comparison within a few years.
Whether Gemini can unseat GPT-4’s dominance, however, remains to be seen.