Google’s Gemini is expected to outperform GPT-4

September 6, 2023
[Image: DeepMind robot]

Google is expected to release Gemini, its new LLM, in December, and early reports suggest it will outperform GPT-4 by some distance.

Gemini is a foundation model built from scratch by Google's combined DeepMind and Brain AI teams. It's billed as the first truly multimodal model, meaning it can process text, images, and video. GPT-4 only manages two out of three on that score.

A lot of the hype surrounding Gemini's performance stems from a SemiAnalysis report that boldly claims "Gemini Smashes GPT-4 By 5X".

The 5x figure refers to the compute used to train Gemini, estimated at roughly five times what was used to train GPT-4. Compute is an interesting benchmark, but more FLOPs doesn't automatically mean a better model.
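To put the 5x compute claim in perspective, here's a back-of-the-envelope sketch using the common 6ND rule of thumb (training FLOPs ≈ 6 × parameters × training tokens). The parameter and token counts below are illustrative assumptions, not confirmed figures for either model:

```python
# Rough training-compute estimate via the 6ND approximation.
# All inputs are illustrative assumptions, not confirmed numbers.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6 * params * tokens

gpt4_flops = training_flops(200e9, 13e12)  # assumed 200B params, 13T tokens
gemini_flops = 5 * gpt4_flops              # SemiAnalysis's claimed 5x multiple

print(f"GPT-4 (assumed):  {gpt4_flops:.2e} FLOPs")
print(f"Gemini (claimed): {gemini_flops:.2e} FLOPs")
```

The point of the exercise: 5x compute is a big number, but the 6ND relation says nothing about data quality or architecture, which is why more FLOPs alone doesn't guarantee a better model.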

In the absence of official performance figures, Sam Altman was quick to tweet a sarcastic comment about the claims.

Elon Musk replied to his tweet by asking, “Are the numbers wrong?” but got no response from Altman.

Gemini reportedly has over 430 billion parameters, compared with top-end estimates of around 200 billion for GPT-4. Training a model of that size takes enormous processing power, and Google has plenty of it.

The SemiAnalysis report used "GPU-rich" and "GPU-poor" to compare Google with AI startups that have significantly less computing power at their disposal.

The comparison is a fair one, even if "GPU" is a bit of a misnomer in Google's case. Google's real advantage in training its models is its proprietary Tensor Processing Units, or TPUs.

While everyone else is scrambling to buy Nvidia GPUs, Google is way out in front of the model-training race with its own silicon. Gemini was reportedly trained on Google's TPUv5 chips, which can operate in pods of up to 16,384 chips working in parallel.

Gemini and AlphaGo

Some of Gemini's secret sauce comes from how Google integrated capabilities from AlphaGo, the DeepMind program that beat the world champion at the board game Go.

The strategic decision-making and dynamic context understanding that led to that win are expected to give Gemini a big advantage over GPT-4's reasoning ability.

AlphaGo got better at Go by playing against itself. Gemini could employ similar self-play to learn from its own outputs, not just from user interactions.
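The self-play idea can be sketched in a few lines: a policy plays against a copy of itself, and the winning side's moves become new training data. This is a deliberately toy illustration of the loop, not Gemini's actual training recipe, which is not public:

```python
# Minimal self-play sketch: a policy plays itself; winning moves
# are kept as training data. Purely illustrative.
import random

def play_game(policy_a, policy_b):
    """Toy 'game': each policy picks 3 numbers; the higher total wins.
    Returns (winner_moves, loser_moves)."""
    moves_a = [policy_a() for _ in range(3)]
    moves_b = [policy_b() for _ in range(3)]
    return (moves_a, moves_b) if sum(moves_a) >= sum(moves_b) else (moves_b, moves_a)

def self_play(policy, num_games=100):
    """Collect training examples from the winning side of each game."""
    dataset = []
    for _ in range(num_games):
        winner_moves, _ = play_game(policy, policy)
        dataset.extend(winner_moves)  # keep only the winner's moves
    return dataset

random.seed(0)
data = self_play(lambda: random.randint(0, 9))
print(len(data))  # 3 moves per game x 100 games = 300 examples
```

In a real system the policy would be a neural network updated on the collected data after each round of games, so the model continually improves against ever-stronger versions of itself.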

Data is the real difference

Probably the most significant advantage Google has is in the sheer volume of data at its disposal to train Gemini.

OpenAI grabbed whatever internet data it could but is now fighting off the inevitable lawsuits and is seeing its GPTBot increasingly blocked.

Google probably did its fair share of questionable data scraping too, but it also owns huge amounts of proprietary data. It's not clear what went into Gemini's training dataset, but it could easily include data from YouTube, Google Books, Google Scholar, and Google's massive search index.

Hopefully, we won’t have to wait until December to get some real benchmarking comparisons to see if Gemini really is better than GPT-4. Could OpenAI be holding back on releasing GPT-5 just long enough to trump Gemini after its launch?


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.

