DeepMind builds table tennis robot that beats newbie players 100% of the time

August 10, 2024

  • DeepMind built a table tennis robot that dominates newbie players
  • It's solid against intermediate players, winning 55% of the time
  • The robot falters against advanced players, so there is still room for improvement!

Researchers at Google DeepMind have developed an AI-powered robot capable of playing competitive table tennis at an amateur human level. 

Registering the presence of a ping-pong ball, calculating its direction, and moving the paddle to hit it – all in a split second – is a mammoth task in robotics. 

DeepMind’s robot is equipped with an IRB 1100 robotic arm mounted on two linear gantries, which allow it to move swiftly across and toward the table.

It has an incredible range of motion, reaching most areas of the table to strike the ball with a paddle as a human does. 

The “eyes” are high-speed cameras that capture images at 125 frames per second, feeding data to a neural network-based perception system that tracks the ball’s position in real time.
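DeepMind’s actual perception system is a trained neural network, but the basic job it performs (turning a stream of per-frame detections into a ball state the controller can react to) can be illustrated with a much simpler sketch. Everything below, from the function names to the simplified ballistic physics, is an assumption for illustration rather than the paper’s code:

```python
from dataclasses import dataclass

FRAME_DT = 1.0 / 125.0  # 8 ms between camera frames at 125 fps


@dataclass
class BallState:
    position: tuple  # (x, y, z) in metres, table coordinates
    velocity: tuple  # (vx, vy, vz) in metres per second


def estimate_state(prev_pos, curr_pos, dt=FRAME_DT):
    """Finite-difference velocity estimate from two consecutive ball detections."""
    velocity = tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))
    return BallState(position=curr_pos, velocity=velocity)


def predict_position(state, horizon, gravity=-9.81):
    """Ballistic prediction `horizon` seconds ahead.

    Ignores spin and air resistance, which the real system has to account for."""
    x, y, z = state.position
    vx, vy, vz = state.velocity
    return (x + vx * horizon,
            y + vy * horizon,
            z + vz * horizon + 0.5 * gravity * horizon ** 2)


# Two detections one frame apart, then predict 50 ms into the future.
state = estimate_state((0.60, 2.10, 0.35), (0.59, 2.06, 0.36))
print(predict_position(state, horizon=0.05))
```

At 125 frames per second there are only 8 milliseconds between frames, which is why this whole loop has to be extremely lightweight.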

The AI controlling the robot employs a sophisticated two-tiered architecture:

  1. Low-Level Controllers (LLCs): These are specialized neural networks trained to perform specific table tennis skills, such as forehand topspin shots or backhand targeting. Each LLC is designed to excel at a particular aspect of the game.
  2. High-Level Controller (HLC): This is the strategic brain of the system. The HLC chooses which LLC to use for each incoming ball, based on the current game state, the opponent’s playing style, and the robot’s own capabilities.

This dual approach allows the robot to combine precise execution of individual shots with higher-level strategy, mimicking the way human players think about the game.
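As a rough sketch of that split (not DeepMind’s implementation; the class names, skill labels, and the simple scoring rule below are assumptions), a hierarchical controller can be written as a high-level policy that picks among per-skill low-level policies and updates its preferences from point outcomes:

```python
import random

# Hypothetical skill labels; in the real system each LLC is a trained neural network.
LLC_SKILLS = ["forehand_topspin", "backhand_target", "serve_return"]


class LowLevelController:
    """Stand-in for a per-skill policy mapping the ball state to paddle commands."""

    def __init__(self, skill):
        self.skill = skill

    def act(self, ball_state):
        # A real LLC would output arm and gantry commands from a neural network.
        return {"skill": self.skill, "ball_state": ball_state}


class HighLevelController:
    """Picks which LLC to run for each incoming ball.

    Uses simple running preference scores updated from point outcomes, a crude
    stand-in for the HLC's game-state and opponent-aware strategy."""

    def __init__(self, llcs):
        self.llcs = llcs
        self.scores = {llc.skill: 0.0 for llc in llcs}

    def choose(self, ball_state):
        best = max(self.scores.values())
        candidates = [llc for llc in self.llcs if self.scores[llc.skill] == best]
        return random.choice(candidates)  # break ties randomly

    def update(self, skill, won_point):
        self.scores[skill] += 1.0 if won_point else -1.0


# One decision per incoming ball.
hlc = HighLevelController([LowLevelController(s) for s in LLC_SKILLS])
incoming = {"position": (0.5, 1.2, 0.3), "velocity": (-0.1, -4.0, 0.5)}
llc = hlc.choose(incoming)
command = llc.act(incoming)
hlc.update(llc.skill, won_point=True)
```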

Bridging simulation and the real world

One of the greatest challenges in robotics is transferring skills learned in simulation environments to the real world.

The DeepMind study documents several techniques to address this:

  1. Realistic physics modeling: The researchers used advanced physics engines to model the complex dynamics of table tennis, including ball spin, air resistance, and paddle-ball interactions.
  2. Domain randomization: During training, the AI was exposed to a wide range of simulated conditions, helping it generalize to the variations it might encounter in the real world (a rough code sketch follows this list).
  3. Sim-to-real adaptation: The team developed methods to fine-tune the simulated skills for real-world performance, including a novel “spin correction” technique to handle the differences in paddle behavior between simulation and reality.
  4. Iterative data collection: The researchers continually updated their training data with real-world gameplay, creating an ever-improving cycle of learning.
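
As a rough illustration of the domain randomization step (item 2 above), training code might resample simulator physics parameters at the start of every episode. The parameter names, ranges, and Gym-style environment API below are assumptions, not details from the paper:

```python
import random


def randomized_physics_params():
    """Sample simulator physics parameters at the start of each training episode.

    The ranges are made-up placeholders; the point is to expose the policy to
    enough variation that it generalizes to the real table, ball, and paddle."""
    return {
        "ball_restitution": random.uniform(0.85, 0.95),   # bounciness of the ball
        "air_drag_coeff": random.uniform(0.40, 0.55),
        "paddle_friction": random.uniform(0.30, 0.60),
        "latency_s": random.uniform(0.005, 0.020),        # sensing/actuation delay
    }


def run_training_episode(env, policy):
    """One episode with freshly randomized dynamics.

    Assumes a Gym-style environment exposing set_physics/reset/step; this API
    is hypothetical, not the paper's training setup."""
    env.set_physics(**randomized_physics_params())
    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(policy(obs))
```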

Perhaps one of the robot’s most impressive features is its ability to adapt in real time. During a match, the system tracks various statistics about its own performance and that of its opponent. 

It uses this information to adjust its strategy on the fly, learning to exploit weaknesses in the opponent’s game while shoring up its own defenses.
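
A minimal illustration of that kind of in-match bookkeeping, again a sketch built on assumptions rather than DeepMind’s method, is to track how often the opponent successfully returns each of the robot’s shot types and steer shot selection toward the one they handle worst:

```python
from collections import defaultdict


class OpponentModel:
    """Running in-match statistics on how well the opponent handles each of the
    robot's shot types. A simplified stand-in for the system's online adaptation."""

    def __init__(self):
        self.attempts = defaultdict(int)
        self.returns = defaultdict(int)

    def record(self, shot_type, opponent_returned):
        self.attempts[shot_type] += 1
        if opponent_returned:
            self.returns[shot_type] += 1

    def return_rate(self, shot_type):
        n = self.attempts[shot_type]
        return self.returns[shot_type] / n if n else 1.0  # optimistic until observed

    def weakest_against(self, shot_types):
        # Exploit the shot the opponent returns least often.
        return min(shot_types, key=self.return_rate)


# Usage during a match: the opponent struggles with the second shot type.
model = OpponentModel()
for returned in (True, True, False):
    model.record("forehand_topspin", opponent_returned=returned)
for returned in (False, False, True):
    model.record("backhand_target", opponent_returned=returned)
print(model.weakest_against(["forehand_topspin", "backhand_target"]))  # backhand_target
```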

Evaluating the ping-pong robot

So, how did DeepMind test their table tennis robot? 

First, the team recruited 59 volunteer players and assessed their table tennis skills, categorizing them as beginner, intermediate, advanced, or advanced+ players. From the initial pool, 29 participants spanning all skill levels were chosen for the full study.

Each selected player then played three competitive games against the robot, following modified table tennis rules to account for the robot’s limitations.

In addition to collecting quantitative data from the robot, the researchers conducted brief, semi-structured post-match interviews with each participant about their overall experience.

Results

Overall, the robot won 45% of its matches, a solid showing.

It dominated beginners (winning 100% of matches), and held its own against intermediates (winning 55%), but struggled against advanced and advanced+ players (losing all matches).

Luckily for us mere mortals, there was at least one big weakness: the robot’s difficulty in handling underspin, which was a notable chink in its armor versus more experienced players. 

Even so, if you can’t play table tennis at all or think you’re just ok at it, this robot will fancy its chances.

Barney J. Reed, a table tennis coach, commented on the study: “Truly awesome to watch the robot play players of all levels and styles. Going in our aim was to have the robot be at an intermediate level. Amazingly it did just that, all the hard work paid off.”

“I feel the robot exceeded even my expectations. It was a true honor and pleasure to be a part of this research. I have learned so much and am very thankful for everyone I had the pleasure of working with on this.”

This is far from DeepMind’s first foray into sports robotics and AI. Not long ago, the company built AI soccer robots capable of passing, tackling, and shooting.

DeepMind has been releasing AI robotics tools to developers for years and has made recent breakthroughs in robot vision and dexterity.

As AI and robotics continue to advance, we can expect to see more examples of machines mastering tasks once thought to be exclusively human domains. 

The day when you can challenge a robot to a game of table tennis at your local community center may not be far off – just don’t be surprised if it beats you in the first round.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
