DeepMind train robot soccer players that kick, tackle, and defend

April 13, 2024

  • DeepMind developed soccer-playing robots capable of advanced movement
  • The bots are agile, kicking, defending, and even guarding the ball
  • DeepMind's reinforcement learning methods bested previous techniques

Researchers at Google’s DeepMind have achieved a milestone in robotics by successfully training 20-inch-tall humanoid robots to play one-on-one soccer matches.

Their study, published in Science Robotics, details how they used deep reinforcement learning (RL) to teach the robots complex locomotion and gameplay skills.

The commercially available Robotis OP3 robots learned to run, kick, block, get up from falls, and score goals – all without any manual programming.

Instead, AI agents controlling the robots acquired these abilities through trial and error in simulated environments, guided by a reward system.
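The article doesn’t spell out the learning algorithm, so as a loose illustration of “trial and error guided by a reward system,” here is a minimal tabular Q-learning sketch on a toy one-dimensional “walk to the goal” task. The environment, states, and reward values are invented for illustration and are far simpler than anything DeepMind used:

```python
import random

# Toy 1-D pitch: the agent starts at position 0 and is rewarded at the goal.
GOAL, ACTIONS = 10, [-1, 1]                    # step left / step right
EPISODES, ALPHA, GAMMA, EPS = 500, 0.5, 0.9, 0.1

# Q-table: estimated value of taking each action in each state.
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

random.seed(0)
for _ in range(EPISODES):
    s = 0
    while s != GOAL:
        # Trial and error: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        # Reward system: +1 for reaching the goal, small penalty per step.
        r = 1.0 if s2 == GOAL else -0.01
        # Standard Q-learning update toward the one-step bootstrapped target.
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned greedy policy: which direction to step from each position.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)
```

After training, the greedy policy steps toward the goal from every position – the behavior was never programmed, only rewarded, which is the core idea the robots’ training scales up.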

Here’s how the robotic soccer system works:

  1. First, they trained separate neural networks called “skill policies” for foundational moves like walking, kicking, and getting up. Each skill was learned in a focused environment that rewarded the robot for mastering that specific ability.
  2. Next, using a technique called policy distillation, the individual skill policies were merged into a single master policy network. This unified policy could activate the appropriate skill depending on the situation.
  3. The researchers then further optimized the master policy through self-play, where the robot played simulated matches against earlier versions of itself. This iterative process led to continuous improvements in strategy and gameplay.
  4. To prepare the policy for real-world deployment, the simulated training environment was randomized in terms of factors like friction and robot mass distribution. This helped the policy become more robust to physical variations.
  5. Finally, after training exclusively in simulation, the finished policy was uploaded to real OP3 robots, which then played physical soccer matches with no additional fine-tuning required.
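The paper’s architecture isn’t reproduced here, but step 2 (policy distillation) can be framed as supervised imitation: a single student network is trained to match the actions of the per-skill teacher policies on states drawn from each skill’s environment. Below is a deliberately simplified NumPy sketch in which the “teachers” are made-up linear policies and the student regresses onto their actions with a mean-squared imitation loss; all names and the linear setup are illustrative assumptions, not DeepMind’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, N_SKILLS = 6, 3, 3

# Stand-ins for the pre-trained skill policies (e.g. walk, kick, get up):
# here each teacher is just a fixed linear map from state to action.
teachers = [rng.normal(size=(STATE_DIM, ACTION_DIM)) for _ in range(N_SKILLS)]

# Student: a single policy distilled from all teachers.
W = np.zeros((STATE_DIM, ACTION_DIM))
lr = 0.05

for _ in range(500):
    s = rng.normal(size=(64, STATE_DIM))   # batch of simulated states
    grad = np.zeros_like(W)
    for T in teachers:                     # each teacher's actions are the labels
        grad += s.T @ (s @ W - s @ T) / len(s)
    W -= lr * grad / N_SKILLS              # gradient step on the MSE imitation loss

# For linear teachers, the best single linear student is their average,
# so the distilled weights should converge toward it.
avg = np.mean(teachers, axis=0)
print(float(np.abs(W - avg).max()))
```

The same idea scales to neural networks: replace the linear maps with deep policies and the MSE loss with a divergence between action distributions, and you get one network that reproduces several specialists’ behavior.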

To be honest, you’ve got to see it to believe it, so watch Popular Science’s videos below.

The results, as you can see, are quite remarkable – the robots are dynamic and nimble, spinning to change direction and coordinating their limbs to kick and balance simultaneously.

DeepMind describes the system’s performance in the paper: “The resulting agent exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth and efficient manner. It also learned to anticipate ball movements and block opponent shots.”

Compared to a more standard rules-based policy programmed specifically for the OP3, DeepMind’s RL approach delivered vastly superior performance.

The AI-trained robots walked 181% faster, turned 302% faster, recovered from falls 63% quicker, and kicked the ball 34% faster.

Together with DeepMind’s advances in AI-optimized football coaching in partnership with Liverpool FC, we’re probably heading towards a more heavily digitized era in sports.

It’s probably only a matter of time before we get a Robot League where custom robots face off in high-octane competitive sports.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.

