Researchers at the Massachusetts Institute of Technology (MIT) have developed an AI technique designed to empower robots to manipulate objects using their entire bodies.
Manipulating an object through multiple contact points spread across the body is an enormous challenge for robots. Humans excel at this kind of whole-body manipulation, seamlessly carrying large boxes or cradling irregular objects. Robots, however, struggle: every contact between the object and a different part of the body adds another decision the planner must reason about.
“Rather than thinking about this as a ‘black-box’ system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans,” said H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of the research paper.
At its core, the MIT researchers’ work addresses the computational intensity and complexity of robotic manipulation tasks, specifically those involving contact-rich scenarios. Robots need to consider countless possibilities for making contact with an object when planning a manipulation task, which leads to an intractable number of computations.
Reinforcement learning (RL) methods have typically been deployed to solve this problem, but they demand extensive computational resources and training time.
The study introduces ‘smoothing’ to tackle this issue. Smoothing eases the computational burden by reducing the number of contact events the robot needs to consider, condensing the myriad potential contact points into a manageable set of key decisions.
Essentially, many inconsequential actions and contacts that the robot could make are averaged out, leaving only the vital points of interaction that need to be computed.
To implement ‘smoothing,’ the team designed a physics-based model. This model efficiently replicates the kind of ‘averaging out’ of non-critical interactions that occur implicitly in reinforcement learning methods.
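To make the idea concrete, here is a minimal sketch of randomized smoothing on a toy problem, not the researchers' actual model. The 1-D "push a block" dynamics and every function name below are invented for illustration: the raw dynamics are flat (zero gradient) until the hand touches the block, which stalls gradient-based planning, while averaging the dynamics over small random perturbations yields a smooth surrogate whose gradient points the planner toward contact.

```python
import numpy as np

def block_displacement(hand_position, block_position=1.0):
    """Hypothetical non-smooth contact dynamics: the block only moves
    once the hand actually touches it (zero gradient before contact)."""
    return max(hand_position - block_position, 0.0)

def smoothed_displacement(hand_position, sigma=0.1, n_samples=10_000, seed=0):
    """Randomized smoothing: average the dynamics under Gaussian noise
    on the hand position. The averaged function is smooth and carries
    gradient information even before contact is made.
    (block_position is fixed at 1.0 to match block_displacement above.)"""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, n_samples)
    return float(np.mean(np.maximum(hand_position + noise - 1.0, 0.0)))

def smoothed_gradient(x, eps=1e-3, **kwargs):
    """Central finite difference of the smoothed dynamics."""
    return (smoothed_displacement(x + eps, **kwargs)
            - smoothed_displacement(x - eps, **kwargs)) / (2 * eps)
```

With the hand at 0.9 (just short of contact), `block_displacement` returns 0 with a zero gradient, whereas `smoothed_gradient(0.9)` is positive, telling an optimizer that moving toward the block is useful, the kind of signal RL methods obtain implicitly through random exploration.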
The team tested their approach both in simulations and real-world robotic hardware, showing comparable performance to reinforcement learning but at a fraction of the computational cost.
The implications of this research are potentially far-reaching. On the industrial front, the technique could allow for the utilization of smaller, more mobile robots that can perform intricate tasks with greater flexibility.
This could result in both reduced energy consumption and decreased operational costs. Beyond factories, the technology could be a game-changer for space exploration missions, enabling robots to quickly adapt to unpredictable terrains or tasks with minimal computational resources.
Additionally, these computational methods might help researchers construct competent, life-like hands.
“The same ideas that enable whole-body manipulation also work for planning with dexterous, human-like hands,” said Russ Tedrake, senior author and the Toyota Professor of EECS at MIT.
While AI is fueling transformations in robotics, equipping robots with ever more capable skills, we have yet to build anything approaching biological dexterity. As AI hardware shrinks onto smaller, energy-efficient chips and researchers find ways to tame these computational problems, dexterous, life-like robots may not be far off.