MIT researchers use simulation to train a robot to run at high speeds

Four-legged robots are nothing novel; Boston Dynamics’ Spot has been making the rounds for some time, as have countless open source alternatives. But researchers at MIT claim that theirs has broken the record for the fastest robot run ever recorded. Working out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the team says it developed a system that allows the MIT-designed Mini Cheetah to learn to run through trial and error in simulation.

While the speedy Mini Cheetah has limited direct applications in the enterprise, the researchers believe their technique could improve the capabilities of other robotics systems, including those used in factories to assemble products before they’re shipped to customers. It’s timely work as the pandemic accelerates the adoption of autonomous robots in industry. According to an Automation World survey, 44.9% of assembly and manufacturing facilities that currently use robots consider them an integral part of their operations.

Training in simulation

Today’s cutting-edge robots are “taught” to perform tasks through reinforcement learning, a machine learning technique in which a robot learns by trial and error, using feedback from its own actions and experiences. When the robot performs a “right” action, meaning one that leads it toward a desired goal, like stowing an object on a shelf, it receives a “reward.” When it makes a mistake, the robot either receives no reward or is “punished” by losing a previously earned reward. Over time, the robot discovers ways to maximize its reward and so learns to perform actions that achieve the sought-after goal.
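
To make that reward loop concrete, here is a deliberately simplified sketch in Python. The states, actions, and reward values are invented for illustration, and the bandit-style update is a stripped-down stand-in for the far more sophisticated policies that systems like MIT’s actually learn:

```python
import random

# Hypothetical toy setup: the robot should move to the shelf while
# holding an object, then stow it. All names and rewards are invented.
states = ["object_in_hand", "at_shelf"]
actions = ["move_to_shelf", "stow_object", "drop_object"]

def reward(state, action):
    """+1 for actions that advance the goal, -1 (a 'punishment') otherwise."""
    if state == "object_in_hand" and action == "move_to_shelf":
        return 1.0
    if state == "at_shelf" and action == "stow_object":
        return 1.0
    return -1.0

# Value table: the expected reward for each (state, action) pair.
q = {(s, a): 0.0 for s in states for a in actions}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(1000):  # trial and error
    state = random.choice(states)
    if random.random() < epsilon:  # occasionally try a random action...
        action = random.choice(actions)
    else:                          # ...otherwise exploit the best-known one
        action = max(actions, key=lambda a: q[(state, a)])
    # Nudge the value estimate toward the reward actually received.
    q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])

for s in states:  # after training, print the best-known action per state
    print(s, "->", max(actions, key=lambda a: q[(s, a)]))
```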

Robots can be trained via reinforcement learning in the real world, but real-world training is time-consuming and puts strain on delicate robotics hardware. That’s why researchers rely on simulated, video game-like environments designed to mimic the real world, where digital recreations of real-world robots can run through thousands to millions of trials as they learn sets of actions. To take one example, Alphabet-backed Waymo, which is developing autonomous vehicles, says it has driven billions of miles in simulation using digital avatars of its cars.
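
The appeal is raw throughput: even a toy simulator can step through experience orders of magnitude faster than a physical robot can, with no hardware at risk. A minimal sketch, with placeholder dynamics and rewards standing in for a real physics engine:

```python
import time

def step(state, action):
    """One simulated control step; the dynamics here are invented placeholders."""
    next_state = state + 0.01 * action   # toy physics
    return next_state, -abs(next_state)  # toy reward: stay near zero

start = time.time()
episodes, steps_per_episode = 10_000, 100
for _ in range(episodes):
    state = 0.5
    for _ in range(steps_per_episode):
        state, _ = step(state, action=-state)  # trivial stabilizing policy
elapsed = time.time() - start

# A million control steps like these typically finish in about a second
# on a laptop; running the same trials on a physical robot would take
# days and wear out the hardware.
print(f"{episodes * steps_per_episode} steps in {elapsed:.2f}s")
```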

Recently, researchers have pushed the boundaries of simulation, attempting to perform most, if not all, robotics training in digital environments. Last year, researchers at the University of California, Berkeley trained a bipedal robot called Cassie to walk in simulation and then transferred those skills to a real-world replica. Also last year, Meta (formerly Facebook) data scientists trained a four-legged robot in simulation on different surfaces so that an identical, real-world robot could recover when it stumbled.

The MIT researchers, too, trained their system entirely in simulation. A digital twin of the Mini Cheetah accumulated 100 days’ worth of experience on “diverse” digital terrain in just three hours of actual time, learning from its mistakes until it arrived at the right actions. When the researchers deployed the system on a real-world Mini Cheetah, they claim it was able to identify and execute all of the relevant learned skills in real time.
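
Some quick arithmetic puts that compression in perspective: 100 days is roughly 2,400 hours, so packing that much experience into three hours works out to an effective speedup of about 800× over real time (2,400 ÷ 3 = 800).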

“Achieving fast running requires pushing the hardware to its limits, for example by operating near the maximum torque output of motors. In such conditions, the robot dynamics are hard to analytically model,” MIT CSAIL Ph.D. student Gabriel Margolis and postdoctoral fellow Ge Yang told MIT News in an interview. “Humans run fast on grass and slow down on ice — we adapt. Giving robots a similar capability to adapt requires quick identification of terrain changes and quickly adapting to prevent the robot from falling over.”

Other applications

Researchers have accomplished impressive feats with robots in MIT’s Cheetah family before, including jogs at speeds up to 14 miles per hour, backflips, and jumps over objects. Impressively, the Cheetah 3 could balance on three legs, using the fourth as a makeshift arm.

But the researchers say their approach eliminates the need to hand-program how a robot, Mini Cheetah or otherwise, should act in every possible situation. That stands in contrast to the traditional paradigm in robotics, in which humans tell a robot both what task to accomplish and how to do it.

“[A] key contribution to our work is that we push the envelope of what is possible with learned locomotion policies,” Yang told VentureBeat. “Getting something autonomously from point A to point B is still largely an unsolved problem. Wheels are terrible for stairs and grass [while] legs actually work really well. It is a bit difficult to imagine the future, but I think if we build these pieces, things will be more clear down the road.”

Margolis and Yang say they’re already applying the reinforcement learning technique to other robotics systems, including hands that can pick up and manipulate many types of objects. But they caution that it has limitations, including an inability to navigate obstacles that require sight to detect, since their system can’t analyze visual data.

“Legged robots are increasingly being adopted for industrial inspection and delivery tasks, and improving their mobility makes them a more effective choice for these applications,” Margolis told VentureBeat via email. “This system has only been trained for the task of controlling the robot’s body velocity in the ground plane … Our system also does not yet use vision, so it cannot perform tasks that involve planning, like climbing stairs or avoiding pitfalls. Finally, users of legged robots may wish to optimize for objectives beyond speed, such as energy efficiency or minimization of wear on the robot. In this work, our analysis was focused on speed alone.”
