“Evolution Gym” is a large-scale benchmark for co-optimizing the design and control of soft robots, drawing inspiration from evolutionary processes.
Let’s imagine you wanted to create the finest stair-climbing robot in the world. You’d have to optimize for both the brain and the body, possibly by equipping the bot with high-tech legs and feet and a sophisticated algorithm to allow it to climb.
Although the physical body’s design and its brain, or “control,” are both important components in allowing the robot to move, current benchmarks focus mainly on the latter. Co-optimizing both parts is difficult; even without the design element, training robot models to perform different tasks takes a long time.
Designing and training intelligent soft robots
The “Evolution Gym,” a large-scale testing system for co-optimizing the design and control of soft robotics, was designed by scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), using inspiration from nature and evolutionary processes.
The robots in the simulator resemble squishy, mobile Tetris pieces made up of soft, stiff, and actuator “cells” arranged on a grid, and they are programmed to walk, climb, manipulate items, shape-shift, and navigate dense terrain. The researchers created their co-design algorithms by combining conventional design optimization approaches with deep reinforcement learning (RL) techniques to assess the robots’ abilities.
The co-design algorithm works like a power couple: design optimization techniques evolve the robots’ bodies, while RL algorithms optimize a controller (a computer system connected to the robot to govern its movements) for each suggested design. The design optimization asks, “How well does this design perform?” and the control optimization answers with a score, such as a five for “walking.”
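As a rough illustration of this division of labor, here is a minimal hill-climbing sketch in Python. Everything in it is a stand-in: the grid-of-cells encoding, the mutate operator, and especially train_and_score, which replaces expensive RL controller training with a toy score. None of this is Evolution Gym's actual API.

```python
import random

random.seed(0)

EMPTY, SOFT, ACTUATOR = 0, 1, 2  # simplified cell types, for illustration only

def random_body(rows=3, cols=3):
    """A random robot body: a small grid of cell types."""
    return [[random.choice([EMPTY, SOFT, ACTUATOR]) for _ in range(cols)]
            for _ in range(rows)]

def mutate(body):
    """The 'evolution' step: flip one random cell to a random type."""
    new = [row[:] for row in body]
    r = random.randrange(len(new))
    c = random.randrange(len(new[0]))
    new[r][c] = random.choice([EMPTY, SOFT, ACTUATOR])
    return new

def train_and_score(body):
    """Stand-in for the control optimization. In the real benchmark this
    would train an RL controller and return its task reward; here we
    simply reward bodies with more actuator cells, purely for illustration."""
    return sum(row.count(ACTUATOR) for row in body)

# Co-design outer loop: the design optimization proposes bodies, and the
# (stubbed) control optimization scores each candidate.
best = random_body()
best_score = train_and_score(best)
for _ in range(50):
    candidate = mutate(best)
    score = train_and_score(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best_score)  # fitness of the best body found by the loop
```

In the real system, each call to the scoring step involves training a controller with deep RL, which is what makes co-design so computationally demanding.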
The end result resembles a robot Olympics: the researchers included several novel activities, such as climbing, flipping, balancing, and stair-climbing, alongside traditional exercises like walking and jumping.
The bots performed well in over 30 different contexts on basic tasks such as walking or carrying an object, but fell short in more demanding settings such as catching and lifting, demonstrating the limits of existing co-design algorithms.
The evolved robots sometimes displayed “frustratingly” suboptimal behavior on a variety of tasks, according to the study. For example, the “catcher” robot would often rush forward to grab a block that was falling behind it.
Although the co-design algorithms developed the robot designs autonomously and without prior knowledge, the designs frequently grew to resemble real animals while outperforming hand-designed robots, indicating a step toward more evolution-like design processes.
“With Evolution Gym, we want to push the frontiers of machine learning and artificial intelligence algorithms,” says Jagdeep Bhatia, an MIT student who is a primary researcher on the project. “By creating a large-scale benchmark that focuses on speed and simplicity, we not only create a common language for exchanging ideas and results within the reinforcement learning and co-design space, but we also enable researchers who do not have access to state-of-the-art computing resources to contribute to algorithmic development in these areas. We believe that our efforts move us closer to a future in which robots are as intelligent as you or me.”
The theory underpinning reinforcement learning is that, in certain circumstances, trial and error is the best way for a robot to learn a task. The robots learned how to execute a job, such as moving a block, by gathering information that would help them, like “seeing” where the block is and what the ground looks like nearby.
The robot is then given a measurement of its performance (the “reward”): the farther the robot pushes the block, the larger the reward. The robot had to strike a balance between exploration (perhaps by asking itself, “Can I improve my reward by jumping?”) and exploitation (repeating behaviors that increase the reward).
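This exploration-exploitation trade-off can be illustrated with a classic epsilon-greedy scheme on a two-action choice. The action names and reward numbers below are hypothetical stand-ins for the robot's behavior options, not code from the Evolution Gym paper.

```python
import random

random.seed(1)

# Two candidate behaviors with different (unknown to the agent) average
# rewards; the numbers are made up for illustration.
def reward(action):
    return random.gauss(1.0 if action == "push" else 0.3, 0.1)

estimates = {"push": 0.0, "jump": 0.0}  # running reward estimates
counts = {"push": 0, "jump": 0}
epsilon = 0.1  # fraction of steps spent exploring

for _ in range(1000):
    if random.random() < epsilon:
        # Explore: try a random action ("Can I improve my reward by jumping?")
        action = random.choice(list(estimates))
    else:
        # Exploit: repeat the action that currently looks best
        action = max(estimates, key=estimates.get)
    r = reward(action)
    counts[action] += 1
    # Incremental average keeps a running estimate of each action's reward.
    estimates[action] += (r - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent settles on "push"
```

With mostly-exploiting behavior, the agent quickly concentrates its trials on the higher-reward action while still occasionally checking the alternative.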
The varied combinations of “cells” that the algorithms produced for different designs were quite effective: one grew to resemble a galloping horse with leg-like components, much as in nature. The climber robot grew two arms and two leg-like limbs, similar to a monkey, to help it climb. The lifter robot came to resemble a two-fingered gripper.
A future study might focus on so-called “morphological development,” in which a robot gradually grows more capable as it solves increasingly challenging tasks. For example, you might begin by optimizing a basic robot for walking, then optimize it for carrying objects, and finally for climbing stairs.
In contrast to robots trained on the same tasks from scratch, the robot’s body and brain would “morph” into something that can tackle increasingly difficult tasks over time.
According to University of Vermont robotics researcher Josh Bongard, “Evolution Gym is part of a rising recognition in the AI world that the body and brain are equal partners in enabling intelligent behavior. There’s a lot of work to be done in determining what shapes this relationship can take, and working out in this gym is likely to be helpful in addressing these issues.”
Evolution Gym is free and open source. This is intentional, as the researchers hope their work will inspire new and better co-design algorithms.
The Defense Advanced Research Projects Agency funded the research. MIT undergraduate Holly Jackson, MIT CSAIL Ph.D. student Yunsheng Tian, Jie Xu, and MIT Professor Wojciech Matusik collaborated on the study; they will share their findings at the 2021 Conference on Neural Information Processing Systems.