DeepMind’s RoboCat learns to perform a range of robotics tasks
DeepMind says that it’s developed an AI model, called RoboCat, that can perform a range of tasks across different models of robotic arms. That alone isn’t especially novel. But DeepMind claims that the model is the first to be able to solve and adapt to multiple tasks and do so using different, real-world robots.
“We demonstrate that a single large model can solve a diverse set of tasks on multiple real robotic embodiments and can quickly adapt to new tasks and embodiments,” Alex Lee, a research scientist at DeepMind and a co-contributor on the team behind RoboCat, told TechCrunch in an email interview.
RoboCat was inspired by Gato, a DeepMind AI model that can analyze and act on text, images and events. It was trained on image and action data collected from robots both in simulation and in the real world. The data, Lee says, came from a combination of other robot-controlling models operating in virtual environments, humans teleoperating robots and previous iterations of RoboCat itself.
To train RoboCat, researchers at DeepMind first collected between 100 and 1,000 demonstrations of a new task or robot, recorded using a robotic arm controlled by a human. (Think having a robot arm pick up gears or stack blocks.) They then fine-tuned RoboCat on the task, creating a specialized “spin-off” model that practiced it an average of 10,000 times.
By combining the data generated by the spin-off models with the original demonstration data, the researchers continually grew RoboCat’s training dataset and used it to train successive versions of RoboCat.
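To make that loop concrete, here is a minimal sketch in Python of the cycle described above: seed demonstrations, a fine-tuned “spin-off” specialist, self-generated practice data, and retraining on the grown dataset. It is purely illustrative; the function names, data structures, task names and episode counts are placeholders, not DeepMind’s actual RoboCat code or API.

```python
# Illustrative sketch of the self-improvement loop described in the article.
# All names and numbers here are placeholders, not RoboCat's real implementation.

import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Episode:
    """One recorded trajectory: a task label, where the data came from, and steps."""
    task: str
    source: str                      # "human_demo" or "self_generated"
    steps: List[Tuple] = field(default_factory=list)


def collect_human_demos(task: str, n: int) -> List[Episode]:
    """Stand-in for the 100-1,000 human-teleoperated demonstrations of a task."""
    return [Episode(task=task, source="human_demo") for _ in range(n)]


def fine_tune(generalist: dict, demos: List[Episode]) -> dict:
    """Stand-in for fine-tuning the generalist into a task-specific spin-off model."""
    specialist = dict(generalist)
    specialist["task"] = demos[0].task
    return specialist


def practice(specialist: dict, episodes: int) -> List[Episode]:
    """Stand-in for the spin-off model practicing the task ~10,000 times."""
    return [Episode(task=specialist["task"], source="self_generated")
            for _ in range(episodes)]


def train(dataset: List[Episode]) -> dict:
    """Stand-in for training the next version of the generalist on the grown dataset."""
    return {"version": random.random(), "trained_on": len(dataset)}


# The loop: demonstrations seed a specialist, the specialist generates more data,
# and the combined dataset trains the next version of the generalist.
dataset: List[Episode] = []
generalist = {"version": 0}

for task in ["stack_blocks", "pick_up_gear"]:       # example task names
    demos = collect_human_demos(task, n=500)
    specialist = fine_tune(generalist, demos)
    self_generated = practice(specialist, episodes=10_000)
    dataset += demos + self_generated
    generalist = train(dataset)

print(generalist)
```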
The final version of the RoboCat model was trained on a total of 253 tasks and benchmarked on a set of 141 variations of these tasks, both in simulation and in the real world. DeepMind claims that, after observing 1,000 human-controlled demonstrations collected over several hours, RoboCat learned to operate different robotic arms.
While RoboCat was trained on four kinds of robot arms with two-pronged grippers, the model was able to adapt to a more complex arm with a three-fingered gripper and twice as many controllable inputs.
Lest RoboCat be heralded as the be-all and end-all of robot-controlling AI models, its success rate across tasks varied drastically in DeepMind’s testing, from 13% on the low end to 99% on the high end. That’s with 1,000 demonstrations in the training data; success rates were predictably lower with half as many demonstrations.
Still, in some scenarios, DeepMind claims that RoboCat was able to learn new tasks with as few as 100 demonstrations.
Taken further, Lee believes that RoboCat could lower the barrier to solving new tasks in robotics.
“Provided with a limited number of demonstrations for a new task, RoboCat can be fine-tuned to the new tasks and in turn self-generate more data to improve even further,” he added.
Going forward, the research team aims to reduce the number of demonstrations needed to teach RoboCat to complete a new task to fewer than 10.