Google DeepMind’s new AI models help robots perform physical tasks, even without training


Google DeepMind is launching two new AI models designed to help robots “perform a wider range of real-world tasks than ever before.” The first, called Gemini Robotics, is a vision-language-action model capable of understanding new situations, even if it hasn’t been trained on them.
Gemini Robotics is built on Gemini 2.0, the latest version of Google’s flagship AI model. During a press briefing, Carolina Parada, the senior director and head of robotics at Google DeepMind, said Gemini Robotics “draws from Gemini’s multimodal world understanding and transfers it to the real world by adding physical actions as a new modality.”
The new model makes advancements in three key areas that Google DeepMind says are essential to building helpful robots: generality, interactivity, and dexterity. In addition to its ability to generalize to new scenarios, Gemini Robotics is better at interacting with people and their environment. It’s also capable of performing more precise physical tasks, such as folding a piece of paper or removing a bottle cap.
“While we have made progress in each one of these areas individually in the past with general robotics, we’re bringing [drastically] increasing performance in all three areas with a single model,” Parada said. “This enables us to build robots that are more capable, that are more responsive and that are more robust to changes in their environment.”
Google DeepMind is also launching Gemini Robotics-ER (the ER stands for “embodied reasoning”), which the company describes as an advanced visual language model that can “understand our complex and dynamic world.”
As Parada explained, when you’re packing a lunchbox with items on a table in front of you, you need to know where everything is, how to open the lunchbox, how to grasp each item, and where to place it. That’s the kind of reasoning Gemini Robotics-ER is expected to do. It’s designed for roboticists to connect to their existing low-level controllers — the systems that directly drive a robot’s movements — so they can enable new capabilities on top of the model’s reasoning.
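The briefing didn’t detail the integration interface, but the pattern described — a reasoning model that perceives the scene and hands structured targets to an existing low-level controller — might look roughly like the sketch below. Every name here (`GraspPlan`, `embodied_reasoner`, `LowLevelController`) is a hypothetical illustration of that division of labor, not DeepMind’s actual API.

```python
# Hypothetical sketch of the workflow described above: an embodied-reasoning
# model decides *what* to do and *where* things are, while an existing
# low-level controller handles *how* (trajectories, gripper forces, etc.).
# None of these names are DeepMind's real interface.
from dataclasses import dataclass


@dataclass
class GraspPlan:
    object_name: str
    position_xyz: tuple[float, float, float]  # estimated object location
    approach: str                             # e.g. "top-down"


def embodied_reasoner(scene_image: bytes, instruction: str) -> list[GraspPlan]:
    """Placeholder for a call to a reasoning model like Gemini Robotics-ER.

    In the described workflow, the model would locate each item, decide how
    to grasp it, and return structured targets. Stubbed out here.
    """
    raise NotImplementedError("call the vision-language model here")


class LowLevelController:
    """Stand-in for the robot's existing motion controller."""

    def move_to(self, xyz: tuple[float, float, float]) -> None: ...
    def grasp(self) -> None: ...
    def release_at(self, xyz: tuple[float, float, float]) -> None: ...


def pack_lunchbox(image: bytes, controller: LowLevelController) -> None:
    # Assumed, for illustration, that the lunchbox drop-off point is known.
    lunchbox_xyz = (0.4, 0.0, 0.1)
    for plan in embodied_reasoner(image, "pack the lunchbox"):
        controller.move_to(plan.position_xyz)
        controller.grasp()
        controller.release_at(lunchbox_xyz)
```

The point of the split is that the reasoning model never emits motor commands; it only produces spatial, structured plans that whatever controller the roboticist already has can execute.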
In terms of safety, Google DeepMind researcher Vikas Sindhwani told reporters that the company is developing a “layered approach,” adding that Gemini Robotics-ER models “are trained to evaluate whether or not a potential action is safe to perform in a given scenario.” The company is also releasing new benchmarks and frameworks to help further safety research in the AI industry. Last year, Google DeepMind introduced its “Robot Constitution,” a set of Isaac Asimov-inspired rules for its robots to follow.
Google DeepMind is working with Apptronik to “build the next generation of humanoid robots.” It’s also giving “trusted testers” access to its Gemini Robotics-ER model, including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools. “We’re very focused on building the intelligence that is going to be able to understand the physical world and be able to act on that physical world,” Parada said. “We’re very excited to basically leverage this across multiple embodiments and many applications for us.”