DeepMind’s robotics team has introduced three advances aimed at improving robots’ decision-making in real-world environments, with an emphasis on safety, adaptability, and efficiency: the AutoRT system, which pairs a visual language model (VLM) with a large language model (LLM); the “Robot Constitution,” a set of safety-focused prompts; and hardware safeguards such as force threshold limits and a physical kill switch. Together, these have significant implications for the future of advanced robotics.
If the mere thought of robots finding their way around the office without creating chaos sends shivers down your spine, fear not, as DeepMind’s robotics team might have found the key to making this futuristic dream a reality. In a world where “Robot Constitutions” and safety-focused prompts dictate the tasks of our mechanical colleagues, it seems we’re closer to living in a sci-fi movie than we thought.
Imagine this: your trusty robot assistant, armed with a camera, a robot arm, and a mobile base, zipping past humans and sharp objects, maneuvering around the office like a pro. DeepMind’s AutoRT system, with its nifty visual language model and large language model, not only understands its surroundings but also has the knack for suggesting creative tasks – think setting the table or cracking open a bag of chips.
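A minimal sketch of how such a pipeline could be wired together, assuming a simple sense–propose–filter loop (every function name, rule, and threshold below is hypothetical and illustrative, not DeepMind’s actual code or API):

```python
# Hypothetical AutoRT-style decision loop: a VLM describes the scene,
# an LLM proposes tasks, a "constitution" of rules filters them, and
# hardware safety checks (kill switch, force limit) gate everything.

def describe_scene(camera_frame):
    # Stand-in for a visual language model captioning the camera feed.
    return "a table with a bag of chips; a person standing nearby"

def propose_tasks(scene_description):
    # Stand-in for an LLM suggesting candidate tasks from the description.
    return ["set the table", "open the bag of chips", "move the person"]

# Illustrative "Robot Constitution": each rule rejects unsafe tasks.
CONSTITUTION = [
    lambda task: "person" not in task,  # e.g. never act on humans
]

def permitted(task):
    return all(rule(task) for rule in CONSTITUTION)

def autort_step(camera_frame, force_reading, kill_switch_pressed,
                force_limit=10.0):
    # Hardware safety layers run before any task is even considered.
    if kill_switch_pressed or force_reading > force_limit:
        return None  # halt: operator stop or excessive joint force
    scene = describe_scene(camera_frame)
    candidates = [t for t in propose_tasks(scene) if permitted(t)]
    return candidates[0] if candidates else None

# Normal operation picks a permitted task; tripping either safeguard halts.
print(autort_step(camera_frame=None, force_reading=2.0,
                  kill_switch_pressed=False))
print(autort_step(camera_frame=None, force_reading=2.0,
                  kill_switch_pressed=True))
```

The key design point mirrored here is layering: the physical safeguards sit outside the learned components, so a bad task suggestion can never bypass the kill switch or force limit.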