
Google DeepMind releases Gemini Robotics-ER 1.6 with instrument-reading and multi-view scene understanding

Google DeepMind launched Gemini Robotics-ER 1.6 on April 14, 2026, adding three capabilities to its embodied reasoning model. Instrument reading now reaches 93% accuracy on gauges, digital readouts, and sight glasses, up from 23% in the prior version, driven by agentic vision with code execution. Multi-view success detection works across overhead and wrist-mounted cameras, and enhanced spatial reasoning improves object counting and relational tasks. The model outperforms both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash on spatial benchmarks, shows improved safety compliance on adversarial scenarios, and is available to developers via the Gemini API and Google AI Studio.
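Since the model is exposed through the Gemini API, an instrument-reading query would pair a camera frame with a text prompt in a standard `generateContent` request. The sketch below only assembles the request body; the model identifier `gemini-robotics-er-1.6` is an assumption (check Google AI Studio for the published name), and the field names follow the Gemini REST API's camelCase conventions.

```python
import json

# Assumed model id; verify the exact identifier in Google AI Studio.
MODEL = "gemini-robotics-er-1.6"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

def build_request(image_b64: str, prompt: str) -> dict:
    """Assemble a generateContent body pairing one camera frame with a prompt.

    image_b64 is a base64-encoded JPEG; the API accepts inline image data
    alongside text parts in a single content turn.
    """
    return {
        "contents": [
            {
                "parts": [
                    {"inlineData": {"mimeType": "image/jpeg", "data": image_b64}},
                    {"text": prompt},
                ]
            }
        ]
    }

body = build_request(
    "<base64-jpeg>",
    "Read the pressure gauge and report the value with units.",
)
print(json.dumps(body, indent=2))
```

The same body could target a wrist-mounted or overhead view by swapping the image part, which is how a multi-view success check would feed the model one frame per camera.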