Google introduces real-time AI video features in Gemini

theverge.com

Google has started rolling out new AI features for its Gemini Live service that allow Gemini to "see" what is on a user's smartphone screen or what they point their camera at, Google spokesperson Alex Joseph confirmed in an email. The features are based on Project Astra, which Google first showcased almost a year ago.

A Reddit user reported spotting the screen-reading feature on their Xiaomi phone and shared a video demonstrating Gemini reading what was on screen. The other key feature in this release is live video interpretation, which lets Gemini analyze a live camera feed and answer questions about what it sees. In a video released by Google, for example, a user asks Gemini for advice on choosing a paint color for pottery.

The rollout highlights Google's strong position in AI assistants. While Amazon is still planning to release an upgraded version of Alexa and Apple has delayed its new Siri, Gemini's features are already reaching users. Samsung's Bixby remains an option, but Gemini is the default assistant on Samsung phones.


With a significance score of 3.6, this news ranks in the top 13% of today's 18,274 analyzed articles.

