Gemini, Google and AI
Follow live updates from Google I/O 2025. Get the latest developer news from the annual conference, where Google is expected to reveal more about its AI tool Gemini.
It's been 13 years since Google announced its Google Glass headset and 10 years since it stopped selling the device to consumers. There have been other attempts to make smart glasses work, but none of them have stuck.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
CNET on MSN: Everything We Learned at Google I/O: AI Mode in Chrome, Gemini Live, XR Glasses and Much More
Google Flow is a new tool that builds on Imagen 4 and Veo 3 to perform tasks like creating AI video clips and stitching them into longer sequences, or extending them, with a single prompt, while keeping them consistent from scene to scene. It also provides editing tools such as camera controls. It's available as part of Gemini AI Ultra.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. This edition is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
Google has launched a new Gemini AI Ultra subscription that costs $250 per month. Here's what you get from the most expensive tier.
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.