Gemini, Google and AI
Top News
Google says the release version of Gemini 2.5 Flash is better at reasoning, coding, and multimodality while using 20–30 percent fewer tokens than the preview version. It is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
OpenAI has quietly updated the UI of the ChatGPT web app, giving it a cleaner look while keeping all of its AI tools in place.
Gemini and other AI models can now scour the video footage we keep in our apps: Here's why, what they're learning, and how it may be able to help you.
Explore more
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Google has launched a new Gemini AI Ultra subscription that costs $250 per month: Here's what you get from the most expensive tier.
On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.