From the YT live archives: Google just dropped Gemini 3 Flash, a model that outperforms Gemini 2.5 Pro (their previous flagship) while running 3x faster at less than a quarter of Gemini 3 Pro's cost. It's frontier-level reasoning at Flash-level speed, and it's rolling out globally right now.
We're sitting down with Logan Kilpatrick from Google DeepMind to explore what this actually means for developers, knowledge workers, and anyone trying to figure out how AI fits into their workflow.
What we'll cover:
🔥 Live demos – Logan will show us Gemini 3 Flash in action, from coding to multimodal understanding
⚡ What's now possible – Use cases that weren't practical with previous models (or weren't possible at all)
🛠️ Building together – We might wire up a tool live if Logan's game (we've got ideas)
💰 Intelligence too cheap to meter – We'll dig into the economics: when AI gets this powerful and this affordable, does it change the hiring calculus?
On that last point: right now, the data shows AI raising wages in AI-impacted roles, because workers who use AI effectively can command higher salaries. But what happens when frontier intelligence costs $0.50 per million tokens? When does "intelligence as a commodity" flip from "AI makes workers more valuable" to "why hire a human?" We'll see if we can get Logan's take on this!
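To make "too cheap to meter" concrete, here's a quick back-of-the-envelope calculation. The $0.50-per-million-token price comes from the hypothetical above, and the per-task token counts are illustrative assumptions, not published Gemini pricing:

```python
# Back-of-the-envelope: what does a knowledge-work task cost at
# hypothetical frontier pricing? All numbers here are illustrative.

PRICE_PER_MILLION_TOKENS = 0.50  # hypothetical price, in USD

def task_cost(tokens_used: int, price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """Cost in USD for a task that consumes `tokens_used` tokens."""
    return tokens_used / 1_000_000 * price_per_million

# A research/summarization task that fills an entire 1M-token context:
print(f"1M-token task:  ${task_cost(1_000_000):.2f}")  # $0.50
# A typical coding-assistant exchange (~20k tokens):
print(f"20k-token task: ${task_cost(20_000):.4f}")     # $0.0100
```

At these prices, a full-context-window task costs about as much as a gumball, which is the crux of the hiring-calculus question.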
Key specs on Gemini 3 Flash:
Outperforms Gemini 2.5 Pro across most benchmarks
3x faster than 2.5 Pro
Less than 1/4 the cost of Gemini 3 Pro
1M token context window
Advanced visual and spatial reasoning with code execution
78% on SWE-bench Verified (agentic coding)
Rolling out globally in Gemini app, AI Mode in Search, and developer platforms
Logan has been at the center of Google's push to make frontier AI accessible to millions of developers. If you're shipping products, building with AI, or just trying to wrap your head around where this is all going, this conversation will give you clarity.