John Gillilan, our official Apple correspondent, returns to Two Voice Devs to unpack the major announcements from Apple's latest Worldwide Developers Conference (WWDC). After failing to ship the ambitious "Apple Intelligence" features promised last year, how did Apple address the elephant in the room? We dive deep into the new "Foundation Models Framework," which gives developers unprecedented access to Apple's on-device LLMs. We explore how features like structured data output with the "Generable" macro, "Tools" for app integration, and trainable "Adapters" are changing the game for developers. We also touch on the modernized on-device speech-to-text, "Visual Intelligence," "Swift Assist" in Xcode, and the mysterious "Private Cloud Compute." Join us as we analyze Apple's AI strategy, the internal reorgs shaping its product future, and the competitive landscape with Google and OpenAI.
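For listeners who want a concrete picture of the structured-output idea we discuss in the Foundation Models segment, here is a rough Swift sketch based on the API names Apple announced. The TripIdea type, its fields, and the prompt are invented for illustration, and exact signatures may differ from the shipping framework.

import FoundationModels

// Sketch only: a "Generable" type the on-device model can fill in
// directly, instead of returning free-form text for the app to parse.
@Generable
struct TripIdea {
    @Guide(description: "A short, catchy title for the trip")
    var title: String

    @Guide(description: "Three activities to do on the trip")
    var activities: [String]
}

func suggestTrip() async throws -> TripIdea {
    // LanguageModelSession talks to the system's on-device foundation model.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest a weekend trip near Lisbon.",
        generating: TripIdea.self
    )
    return response.content
}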
[00:00:00] Welcome back, John Gillilan!
[00:01:00] What was WWDC like from an insider's perspective?
[00:06:00] Apple's big miss: What happened to last year's AI promises?
[00:12:00] The new Foundation Models Framework
[00:16:00] Structured data output with the "Generable" macro
[00:19:00] Extending the LLM with "Tools"
[00:22:00] Fine-tuning with trainable "Adapters"
[00:28:00] Modernized on-device Speech-to-Text
[00:29:00] "Visual Intelligence" and app integration
[00:32:00] The powerful "call model" block in Shortcuts
[00:36:00] Swift Assist and BYO-Model in Xcode
[00:39:00] Inside Apple's big AI reorg
[00:42:00] The Jony Ive / OpenAI hardware mystery
[00:45:00] How Apple, Google, and OpenAI will compete and collaborate
#Apple #WWDC #AI #AppleIntelligence #FoundationModels #LLM #OnDeviceAI #Swift #iOSDev #Developer #TechPodcast #TwoVoiceDevs #Siri #SwiftAssist #OpenAI #GoogleGemini #GoogleAndroid