Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approach to security.
Bryan explains why 70% of Model Context Protocol (MCP) servers are running locally on developer laptops and why trying to block them is a losing battle. Instead, he advocates for a "coaching" approach, intervening in real-time to guide engineers rather than stopping their flow.
We dive deep into the technical realities of MCP, why it's becoming the standard for connecting AI to data, and the security risks of connecting it to production environments. Bryan also shares his prediction that Small Language Models (SLMs) will eventually outperform general-purpose giants like ChatGPT for specific business tasks.
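For context on the local MCP servers discussed in the episode, here is a minimal sketch of what one looks like. It assumes the official Python MCP SDK (the `mcp` package); the server name, tool, and placeholder logic are illustrative only and are not from the episode.

```python
# Minimal sketch of a local MCP server, assuming the official Python MCP SDK
# ("mcp" package). The tool and its logic are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-notes")


@mcp.tool()
def search_notes(query: str) -> str:
    """Search a developer's local notes (stub logic for illustration)."""
    return f"No results for {query!r} (demo stub)"


if __name__ == "__main__":
    # stdio transport: the server runs on the developer's laptop and is spawned
    # directly by the AI client, which is why this kind of usage tends to be
    # invisible to network-level monitoring such as firewall logs.
    mcp.run(transport="stdio")
```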
Guest Socials - Bryan's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(01:55) Who is Bryan Woolgar-O'Neil?
(03:00) Why AI Adoption Stops at Experimentation
(05:15) The "Shadow AI" Blind Spot: Firewall Stats vs. Reality
(08:00) Is AI Security Fundamentally Different? (Speed & Scale)
(10:45) Can Security Ever Be "Developer Friendly"?
(14:30) What is MCP (Model Context Protocol)?
(17:20) Why 70% of MCP Usage is Local (and the Risks)
(21:30) The "Coaching" Approach: Don't Just Block, Educate
(25:40) Developer First: Permissive vs. Blocking Cultures
(30:20) The Rise of the "Head of AI" Role
(34:30) Use Cases: Workforce Productivity vs. Product Integration
(41:00) An AI Security Maturity Model (Visibility -> Access -> Coaching)
(46:00) Future Prediction: Agentic Flows & Urgent Tasks
(49:30) Why Small Language Models (SLMs) Will Win
(53:30) Fun Questions: Feature Films & Pork Dumplings