Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than traditional cloud infrastructure.
We dive deep into the "Shared Responsibility Gap" emerging with managed AI services like AWS Bedrock and OpenAI. Toni speaks about the hidden dangers of default AI architectures and why you should never connect an MCP (Model Context Protocol) server directly to a database.
We discuss the new AI-driven SDLC, where tools like Claude Code can generate infrastructure but also create massive security blind spots if not monitored.
Guest Socials - Toni's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(02:50) Who is Toni De La Fuente? (Creator of Prowler)
(03:50) AI Security vs. Cloud Security: What's the Difference?
(07:20) The Shared Responsibility Gap in AI Services (Bedrock, OpenAI)
(11:30) The "Fifth Party" Risk: Managed AI Access
(13:40) AI Architecture Best Practices: Never Connect MCP to DB Directly
(16:40) Prowler's AI Pillars: Generating Dashboards & Detections
(22:30) The New SDLC: Securing Code from Claude Code & Lovable
(25:30) The "Magic" Trap: Why AI Doesn't Know Your Security Context
(28:30) Top 3 Priorities for Security Leaders (Infra, LLM, Shadow AI)
(30:40) Future Predictions: Why Predicting 12 Months Out is Impossible