Today’s episode
Stop applying to AI PM jobs until you understand the fundamentals.
That is not gatekeeping. That is the MIT finding. 19 out of 20 AI pilots fail. The #1 reason? Picking the wrong problem to apply AI to.
Not the wrong model. Not the wrong data. The wrong problem.
Jyothi Nookula has spent 13.5 years in AI. 12 patents. AI PM at Amazon (SageMaker), Meta (PyTorch), Netflix (Developer Platform), and Etsy.
She has hired AI PMs at three of those companies and trained 1,500+ PMs to transition into AI roles.
If you are trying to break into AI PM, this is the one episode to watch.
----
Brought to you by
* Product Faculty: Get $550 off their #1 AI PM Certification with my link
* Amplitude: The market-leader in product analytics
* Pendo: The #1 software experience management platform
* NayaOne: Airgapped cloud-agnostic sandbox for AI validation
* Kameleoon: Prompt-based experimentation for product teams
----
If you want access to my AI tool stack - Dovetail, Arize, Linear, Descript, Reforge Build, DeepSky, Relay.app, Magic Patterns, Speechify, and Mobbin - grab my bundle.
If you want my PM Operating System in Claude Code, click here.
----
Key Takeaways:
1. Two types of AI PM roles exist - 80% are traditional PM roles with AI features added on, where the core product existed before AI. 20% are AI native roles where the product IS AI and the value proposition is impossible without it. Know which type before you apply.
2. The AI PM stack has three layers - Application PMs own user experience (60% of roles, easiest entry point). Platform PMs build tools for other builders (30%). Infra PMs build foundational systems like vector databases and GPU orchestration (10%).
3. 19 out of 20 AI pilots fail from wrong problem selection - AI makes sense for complex pattern recognition, prediction from historical data, and personalization at scale. If explainability is non-negotiable, rules exist, data is limited, or speed is critical, start with heuristics.
4. Most teams overcomplicate their AI technique choice - If you can put the problem in a spreadsheet with inputs and an output to predict, traditional ML is the answer. Perception problems need deep learning. Natural language reasoning needs Gen AI. These are not competitors, they are tools in your toolkit.
5. AI products are fundamentally probabilistic - The same input can produce different outputs. AI PMs must think in quality distributions and acceptable error rates, not binary success vs failure. Data is a first-class citizen, not a nice-to-have.
6. Agents decide, workflows follow steps - Workflows have predetermined sequences with deterministic outcomes. Agents receive goals and independently decide which tools to use. The live n8n demo showed identical tools producing completely different execution patterns.
7. Context engineering is the real production skill - Claude Sonnet has a 200K token context window but that fills fast with knowledge bases, conversation history, and real-time data. Every token costs money. Managing what to load and when directly impacts both quality and cost.
8. Follow the hierarchy before fine-tuning - Prompt optimization first, then context engineering, then RAG. 80% of use cases get solved with RAG. Fine-tuning should only be considered after exhausting all three.
9. Build products not projects - Launch your AI work, get real users, encounter real breakage. That gives you richer interview material than any course certificate. Build an agent, build a RAG system, and build an app that solves a real problem.
10. PM culture at big tech shapes who you become - Amazon PMs spend 40-50% of time writing PRFAQs and six-pagers. Meta PMs live in experimentation and statistical significance. Netflix PMs operate with full autonomy through context over control. Each teaches something different.
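The workflow-vs-agent distinction in takeaway 6 is easy to see in code. Here is a minimal Python sketch (not the n8n demo itself; the tool names and the agent's decision heuristic are illustrative stand-ins for what an LLM would decide):

```python
# Two ways to run the SAME tools: a fixed workflow vs a goal-driven agent.
# Tool names and the decision heuristic are hypothetical, for illustration only.

def search(state):
    state["docs"] = ["doc1", "doc2"]          # pretend retrieval step
    return state

def summarize(state):
    state["summary"] = f"summary of {len(state.get('docs', []))} docs"
    return state

TOOLS = {"search": search, "summarize": summarize}

def workflow(state):
    """Workflow: a predetermined sequence, fixed at design time."""
    for tool in ("search", "summarize"):       # always the same path
        state = TOOLS[tool](state)
    return state

def agent(state, goal="summary", max_steps=5):
    """Agent: given a goal, it decides at runtime which tool to call next."""
    for _ in range(max_steps):
        if goal in state:                      # goal reached, stop early
            return state
        # A real agent would ask an LLM to choose; a stub heuristic decides here.
        tool = "summarize" if "docs" in state else "search"
        state = TOOLS[tool](state)
    return state
```

Same tools, different control flow: the workflow's path never changes, while the agent's path depends on the state it observes, which is why identical tools can produce completely different execution patterns.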
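Takeaway 5's "quality distributions and acceptable error rates" mindset reduces to a simple evaluation loop. A minimal sketch, where the model stub, quality bar, and error budget are all assumed for illustration:

```python
import random

# Evaluating a probabilistic feature: instead of asking "did it work?",
# sample many runs and compare the pass rate to an acceptable error budget.
# The model stub and the 0.7 / 90% thresholds are illustrative assumptions.

random.seed(42)

def model(prompt):
    """Stub for a nondeterministic model: same input, varying output quality."""
    return random.gauss(0.8, 0.1)             # pretend quality score near [0, 1]

def pass_rate(prompt, is_good, n=1000):
    """Fraction of n sampled outputs that clear the quality bar."""
    return sum(is_good(model(prompt)) for _ in range(n)) / n

rate = pass_rate("summarize this ticket", is_good=lambda q: q >= 0.7, n=1000)
ship = rate >= 0.90   # ship only if failures stay within a 10% error budget
```

The binary question "pass or fail" becomes "what fraction of outputs are acceptable, and is that fraction above our bar" - which is the decision an AI PM actually has to make.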
----
Where to find Jyothi Nookula
Related content
Podcasts:
* Frank Lee on Amplitude and MCP
Newsletters:
* The ultimate guide to context engineering
* RAG vs fine tuning vs prompt engineering
PS. Please subscribe on YouTube and follow on Apple & Spotify. It helps!