In this BONUS episode, we dive deep into the real-world experience of coding with AI. Our guest, Alan Cyment, brings honest perspectives from the trenches—sharing both the frustrations and breakthroughs of using AI tools for software development. From "Pachinko coding" addiction loops to "Mecha coding" breakthroughs, Alan explores what actually works when building software with large language models.
"I bought into the Thermomix coding promise—describe the whole website and it would spit out the finished product. It was a complete disaster."
Alan started his AI coding journey with high expectations, believing he could simply describe a complete application and receive production-ready code. The reality was far different. What he discovered instead was an addictive cycle he calls "Pachinko coding," after Japan's pinball-style gambling machines—repeatedly feeding error messages back to the AI, hoping each iteration would finally work, while burning through tokens and time. The AI's constant reassurances that "this time I fixed it" created a gambling-like feedback loop that left him frustrated and out of pocket, sometimes spending over $20 in API credits in a single day.
"It felt like working with a drunken PhD with amnesia—so wise and so stupid at the same time."
Alan describes the maddening experience of anthropomorphizing AI tools that seem brilliant one moment and completely lost the next. The key breakthrough came when he stopped treating the AI as a person and started seeing it as a function that performs extrapolations—sometimes accurate, sometimes wildly wrong. This mental shift helped him manage expectations and avoid the "rage coding" that came from believing the AI should understand context and maintain consistency like a human collaborator.
"I learned to ask for options explicitly before any coding happens. Give me at least three options and tell me the pros and cons."
Through trial and error, Alan developed practical strategies that transformed AI from a frustrating Pachinko machine into a useful tool (a sketch of how these could be bundled into a conventions file follows the list):
Ask for options first: Always request multiple approaches with pros and cons before any code is generated
Use a clover emoji convention: Have the AI start every response with a consistent marker, so you can tell at a glance when your instructions have fallen out of context
Small steps and YAGNI principles: Request tiny, incremental changes rather than large refactorings
Continuous integration: Demand the AI run tests and checks after every single change
Explicit refactoring requests: Regularly ask for simplification and readability improvements
Take two steps back: When stuck in a loop, explicitly tell the AI to simplify and start fresh
Choose the right tech stack: Use technologies with abundant training data (like Svelte over React Native in Alan's experience)
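Taken together, these practices can be written down once and re-fed to the tool at the start of every session. As an illustration only (the wording below is ours, not Alan's actual setup), a conventions file along these lines could live in the repository:

```markdown
# CONVENTIONS.md — standing instructions for the AI pair
- Start every reply with 🍀 so I can see these conventions are still in context.
- Before writing any code, give me at least three options with pros and cons.
- Make the smallest change that could work (YAGNI); no speculative abstractions.
- Run the tests after every single change and report the results.
- When asked to "take two steps back", stop, simplify, and restart from a clean state.
- Regularly propose refactorings that improve simplicity and readability.
```

With a tool like Aider, such a file can be loaded as read-only context (for example via its --read flag), so the conventions stay in the prompt without the model being able to edit them.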
"When it worked, I felt like I was inside a Lego Mecha robot—the machine gave me superpowers, but I was still the one in control."
Alan successfully developed a birthday reminder app in Swift in just one day, despite never having learned Swift. He made architectural decisions and guided the development without understanding the syntax details. This experience convinced him that AI represents a genuine new level of abstraction in programming—similar to the jump from assembly language to high-level languages, or from procedural to object-oriented programming. You can now think in English about what you want, while the AI handles the accidental complexity of syntax and boilerplate.
"People writing about vibe coding act like it's free. But many people are going to pay way more than they would have paid a developer and end up with empty hands."
Alan provides a sobering cost analysis based on his experience. Using DeepSeek through Aider, he typically spends under $1 per day. But when experimenting with premium models like Claude 3.5 Sonnet, he burned through $5 in just minutes. The benchmark comparisons are revealing: running the same test suite costs $4 with DeepSeek, $16 with DeepSeek R1 plus Sonnet, and $190 with OpenAI's o1. For non-developers trying to build complete applications through pure "vibe coding," the costs can quickly exceed what hiring a developer would cost—with far worse results.
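To see how these daily bills add up, per-token API pricing can be turned into a cost estimate with simple arithmetic. A minimal sketch, using illustrative prices rather than current published rates:

```python
# Back-of-the-envelope cost model for a day of AI-assisted coding.
# Prices are illustrative assumptions, not current published rates.
PRICE_PER_MTOK = {  # USD per million tokens: (input, output)
    "deepseek-chat": (0.14, 0.28),
    "claude-3.5-sonnet": (3.00, 15.00),
}

def session_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one session's prompt and completion traffic."""
    in_rate, out_rate = PRICE_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Fifty Pachinko-style retry rounds, each resending ~20k tokens of context
# and getting back ~2k tokens of "this time I fixed it":
for model in PRICE_PER_MTOK:
    print(f"{model}: ${session_cost(model, 50 * 20_000, 50 * 2_000):.2f}")
```

Under these assumed rates, the same retry loop costs pennies on DeepSeek and several dollars on Sonnet, roughly the order-of-magnitude gap Alan describes.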
"For small, single-purpose scripts that I'm not interested in learning about and won't expand later, the Thermomix experience was real."
Despite the challenges, Alan found specific use cases where AI truly delivers on the "just describe it and it works" promise. Processing Zoom attendance logs, creating lookup tables for video effects, and other single-file scripts worked remarkably well. The pattern: clearly defined context, no need for ongoing maintenance, and output simple enough to verify without deep code inspection. For these Thermomix moments, AI proved genuinely transformative.
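As a concrete example of the pattern, here is the shape of such a single-file script. This is a sketch in which the file name and CSV column headers ("Name", "Duration (Minutes)") are assumptions for illustration, not taken from the episode or from Zoom's actual export format:

```python
# Summarize per-participant attendance from a Zoom-style attendance CSV.
# Column names are assumed for illustration; adjust to the real export.
import csv
from collections import defaultdict

def total_minutes(path: str) -> dict[str, int]:
    """Sum "Duration (Minutes)" per "Name", merging rejoin rows."""
    minutes: dict[str, int] = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            minutes[row["Name"]] += int(row["Duration (Minutes)"])
    return dict(minutes)

if __name__ == "__main__":
    for name, mins in sorted(total_minutes("attendance.csv").items()):
        print(f"{name}: {mins} min")
```

The output is a handful of lines that can be checked by eye against the meeting roster, exactly the "simple enough to verify" property Alan describes.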
"It became way more stable when I switched to Svelte from React Native and Flutter, even following the same prompting practices. The AI is just more proficient in certain tech stacks."
Alan discovered that some frameworks and languages work dramatically better with AI than others, likely due to the amount of training data available. His e-learning platform attempts with React Native and Flutter kept breaking, but switching to Svelte with web-based deployment became far more stable. This suggests a crucial strategy: choose mainstream, well-documented technologies when planning AI-assisted projects.
Alan has completely stopped using traditional search engines, relying instead on LLMs for everything from finding technical documentation to getting recommendations for books based on his interests. While he acknowledges the risk of hallucinations, he finds the semantic understanding capabilities too valuable to ignore. He's even used image analysis to troubleshoot his father's cable TV problems and figure out hotel air conditioning controls.
"My only fear is confirmation bias—but the conclusion I see other experienced developers reaching is that the only way to make LLMs work is by making them use agility. So look at who's dead now."
Alan notes the irony that the AI coding tools that actually work all require traditional software engineering best practices: small iterations, test-driven development, continuous integration, and explicit refactoring. The promise of "just describe what you want" falls apart without these disciplines. Rather than replacing software engineering principles, AI tools seem to validate their importance.
About Alan Cyment
Alan Cyment is a consultant, trainer, and facilitator based in Buenos Aires, specializing in organizational fluency, agile leadership, and software development culture change. A Certified Scrum Trainer with deep experience across Latin America and Europe, he blends agile coaching with theatre-based learning to help leaders and teams transform.
You can link with Alan Cyment on LinkedIn.