Is 2026 the year AI finally has to prove it is worth the investment?
In this episode, I'm joined by Chris Riche-Webber, VP of Business Intelligence and Analytics at SmartRecruiters, to explore why so many AI and agentic AI initiatives stall after the pilot phase and what separates the projects that scale from the ones that quietly disappear. With Gartner predicting that more than 40 percent of agentic AI programs could be cancelled by 2027, Chris brings a pragmatic, data-led perspective on what is really happening inside organizations as the hype meets operational reality.
We talk about the fundamentals that have not changed despite the new technology: influence, clearly defined problems, measurable impact, and adoption still determine success, yet they are often overlooked in the rush to deploy the latest tools. Chris explains why "good vibes" are no longer enough in front of a CFO, how to baseline outcomes properly, and why ownership of results is one of the most common missing pieces in enterprise AI programs.
A big part of the conversation focuses on what Chris calls the "agent washing" problem. Just as products are sometimes marketed with fashionable labels that do not reflect their real value, many solutions are being positioned as agentic without delivering true autonomy or business outcomes. We discuss how leaders can cut through the noise by asking better questions, aligning technology to specific use cases, and recognizing when simple automation is the right answer.
Trust, adoption, and measurable ROI emerge as the three signals that determine whether an AI initiative survives. Chris shares a clear framework for defining these signals in a way that is consistent, comparable over time, and meaningful to the executive team. We also explore how connecting talent decisions to revenue, productivity, and retention changes the conversation, especially in the context of SmartRecruiters' broader SAP ecosystem and the opportunity to link people data directly to business performance.
This is a conversation about moving from experimentation to accountability, from buying narratives to solving real problems, and from technology-first thinking to outcome-first leadership.
As the window for easy wins closes and the demand for proof of value grows, will your AI strategy be remembered as a pilot that generated excitement, or as an initiative that delivered measurable business impact?