By Thomas Bohné
In 2023, U.S. companies spent over $101 billion on corporate learning and development (L&D). You’d think with that level of investment, leaders would know exactly what’s working. Yet in my research with 23 L&D leaders across Europe, Asia, and the Americas, a consistent theme emerged: we don’t know — and the tools we’re told to use to find out often can’t deliver.
One German L&D director summed it up bluntly:
“We have 100% completion on mandatory training — and no one can tell you if it changed a single thing.”
This isn’t negligence. It’s the result of decades of well-intentioned but unrealistic evaluation advice — much of it repeated in executive playbooks and even in these pages.
For years, HBR and the broader management literature have urged companies to measure training impact like a CFO: calculate Return on Investment (ROI), use Kirkpatrick’s Four Levels, or at least track Net Promoter Score (NPS) to gauge value.
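Part of these metrics' appeal is that the underlying arithmetic is trivial. A minimal sketch of the textbook formulas, using entirely hypothetical figures rather than data from the interviews:

```python
def training_roi(benefit, cost):
    """Classic ROI: net benefit expressed as a percentage of program cost."""
    return (benefit - cost) / cost * 100

def nps(promoters, passives, detractors):
    """Net Promoter Score: % promoters minus % detractors (range -100 to 100)."""
    total = promoters + passives + detractors
    return promoters / total * 100 - detractors / total * 100

# Hypothetical numbers: a $100k program claimed to yield $150k in benefits,
# and a survey with 70 promoters, 20 passives, 10 detractors.
print(training_roi(150_000, 100_000))  # 50.0  (a "50% ROI" slide)
print(nps(70, 20, 10))                 # 60.0  (a healthy-looking NPS)
```

The formulas are easy; the hard part, as the interviews suggest, is that the inputs (especially "benefit") are rarely knowable for training programs.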
In theory, this is tidy. In practice, my interviews revealed a different picture: ROI in training is not like ROI in capital projects. The measurement environment is constrained by law, technology, and the messy realities of human behavior.
The danger isn’t just wasted effort on flawed metrics — it’s the false confidence they create. Leaders see a healthy NPS, a completion rate in the high 90s, or a tidy ROI slide and assume the program is delivering impact. This “mirage effect” hides the real performance gaps and prevents targeted improvement.