By Thomas Bohné
For decades, executives have been told to measure training success like they’d measure a capital project: calculate Return on Investment (ROI), apply Kirkpatrick’s Four Levels, or at least track a Net Promoter Score (NPS).
In theory, that’s neat and tidy. In practice, my research with 23 L&D leaders across Europe, Asia, and the Americas shows it’s often impossible, and that in chasing “perfect” ROI, organizations miss the very data they need to improve.
One European L&D manager put it bluntly:
“We’ve spent more hours building ROI models than we have actually fixing our programs.”
In reality, three systemic constraints block ROI-first thinking from working in the field:
Even when ROI numbers are produced, they often tell a flattering but false story. A program with 100% completion and an NPS of 85 can still fail to change behavior or business outcomes.
The real reason training impact gets lost isn’t a lack of frameworks; it’s that measurement intentions fall apart at predictable points in execution. Across my study, the same breakpoints appeared again and again:
Perceived vs. Measured Success
Executives declare victory based on attendance or satisfaction before actual results are known.
Desired vs. Feasible Evaluation
Leaders want business impact data, but systems, laws, and bandwidth make it impractical.