From classroom to corporate: the Kirkpatrick model of evaluation

A teacher-to-instructional-designer primer on Kirkpatrick's four levels, and why you've probably been doing most of this already, just without the vocabulary.

If you’re making the move from teaching into instructional design, you’re not starting from scratch. A big chunk of the craft — especially the evaluation of learning effectiveness — is already in your practice. You just haven’t been calling it by its industry name.

The industry name is the Kirkpatrick model, and it’s the most widely used evaluation framework in corporate L&D. Here’s what it says, and how to see your classroom practice mirrored in it.

The four levels

  1. Reaction. Did participants find the experience engaging, relevant, satisfying? This is the most surface-level measure — and the one that’s easiest to confuse with the others.
  2. Learning. Did participants actually learn the content? Measured through quizzes, exams, practical demonstrations.
  3. Behavior. Did they change what they do on the job? This is where training either earns its keep or reveals itself as theatre.
  4. Results. Did the training move the organisation’s numbers — sales, CSAT, quality, compliance, ROI?

Most training programmes measure level 1 obsessively, level 2 occasionally, level 3 rarely, and level 4 almost never. The further down the model you get, the harder and more expensive it is to measure. Also: the more honest it gets.

What this looks like as a teacher

You’ve been running an informal Kirkpatrick loop for years:

  1. Reaction → student feedback. You read the room. You ask what worked. You notice which explanations landed and which didn’t.
  2. Learning → assessments. Quizzes, tests, practical assignments — same instrument family, same question: did they get it?
  3. Behavior → classroom participation and homework. You watch whether students apply the idea in unstructured contexts, not just in the exam.
  4. Results → overall progress. Grades, skill growth, behavioural change over the year.

The Kirkpatrick model isn’t far from what you already do. What it gives you is vocabulary and rigour — a way to separate levels that classroom practice tends to blur, and a way to defend your evaluation choices to stakeholders who aren’t trained educators.

You’re already closer than you think

The harder part of the transition isn’t learning the model. It’s learning the context — stakeholders, ROI conversations, the corporate dialect of “outcomes.” The evaluation craft itself travels.


Further reading

  • Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels. Berrett-Koehler Publishers.
  • Bates, R. (2004). A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning, 27(3), 341–347.