Evaluating Learning Impacts
While designing a useful, interactive course takes a great deal of work, the job is not simply 'taken care of' once the course is built, because instructors must still find out:
- Did the training make an impact?
- Are the learners using their new skills at work?
Great Courses are Just the Start
We might design a professional course with the aim of encouraging learners to enjoy the training and finish it: a good start. However, in his Masterclass, Geoff Rip of Training that Works points out that the formal instruction part of a course is not everything. It’s the practice, the discussions with peers and (ideally) professional mentoring that bring about the real transformation.
While employers often expect a training outcome of ‘proficient’ or ‘expert’, learners often rate themselves as still ‘beginners’ or ‘novices’! Online, it’s harder to inspire learners to finish, so it’s even more important to find out whether they absorbed the material and moved along the continuum from beginner to proficient.
Given this common mismatch between what a training course achieves and what leaders expect, a measurement process for each course is imperative.
There are many models to measure the learner’s outcomes and the benefits of training. You may have heard of Bloom’s Taxonomy, for instance. (ILP Members can see the Evaluate your Learning Impact video to better understand these models.)
Check on Transfer of Knowledge
It’s also important to find out what barriers there are to applying the newly acquired skills and knowledge to real life.
Gathering feedback (via personal interviews or a post-course survey) helps the educator assess how well the learning is being applied in practice. “This means asking pertinent questions and getting the nitty-gritty results on various topic areas. It also means reviewing the data to see if participant feedback ratings met the training objectives”, says Luke Challenor.
Analysing the feedback results is a good start towards improving future training and its applicability. According to Challenor, checks can be made in three key areas:
- Did the overall results meet the learning objectives?
- Did any specific topic stand out as worse or better than expected? What could be causing that?
- Did any individuals score much lower than the others? What could be causing that?
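For teams that collect numeric survey ratings, the three checks above can be sketched in a few lines of analysis. The data shape, topic names, target score, and outlier thresholds below are illustrative assumptions, not something prescribed by Challenor; adapt them to your own survey scale and objectives.

```python
# Sketch of the three feedback checks on post-course survey ratings.
# Hypothetical data shape: each learner rates each topic on a 1-5 scale.
# The target score and flag thresholds are illustrative assumptions.
from statistics import mean, stdev

TARGET = 4.0  # assumed rating that the learning objectives call for

ratings = {  # learner -> {topic: rating}
    "alice": {"intro": 5, "tooling": 4, "practice": 3},
    "bob":   {"intro": 4, "tooling": 2, "practice": 3},
    "cara":  {"intro": 5, "tooling": 3, "practice": 4},
}

topics = sorted({t for r in ratings.values() for t in r})

# 1. Compare overall results to the learning objectives.
overall = mean(score for r in ratings.values() for score in r.values())
print(f"Overall {overall:.2f} vs target {TARGET}: "
      f"{'met' if overall >= TARGET else 'not met'}")

# 2. Flag topics that scored notably worse or better than the rest
#    (more than one standard deviation from the mean of topic averages).
topic_means = {t: mean(r[t] for r in ratings.values()) for t in topics}
tm, ts = mean(topic_means.values()), stdev(topic_means.values())
for topic, avg in topic_means.items():
    if abs(avg - tm) > ts:
        print(f"Topic outlier worth investigating: {topic} ({avg:.2f})")

# 3. Flag individuals whose results are much lower than the group's.
learner_means = {name: mean(r.values()) for name, r in ratings.items()}
lm, ls = mean(learner_means.values()), stdev(learner_means.values())
for name, avg in learner_means.items():
    if avg < lm - ls:
        print(f"Learner who may need follow-up: {name} ({avg:.2f})")
```

The point of the sketch is the structure, not the statistics: one overall comparison against the objective, then per-topic and per-learner outlier scans that prompt the "what could be causing that?" questions.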
This careful review of all feedback will help you adapt and improve the content, and perhaps remove parts that weren’t used: a truly Kaizen approach.
How many ILP members create courses?
In the ILP Rates Survey (2022), about 29% of L&D specialists said they delivered via Instructional Design.