A common dilemma faced by many is how to apply training evaluation to online training and e-Learning. First and foremost, the Kirkpatrick Model is flexible enough to be adapted to any type of learning situation, including mobile learning (m-learning), informal learning and even social learning. The main point is to determine which levels are appropriate to evaluate and to select the tools that will work in the given learning situation.
For Level 1 (Reaction), we can use built-in features within the platform, such as polls, for immediate feedback on the quality of course design, presentation, and delivery. Surveys can also be administered via stand-alone tools outside the learning platform. To take advantage of the technology, we can even extend the evaluation of learners’ reactions and feedback by providing space for learners to share their experience through discussion threads, bulletin boards, forums, blogs, and posts with specific hashtags on social media.
Online training and e-Learning generally make Level 2 (Learning) evaluation much simpler, since the technology allows tests to be embedded into the learning platform itself. With automatic scoring, some stand-alone tools can even generate instant reports, which reduces the time, effort and cost of crafting and administering pre- and post-training tests. It is also a good idea to incorporate tests within the course module to help learners continuously monitor their own progress towards the learning objectives.
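To illustrate what this Level 2 data can look like once scores are exported from the platform, below is a minimal sketch that compares pre- and post-test results to estimate the average learning gain. The file name and column names are hypothetical assumptions for illustration; any real export will differ from platform to platform.

```python
import csv

# Minimal sketch of a Level 2 (Learning) check: compare pre- and post-test
# scores exported from the learning platform. The file name and the column
# names ("pre_score", "post_score") are hypothetical assumptions.
def summarise_learning_gain(path="test_scores.csv"):
    gains = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gains.append(float(row["post_score"]) - float(row["pre_score"]))

    if not gains:
        print("No learner records found.")
        return

    average_gain = sum(gains) / len(gains)
    improved = sum(1 for g in gains if g > 0)
    print(f"Learners evaluated: {len(gains)}")
    print(f"Average score gain: {average_gain:.1f}")
    print(f"Learners who improved: {improved} of {len(gains)}")


if __name__ == "__main__":
    summarise_learning_gain()
```

Numbers like these only become meaningful once they are read against the learning objectives and the contributing factors discussed below.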
With all the technological advantages of online training and e-Learning platforms available, it is also worth remembering that “application” (Level 3: Behaviour) and “impact” (Level 4: Result) occur outside the online and e-Learning course, so their evaluation is not necessarily done within the platform itself. This means that traditional evaluation methods (observation, comparison against historical data, etc.) are still applicable for both Level 3 and Level 4.
The Dangers of Evaluating Online Training and E-Learning
To avoid some of the common traps we tend to fall into when evaluating online training and e-Learning, here are several key points to consider:
- Let the platform collect the data but apply human wisdom to analyse it. The platform’s technology makes data collection easier, but to really grasp the whole picture we need to consider the relationships among several contributing factors.
- Do not delay the process. Online learning thrives on speed and instant responses; hence we should also ensure that the evaluation occurs immediately.
- Effects at Level 3 and Level 4 take time. Evaluating behaviour and impact on results should be done some time after the online training or e-Learning course is completed, not immediately.
- Some questions are faulty. When a high number of learners get the same question wrong, try to assess whether there was really a failure to learn or whether the test question itself was flawed; if so, the question may need to be revised before the next round of data collection (see the short sketch after this list).
- It takes two to tango. Some work performance is not entirely the result of a single individual’s ability. Consider evaluating the entire group to get a better picture of the real situation.
- Convenience and medium of choice. It may be obvious, but things can get overly complicated very quickly when done online, especially when personal preferences come into play. For this reason, it is better to stick to the platform in use and avoid moving across several platforms for different purposes (e.g., learning on platform A, quizzes on website B, and evaluation via application C).
- Avoid additional tasks. As with the previous point, try not to burden learners beyond their training time just for the sake of filling in yet another evaluation. Keep it simple and, where possible, done in one sitting.
- Misunderstanding between function and evaluation. Completing an online training or e-Learning course with evaluation data collection functions embedded in the programme is not the same as conducting an evaluation. The data collection mechanism may work just fine, but is the data collected relevant, and does it meet the purpose of the evaluation?
- Balance between anonymity and accountability. It is common human behaviour for people to act differently when they can remain anonymous. Feedback and criticism may get out of hand where people are allowed to hide their real identity. On the other hand, they may hide their true opinions or remarks if they know that they will be held accountable for anything they say.
- Only the delivery changes, not the intent. When we move from the physical face-to-face classroom to online, it is good to be reminded that the change only occurs in the delivery of learning (from physical methods to virtual methods). The purpose of training remains the same: to meet the objectives and ensure change takes place, and this is what needs to be evaluated for effectiveness.
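Relating to the point above about faulty questions, one simple way to identify candidates for revision is to calculate the share of learners who got each question wrong and flag the outliers. The sketch below is a minimal example only; the exported file, column names and the 70% threshold are hypothetical assumptions rather than fixed rules.

```python
import csv
from collections import defaultdict

# Minimal sketch for spotting possibly flawed test questions: flag any
# question that a large share of learners answered incorrectly.
# The file name, the column names ("question_id", "is_correct") and the
# 70% threshold are hypothetical assumptions to be adapted to your data.
def flag_suspect_questions(path="question_responses.csv", threshold=0.7):
    wrong = defaultdict(int)
    total = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            qid = row["question_id"]
            total[qid] += 1
            if row["is_correct"].strip().lower() not in ("1", "true", "yes"):
                wrong[qid] += 1

    for qid in sorted(total):
        error_rate = wrong[qid] / total[qid]
        if error_rate >= threshold:
            print(f"Question {qid}: {error_rate:.0%} answered incorrectly "
                  "- consider reviewing the question before the next run.")


if __name__ == "__main__":
    flag_suspect_questions()
```

Anything flagged this way is only a prompt for a human to review the question; it does not by itself tell us whether learning failed.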
Do get in touch with us if you are keen to explore this topic further.
