Virtual Learning Evaluation - No Excellence Without Impact

So, as we near the end of the #VirtualTrainingMastery series, we need to close the loop by looking back at the objectives we set at the outset and checking we’ve delivered on our promises.

So how do we know if the programme we’ve developed has worked?

Historically, training evaluation consisted of a post-training survey measuring largely irrelevant data: the facilitator’s ‘likeability’ rather than true learning transfer, enjoyment rather than behaviour change.

But if we think back to the scoping that kicked off this design journey, we set SMART objectives – around things like knowledge acquisition, skills and competency improvements, performance uplifts, and increased employee engagement. THIS is what we should be evaluating.

EVALUATION CHALLENGES

Trainers often tell me evaluation is one of the least enjoyable aspects of their job. And I understand why – it’s not easy! There are many obstacles, including: 

  • It’s time and resource intensive, leaving many of us questioning the ROI of tracking ROI. But whilst I would agree that not all training is equal, and we need to take a pragmatic approach to weighing evaluation effort against value, for programmes designed to drive performance surely all stakeholders want to validate that the deliverable is meeting the brief!?
  • As external consultants we don’t necessarily have access to the data we need for meaningful evaluation – either because clients don’t have it, or don’t share it for compliance or competitive reasons
  • It’s complex. Accurately understanding the impact of our training interventions requires us to assess a multitude of learner, course and organisational factors that aren’t always easy to measure or correlate


IS EVALUATION WORTH IT?

Well, as trainers we obviously want to ensure our learning products are adding value, and to understand how we can make them even better. But there’s another reason we need to know: in times of financial instability, budgets for L&D, and for external consultants, are often among the first to get cut. To maintain consistent investment in learning, we need to demonstrate the positive impact it’s having on business performance. Numbers talk.

WHAT WE CAN DO

Given the challenges, I find it useful to separate out the areas we can control, and the ones we can influence, from those we have no control or influence over. (And the good news is that virtual learning actually gives us more things we can control!)

Areas we can control:
  1. It sounds obvious, but you’d be surprised how often it’s overlooked… focus your post-training evaluation on the original objectives! When the goal is awareness, measure things like recall and comprehension; when it’s performance improvement, measure changes in behaviour and competence. The methods of evaluation will vary dramatically too – from performance tracking through to post-training assessments, surveys, and feedback from supervisors and peers.
  2. Build in a baseline assessment of key evaluation metrics prior to the training so you have something to compare post-training results against (a simple uplift comparison is sketched after this list). If possible, run this baseline across both delegates and employees not attending the programme, as there might be other factors influencing behaviour change, unrelated to the training, that need to be picked up to validate genuine impact. (Depending on the client, however, this may fall into the influence or no-control circle.)
  3. Whilst only the client can collate business performance data, there is a wide range of apps and solutions we can use in our virtual learning journey that generate evaluation data points. For example:
    • we can test knowledge at regular intervals – reinforcing learning whilst measuring comprehension and retention
    • simulations bring decision-making processes to light, enabling participants to reflect and receive feedback at various stages of the learning pathway
    • we can include self-rating on soft-skill micro-behaviours to enable learners to benchmark themselves against the wider group
    • (don’t forget to use your learning community as well as your virtual classroom to implement some of these evaluation opportunities!)
    • There is a cost to using these tools, but compare it to the cost of designing an entire programme, starting delivery, and only then realising it doesn’t work!
  4. Where your client has an existing evaluation process, find out the following prior to designing the programme:
    • What questions are in it
    • When the survey is undertaken. Learners are more likely to recall what they have learnt within the learning environment – versus on the job, at home, or when travelling – so knowing that a survey was completed in the learning environment contextualises the results. (However, given the ultimate objective is on-the-job recall and application, I’d always recommend some form of evaluation at this point too – which leads on to the next consideration…)
    • Whether there will be multiple surveys, and at what stages
    • What format it takes and how long it takes to complete
    • Take some time to absorb the questions, objectives, tone and wording. Where you can influence and tailor them, do. Where you can’t, align the way you position course content and recaps to the survey questions to help learners complete it in a meaningful way. And always ask yourself ‘what else?’ If you don’t feel the evaluation plan is sufficient, you can incorporate activities into the programme that evidence learning – such as quizzes, self-assessments and peer evaluation – and create your own data points.
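To make point 2 concrete, here is a minimal sketch, in Python, of the kind of uplift comparison you might run once you have baseline and post-training scores for both delegates and a non-attending control group. All names and numbers here are hypothetical, and the arithmetic is a simple difference-in-differences comparison rather than anything a particular client or tool prescribes:

```python
# Hypothetical baseline vs post-training comparison: delegates against a
# control group of non-attendees. "Scores" could be quiz results,
# self-ratings, or a business metric such as first-time issue resolution.

def mean(values):
    return sum(values) / len(values)

# Illustrative data only: (pre, post) scores per employee, out of 100.
delegates = [(52, 78), (61, 80), (48, 71), (55, 83)]
control = [(54, 58), (60, 63), (50, 52)]

delegate_uplift = mean([post - pre for pre, post in delegates])
control_uplift = mean([post - pre for pre, post in control])

# Difference-in-differences: the uplift attributable to the training, after
# stripping out change that would have happened anyway (seasonality, other
# initiatives, general experience gains).
training_effect = delegate_uplift - control_uplift

print(f"Delegate uplift:  {delegate_uplift:+.1f} points")
print(f"Control uplift:   {control_uplift:+.1f} points")
print(f"Estimated effect: {training_effect:+.1f} points")
```

The control group is what turns ‘scores went up’ into ‘scores went up because of the training’ – which is exactly the distinction stakeholders care about when budgets are under pressure.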


Ultimately, as the programme providers we have a lot to lose if our clients don’t see the value in our results, so sometimes we have to work around the system to demonstrate it.

Areas we can influence:
  • Whilst our client may not be in a position to collate or share performance data with us, that shouldn’t stop us sharing our recommendations on evaluation metrics and methods. By doing this we can inspire and support them to strengthen their evaluation process and take more steps towards learning maturity and high performance. 
  • Where clients run standardised surveys, we should use our expertise to review and suggest improvements that will help generate more meaningful or actionable insights.

EVALUATION BEST PRACTICE GUIDELINES

If you find yourself with total freedom to create your ideal evaluation process, here are some best-practice considerations (in addition to the points above)…

  • As part of scoping, define SMART training objectives with your client – validating that these objectives can actually be measured, and clarifying how and when this will be done.
  • Create a baseline, i.e. measure the issue. For example, if the need is to improve call centre customer satisfaction through first-time agent issue resolution, take the baseline for individuals/teams/the company prior to the training so you have something to compare your training and post-training evaluations with. Or gather learners’ (perceived) awareness levels before the training to be able to assess during- and post-training impact. (If your client can measure the gap, it’s a good indicator they will have the ability and willingness to measure learning’s business impact.)
  • Consider the 12 levers of learning transfer from Dr Ina Weinbauer-Heidel when designing your evaluation strategy (a simple way to turn lever ratings into a readable profile is sketched after the list), e.g.
    1. Measure participants’ desire to put into practice what they have learned (learning effectiveness depends on the strength of this motivation)
    2. Measure participants’ belief in their ability to apply and master the skills they have acquired – preferably at multiple intervals post-training
    3. Measure participants’ ability and willingness to persistently work on implementing their transfer plan
    4. Capture what participants understand is expected of them after the training, to check clarity of expectations
    5. Assess learners’ perceptions of the content’s relevance and importance to their day-to-day work
    6. Measure how, and what, participants implement from what they learn
    7. Monitor learners’ preparation during the training for implementing what they learn
    8. Evaluate learners’ ability to implement by assessing their opportunity, permission, assignment, and resources to apply what they have learned
    9. Measure learners’ time and capacity to apply what they have learned in their daily work
    10. Measure supervisors’ support in promoting and demanding the application of what participants have learned
    11. Track colleagues’ support for the learning
    12. Track participants’ application (or not), organisational recognition, and the associated positive or negative consequences


Source: ‘What Makes Training Really Work: 12 Levers of Transfer Effectiveness’, Dr Ina Weinbauer-Heidel
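If you capture these levers as Likert-scale survey items, a simple aggregation turns the raw responses into a transfer-effectiveness profile that flags the weakest levers first. Here is a minimal sketch in Python; the lever names, scale and threshold are all illustrative assumptions, not part of Weinbauer-Heidel’s model:

```python
# Hypothetical post-training survey scoring: each participant rates
# statements mapped to a transfer lever on a 1-5 Likert scale (5 = best).
from statistics import mean

responses = {
    "transfer motivation":  [4, 5, 3, 4],  # lever 1: desire to apply the learning
    "self-efficacy":        [3, 4, 2, 3],  # lever 2: belief in ability to apply it
    "opportunity to apply": [4, 4, 5, 3],  # lever 8: opportunity/permission/resources
    "supervisor support":   [2, 3, 2, 2],  # lever 10: manager promotes application
}

THRESHOLD = 3.5  # assumed cut-off below which a lever needs attention

# Report levers from weakest to strongest so problem areas surface first.
for lever, scores in sorted(responses.items(), key=lambda kv: mean(kv[1])):
    avg = mean(scores)
    flag = "NEEDS ATTENTION" if avg < THRESHOLD else "ok"
    print(f"{lever:<22} {avg:.2f}  {flag}")
```

Run the same survey at multiple intervals post-training (per lever 2 above) and these per-lever averages become trend lines: a persistently low ‘supervisor support’ score, for instance, points to an organisational fix rather than a course redesign.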

In summary, evaluation may not be easy, but it’s essential for our credibility, and that of our work, in a performance-driven world. And excitingly, virtual learning actually empowers us to build evaluation methods into our learning pathways that help us demonstrate some of this value.

If you’re struggling to demonstrate impact and would like to discuss how virtual learning tech can help, book a no-obligation discovery call with Gaelle.

Gaelle Watson, Founder, Virtual Classroom Expert

NEXT WEEK…

Join us for the final blog in this 12-week series, concluding this wonderful journey of designing for Virtual Learning Mastery by bringing together all the insights that will help you level up your virtual classrooms. To make sure you don’t miss out, follow us on LinkedIn, or sign up to our fortnightly round-up below to get everything delivered to your inbox.

Sign up to our dedicated fortnightly newsletter here:

We do not spam, and you can unsubscribe at any time.
