How many evaluation sheets have you filled in at the end of a training event? I’ve been part of the training industry for over 15 years and I’ve handed out my share of these ‘happy sheets’. I’m also increasingly confronted with our institutionalized obsession with getting halfway. I’m talking about learning transfer here. I’m talking about the gap between the dream of demonstrating behavioral and performance impact over time and the reality of giving ourselves high-fives over a five-star satisfaction score at the end of the event. I’m talking about how getting halfway just isn’t good enough.

The end game of leadership development is what leaders do afterwards: their behaviour. It isn’t a halfway stop like increased knowledge (not even the very important self-knowledge), or satisfaction, or a test score, or a certificate, or setting goals, or feeling confident we are capable. As important, even essential, as these are in the development process, they are intermediate achievements. They are getting halfway. I just think we can do better than that. Don’t get me wrong: CCL and others in the industry are providing a developmental service. Like any service provider, we should demonstrate the quality of and satisfaction with that service, and satisfaction surveys and net promoter scores do that just fine. But as an industry we are so focused on doing that, that for a mixture of reasons and excuses the evaluation of the developmental part all too often remains rather thin.

What is stopping us as an industry?

  • Split accountability: The ultimate yardstick falls within the scope of neither HR nor its training provider. In corporate training there is the heritage of a ‘belt line’ approach to development that we inherited from the education world. A university’s goal is to issue diplomas (again, a halfway achievement) which serve as entry tickets to the next stage of the belt line: the job market. Similarly, the mental model in corporations is “we from HR get them capable, and then you folks in the business lines use them and make profit”. This split world view leads us to optimize sub-parts that in reality are neither independent nor sequential.
  • Time horizon: There is the complicating factor of time. Typically, the impact of leadership development takes a while to manifest itself and continues over a long period, even into your next employer. Satisfaction scores are what we can measure immediately.
  • Show me the money: Surveying satisfaction (a poor indicator of learning impact) or intent to change behaviour (people are known to over-estimate their capacity to change habits) is very easy and cheap to do. Anything more substantial not only requires collaboration between stakeholders and patience, it also brings extra costs. I’ve never encountered a client who wasn’t interested in measuring learning impact, until it became a budget line.

So what will help us move beyond half-way?

  • Return-on-Expectations: I’ll state the obvious when I say we need to be clear from the beginning on what we expect the impact of the program to be. If we’re after an increase in competency mastery, we can run a before-and-after 360 (e.g. CCL’s Reflection). If we are after more network links between leaders, we can do an Organizational Network Analysis. If we are after more effective leadership behaviours, we can use performance review data, engagement survey data and the input of line managers and direct reports. I’ll go beyond the obvious by stating that if these expectations are not solely learning goals, they shouldn’t be measured with learning evaluation alone. Being enrolled in a program at a prestigious business school might actually be more about rewards and retention than learning. If that is the case, measure it as such.
  • Interdependence: Over the years we have moved ownership of the learning process partly away from HR to the empowered learner. We will also need to move the accountability of learning from sitting squarely with the HR department or the training provider to a joint and interdependent responsibility of individual learners, training providers and the business.
  • Really finding it important: Ultimately we will need to treat measuring the real impact of our development programs as genuinely important. That means walking our talk, and spending the time and money that match that importance.

So next time you receive an evaluation sheet, what will you do?
