I’m sitting on the floor in a warm room in Phnom Penh, Cambodia, and I’m listening to a conversation in Khmer, a language that I do not know. Around the room, also sitting on the floor, are farmers and village agricultural leaders, men and women of various ages, listening and learning.
The discourse includes a curious mixture of Khmer dialogue and Western-style flipcharts and diagrams. The two trainers, also Cambodian, interact comfortably with the farmers, and the conversation is frequently punctuated with laughter. An oscillating floor fan helps chase the monsoon humidity from the room.
This is September 2012, and I am in Phnom Penh to evaluate a Train-the-Trainer program with participants from five countries. They learn to train in week one, then apply that learning by training others in week two. This is week two, and the farmers are gaining skills in leadership.
A primary focus of my work is developing a downstream evaluation. Despite its liquid connotations, the term has no connection to the Mekong, which I walked past each morning, or the monsoon-driven deluge that nearly carried away my taxi on the way to the airport.
Instead, it refers to an imperative in evaluating Train-the-Trainer programs: figuring out how to measure the ultimate results, the impact on those "downstream" from the training.
Why is this important?
Because evaluating the training the day after may not mean much. All it can assess is immediate satisfaction and some skills or concepts. More elusive is whether the skills will be applied later, and whether they will bring about real change. In the case of the farmers, real change might include finding sustainable water sources, increasing rice yield, or weaning themselves off pesticides. Downstream, the stakes can be high.
Driving Results: Intention Follows Attention
Capturing downstream results is not only useful as a metric for funders and program managers. A focus on results can help impel participants toward learning and success. As a US Marine colonel observed in a recent meeting, what you inspect, you can expect. When the evaluation lens focuses on results, attention shifts from immediate activities to those results. And while activities are important, outcomes remain the holy grail of programs.
A Downstream Solution
So, given the imperative to evaluate downstream, how do you begin? Surprisingly, you start at the end. Here are the steps in the plan:
1. Start with the downstream results. Have a conversation with learners, preferably those downstream of the program. What challenges are they working on? What do they hope to achieve?
2. Identify downstream metrics. Involve the trainers and preferably the downstream learners in the metrics and the process. What conditions will constitute evidence of results?
3. Develop accessible tools. A ready solution is a survey administered via smartphone using an online platform such as Survey Gizmo.
4. Administer later, but not too much later. Timing is essential. Too early, and the results might not yet be known. Too late, and valuable data may have been lost.
5. Compile the results, whether quantitative, qualitative, or both; analyze the data; and report your findings on the connections between the upstream and the downstream.
Is the process simple? Not really. But under most circumstances it will be feasible, and it will gauge and drive results.
Tell us what you think in the comment section below, and be sure to add your own tips for evaluating the impact of training and development downstream in your own organization.