Evaluating the Training Effort

With today’s emphasis on measuring HR management’s financial impact, it is crucial that the employer make provisions to evaluate the training program. There are basically three things to measure: participants’ reactions to the program; what (if anything) the trainees learned from it; and the extent to which their on-the-job behavior changed as a result. In one survey of about 500 organizations, 77% evaluated their training programs by eliciting reactions, 36% evaluated learning, and about 10% to 15% assessed behavior change and/or results.

There are actually two basic issues to address when evaluating training programs. The first is the design of the evaluation study and, in particular, whether to use controlled experimentation. The second is: What should we measure?

Designing the Study:

In evaluating the training program, the question is not just what to measure, but how to design the evaluation study. The time series design is one option. Here, you take a series of measures before and after the training program. This can provide at least an initial reading on the program’s effectiveness.
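
As a concrete illustration, here is a minimal sketch (in Python, using invented weekly sales figures and hypothetical variable names, not data from the text) of the before-and-after comparison a time series design relies on:

```python
# Hypothetical time series reading: several performance measures taken
# before the training program and several taken after it.
weekly_sales_before = [42, 45, 44, 43]   # made-up pre-training measures
weekly_sales_after = [48, 50, 49, 51]    # made-up post-training measures

avg_before = sum(weekly_sales_before) / len(weekly_sales_before)
avg_after = sum(weekly_sales_after) / len(weekly_sales_after)

# A rough initial reading on effectiveness: the before-to-after change.
change = avg_after - avg_before
print(f"Average before: {avg_before:.1f}, after: {avg_after:.1f}, change: {change:+.1f}")
```

Such a comparison only gives an initial reading; by itself it cannot rule out other explanations for the change, which is where controlled experimentation comes in.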

Controlled experimentation is a second option and, strictly speaking, is the evaluation process of choice. A controlled experiment uses both a training group and a control group that receives no training. Data (for instance, on quantity of sales or quality of service) are obtained both before and after the training group is exposed to training, and before and after a corresponding work period for the control group. This makes it possible to determine the extent to which any change in performance in the training group resulted from the training itself, rather than from some organization-wide change, such as a raise in pay, that would have affected employees in both groups equally.
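
To make that logic concrete, the following sketch (hypothetical scores and variable names, invented for illustration) subtracts the control group’s before-to-after change from the training group’s change, so that an organization-wide effect that hits both groups is netted out:

```python
# Hypothetical controlled-experiment data: average performance scores
# measured before and after the training period for each group.
trained_before, trained_after = 60.0, 72.0   # group that received training
control_before, control_after = 61.0, 64.0   # control group, no training

trained_change = trained_after - trained_before   # change in the training group
control_change = control_after - control_before  # change from organization-wide factors
                                                  # (e.g., a raise in pay affecting everyone)

# What remains after netting out the control group's change is the
# portion of the improvement attributable to the training itself.
training_effect = trained_change - control_change
print(f"Estimated training effect: {training_effect:+.1f} points")
```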

In general, surveys suggest that fewer than half of the responding companies attempted to obtain before-and-after measures for trainees, and the number of organizations using control groups was negligible. However, with tools such as HR Scorecards and time series studies, it is both possible and practical to estimate a training program’s measurable impact. At a minimum, the HR manager should use an evaluation form like the one shown to evaluate the training program.

Training Effects to Measure:

Four basic categories of training outcomes can be measured:

1. Reaction: Evaluate trainees’ reactions to the program. Did they like the program? Did they think it worthwhile?
2. Learning: Test the trainees to determine whether they learned the principles, skills, and facts they were supposed to learn.
3. Behavior: Ask whether the trainees’ on-the-job behavior changed because of the training program. For example, are employees in the store’s complaint department more courteous toward disgruntled customers?
4. Results: Probably most important, ask: What final results were achieved in terms of the training objectives previously set? For example, per the HR Scorecard, did the number of customer complaints about employees drop? Did the percentage of calls answered with the required greeting rise? Reactions, learning, and behavior are important, but if the program doesn’t produce measurable results, then it probably hasn’t achieved its goals. If so, the problem may lie in the program. But remember that the results may be poor because the problem could not be solved by training in the first place.

Evaluating any of these four outcomes is fairly straightforward. For example, the employer can use a sample evaluation questionnaire to assess trainees’ reactions. Similarly, trainees’ learning might be assessed by testing their new knowledge. The employer can assess the trainees’ behavioral change directly or indirectly. Indirectly, for example, the employer might assess the effectiveness of, say, a supervisory performance appraisal training program by asking that supervisor’s subordinates questions such as, “Did your supervisor take the time to provide you with examples of good and bad performance when he or she appraised your performance most recently?” Or you can directly assess a training program’s results, for instance, by measuring the percentage of phone calls answered correctly.
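
For the direct, results-oriented measure just mentioned, a tally like the following sketch (the call records and the 90% target are invented for illustration) shows how the percentage of calls answered with the required greeting might be computed and compared with the training objective set beforehand:

```python
# Hypothetical call records: True if the call was answered with the required greeting.
calls = [True, True, False, True, True, True, False, True]

pct_correct = 100 * sum(calls) / len(calls)
print(f"Calls answered with required greeting: {pct_correct:.0f}%")

# Compare against the training objective set in advance (a hypothetical 90% target).
target = 90
print("Objective met" if pct_correct >= target else "Objective not yet met")
```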

While these four basic categories are understandable and widely used, there are several things to keep in mind. Perhaps most important, ‘reaction’ measures such as “How did you like the program?” generally aren’t good substitutes for other training outcomes like learning, behavior, or results. Eliciting trainees’ reactions (often the measurement of choice when firms evaluate training) may provide some insight into how they liked the program. However, it probably won’t provide much insight into what they learned or how they’ll behave once they’re back on the job. Computerization is facilitating the evaluation process.

  • Asha

    I agree that training needs to be evaluated in specific, measurable terms, and I would like to know which tools would help with better evaluation.