Training evaluation - science or art?

Essentially, there are two schools of thought about training evaluation: those who believe in the importance of scientific, quantitative and conclusive analysis, and those who believe in the value of subjective, qualitative and action-oriented exploration.

The former school support ROI analysis, the use of experimental and control groups and, above all, the elimination of extraneous or even contributing variables. This is mainly because they want proof of the value of training itself (and, possibly, to control or curtail its costs if they are high in comparison with other options). At this point we should ask ourselves: is this what busy line managers want? Is it really sensible to exclude variables that might contribute to increased training impact? And do we really want only a snapshot of training taken at some arbitrary point?

Those who want to use evaluation to improve training and to reinforce its effect on participants' learning belong to the latter school of thought. They want to improve the transfer of training back to work (one of the biggest leakages in any training effort). They are ready to use interviews, small-group surveys and feedback, and critical incident analysis deliberately to involve participants in renewed or new learning about the original training. Subjectivity and the inclusion of variables from activities related to the training (for example, promotion following management training, or changes in wider performance management practices introduced alongside appraisal training) are not a problem, because they assist in the interpretation of the rich data gathered. This school is interested in evidence of ongoing training impact, and in what it may point to.

It seems to me essential to recognise that the difficulties and costs of proving or quantifying the value of training increase over time, whereas the benefits of using evaluation to reinforce the original training remain high at all times.