Training Evaluation , Kirkpatrick , CIPP Model, CIRO Model ...




Items Description of Module

Subject Name: Human Resource Management
Paper Name: Training and Development
Module Title: Training Evaluation Models
Module Id: Module-27
Pre-Requisites: Understanding of the Models of Evaluation of Training (EOT)
Objectives: To study the different Models of Evaluation of Training and the Benefits of EOT
Keywords: Training Evaluation, Kirkpatrick, CIPP Model, CIRO Model, Phillips's Evaluation Model


QUADRANT-I

1. Module 27: Training Evaluation models

2. Learning Outcome

3. Introduction to Evaluation

4. Models For Training Evaluation

5. Benefits of Training Evaluation

6. Summary

1. Module 27: Training Evaluation models

2. Learning Outcomes:

After completing this module, students will be able to:

Understand the evaluation of training

Understand models for training evaluation

Understand the benefits of training evaluation

3. Introduction to Evaluation:

(a) Evaluation is an integral part of most instructional design (ID) models. Evaluation tools and methodologies help determine the effectiveness of instructional interventions. In spite of its importance, there is evidence that evaluations of training programs are often inconsistent. Possible explanations for inadequate evaluations include insufficient budget, insufficient time, lack of expertise, blind trust in training solutions, and lack of methods and tools (see, for example, McEvoy & Buller, 1990).

(b) Evaluation goals involve multiple purposes at different levels. These purposes include evaluation of

student learning, evaluation of instructional materials, transfer of training, return on investment,

and so on. Attaining these multiple purposes may require the collaboration of different people in

different parts of an organization. Furthermore, not all goals may be well-defined and some may

change.

(c) There are many methods and tools available for evaluating the effectiveness of training programs. Their usability depends on the kind of training program under evaluation. Most organizations use the Kirkpatrick model, which evaluates training at four levels: reactions, learning, behaviour and results.

(d) Once it was recognised that training costs organizations a great deal of money and that none of the four levels measured the return on that investment, a fifth level, return on investment (ROI), was added to the evaluation model (see Phillips's model in Section 4.4).


(e) Most evaluations confine themselves to reaction data; only a few collect learning data, fewer still measure and analyse changes in behaviour, and very few trace the impact through to business results.

4. Types of Models of EOT

There are different models for evaluating training, yet training evaluation remains the weakest and most underdeveloped aspect of training. A number of issues lead to evaluation being neglected.

There are several reasons for underdeveloped evaluation: evaluation means different things to different people; it is perceived to be a tedious and time-consuming task which trainers do not like to pursue; people tend to assume the training will simply work; and trainers feel threatened by the prospect of an objective evaluation of training and its outcomes (Sims, 1993).

The main reasons for the failure of evaluation are inadequate planning, lack of objectivity, evaluation errors of some sort, improper interpretation and inappropriate use of results. Other issues are failure to train the evaluators in the techniques of evaluation, inappropriate data-gathering instruments and a focus on unimportant details.

Different models of evaluation are used by organisations to evaluate training effectiveness, according to the organisation's budget. Some of the commonly used models are as follows:

4.1 Kirkpatrick (1959)

4.2 CIPP Model (1987)

4.3 CIRO Model

4.4 Phillips's Evaluation Model

4.5 Other training approaches and models

4.1 Kirkpatrick Model:

This model was introduced in 1959 by Donald Kirkpatrick. It is a very popular model that focuses on measuring four kinds of outcomes, or outcomes at four levels, that should result from a highly effective training programme.

Kirkpatrick (1977) divided the evaluation model into four parts: reaction, learning, behaviour and results. Reaction evaluates how participants feel about the training programme they


attended. Learning evaluates the extent to which the trainees learned the information and skills; behaviour evaluates the extent to which their job behaviour changed as a result of attending the training; and results evaluates the extent to which business results have been affected by the training programme.

The main strength of the Kirkpatrick evaluation approach is its focus on the behavioural outcomes of the learners involved in the training (Mann & Robertson, 1996).

A detailed explanation of the above-mentioned model follows:

a. Reaction level: Programme evaluation involves two general approaches: formative (also known as internal) evaluation and summative (also known as external) evaluation. Reaction evaluation is a type of formative evaluation where the results are used for programme modification and the redesign of programme contents, course material and presentations (Antheil & Casper, 1986; Robinson & Robinson, 1989). Reaction can also be summative in nature. In such cases, the goal of reaction evaluation is to determine the value, effectiveness or efficiency of a training programme (Smith & Brandenburg, 1991) and to make decisions concerning programme continuation, termination, expansion, modification or adoption (Worthen & Sanders, 1987). Summative evaluation provides programme decision makers and potential customers with judgements about a programme's worth or merit (Worthen, Sanders & Fitzpatrick, 1997).

The main purpose of reaction evaluation is to enhance the quality of training programmes, which in turn leads to improved performance, by measuring participants' reactions to the training programme. This should be measured immediately after the programme. Level one evaluation should not just include reactions towards the overall programme (e.g. "Did you like the programme?"); it should also include measurement of participants' reactions or attitudes towards specific components of the programme, such as the topics, contents, methodology, instructor etc. The evaluation tools and methods suggested by scholars are typically "happy sheets" and feedback forms based on subjective reactions to the training experience.

b. Learning: Evaluation at this level aims to differentiate between what participants already knew prior to training and what they actually learned during the training programme (Jeng & Hsu, n.d.). In other words, learning evaluation measures the increase in knowledge or intellectual capability. Learning outcomes include changes in knowledge, skills and attitudes. Some training events will emphasise knowledge, some skills, some attitudes, and some multiple learning outcomes. The evaluation should focus on measuring the learning objectives, so a pre-test is given before training and a post-test after training. Tools and methods that can be used in evaluating the learning level are assessments, tests, interviews and observations.
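The pre-test/post-test comparison described above can be sketched as a small calculation. This is a hypothetical illustration, not part of the module: the trainee names and scores are invented, and the normalised-gain formula used here is one common way, among several, of expressing learning improvement.

```python
# Hypothetical Level 2 (learning) evaluation: compare pre-test and
# post-test scores. All names and scores below are invented.

def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalised gain: the share of available headroom a trainee actually gained."""
    if max_score <= pre:
        return 0.0  # no headroom left to gain
    return (post - pre) / (max_score - pre)

scores = {"trainee_a": (40, 70), "trainee_b": (60, 90)}
for name, (pre, post) in scores.items():
    print(name, round(learning_gain(pre, post), 2))
# trainee_a gained half of the available headroom (0.5); trainee_b three quarters (0.75)
```

A gain near 1.0 suggests the learning objectives were largely met for that trainee; a gain near 0 suggests the training added little beyond prior knowledge.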

c. Behaviour level: Behaviour evaluation measures the extent to which the trainees applied the learning and changed their behaviour; this effect can be immediate or appear several months after training, depending on the situation. Evaluation at this level aims to measure the transfer that has occurred in the learner's job behaviour and job performance due to the training programme. This performance testing indicates the learner's skill in applying what was learned in the classroom. It involves testing the participants' capability to perform learned skills while on the job, rather than in the classroom. Change in job behaviour is difficult to measure because people change in different ways at different times, and it is harder to quantify and interpret than reaction and learning evaluation. Observation and interviews over time are required to assess the change in participants' behaviour, its relevance and its sustainability. The opinions of the trainees, in the form of self-assessment or 360-degree feedback, are useful methods for evaluating this level.

d. Result level: Result-level evaluation looks at the effect on the business or environment resulting from the improved performance of the trainee. Level four outcomes are not limited to return on training investment (ROI); they can also include other major results that contribute to the smooth functioning of an organisation, i.e. any outcome that most people would agree is "good for the business". Outcomes are either changes in financial results (such as positive ROI or increased profits) or changes in variables that should have a reliably direct effect on financial results at some point in the future.

The intention at this level is to assess the benefits of the training programme in terms of reduced costs, improved quality of work, higher productivity, reduced turnover, improved human relations, increased sales, lower absenteeism, higher work morale, fewer accidents, greater job satisfaction etc. Collecting, organising and analysing level four information can be difficult, time-consuming and more costly than the other three levels, but the results are often quite worthwhile when viewed in the full context of their value to the organisation.
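As a rough illustration of why level four data are worth the effort, the benefit categories named above can be totalled against the programme cost. All figures below are invented for illustration only.

```python
# Hypothetical Level 4 (results) tally: monetary estimates of a few of the
# benefit categories named in the text, set against the programme cost.
benefits = {
    "reduced_turnover": 12_000,
    "higher_productivity": 20_000,
    "fewer_accidents": 3_000,
}
programme_cost = 25_000

total_benefit = sum(benefits.values())
net_result = total_benefit - programme_cost
print(total_benefit, net_result)  # total 35000, net 10000
```

The hard part in practice is not this arithmetic but attaching defensible monetary estimates to each category in the first place.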

Kirkpatrick's Levels of Evaluation Matrix (figure): 1. Reaction, 2. Learning, 3. Behaviour (on-the-job), 4. Results (organisational).

4.2 CIPP Evaluation Model:

The CIPP model of programme evaluation was developed by Daniel L. Stufflebeam (1983). It refers to the four phases of evaluation: context evaluation, input evaluation, process evaluation and product evaluation.

It is based upon the view that the most important purpose of evaluation is to improve the functioning of a programme.

a. Context evaluation: This involves evaluating the training and development needs analysis and formulating objectives in the light of those needs. It aims to determine the extent to which the goals and objectives of the programme match the assessed needs of the organisation, and whether the needs assessment accurately identified an actual and legitimate need of the organisation and the relevant work culture. Context evaluation is part and parcel of the work undertaken by employees of an organisation.

b. Input Evaluation: Input evaluation involves an examination of the intended content of the programme. It is designed to assess the extent to which programme strategies and procedures support the goals and objectives identified in the needs assessment and context evaluation. An input evaluation is therefore an assessment of the programme's action plan. Such an evaluation helps prescribe specific activities, strategies and procedures, and ensures that the best approach has been chosen in terms of the assessed needs and the identified goals and objectives. It involves evaluation of the policies, budgets, schedules and procedures for organising the programme.

c. Process Evaluation: A process evaluation is the critical aspect of programme implementation. It involves evaluation of the preparation of reaction sheets, rating scales and the analysis of relevant records (Prasad, 2005). Process evaluation is a continual assessment of the implementation of the action plan that has been developed by the organisation. It is an ongoing and systematic monitoring of the programme. A process evaluation provides information that can be used to guide the implementation of programme strategies, procedures and activities, as well as a means to identify successes and failures. The objectives of process evaluation are:

to provide feedback to the organisation and its employees about the extent to which the activities are on schedule, are being carried out as planned, and are using time and resources efficiently;

to provide guidance for modifying or explicating the programme's action plan as needed, particularly since not all aspects of the plan can be anticipated or planned in advance;

to assess periodically the extent to which programme personnel are performing their roles and carrying out their responsibilities;

to provide an extensive record of the programme, how it was implemented and how it compares to what was intended.

d. Product evaluation: This is concerned with measuring and interpreting the attainment of training and development objectives. In other words, the purpose of product evaluation is to measure, interpret and judge the extent to which an organisation's improvement efforts have achieved their short-term and long-term goals. It also investigates both the intended and unintended consequences of improvement efforts.

CIPP Evaluation Model (figure): Needs Assessment ("What should we do?"), Program Planning ("How shall we do it?"), Implementation ("Are we doing it as planned?"), Outcome & Impact ("Does it work?").


4.3 CIRO Model:

In 1970, the CIRO model for the evaluation of managerial training was proposed (Warr, Bird & Rackham, 1970). This model is based on the evaluation of four aspects of training: context, input, reaction and outcomes. According to Tennant, Boonkrong and Roberts (2002), the CIRO model focuses on measurement both before and after the training has been carried out.

The main strength of the CIRO model is that the objectives (context) and the training equipment (input) are considered. Context evaluation focuses on factors such as the identification of training needs and the setting of objectives in accordance with the organisation's culture and climate. Input evaluation is concerned with the design and delivery of the training activity.

Reaction evaluation looks at gaining and using information about the quality of the training experience. Outcome evaluation focuses on the achievement gained from the activity and is assessed at three levels: immediate, intermediate and ultimate evaluation. Immediate evaluation attempts to measure changes in knowledge, skills and attitudes before a trainee returns to the job.

According to Santos and Stuart (2003), intermediate evaluation refers to the impact of training on job performance and how learning is transferred back into the workplace. Ultimate evaluation attempts to assess the impact of training on organisational performance in terms of overall results.


4.4 Phillips's Evaluation Model:

In the past decade, training professionals have been challenged to provide evidence of how training contributes financially to the business. Phillips (1996) suggested adding another level to Kirkpatrick's four-level evaluation approach to calculate the return on investment (ROI) generated by training. According to James and Roffe (2000), Phillips's five-level evaluation approach translates the worth of training into monetary value, which in effect addresses ROI. Phillips's framework gives trainers a logical way to view ROI from both a human performance perspective and a business outcomes perspective. The measurement goes further, comparing the monetary benefit from the programme with its costs.
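Phillips's fifth level is usually expressed as a simple percentage: ROI (%) = (net programme benefits / programme costs) × 100. A minimal sketch, with invented figures:

```python
# Phillips-style ROI calculation. The monetary figures are hypothetical.

def training_roi(monetary_benefits: float, programme_costs: float) -> float:
    """ROI (%) = (net benefits / costs) * 100."""
    net_benefits = monetary_benefits - programme_costs
    return net_benefits / programme_costs * 100.0

# A programme costing 50,000 that yields 80,000 in measured benefits:
print(training_roi(80_000, 50_000))  # → 60.0 (%)
```

An ROI above 0% means the programme returned more than it cost; the benefits must first be isolated from other influences and converted to monetary value before this ratio is meaningful.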

4.5 Some other training approaches and models:

There are some other training approaches and models. Since training evaluation is a relatively neglected part of training and development, these methods have theoretical grounding but little practical application, so they are not discussed in detail. They are:

Training Validation System (TVS) Model (Fitz-Enz, 1994)

Input, Process, Output/Outcome (IPO) Model (Bushnell, 1990)

Hassett's training investment analysis Model

Kaufman's five-level evaluation model, Mahapatra and Lai (2005) and Sadri and Snyder (1995)

Hassett's training investment analysis approach focuses on four important areas and measures training effectiveness through need analysis, information gathering, analysis and dissemination. Kaufman's five-level evaluation model extends the scope of training impact evaluation beyond the organisation: it includes how training benefits society and the environment surrounding the organisation.

Kaufman's five-level Evaluation Model:

Level: Evaluation Area
1a. Enabling: availability of quality inputs, such as human, financial and physical resources
1b. Reaction: acceptability and efficiency of methods, means and processes
2. Acquisition: individual and small-group mastery and competency
3. Application: individual and small-group utilisation within the organisation
4. Organisational Output: payoff to the organisation
5. Societal Outcomes: payoff to society

Source: Adapted from Bhattacharyya, Dipak Kumar, Human Resource Research Methods, Ganga Saran & Grand Sons, 2007, p. 266.

Mahapatra and Lai (2005) stated that end-user training can be evaluated at five levels: technology, reaction, skill acquisition, skill transfer and organisational effect. In assessing the change that occurs after a period of training, researchers may tap one or all of three conceptually different kinds of change: alpha, beta and gamma (Sadri & Snyder, 1995). In alpha change, the participant's report of change between time one and time two (the pre- and post-ratings) is unbiased because it is based on an instrument that has been consistently calibrated. Beta change refers to an apparent change where the measuring instrument has been recalibrated by the participant during the measurement interval, i.e. individual perceptions of the response scale have changed. Gamma change refers to a situation where the participant redefines or reconceptualises some relevant information (Iyer et al., 2009).

The different approaches to the evaluation of training summarised below indicate how the factors associated with evaluation are addressed, together with suggestions on how technology can be used to support the process.

Different approaches to evaluation and associated models:

Kirkpatrick (1959):
1. Reaction: to gather data on participants' reactions at the end of a training program.
2. Learning: to assess whether the learning objectives for the program are met.
3. Behaviour: to assess whether job performance changes as a result of training.
4. Results: to assess costs vs. benefits of training programs, i.e. organizational impact in terms of reduced costs, improved quality of work, increased quantity of work etc.

CIPP Model (1987):
1. Context: obtaining information about the situation to decide on educational needs and to establish program objectives.
2. Input: identifying educational strategies most likely to achieve the desired result.
3. Process: assessing the implementation of the educational program.
4. Product: gathering information regarding the results of the educational intervention to interpret its worth and merit.

IPO Model (1990):
1. Input: evaluation of system performance indicators such as trainee qualifications, availability of materials, appropriateness of training etc.
2. Process: embraces planning, design, development and delivery of training programs.
3. Output: gathering data resulting from the training interventions.
4. Outcomes: longer-term results associated with improvement in the corporation's bottom line: its profitability, competitiveness etc.

TVS Model (1994):
1. Situation: collecting pre-training data to ascertain current levels of performance within the organization and defining a desirable level of future performance.
2. Intervention: identifying the reason for the gap between present and desirable performance, to find out if training is the solution to the problem.
3. Impact: evaluating the difference between the pre- and post-training data.
4. Value: measuring differences in quality, productivity, service or sales, all of which can be expressed in terms of dollars.

5. Benefits of Training Evaluation

Evaluation ensures that training fills the competency gaps within the organisation in a cost-effective manner. This is especially important given that organisations are trying to cut costs while expanding globally. Some of the benefits of training evaluation are:

Evaluation ensures accountability: training evaluation ensures that training programs address the competency gaps and that the deliverables are not compromised.

Checking cost: evaluation ensures that training programs are effective in improving work quality, employees' behaviour and attitudes, and the development of skills, within a certain budget. Since companies globally are trying to cut costs without compromising quality, evaluation aims to achieve the same with training.

Feedback to the trainer/training: evaluation also acts as feedback to the trainer and the training process. Since evaluation assesses individuals at the level of their work, it becomes easier to identify the shortcomings of the training and the changes required in the training methodology.

Many organisations conduct training programs year after year only as a matter of faith, and not many have a firm evaluation mechanism in place. Only a few organisations, such as IBM and Motorola, were found to have one.

6. Summary

The different approaches to the evaluation of training discussed herein indicate that the activities involved in evaluating training are complex and not always well structured. Since evaluation involves multiple goals associated with multiple levels, it should perhaps be viewed as a collaborative activity between training managers, trainers, floor managers, training designers and possibly others.

There is a need for a unifying model of evaluation theory, research and practice that accounts for the collaborative nature of, and complexities involved in, the evaluation of training. Existing models fall short in comprehensiveness, and they fail to provide tools that guide organizations in their evaluation systems and procedures. Only a small percentage of organizations succeed in establishing a sound evaluation process; evaluation activities are often limited to reaction sheets and student testing, without proper revision of training materials based on evaluation results. Due to a lack of expertise, evaluation is often not carried out regularly. In such cases, an organization may consider hiring an external evaluator, but that is costly and time-consuming. Given the need to use internal resources and personnel, expert systems can provide support and guidance and increase the power and efficiency of evaluation; such systems can be used by external evaluators as well.

Future research needs to focus on the barriers to the evaluation of training, how training is evaluated and integrated with training design, how the collaborative process of evaluation is managed and how it may be assisted. This will help guide efforts towards both a unifying theory of evaluation and the development of automated evaluation systems.