
    Coursework Header Sheet

    158942-19

Course: INDU1116 Dissertation (MAIHRM)    School/Level: BU/PG

Coursework: MAIHRM Dissertation - January Submission    Assessment Weight: 100.00%

Tutor: DA Hughes    Submission Deadline: 29/01/2010

    Submission date for MAIHRM students who were passed to the dissertation stage in SEPT 2009

Coursework is receipted on the understanding that it is the student's own work and that it has not, in whole or part, been presented elsewhere for assessment. Where material has been used from other sources it has been properly acknowledged in accordance with the University's Regulations regarding Cheating and Plagiarism.

    000529696

    Tutor's comments

Grade Awarded ___________

For Office Use Only __________ Final Grade _________

Moderation required: yes/no

Tutor ______________________ Date _______________

    Title Page


A study on measurement methods of training program evaluation in the Indian BPO Industry: A case study of IBM-Daksh

Title: EVALUATION OF TRAINING

    SUBMITTED TO: NIELS WERGIN

    THE UNIVERSITY OF GREENWICH

SUBMITTED BY: YIRMEICHON KEISHING

    MA IHRM

    29 JANUARY 2010


    ACKNOWLEDGEMENT

    I would like to express my thanks to:

My parents, for their patience and support in every way.

    My supervisor for his guidance and support.

Last but not least, the Almighty God.


TABLE OF CONTENTS

COURSEWORK HEADER SHEET

    TITLE

    ACKNOWLEDGEMENT

    ABSTRACT

    INTRODUCTION

    LITERATURE REVIEW

    METHODOLOGY

    FINDING


    ABSTRACT

The aim of this research was, firstly, to examine the existing theories of evaluation of training programmes as a whole and, secondly, to explore a case study (IBM-Daksh) on the relevance of an extensively established academic model (the Kirkpatrick Model) for evaluating training programs in the Indian BPO industry. The research achieved the following research objectives: to assess the need for training program evaluation; to identify and evaluate various measurement methods of training program evaluation; and to assess the effectiveness of the Kirkpatrick Model for methodical evaluation of training. The research attempted to answer the following research questions: why is the measurement method of training program evaluation imperative, and how effective is the Kirkpatrick Model for methodical evaluation of training; what is the need for training program evaluation; what are the various measurement methods of training program evaluation; and how effective is the Kirkpatrick Model for methodical evaluation of training?

Training evaluation is needed at IBM-Daksh principally for the purposes of measuring training results and controlling implementation. Training evaluation at IBM-Daksh should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer, and should be measured through criteria of qualitative performance rather than quantitative performance. Lack of accountability is the major challenge of training evaluation for IBM-Daksh. Training evaluation at IBM-Daksh should be a methodological approach to measuring learning outcomes. Changes in learners and organizational payoff, as dimensions of the training evaluation target area, need to be the central focus of IBM-Daksh. The Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh. Learning and results, as dimensions of Kirkpatrick Model training evaluation, require the greatest focus from IBM-Daksh in order to achieve the desired results in relation to training evaluation. The Kirkpatrick Model is also highly effective in evaluating e-learning at IBM-Daksh, and it is cost-effective and efficient in controlling staff turnover at IBM-Daksh.


employees and evaluating the performance of interns who are undergoing the training process for the company. Many companies do not apply the pattern of training at work, particularly in areas where the trainers and the personnel department do not have sufficient time or resources to do so. The training technique should be improved and provided for the evaluation of available resources and to compete with market rivals. Lack of assessment decreases the efficiency of an organization's working; on the other hand, it also hampers the quality of production. Evaluation of training depends on a number of issues; however, targets need to be realistic. The appraisal needs more elaborate information, where there is a need for large investment. Management training in general should be made clear to all, in order to fulfil everyone's expectations. Furthermore, the training process helps in planning to review the capability and potential of employees at a certain level of work. Extensive management training should be made friendlier in order to caution and ensure the requirements of employees and trainers during actual working within the organization, while the company needs a regular check on the performance of interns at the time of work.

For making the training process more effective, it is important that the following considerations are kept in mind for the achievement of organizational goals. The training process must be completed within a certain time frame, with objectives, to fill the vacant post. The trainer must review the time period after which participants, having completed the training, are expected to return to work. After coming back to the actual work field there should be no confusion left with them about the execution of work. The nature and scope of training depend on the effective return on the company's investment with regard to the satisfactory achievement of organizational goals.


The evaluation of performance in almost every organization is a regular phenomenon in assessing the performance of employees. Chen and Rossi (2005) explain that the evaluation literature depends more on practice. For example, most training evaluation is taken from the Kirkpatrick model, but at present it is driven by market demand. The information for evaluation at this level is usually collected through questionnaires at the end of the training program.

The effectiveness of a training programme depends heavily on the positive effect applied to the given task. The training session helps both the staff and members of the group to raise their working capability, but the demands are increasing sharply in the current scenario. The main objective of a training program is to create a hierarchy among the various ranks in the organization for the execution of work. The evaluation of the training program must show an improvement in the working efficiency of workers at various levels and, side by side, an increase in the financial outcomes of the company. Since the learning process is not traceable in a short time, top managers must be efficient in serving the purpose of the organizational goal, as accounting demands are increasing day by day with the growing competition in the market in recent years. Financial difficulties are big hurdles in front of emerging companies in the operation of their work. Giving vocational training is believed to be an important aspect of the development process. Evaluation of training is the most essential part of calculating the performance of employees. There is a vast gap between the actual performance desired and that performed. This disparity is said to be due to irregularity in the methods carried out by the training institutes for the evaluation of employees.


To assess the need for training program evaluation

To identify and evaluate various measurement methods of training program evaluation

To assess the effectiveness of the Kirkpatrick Model for methodical evaluation of training

    1.3 RESEARCH QUESTION

Why is the measurement method of training program evaluation imperative, and how effective is the Kirkpatrick Model for methodical evaluation of training?


    Chapter 2

    L I T E R A T U R E R E V I E W

    2.1 INTRODUCTION

Training is a most vital function of an organization. It helps the worker to become aware of the tools and technology used in the process of production; not only that, it also helps the organization to set a hierarchy of standards among its employees and to set targets for market growth. It also helps in evaluating the performance of an employee within the organization. Evaluation is not just about measuring the reactions of participants to the training, who may have given a positive reaction to performing according to the new skills, technology and knowledge. Therefore the aim of evaluation must be to measure the change in knowledge and working capability of employees at the workplace. There are a number of models used for training evaluation in the literature, but the most famous is that explained by Professor Donald L. Kirkpatrick (1975). This section of the study details the theoretical framework, which has been developed with the help of already published materials such as books, journals and online sources. Keeping in mind the above aims and objectives, the following points have been covered: (i) Training Evaluation: Definition and Conceptualisation; (ii) Training Evaluation Models and Effectiveness; (iii) Kirkpatrick Model of Training Evaluation.

    2.2 TRAINING EVALUATION: DEFINITION AND

    CONCEPTUALISATION

In today's world, training evaluation is a much debated topic in the literature, as noted by Burrow and Berardinelli (2003), who state that few elements of training receive as much attention as evaluation. Organisations accept the value of training as an important investment of time and money for the success of any organization. This logic is also accepted by Lingham et al. (2006), who believe that training helps in simplifying relations on the work front. Training validation is different from training evaluation. Validation may be defined as a process of checking people at work for the completion of specific work. Evaluation is a much broader concept, defined in 1994 in the Industrial Society Report.

Evaluation may be defined as the analysis of the outcomes of operations in an organization. This is clearly explained by Rae (1999) as a process of finding information regarding the effectiveness of the organization. Evaluation is further divided into two parts: macro and micro. This evaluation helps the organization in achieving the objectives and targets set. Macro evaluation is said to concern the standards achieved within the desired time and at low cost, while micro evaluation is a more complex subject concerned with bringing about skill changes and improvements within the organization. The literature explains the norms on which guidelines of training are set for clarifying doubts about any training evaluation; this view is also supported by Marchington and Wilkinson (2000). The literature raises a number of questions about how to conduct the evaluation of training. Easterby-Smith and Mackness (1992) stressed four purposes for evaluation: (1) the organization must have a certain obligation to account for particular outcomes and consequences; (2) the organisation must have certain standards by which the quality of work may be improved using data gathered from the evaluation; (3) the purpose of training helps participants to recognize the true value of the task they are assigned. Easterby-Smith and Mackness stressed the purposes and importance of training cycles for different stakeholders.

The function of evaluation is to determine the change in knowledge with regard to the training provided at work Mann and Robertson


variations in the role model. The Kirkpatrick model is regarded as the basis of training evaluation in the literature. This method is also supported by Wang and Wang (2005, p. 22), who said: "No matter how controversial it may appear, the four-level evaluation proposed by Kirkpatrick began a chapter of measurement and evaluation in the field of human resource development." The main elements of an effective training model are highlighted by Tamkin (2005). Tamkin explains that a number of things should be taken into account while scrutinizing training evaluation models. The first is that the relevant factors must be identified and properly utilized in the process of training. Then we must determine whether the model adds value to the process of production, so that its workers understand the contribution made by various aspects of Human Resource practice. This also helps the organization to understand the value of employees and to sort out the differences arising in the process of work. It could also be argued that the literature must examine all the relevant information gathered to understand the organizational set-up in terms of production at lower cost, and that the literature on effective training must work to bring out all the hidden aspects in order to solve challenges occurring at the workplace. Management must be persuaded to invest money, staffing and time in operations to increase the level of business value. If there is nobody within the organization who can conduct evaluations, then it is very difficult for the organization to set targets for operations and remain in business. Lack of senior management involvement is one more drawback, especially if there is no training supervisor in the decision-making process. Absence of strategic planning and effective direction towards increasing performance might hamper the working at every level. Lack of accountability is treated as a big barrier to effective evaluation. Rae (1999) noted that effective evaluation needs a training quintet to make senior management aware of the new techniques used for the progress of a company. Rae also stressed the view that senior management should be authorized to take an active part in the organization in improving results, so as to create a congenial atmosphere in the organization for proper functioning. If the organization does not have a good culture to encourage its employees, then the evaluation part will create a problem for the organization in finding out information regarding drawbacks in the system. Culture may be termed an obstacle to increasing the performance of employees (Holton, 1996; Holton et al., 2000). The State government of Louisiana found that culture had an adverse impact on performance-based training. Reinhardt (2001) focused, in her research, on identifying barriers to measuring the performance of employees at work in learning. Reinhardt also highlighted the loopholes in the organizational set-up as an important aspect in measuring the impact on performance. This issue of culture is termed an imperative barrier.

    2.3 TRAINING EVALUATION MODELS AND EFFECTIVENESS

Recently, among the many contributions to the training literature, training effectiveness and evaluation have received significant attention (Holton, 2003; Holton and Baldwin, 2000; Kraiger, 2002). Among these, the development of Kirkpatrick's four-level assessment technique (Holton, 1996; Kraiger, 2002) and broad theoretical models of training effectiveness (Holton, 1996; Tannenbaum et al., 1993) have come forward as vital works. However, although these evaluation methods were developed some ten years ago, post-training behaviour has not been incorporated into them. Additionally, the variables believed to play a part in training effectiveness have not been updated for many years. So the objective of this article is to analyse a decade of research on training effectiveness and evaluation and to summarize the findings as a model for training effectiveness and evaluation (IMTEE). Sometimes training effectiveness and training evaluation are used interchangeably, but these are two different terms. A real illustration may be helpful to explain the differences. In recent times, a government employment agency was instructed by a court to revamp and oversee selection assessments for about 30 jobs. The work was time-bound, so staff needed to work for several months, including many hours of overtime in a row. As a result, a year into the project many employees had left the agency and many were falling sick time after time. To stop further employee turnover and to support staff in completing the remaining project on time, the agency initiated a training program for employees on dealing with exhaustion. All the employees, including supervisors and their subordinates, attended the training. The multi-purpose training program was specially prepared and included humour, lectures, and real-time practice of many stress-reducing techniques. The training program also encouraged trainees to improve worker-supervisor relations even after training. In the end, the supervisors were encouraged to share their self-developed stress-reduction techniques and methods with their juniors and subordinates.

Measurement of learning outcomes through a practical method is called training evaluation; a theoretical method of measuring training outcomes, on the other hand, is called training effectiveness. Training evaluation provides a micro-view of a training program, as it focuses only on learning outcomes (Torres and Preskill, 2001). On the contrary, training effectiveness highlights the whole learning system and provides a macro-view of the training outcomes. Training evaluation tries to find out the benefits of a training program to individuals, in terms of learning and increased job performance. Training effectiveness tries to find out the benefits for the organization by determining why people learn or do not learn.


Evaluation of a training program is the assessment of its success or failure in terms of its design and content, changes in the organization, and learners' productivity. The evaluation methods used depend on the evaluation model, and there are four models for evaluation. The first of them, Kirkpatrick's behavior, learning, reactions, and results typology, is the easiest and most frequently used technique for reviewing and understanding training evaluation. In this four-dimensional method, the learning outcome measured at the time of training comprises behavioral, attitudinal, and cognitive learning. Behavioral learning measures on-the-job performance after the training. The second model, developed by Tannenbaum et al. (1993), expanded Kirkpatrick's four-dimensional typology by including two more aspects: post-training attitudes, and a further division of behavior into two training outcomes for evaluation, transfer performance and training performance. In this extended model, training reactions and post-training attitudes are not associated with any other evaluation outcome. Learning is associated with training performance, training performance is associated with transfer performance, and transfer performance is associated with training outcomes.

In the third evaluation strategy, Holton (1996) included three evaluation objects: learning, transfer, and results. Holton does not consider reactions, because reactions are not a primary effect of a training program; to a certain extent, reactions are a moderating or mediating variable between trainees' actual learning and motivation for learning. This model relates learning to transfer and transfer to results. Additionally, Holton has a different opinion on the combination of effectiveness and evaluation: in his model, specific effectiveness variables are highlighted as important aspects to assess at the time of evaluating training outcomes. The fourth and last evaluation method was developed by Kraiger (2002). This model stresses three multidimensional areas for evaluation: changes in trainees (i.e., cognitive, behavioral, and affective), training design and system (i.e., design, validity, and delivery of training), and organizational outcomes (i.e., results, job performance, and transfer climate). Feedback from the trainees is considered an assessment technique for measuring how effective a training program's design and system were for the learner. Kraiger stated that feedback measures are not associated with changes in trainees/employees or organizational outcomes, but that those changes or learnings in employees are associated with organizational outcomes.

The study of the training, organizational, and individual characteristics that affect the learning route before, during, and after training is called training effectiveness. Training needs analysis is recognized as an important input to training effectiveness (Salas and Cannon-Bowers, 2001). Although a full explanation is beyond the scope of this study, a detailed training needs analysis considers the personal differences of trainees, the organizational objectives and culture, and the various features of the task(s). Conclusions from this analysis are used to decide both the training content and the method. Thus training will not be effective unless it fulfills the organizational, individual, and task needs identified through the needs analysis. The relationship between these also results in change (increases or decreases) in transfer and learning performance. Holton and Baldwin (2000) extended this model into a training effectiveness model which clearly identifies specific characteristics affecting transfer and learning outcomes. These characteristics consist of motivation, ability, individual differences, prior experience with the transfer system, learner and organizational involvement (e.g., support, preparation), and training content and plan.

Holton's (1996) training effectiveness model also has particular training, organizational, and trainee characteristics as primary or secondary variables that affect the training outcomes. Holton's model puts forward that all these characteristics are related to transfer and learning performance. However, indirect relationships also exist because of the interactions between these characteristics. For example, Holton suggested that motivation interacts with organizational and training characteristics, in this way influencing the training outcomes. Although Holton has given valuable inputs for assessing training effectiveness, only a few studies (Holton, 2003; Holton, Bates, and Ruona, 2000) have measured the various outcomes recommended by the author. These authors have developed a Learning Transfer System with effectiveness variables summarized in a model and found it supportive of the model's construction.

Tannenbaum et al. (1993) suggested four types of motivation measure in their training effectiveness model, Holton (1996) gave two, and the review of the literature revealed seven techniques for assessing motivation level, each involving a different aspect of motivation. In addition, some studies have given combined measures of motivation including all motivational scales. As a result, all studies of motivation were combined into one to determine eligibility, because it was very difficult to distinguish the effects of the different scales. Thus the simplicity of the model acknowledges that there is complexity between these variables.

This review found only a few studies of changes in motivation as a training outcome (Cole and Latham, 1997; Frayne and Geringer, 2000). These researchers used expectancy as a measure of motivation and noted significant improvement in post-training motivation. Two motivational aspects have been placed in training effectiveness models as important effectiveness variables: motivation to transfer and motivation to learn (Baldwin and Ford, 1988; Holton, 1996; Holton and Baldwin, 2000; Tannenbaum et al., 1993). In this way it becomes difficult to measure how training outcomes are affected by changes in motivation. In addition, this approach cannot help training experts to study the different aspects of training content, design and organizational culture that may affect motivation to transfer or to learn. As a result, a method is required for assessing changes in motivation.

Active learning means conscious knowledge gain, which can be measured through a test of what was taught during the training process. Tannenbaum et al. (1993) explained further that the conscious knowledge gain can include an increase in knowledge, a change in the composition of knowledge, or both.

Cognitive outcomes, as expanded by Kraiger (2002), further emphasize self-knowledge, structural knowledge, executive control, and problem solving. Considering cognitive learning alongside the other objectives of evaluation discussed earlier, an inverse relationship has been found between cognitive learning and post-training self-efficacy. Training outcome means the ability to use the skills learnt during the training, and can be measured by observing whether a trainee can perform the skills gained in the training. Kraiger (2002) gives two aspects of training skills performance: first, the capability of imitating the structured behavior learnt in the training, and second, enhancing performance after practice with few errors. Tannenbaum et al. (1993) note that trainees may be able to perform during training but may not be able to transfer these skills to the job; in this way, performance during training may be better than on-the-job performance (e.g., Salas, Ricci, and Cannon-Bowers, 1996). With regard to the relationships between training outcome and the other objectives of evaluation mentioned earlier, training outcome is affected by cognitive learning and post-training behavior.


Results form the final dimension of training evaluation, referring to trainees' quantifiable behavioral changes (Kraiger, 2002). For example, organizational outcomes from the training's transfer performance may include enhanced safety measures, morale, efficiency, and quality and quantity of output.

    2.4 KIRKPATRICK MODEL OF TRAINING EVALUATION

In 1952, Donald Kirkpatrick (1996) conducted research to evaluate the performance of a training program. The main aim of Kirkpatrick's method was to measure participants' reactions during the execution of a program and the amount of learning that took place, in the form of changed behavior at the workplace. Kirkpatrick's measurement may further be divided into four levels. While documenting the information on training in 1959, Kirkpatrick (1996) arrived at these four measurement levels of a training evaluation. It is still unknown how these four steps became known as the Kirkpatrick Model, which is recognized as a most vital instrument for all organizations (Kirkpatrick, 1998). It is one of the most frequently used frameworks in technical training as well as educational training. The first level of Kirkpatrick's measurement, reaction, may be defined as how efficiently the trainee can use the method for organizational growth. The second measurement level, learning, is said to be the refined tool for determining how far the knowledge, attitudes, and skills helped the trainees at work; training is also seen as a propeller to boost a congenial atmosphere and employee behavior. Behavior defines how the relationships that help in the process of learning should be articulated at the workplace; Kirkpatrick believes there is a big gap between technological knowledge and its implementation on the job. The fourth measurement level, results, concerns how to reduce costs and grievances so that the profit level of the organization can be increased, although Kirkpatrick's first level is the least difficult for measuring performance. No studies can prove that one method is suitable for all applications in the evaluation of knowledge.
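For readers who prefer a schematic view, the following is a minimal sketch (not part of the original dissertation) of how the four Kirkpatrick levels just described might be laid out alongside example measures. The measure names and wording are illustrative assumptions only, not instruments used at IBM-Daksh.

```python
# Minimal sketch (not from the dissertation): one way to represent the four
# Kirkpatrick evaluation levels and example measures for each level. All
# measure names here are hypothetical illustrations, not IBM-Daksh data.

from dataclasses import dataclass, field

@dataclass
class EvaluationLevel:
    name: str                      # e.g. "Reaction"
    question: str                  # what the level asks about the training
    example_measures: list = field(default_factory=list)

KIRKPATRICK_LEVELS = [
    EvaluationLevel("Reaction", "Did trainees find the training useful?",
                    ["end-of-course questionnaire", "satisfaction rating"]),
    EvaluationLevel("Learning", "Did knowledge, skills or attitudes change?",
                    ["pre/post knowledge test", "skill demonstration"]),
    EvaluationLevel("Behavior", "Is the learning applied on the job?",
                    ["supervisor observation", "follow-up survey after 1-3 months"]),
    EvaluationLevel("Results", "Did organizational outcomes improve?",
                    ["quality/quantity of output", "staff turnover", "cost savings"]),
]

if __name__ == "__main__":
    for level in KIRKPATRICK_LEVELS:
        print(f"{level.name}: {level.question} -> {', '.join(level.example_measures)}")
```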

After forty years of regular use of the classic Kirkpatrick Model, several authors suggested that it is the most suitable for all types of organization. Warr, Allan and Birdi (1999) evaluated a two-day technical training course, checking the performance of 123 motor-vehicle technicians over a seven-month period in order to test a longitudinal variation of the Kirkpatrick Model. The main aim of studying this method was to demonstrate how the performance of employees improves after training. Warr et al. (1999) found that the levels in the Kirkpatrick Model are correlated with each other. They considered six trainee features and one organizational characteristic that might help explain performance and outcomes at every level of measurement. The trainee features in their study concerned how learning at the workstation may increase confidence in the work and motivate other workers about the learning tasks, strategies and other technical terms at various levels. The most important feature for evaluating performance was transfer at the workplace, with a view to making the strategic change demanded by the organization on the job.

Warr et al. (1999) studied the links between the measurement levels of a modified Kirkpatrick framework in order to study behavior and results on the job. The three levels studied were reactions, learning, and job behavior. Trainees were given all the appropriate knowledge about the work. Questionnaire-based information was mailed after one month to review performance on the basis of information collected at each level, and later all questionnaire data were transformed into another set of measurement levels. Reaction-level data were gathered after the training in order to learn about trainees' perceptions of the usefulness of the training and about measures to eradicate problems with the training. The learning level was measured by all three questionnaires, since the main objective of the training was to improve and motivate employees towards the goal of the organization with regard to the use of the latest technology. Because experience comes only after a certain time spent on a particular job, these researchers measured the amount of capability gained during the course of learning. Changes in scores were compared between before training and after training. Warr et al. accept that there is a correlation between the six individual trainee features and factors of motivation. Correlation at the workstation helps to predict change in training, job behavior, and the desired measurement-level objectives during and after training. Multiple regression helps in analyzing the different level scores gained in the process of training, so that a strong relationship can be established between trainer and trainees.
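To make the arithmetic just described concrete, here is a small illustrative sketch. It is not data from Warr et al. (1999) or from the IBM-Daksh case; the scores and the "motivation" ratings are invented. It simply computes pre-/post-training score gains and fits a simple least-squares line relating the gain to a trainee feature, in the spirit of the score comparison and regression analyses mentioned above.

```python
# Illustrative sketch only: hypothetical pre-/post-training scores and a
# made-up "motivation" feature for a handful of trainees. Shows the arithmetic
# of comparing scores before and after training and relating the change to a
# trainee feature with a simple one-predictor least-squares line.

pre_scores  = [52, 61, 47, 70, 58]       # knowledge test before training
post_scores = [68, 75, 55, 82, 71]       # same test after training
motivation  = [3.2, 4.1, 2.5, 4.8, 3.9]  # hypothetical motivation rating

# "Learning"-level outcome: change in score per trainee
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print("score gains:", gains)
print("mean gain:", sum(gains) / len(gains))

# Simple least-squares fit: gain = a + b * motivation
n = len(gains)
mx = sum(motivation) / n
my = sum(gains) / n
b = sum((x - mx) * (y - my) for x, y in zip(motivation, gains)) / \
    sum((x - mx) ** 2 for x in motivation)
a = my - b * mx
print(f"fitted line: gain = {a:.2f} + {b:.2f} * motivation")
```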

Warr et al. (1999) explained the relationships built between the six individual trainee features and the organizational predictors for evaluating performance at every level. At the first level, participants are given prior training before going on to actual work so that their capacity can be measured after training. At the second level, other factors such as motivation, confidence, and strategy work to bring change within the system of learning. The learning level reflects the changes that were strongly predicted after the process of training at all levels. The research suggests a possible link between reactions and learning, which may be identified through greater differences at the opinion level. At the third level, the training builds confidence in trainees, and transfer support helps to predict job behavior. Transfer support was measured as part of the organizational tendency to coordinate for the satisfaction of the organization; it is said to be the amount of support given by trainers to their trainees and colleagues for improving the quality of work in the organization. Warr et al. suggested that an analysis of pretest scores might explain reasons for the behavior and help in the improvement of organizational behavior.

Belfield, Hywell, Bullock, Eynon, and Wall (2001) focus on a method for evaluating medical educational interventions, checking their efficiency in healthcare through an adaptation of the Kirkpatrick Model with five levels: reaction, learning, behavior, participation, and results. While the Kirkpatrick Model has been applied for years to problems arising in technical training, recently the model has also been applied to non-traditional electronic learning systems. Horton (2001) published Evaluating E-Learning, in which he explains how to use the Kirkpatrick Model to evaluate e-learning. Kirkpatrick (1998) suggests that the four levels offer ample possibilities for the evaluation of training. In order to make full use of organizational resources there is a need for effective training and capable manpower to execute the work in accordance with the desired objectives of the organization.

Trainers must form a disciplined pattern for all employees or trainees in order to evaluate performance against the results desired by the organization. Training evaluation is a diverse field. The process of evaluation may be formative or summative (Eseryel, 2002), so as to bring change to the evaluation of a program. Kirkpatrick's (1994) often-cited Four Level Model of Training Evaluation is one without which no organization can maximize its profit. It prescribes an efficient evaluation design based on four key levels, and advocates measuring participants in the process of learning and the behavioral change corresponding to the organization. Criticisms are often made of the simplification of the learning process in Kirkpatrick's observations and of its assumption of a hierarchical relationship between the stages (Holton, 1996; Kraiger, 2002). To reduce the complexity of evaluating learners' behavior at all four levels, this study uses a mid-range theory approach to focus on only one part of Kirkpatrick's four-stage framework.

The Kirkpatrick model is the most frequently used (Kraiger, 2002) and perhaps the most influential in the field of evaluating employees' performance at work (Eseryel, 2002). Kirkpatrick's model is based on a four-dimensional typology which follows a strategic framework for effective evaluation of employees' performance at work. As per his model, there are four basic levels of evaluation, i.e. reaction, learning, behavior, and, most important, results, with a hierarchical coordination among the levels. A positive reaction, for instance, helps in increasing the level of understanding with regard to the objectives set by the organization. The model also helps experts to maintain performance for the timely completion of work, with the harmonization of workers through the evaluation of training (Eseryel, 2002); but the follow-on logic has been questioned at each stage (Tannenbaum et al., 1993; Holton, 1996).

Tannenbaum et al. (1993) asked for the link to be shown between reactions and the remaining three dimensions of Kirkpatrick's model. In this view, reactions to training and post-training attitudes are not necessarily related to any other form of assessment (Alvarez et al., 2004). The notion of post-training attitudes was also supported on the basis of the nature of the work and the demands of the market, with the use of performance evaluation during training.

Kirkpatrick's behavior level was further studied and specified in terms of training performance and transfer performance, achieved through learning at work to raise the efficiency of employees.


Holton (1996) accepted that reactions should not be considered a primary outcome of the evaluation procedure; rather, reactions are a benchmark for the suitability of a training program. Holton (1996) says the applicable evaluation objective is learning: learning should lead to transfer, and transfer should then lead to results. The evaluation of a training program is always associated with its effectiveness (Alvarez et al., 2004). Evaluation checks whether it works, and effectiveness checks why it works (Ford, 1997). Kraiger (2002) gives three multidimensional objectives for training evaluation: the evaluation procedures include evaluation of training design and system, changes in trainees, and organizational outcomes. Changes in trainees may be cognitive, behavioural or emotional. Organizational outcomes include the transfer culture, results and job performance.

Kirkpatrick's model is the starting point for this study. However, because of the complexity of evaluating trainees on all four levels, from reaction to behavior, a mid-range approach has been used (Pinder and Moore, 1979), and the focus is only on one part of Kirkpatrick's four-level evaluation structure: learning. A mid-range approach focuses on one part of the structure and allows for descriptive investigation. Management training is valuable only if it brings positive change and improvement in individuals, so the result of the training which is measured is the positive change in individuals.

A training effectiveness and evaluation model was developed by Alvarez et al. (2004) after a thorough review of the literature from 1992 to 2002. Alvarez et al. broadened Kirkpatrick's influential model and the idea of learning evaluation. This model links training plan and content, changes in the organization, and trainees' outcomes. It developed Kirkpatrick's learning level further, identifying it as changes in learning. Following the mid-range theory approach, this study looks at the learning aspects of this model. The model also measures what changes in the trainees as they participate in the training programme. Alvarez et al.'s (2004) model divides these changes into three parts: training performance, post-training self-efficacy, and cognitive learning. Alvarez et al. define post-training self-efficacy as a post-training mind-set: an individual's beliefs about his or her ability to perform a particular task, showing self-worth and confidence while performing the task.

This behavioural model has been transferred to the enterprise-related literature (Krueger and Carsrud, 1993) and, at the same time, is used in the training evaluation model for enterprise training (Fayolle et al., 2006). In the behavioural model (as adapted by Fayolle et al., 2006), enterprise-related intentions are influenced by three factors: subjective norms, attitudes, and perceived behavioural control. It is becoming a trend among companies to focus on entrepreneurship skills. Companies today need sustainable growth in terms of competitiveness and performance, and need to organize themselves for innovation and entrepreneurial behavior. Such organizations are aggressive in exploring new opportunities, come to market with new products, and often prompt competitors to respond to their actions. Thus, a strong training evaluation is required by these enterprise-related organizations.

    2.5 SUMMARY

Training evaluation is an important part of any training programme, as it helps to assess the real outcome of a training programme. Training evaluation considers the changes in the trainees' on-the-job performance during or after the training programme. However, performance is not related directly to the training programme, and it is not the only parameter by which to judge the success or failure of a training programme. To judge the outcome of a training programme, training effectiveness and training evaluation models can play a significant role. Training effectiveness and training evaluation are two different techniques for assessing a training programme's outcome. While training evaluation is associated with the training content and training plan, training effectiveness is associated with the whole learning system. In brief, training evaluation provides a micro-view of the training outcomes, while training effectiveness provides a macro-view. Donald Kirkpatrick has suggested a model for training evaluation, presenting a four-step training measurement process: reaction, learning, behavior, and results. It is a widely used tool, especially for measuring technical training outcomes. This model has been reviewed by many researchers, some of whom found it the most suitable.


    Chapter 3

M E T H O D O L O G Y

    3.1 RESEARCH PHILOSOPHY

There are plentiful reasons why an understanding of philosophical issues is imperative whilst carrying out research. This is based on the argument that it is the nature of philosophical questions that best makes obvious the importance of acknowledging philosophy. Recognizing the philosophy of the research enables a researcher to proceed with a straightforward and naive mode of questioning; as the questions of the research might create disorder and flux in research statements and ideas about the state of affairs, this makes the choice of philosophy of exceptional assistance (Smith, 2001). The circuitous nature of philosophical questioning is useful in itself, as it most commonly pushes comprehensive thinking and produces additional questions about the subject under consideration. Elucidating statements linked to personal values is also perceived as useful while planning research (Hughes, 2000).

Moreover, Easterby-Smith et al. (2003) identify three reasons why looking at philosophy could be noteworthy with particular reference to research methodology. The first reason is that philosophy can help the researcher to refine and spell out the research methods to be used, that is, to spell out the general research strategy to be applied. The second reason is that comprehension of research philosophy will allow and help the researcher to assess diverse methodologies and methods and to steer clear of unsuitable use and needless work by identifying the limitations of specific approaches at an early phase. The third reason is that philosophy might help the researcher to be creative and pioneering in either the choice or the adaptation of methods that were formerly outside his or her experience. Putting these three reasons at the center of the research strategy, the researcher first decided on the research philosophy to be applied so that nothing inappropriate was taken into consideration.

Easterby-Smith et al. (2003) classify research philosophy as positivism and interpretivism. Positivism is a philosophical model which confines real facts within the bounds of science on the basis of formal logic or mathematics. It is based on the principle that there is an objective reality and that facts exist as something that can be observed and measured. A positivist model normally entails quantitative research methods for collecting and analyzing the data (Hughes, 2000). Interpretivism is a philosophy which supports the view that people and their institutions are essentially different from the subject matter of the natural sciences. The examination of the social world consequently necessitates a different approach, one that attempts to understand human behaviour through an empathic acknowledgement of human action. There is a view that all research is interpretive, in that research is directed by the researcher's set of beliefs and attitudes concerning the world and how it ought to be understood and researched. An interpretivist model normally entails qualitative research methods for collecting and analyzing the data (Smith, 2001). Interpretive research methods are open to criticism since they hold a variant ontology of multiple, individually constructed yet socially and culturally conditioned realities. If reality is constructed, then someone is active and involved in that process (Easterby-Smith et al., 2003). This is contrary to positivist approaches, in which the researcher is independent of reality. In an interpretivist philosophy, the researcher is always part of the reality they are attempting to understand.


So, whilst deciding the methodology of this research, the first step was to decide on the research philosophy. This was because the researcher had to choose a suitable path and approach for the research. The choice was open between positivism and interpretivism, but choosing the research philosophy depended rather on the nature of the research problem: whether the research was scientific or social research, and whether it was experimental or exploratory. Certainly this research was exploratory and not experimental. Notably, the aims of the research were to examine the existing theories of evaluation of training programmes as a whole and to explore a case study (IBM-Daksh) on the relevance of an extensively established academic model (the Kirkpatrick Model) to evaluating training programs in the Indian BPO Industry. Therefore, the philosophy of this research was decided as interpretivism, whereby the facts regarding the above issues were explored and interpreted in accordance with the research objectives and research questions.

    3.2 RESEARCH METHOD

Research methods are quantitative and qualitative. In quantitative research, the researcher is ideally an objective observer who neither participates in nor influences what is being researched. In qualitative research, on the other hand, it is held that the researcher can learn the most about a state of affairs by taking part in and/or being immersed in it (Creswell, 2002). These fundamental statements of principle of both methodologies guide and shape the sorts of data collection methods applied (Patton, 2002). Normally qualitative data entails words and quantitative data entails numbers, and there are researchers who feel that one is superior to or more methodical than the other (Punch, 2003). An additional significant difference between the two is that qualitative research is inductive and quantitative research is deductive (Patton, 2002). One of the most distinguishing points between qualitative and quantitative research is that in qualitative research a hypothesis is not required to begin the research, whereas all quantitative research necessitates a hypothesis before the research can begin.

Even though there are apparent distinctions between qualitative and quantitative methods, a few researchers maintain that the preference between applying qualitative or quantitative methods in fact has less to do with methodologies than with positioning oneself in a specific discipline or research tradition. The complexity of selecting a method is compounded by the reality that research is generally allied with universities and other institutions. The results of research tasks generally steer vital decisions regarding specific practices and policies (Patton, 2002). The preference for which method to apply might echo the interests of those carrying out or benefiting from the research and the purposes for which the results will be used. The choice of which sort of research method to use could also be based on the researcher's own understanding and preference, the people being approached, the projected audience for the results, and the time, money, and additional resources available (Creswell, 2002).

A few researchers suppose that qualitative and quantitative methodologies cannot be combined since the beliefs underlying each tradition are so very dissimilar. Other researchers believe they can be applied in combination simply by alternating between methods: qualitative research is suited to answering certain kinds of questions in certain circumstances, and quantitative is right for others. Moreover, a few researchers suppose that both qualitative and quantitative methods can be applied concurrently to answer a research question (Punch, 2003).


Having decided on the choice of philosophy and discussed the available research methods, the researcher was very clear about choosing the appropriate research method. Initially the researcher had the option of choosing only the quantitative method, only the qualitative method, or both, but in the end there was only one choice of method and it was the qualitative method. As stated above, when research is to be carried out under an interpretivist philosophy, the method of data collection and data analysis ought to be qualitative. So this research was conducted applying the qualitative method. The qualitative data collection and data analysis were carried out to answer the developed research questions: why is the measurement method of training program evaluation imperative, and how effective is the Kirkpatrick Model for methodical evaluation of training; what is the need for training program evaluation; what are the various measurement methods of training program evaluation; and how effective is the Kirkpatrick Model for methodical evaluation of training. In the following sections, the data collection and data analysis tools are discussed and detailed.

    3.3 DATA COLLECTION

3.3.1 Secondary Data

Secondary data is data collected for purposes other than the achievement of the current research task. A range of secondary data sources is available to the researcher collecting data on a particular industry, company or subject. Secondary data is also used to gain early insight into the research problem, in a subjective way (Robson, 2000). The two foremost advantages of collecting and using secondary data in research are time and cost savings; the foremost disadvantages are questionable accuracy and reliability (Sekaran, 2003).


Nevertheless, as a common rule, a methodical review of the secondary data ought to be carried out prior to collecting primary data. The secondary data offers a useful background and identifies the foremost questions and issues that need to be addressed by the primary data. Secondary data is classified in relation to its source as either internal or external. Internal data is secondary data obtained within the group where the research is being carried out; external secondary data is obtained from outside sources (Robson, 2000).

In this research, likewise, the importance of collecting and using secondary data lay in preparing the groundwork for collecting and using primary data. Secondary data helped the researcher to develop the propositions and assumptions on the basis of which the primary data was collected by approaching the HR Managers of IBM-Daksh directly. Although the researcher had the choice of collecting both internal and external secondary data, considering the limited time and resources it was thought appropriate to use only external sources of secondary data. These external sources were prominently books and journals relating to HRM and particularly to training.

3.3.2 Primary Data

Primary data collection relates to data collected from a primary or first-hand source, specific to the research field. By contrast, secondary data generally adapts data from other available research, in a few cases extrapolating or interpolating such data (Robson, 2000). Primary data collection is characteristically thought more accurate than the use of secondary data; nevertheless, this is only true if the data has been collected using a sound research design and a suitable data collection method. Although the data collection process is generally perceived as straightforward compared with the shaping or synthesizing phases of analysis that follow it, there are many factors which might easily be overlooked at this vital first phase (Sekaran, 2003).

There is a range of primary data sources, of which the most prominent are focus groups, interviews, questionnaires and observation. The questionnaire is the cheapest, most efficient and most frequently used primary data collection method. A questionnaire is a set of written questions relating to the problem or issues under research, to which the researcher requires answers from respondents (Sekaran, 2003). Formulating the questionnaire plays a vital role in achieving the aims of primary data collection. Questionnaire design is a lengthy process that requires persistence and careful analysis; it is a powerful assessment instrument, must not be taken lightly, and should be carried out in a phased approach (Robson, 2000).

Considering the usefulness and effectiveness of the questionnaire in primary data collection, primary data in this research was collected using a questionnaire. The questionnaire design went through a number of phases to ensure that the researcher was proceeding in the right direction. In the first phase, the objective of the research was properly defined, along with the target population from whom the primary data would be collected. In the second phase, a decision was taken on the type of questions to be used, i.e. closed-ended, open-ended or a combination of both; only closed-ended questions were formulated, and ten questions were included in the questionnaire. A further important step was a pilot survey to test the questionnaire; an illustrative sketch of how such closed-ended items might be represented is given below.
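To make the structure of such an instrument concrete, the sketch below shows one plausible way closed-ended items and their response options could be represented for later tabulation. It is only an illustration: the question wording, option labels and field names are hypothetical and are not taken from the actual questionnaire used in this study.

    # Illustrative only: a possible representation of closed-ended questionnaire
    # items for an HR-manager survey. Question texts and options are hypothetical,
    # not the actual instrument used in this research.

    from dataclasses import dataclass


    @dataclass
    class ClosedQuestion:
        code: str      # short identifier used when tabulating answers
        text: str      # question as shown to the respondent
        options: list  # fixed response options (closed-ended)


    questionnaire = [
        ClosedQuestion(
            code="Q1",
            text="For what principal purpose does your firm need training evaluation?",
            options=["Training results", "Skill development",
                     "Business goals", "Implementation control"],
        ),
        ClosedQuestion(
            code="Q2",
            text="Training evaluation should focus on measuring changes in knowledge "
                 "and knowledge transfer.",
            options=["Strongly agree", "Agree", "Disagree", "Strongly disagree"],
        ),
        # ... the remaining closed-ended items would follow the same pattern
    ]

    # A pilot survey can then check that every recorded answer is one of the
    # allowed options for its question.
    def is_valid_answer(question: ClosedQuestion, answer: str) -> bool:
        return answer in question.options

A representation of this kind also makes the later tabulation of responses into counts and percentages straightforward, since each answer can be matched against a fixed list of options.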

3.3.3 Sampling

In questionnaire-based research, the researcher has to select the specific target people needed to achieve the research objectives and answer the research questions; this process is called sampling. Sampling techniques are classified as probability and non-probability. In probability sampling, the first step is to choose the population of interest, that is, the population about which the researcher seeks results. The sample may be chosen in several stages, and because the probability of obtaining each sample is known, a sampling error can also be calculated for the results (Punch, 2003). Non-probability sampling, on the other hand, is a technique in which the samples are chosen by a process that does not give every individual in the population an equal probability of being selected (Punch, 2003).

The researcher could choose either a probability or a non-probability sampling technique for approaching the target people in order to collect primary data through the questionnaire. The choice was clear because the researcher initially had no full picture of the probable target people, so non-probability sampling was the obvious option. The researcher first approached some senior HR Managers at the Delhi and NCR based corporate offices of IBM-Daksh, and with their help further HR Managers were selected as respondents on a group basis. Finally, 25 HR Managers of IBM-Daksh were handed the questionnaire, and after answering the questions they returned it willingly. The task was demanding, but it was completed satisfactorily.

    3.4 DATA ANALYSIS

Triangulation was found to be the most fitting data analysis method for this qualitative research. Triangulation is applied in qualitative research to confirm and establish the validity of the research. It can be carried out through five types of method: data triangulation, investigator triangulation, theory triangulation, methodological triangulation, and environmental triangulation (Marshall and Rossman, 2002). However, the mode of triangulation most commonly applied in academic research is data triangulation.

Data triangulation in this research entailed the use of diverse sources of data and information. A key strategy was to classify each cluster or category of data against the agenda the researcher was examining, and then to include a similar number of people from each data group when answering the research questions and achieving the research objectives, as illustrated in the sketch below.
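As a rough illustration of this equal-numbers strategy, the sketch below groups hypothetical respondents by the data group they belong to and then draws the same number from each group. The group labels and respondent identifiers are invented for illustration and do not describe the actual composition of the sample.

    # Illustrative sketch of the equal-numbers strategy used in data triangulation.
    # Group labels and respondent identifiers are hypothetical.

    import random
    from collections import defaultdict

    # Each respondent is tagged with the data group (source) they belong to.
    respondents = [
        ("R01", "operations training"), ("R02", "operations training"),
        ("R03", "voice-process training"), ("R04", "voice-process training"),
        ("R05", "back-office training"), ("R06", "back-office training"),
        ("R07", "operations training"), ("R08", "back-office training"),
    ]

    # 1. Classify each respondent into their data group.
    groups = defaultdict(list)
    for respondent_id, data_group in respondents:
        groups[data_group].append(respondent_id)

    # 2. Include a similar number of people from each group.
    per_group = min(len(members) for members in groups.values())
    selected = {
        data_group: random.sample(members, per_group)
        for data_group, members in groups.items()
    }

    print(selected)  # e.g. two respondents drawn from each data group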


    Chapter 4

FINDINGS AND ANALYSIS

    4.1 INTRODUCTION

The aim of this research was, firstly, to examine the existing theories of evaluation of training programmes as a whole and, secondly, to explore a case study (IBM-Daksh) on the relevance of an extensively established academic model (the Kirkpatrick Model) for evaluating training programmes in the Indian BPO industry. The research sought to answer the following research questions: why is training programme evaluation needed; what are the various measurement methods of training programme evaluation; and how effective is the Kirkpatrick Model for the methodical evaluation of training? The analysis of the findings presented in this chapter addresses these questions.

    4.2 ANALYSIS OF FINDINGS

According to the research literature, training evaluation is a much debated topic: as Burrow and Berardinelli (2003) observe, few aspects of training receive as much attention as evaluation. Organisations accept that training is an important investment of time and money in the success of any organisation, a logic also accepted by Lingham et al. (2006), who believe that training helps to smooth relations at work. Training validation is distinct from training evaluation: validation may be defined as the process of checking whether people at work can complete specific tasks. Easterby-Smith and Mackness (1992) stressed four purposes of evaluation: (1) the organisation has an obligation to account for certain outcomes and consequences; (2)


the organisation must have standards against which the quality of work may be improved using the data gathered from the evaluation; and (3) evaluation helps participants to recognise the true value of the tasks they are assigned. Easterby-Smith and Mackness also stressed the purposes and importance of the training cycle for different stakeholders. In the light of these propositions, this research examined for what principal purpose training evaluation is needed at IBM-Daksh. The data collected in this context reveal that training evaluation is needed at IBM-Daksh principally for the purposes of training results and implementation control (see Table and Figure 4.1), as the majority of the research participants reported that their firm needs training evaluation for these two purposes.

Table 4.1: Principal purpose of training evaluation

Variable                 No. of Respondents   Response in Percentage   Cumulative Percentage
Training results                  8                    32%                     32%
Skill development                 4                    16%                     48%
Business goals                    4                    16%                     64%
Implementation control            9                    36%                    100%

As the data in the table above show, the majority of respondents (68% of the total 25) find that their firm needs training evaluation principally for the purposes of training results (32%) and implementation control (36%), whereas the remaining respondents (32%) find that their firm needs training evaluation principally for skill development (16%) and business goals (16%). By and large, these data indicate that training evaluation is needed at IBM-Daksh principally for the purposes of training results and implementation control. The arithmetic behind the percentage and cumulative percentage columns, here and in the later tables, is shown in the short sketch below.
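The percentages reported in the tables are simple proportions of the 25 respondents, and the cumulative column is a running total of those proportions. The minimal sketch below reproduces this arithmetic for Table 4.1; the counts are taken from the table above, while the helper itself is only illustrative and not part of the original analysis.

    # Recomputing the Response and Cumulative Percentage columns of Table 4.1.
    # Counts are the respondent numbers reported above (25 respondents in total).

    counts = {
        "Training results": 8,
        "Skill development": 4,
        "Business goals": 4,
        "Implementation control": 9,
    }

    total = sum(counts.values())  # 25 respondents
    cumulative = 0
    for variable, n in counts.items():
        share = 100 * n / total
        cumulative += share
        print(f"{variable:<24} {n:>2}  {share:4.0f}%  cumulative {cumulative:4.0f}%")

    # Output matches the table: 32%, 16%, 16%, 36%, with a cumulative total of 100%.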

Moreover, according to the research literature, the purpose of training evaluation is to measure the change brought about by the use of knowledge and learning in the workplace. Numerous models of training evaluation are offered in the literature, but, as Abernathy (1999) states, the most famous is the model given by the American professor Donald L. Kirkpatrick (1975), initially created in 1959. This is endorsed by Nickols (2005), who notes that current methods of evaluating training derive from the Kirkpatrick Model, and further affirmed by Canning (1996, p. 5), who described the model as "rather like a reptile" in that it brings incremental changes to the process of training. Chen and Rossi (2005) explain that the evaluation literature depends heavily on practice: most training evaluation in practice draws on the Kirkpatrick model but is at present shaped by market demand, and the information for evaluation at the first level is usually collected through questionnaires at the end of the training programme. In the light of these propositions, this research examined whether the training evaluation of IBM-Daksh should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer. The data collected in this context reveal that training evaluation at IBM-Daksh should indeed be centrally focused on measuring changes in knowledge and appropriate knowledge transfer (see Table and Figure 4.2), as a large majority of research participants either strongly agree or agree with this proposition.


Table 4.2: Training evaluation should focus on measuring changes in knowledge and knowledge transfer

Variable            No. of Respondents   Response in Percentage   Cumulative Percentage
Strongly agree              11                   44%                     44%
Agree                        7                   28%                     72%
Disagree                     5                   20%                     92%
Strongly disagree            2                    8%                    100%

As the data in the table above show, a large majority of respondents (72% of the total) either strongly agree (44%) or agree (28%) that the training evaluation of their firm should be centrally focused on measuring changes in knowledge and appropriate knowledge transfer, whereas the remaining respondents (28%) either disagree (20%) or strongly disagree (8%). By and large, these data indicate that training evaluation at IBM-Daksh should indeed be centrally focused on measuring changes in knowledge and appropriate knowledge transfer.

In addition, according to the research literature, Bramley (1999), a noted writer on evaluation theory, explains how the Kirkpatrick model focuses on four levels of training to measure the real growth of the organisation, covering the learning of facts, skills, attitudes and behaviours involved in the training process. Bramley states that most organisations carry out evaluation at the initial level, measuring the workforce's capability in gaining technical skills, and he further argues that changes in the organisation are needed in order to measure changes in employee behaviour. After initial training, organisations find it more convenient to evaluate the performance of employees at the end of the day or activity rather than through regular follow-up action. In the light of these


propositions, this research examined through which performance criteria the training evaluation of IBM-Daksh should be measured. The data collected in this context reveal that training evaluation at IBM-Daksh should be measured through the criterion of qualitative performance rather than quantitative performance (see Table and Figure 4.3), as the majority of research participants find that the training evaluation of their firm should be measured through the criterion of qualitative performance.

Table 4.3: Performance criteria for measuring training evaluation

Variable                  No. of Respondents   Response in Percentage   Cumulative Percentage
Qualitative performance           16                   64%                     64%
Quantitative performance           9                   36%                    100%

As the data in the table above show, the majority of respondents (64% of the total 25) find that the training evaluation of their firm should be measured through the criterion of qualitative performance, whilst the remaining respondents (36%) find that it should be measured through the criterion of quantitative performance. Overall, these data indicate that training evaluation at IBM-Daksh should be measured through the criterion of qualitative performance rather than quantitative performance.

Furthermore, in accordance with the research literature, it could be argued from the literature reviewed so far that training evaluation is relatively easy to implement, produces organisational change outcomes, assists in meeting training needs and reduces costs. Lack of accountability is treated as a major barrier to effective


evaluation. Rae (1999) emphasised that effective evaluation needs a training quintet to make senior management aware of the new techniques used for the progress of a company. Rae also stressed that senior management should be authorised to take an active part in the organisation in improving results, so as to create a congenial atmosphere for proper functioning. If the organisation does not have a culture that encourages its employees, evaluation will struggle to uncover information about drawbacks in the system; culture can thus be an obstacle to improving employee performance (Holton, 1996; Holton et al., 2000). The State Government of Louisiana, for example, found that culture had an adverse impact on performance-based training. Reinhardt (2001) focused her research on identifying barriers to measuring employee performance in workplace learning, and also highlighted loopholes in the organisational set-up as an important aspect of measuring the impact on performance; this issue of culture is treated as a significant barrier. In the light of these propositions, this research examined what the major challenge of training evaluation is for IBM-Daksh. The data collected in this context reveal that lack of accountability is the major challenge of training evaluation for IBM-Daksh (see Table and Figure 4.4), as the majority of research participants identified it as such.

Table 4.4: Major challenge of training evaluation

Variable                 No. of Respondents   Response in Percentage   Cumulative Percentage
Lack of accountability           15                   60%                     60%
Cultural resistance              10                   40%                    100%


As the data in the table above show, for the majority of respondents (60% of the total 25) lack of accountability is the major challenge of training evaluation for their firm, whilst for the remaining respondents (40%) cultural resistance is the major challenge. Overall, these data indicate that lack of accountability is the major challenge of training evaluation for IBM-Daksh.

Moreover, according to the research literature, training effectiveness and evaluation have recently received significant attention through many contributions to the training literature (Holton, 2003; Holton and Baldwin, 2000; Kraiger, 2002; Torres and Preskill, 2001). Among these, Kirkpatrick's four-level assessment technique (Holton, 1996; Kraiger, 2002) and broader theoretical models of training effectiveness (Holton, 1996; Tannenbaum et al., 1993) have emerged as vital works. Although one of these evaluation methods was developed more than ten years ago, post-training behaviour has not been incorporated into these evaluation methods, and the variables believed to play a part in training effectiveness have not been updated for many years. The measurement of learning outcomes through a practical method is called training evaluation, whereas a theoretical method of measuring training outcomes is called training effectiveness. Training evaluation provides a micro-view of a training programme, as it focuses only on learning outcomes; training effectiveness, by contrast, highlights the whole learning system and provides a macro-view of training outcomes. Training evaluation tries to identify the benefits of a training programme to individuals in terms of learning and increased job performance. In the light of these propositions, this research examined whether the training evaluation of IBM-Daksh should be a methodological approach to


measuring learning outcomes. The data collected in this context reveal that training evaluation at IBM-Daksh should indeed be a methodological approach to measuring learning outcomes (see Table and Figure 4.5), as a large majority of research participants either strongly agree or agree with this proposition.

Table 4.5: Training evaluation should be a methodological approach to measuring learning outcomes

Variable            No. of Respondents   Response in Percentage   Cumulative Percentage
Strongly agree              10                   40%                     40%
Agree                        9                   36%                     76%
Disagree                     6                   24%                    100%
Strongly disagree            0                    0%                    100%

As the data in the table above show, a large majority of respondents (76% of the total 25) either strongly agree (40%) or agree (36%) that the training evaluation of their firm should be a methodological approach to measuring learning outcomes, whereas the remaining respondents (24%) disagree. By and large, these data indicate that training evaluation at IBM-Daksh should indeed be a methodological approach to measuring learning outcomes.

Above and beyond this, in accordance with the research literature, the evaluation of a training programme is the assessment of its success or failure in terms of its design and content, changes in


the organisation, and learners' productivity. The training evaluation methods used depend on the evaluation model, and there are four such models. The first, Kirkpatrick's reactions, learning, behaviour and results typology, is the easiest and most frequently used technique for reviewing and understanding training evaluation. In this four-dimensional method, the learning outcome is measured at the time of training, meaning behavioural, attitudinal and cognitive learning, while behavioural learning measures on-the-job performance after the training. The second model, by Tannenbaum et al. (1993), expands Kirkpatrick's four-dimensional typology by adding post-training attitudes and by dividing behaviour into two training outcomes for evaluation: transfer performance and training performance. In this extended model, training reactions and post-training attitudes are not associated with any other evaluation outcome; learning, however, is associated with training programme performance, training programme performance with transfer performance, and transfer performance with training outcomes. In the light of these propositions, this research examined which dimension of the training evaluation target area needs to be the central focus for IBM-Daksh. The data collected in this context reveal that changes in learners and organisational payoff are the dimensions of the training evaluation target area that need to be the central focus for IBM-Daksh (see Table and Figure 4.6), as a large majority of research participants hold this view.


Table 4.6: Dimension of the training evaluation target area requiring central focus

Variable                      No. of Respondents   Response in Percentage   Cumulative Percentage
Learning content and design           5                    20%                     20%
Changes in learners                  11                    44%                     64%
Organisational payoff                 9                    36%                    100%

As the data in the table above show, a large majority of respondents (80% of the total 25) find that changes in learners (44%) and organisational payoff (36%) are the dimensions of the training evaluation target area that need to be the central focus for their firm, whereas the remaining respondents (20%) find that learning content and design is the dimension needing central focus. By and large, these data indicate that changes in learners and organisational payoff are the dimensions of the training evaluation target area that need to be the central focus for IBM-Daksh.

Furthermore, according to the research literature, Holton (1996) proposed three evaluation outcomes: learning, transfer and results. Holton does not consider reactions, because reactions are not a primary effect of a training programme; rather, they are a moderating or mediating variable between trainees' actual learning and their motivation for learning. This model relates learning to transfer and transfer to results. Additionally, Holton holds a different view on combining effectiveness and evaluation: in his model, specific effectiveness variables are highlighted as important aspects for assessment at the time of evaluating training outcomes. The last and fourth evaluation method was


developed by Kraiger (2002). This model stresses three multidimensional areas for evaluation: changes in trainees (i.e. cognitive, behavioural and affective), training design and system (i.e. design, validity and delivery of training), and organisational outcomes (i.e. results, job performance and transfer climate). Feedback from trainees is considered an assessment technique for measuring how effective a training programme's design and system were for the learner. Kraiger states that feedback measures are not associated with changes in trainees/employees or with organisational outcomes, but that those changes or learnings in employees are associated with organisational outcomes. In the light of these propositions, this research examined how effective the Kirkpatrick Model of training evaluation is for IBM-Daksh. The data collected in this context reveal that the Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh (see Table and Figure 4.7), as a large majority of research participants hold this view.

Table 4.7: Effectiveness of the Kirkpatrick Model of training evaluation

Variable               No. of Respondents   Response in Percentage   Cumulative Percentage
Highly effective               18                   72%                     72%
Reasonably effective            7                   28%                    100%
Ineffective                     0                    0%                    100%

As the data in the table above show, a large majority of respondents (72% of the total 25) find that the Kirkpatrick Model of training evaluation is highly effective for their firm, whilst the remaining respondents (28%) find that it is reasonably effective. Overall, these data indicate that the Kirkpatrick Model of training evaluation is highly effective for IBM-Daksh.

In addition, according to the research literature, the main aim of Kirkpatrick's method was to measure participants' reactions during the execution of a programme and the amount of learning that took place in the form of changed behaviour in the workplace. Kirkpatrick's measurement concept is divided into four levels; it remains unclear how these four steps became known as the Kirkpatrick Model, which is now recognised as a vital instrument for organisations (Kirkpatrick, 1998) and is one of the most frequently used frameworks in both technical and educational training. The first level, reaction, may be defined as how the trainee responds to the training and how efficiently it can be used for organisational growth. The second level, learning, is the tool for determining how far knowledge, attitudes and skills have helped the trainees at work; training is also seen as a propeller that fosters a congenial atmosphere and appropriate employee behaviour. The third level, behaviour, concerns how the learning is articulated in working relationships at the workplace; Kirkpatrick believes there is a large gap between technical knowledge and its implementation on the job. The fourth level, results, concerns how to reduce costs and grievances so that the organisation's profit can be increased. In the light of these propositions, this research examined which dimensions of Kirkpatrick Model training evaluation need to be the main focus for IBM-Daksh in order to get the desired results in relation to training evaluation. The data collected in this context reveal that learning and results are the dimensions of Kirkpatrick Model training evaluation that need to be the main focus for IBM-Daksh in order to get the desired


results in relation to training evaluation (see Table and Figure 4.8). The majority of research participants find that learning and results are the dimensions of Kirkpatrick Model training evaluation that need to be the main focus of their firm in order to get the desired results in relation to training evaluation.

Table 4.8: Dimension of Kirkpatrick Model training evaluation requiring most focus

Variable    No. of Respondents   Response in Percentage   Cumulative Percentage
Learning            9                   36%                     36%
Reactions           4                   16%                     52%
Behaviour           4                   16%                     68%
Results             8                   32%                    100%

As the data in the table above show, the majority of respondents (68% of the total 25) find that learning (36%) and results (32%) are the dimensions of Kirkpatrick Model training evaluation that require the most focus by their firm in order to get the desired results in relation to training evaluation, whilst the remaining respondents (32%) identify reactions (16%) and behaviour (16%) as the dimensions requiring the most focus. Overall, these data indicate that learning and results are the dimensions of Kirkpatrick Model training evaluation that need to be the main focus for IBM-Daksh in order to get the desired results in relation to training evaluation.

Additionally, according to the research literature, Belfield, Hywell, Bullock, Eynon and Wall (2001) focus on a method for evaluating the performance of medical educational interventions, checking their efficiency in healthcare through an adaptation of the Kirkpatrick Model.


Overall, these data conclude that Kirkpatrick Model training evaluation is indeed highly effective in evaluating the e-learning of IBM-Daksh.

Finally, according to the research literature, for the evaluation of training to be effective, the following considerations must be kept in mind for the achievement of the organisational goal. The training process must be completed within a defined time frame and against clear objectives, for example to fill a vacant post. The trainer must review the period after which participants who have completed the training are expected to return to work, so that on returning to the actual work field there is no confusion about how to execute the work. The nature and scope of training depend on the company's effective return on investment with regard to the satisfactory achievement of the organisational goal. The evaluation of performance is a regular phenomenon in almost every organisation in assessing the performance of employees. Che