M21 Assessment in the Workplace Biodata & The Interview Dr Caroline Bailey.


M21 Assessment in the Workplace

Biodata & The Interview

Dr Caroline Bailey

Overview

Biodata
• Definition
• Validity
• Advantages & Disadvantages

The Interview
• Overview of the ‘typical’ interview
• The quality of the interview as a selection technique: micro- and macro-analytic research
• Structured Interviews: methods
• Objective-Psychometric Perspective vs Subjective-Social Perspective

Overview

• Psychometric ‘tests’: Cognitive Ability Tests & Personality Inventories
– basic concepts
– predictive validity
– advantages & disadvantages

• Assessment Centres: history of the AC technique; what is an Assessment Centre?

• Do ACs work? Predictive validity and the construct validity problem
• Candidate reactions to ACs
• Future directions for the AC technique

A ‘typical’ interview

Prevalence of use: Robertson & Makin (1986): survey of major British employers identified in The Times 1000.

Function: various perspectives

• the organisation

• the applicant

• HR / Occupational Psychology

Moderators

• the organisation = selection ratio

• the applicant = the job market

Some variables in the interview (I)

1. Who does the interviewing?
• Stakeholders (HR, manager, panel)
• Stage in the recruitment process
• Information needed
• Resources available

2. Structure of the Interview (Herriot)
• Functions of utterances
• Functions by parties
• Content
• Order

Variables in the Interview (II)

3. Content of the Interview

– Taylor & Sniezek (1984): UK vs USA graduate trainee selection interviews
– Keenan & Wedderburn (1980): consistency of topics across interviewers
– Taylor & Sniezek (1984): importance of topics with regard to the overall selection recommendation
– reliance on biographical details

Variables in the Interview (III)

Bias, Fairness and equal opportunities

e.g. marital status, religion, home ownership, criminal record*, military discharge, pregnancy, spouse and family responsibilities

‘Questions posed during the interview (should) relate only to the requirements of the job. Where it is necessary to discuss the personal circumstances and their effect upon ability to do the job, this should be done in a neutral manner, equally applicable to all applicants’

Equal Opportunities Commission

How the interviewer reaches a decision

– Olian, Schwab & Haberfeld (1988): meta-analysis: qualifications = 35% of variance
– Tucker & Rowe (1977): decision reached within 9 mins
– Springbett (1958):
  • application form + appearance predicted 88% of candidates’ ratings
  • weighting of information (+ primacy/recency effects)
– Reliance on implicit personality theory
– Other stereotypes: age, physical attractiveness, clones, mirror images, contrast effects
– Candidate’s impression management skills...

Quality of the Interview as a selection technique (I)

• Wagner (1949): 174 ratings; r = 0.23 to 0.97 (specific traits); r = -0.20 to 0.85 (overall ability)

• intelligence (Sneddon, 1930)

• sociability and likeability (Gifford, Ng and Wilkinson, 1985)

• Wiesner & Cronshaw (1988): interviews based on formal job analysis

Quality of the ‘typical’ interview as a selection technique (II)

Predictive Validity
– Dunnette (1972): 0.16
– Reilly & Chao (1982): 0.19
– Hunter & Hunter (1984): 0.14

Herriot (1991) cites:
– 0.14 (supervisor ratings)
– 0.08 (promotion)
– 0.10 (training success)
– 0.03 (length of tenure)
– overall: 0.14

Improving the Psychometric Quality of the Interview

– Select the interviewers
– Train interviewers
– Tell interviewers what to look for (i.e. structured interviews)
– Listen to candidates
– Use the information efficiently

Definition of a structured interview

‘A series of job related questions with predetermined answers that are consistently applied across all interviews for a particular job.’

Pursell, Campion & Gaylord (1980)

– Situational questions
– Job knowledge questions
– Job sample/simulation questions
– Worker requirement questions
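A minimal sketch of how such a scoring key might work (not part of the lecture): each question carries predetermined benchmark answers with agreed point values, applied identically to every candidate. All question names, benchmark levels and points below are hypothetical.

```python
# Illustrative only: scoring a structured interview against a predetermined
# key, in the spirit of Pursell, Campion & Gaylord (1980). All question
# names, benchmark levels and point values here are hypothetical.

SCORING_KEY = {
    "situational_conflict": {"avoids issue": 1, "refers upward": 3, "resolves directly": 5},
    "job_knowledge_safety": {"incorrect": 1, "partial": 3, "complete": 5},
}

def score_interview(ratings):
    """ratings maps each question to the benchmark level the interviewer chose."""
    return sum(SCORING_KEY[q][level] for q, level in ratings.items())

candidate_a = {"situational_conflict": "resolves directly", "job_knowledge_safety": "partial"}
candidate_b = {"situational_conflict": "avoids issue", "job_knowledge_safety": "complete"}
print(score_interview(candidate_a))  # 8
print(score_interview(candidate_b))  # 6
```

Because the same key is applied to every candidate, interviewer judgement is confined to choosing the benchmark level, which is one reason structured interviews show higher reliability.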

Types of Structured Interview (I)

• Standardised Interview: Hovland & Wonderlic (1939): covers work history, family history, social history and personal history. Valid predictor of those later dismissed.

• Patterned Interview: McMurray (1947): the interviewer has to rate aspects of how the candidate replied to a question. Intended to measure personality.

• Patterned Behaviour Description Interview (PBDI): Janz (1982)

• Comprehensive Structured Interview: Campion, Pursell & Brown (1988)

• Structured Behavioural Interviews: Motowidlo et al (1992)

Types of Structured Interview (II)

Situational Interview: Latham et al (1980)
– Locke’s Goal Setting Theory (1968)
– [Maas (1965) interview structure: traits]
– Flanagan (1954): critical incident technique
– ‘future’ behaviours

Validity studies:
– Hourly paid workers: 0.46
– Foremen: 0.30
– Entry-level production workers: 0.39 (women); 0.33 (Black candidates)

Why do structured interviews have higher validity coefficients ?

• based on thorough job analysis
• assumes intentions and actual behaviours are strongly linked
• same questions for each candidate
• Wright et al (1989): an ‘orally administered cognitive ability test’...

Perspectives on the interview

Objectivist-Psychometric Perspective
• North America (particularly the US)
• ‘non-participant observer’
• reduction in processual variance

vs

Subjectivist-Social Perspective
• Schein, ’70, ’78; Herriot, ‘81, ‘84, ‘87, ‘89
• ‘participating negotiator’
• ‘processual negotiation’
• the psychological contract

Objective-Psychometric vs Subjective-Social

Criticisms: Objective-Psychometric
• the interviewer does play an active role
• flexibility and adaptability of the interview
• reliability: same information - interpreted differently
• can interviewer dysfunctions ever be overcome?

Criticisms: Subjective-Social
• candidate power?

What is the best technique? Points to consider...

• What needs to be assessed? Job analysis leading to a person specification (KSAs/Competencies)

• Cost-benefit analysis of strengths/weaknesses of specific techniques (with regard to current selection situation)

• Resources available : time, money, suitably qualified people

others?..

The Assessment Centre Technique

Originally used for military purposes in WWII, then Bray and Byham (1950s)… first commercial AC (AT&T). 26 dimensions measured by:

• a business game (‘the manufacturing problem’)
• Leaderless group discussion (‘the promotion problem’)
• In-Basket exercise (administrative exercise)
• Measures of personality (incl. Thematic Apperception Test - TAT, and Incomplete Sentences Blank)
• Interview

Prevalence and Nature of Use

– Bray (1997): 80% of Fortune 500 companies use ACs somewhere in the organisation
– Shackleton (1991): 21% of major UK organisations used ACs in 1986; 59% by 1991
– Used in all industries (manufacturing most prevalent)
– 1980s = for selection of supervisors/middle management; 1990s = for selection of ‘employees’ (non-management) through to executives

The AC Model (I)

Guidelines: 17th International Congress on the Assessment Centre Method (1989)

1. Dimensions
– Job analysis to identify Knowledge, Skills and Abilities (KSAs)

2. Techniques
– Must provide information for the dimensions identified in the job analysis. Multiple techniques must be used.

The AC Model (II)

3. Assessors
– Must use multiple assessors to observe and evaluate each candidate. Assessors must be trained and demonstrate their ability to assess.

4. Gathering Data
– Assessors must use a systematic procedure to record specific behavioural observations as they occur. Each assessor must prepare a report, and these data must be pooled to provide an OAR (overall assessment rating).
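Point 4 can be sketched in code. Note the guidelines require systematic pooling of assessor reports into an OAR but do not prescribe a method; simple averaging is assumed below, and all dimension names and ratings are hypothetical.

```python
# Illustrative only: pooling multiple assessors' dimension ratings into an
# overall assessment rating (OAR). Simple mean pooling is an assumption;
# the guidelines require systematic pooling but mandate no particular method.
from statistics import mean

# One report per assessor: dimension -> rating (a 1-5 scale is assumed)
reports = [
    {"leadership": 4, "planning": 3, "communication": 5},
    {"leadership": 3, "planning": 4, "communication": 4},
    {"leadership": 4, "planning": 4, "communication": 5},
]

def pool_reports(reports):
    """Average each dimension across assessors, then average dimensions into an OAR."""
    dims = reports[0].keys()
    dim_means = {d: mean(r[d] for r in reports) for d in dims}
    oar = mean(dim_means.values())
    return dim_means, oar

dim_means, oar = pool_reports(reports)
print(round(oar, 2))  # 4.0
```

Averaging per dimension first preserves the dimension-level profile that assessors discuss before agreeing the OAR.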

Other Variables in AC design

– ‘Off the Shelf’ vs Tailoring

– Length/Duration

– AC model used

[ Dimension-specific vs Task-specific ]

Candidate Reactions (face validity)

– Dulewicz (1991); Thornton (1992): candidates, assessors and management all hold positive attitudes toward ACs. Candidates find the exercises difficult and challenging, but believe they measure job-relevant behaviours and are fair.

– Macan, Avedon, Paese & Smith (1994): ACs vs tests: ACs more acceptable, face valid and fair

– NB. Candidate anxiety: Iles, Robertson and Rout (1989): 18%-32% of candidates said the AC is stressful; Teel and DuBois (1983): 50% felt performance was affected by stress.

Do ACs work? Criterion Validity (I)

– Howard (1974): criterion validity higher for ACs than any other selection technique
– Schmitt et al (1984): AC mean validity = 0.41, equivalent to work samples and supervisor/peer ratings

NB1. Developments in testing and interview techniques (and the issue of incremental validity)
NB2. Can cheaper measures substitute?

Lowry (1994): AC and Personnel Records: AC strength = assessment of interpersonal skills.

Do ACs work? Criterion Validity (II)

Gaugler et al (1987): meta-analysis of AC validity. Criteria = (1) career progress; (2) overall performance ratings; (3) dimensional performance ratings; (4) ratings of potential; (5) wages; (6) training performance

Average r = 0.40 (variance due to variation in components)

Higher validities reported for ACs that: (1) used a wider range of exercises; (2) used psychologists as well as managers as assessors; (3) included peer ratings in the OAR; (4) had more female candidates.

The ‘Construct Validity Problem’

= although the AC does show convergent validity, discriminant validity is usually poor.

Why?
• The ‘classic’ explanation
• criterion contamination
• subtle criterion contamination
• self-fulfilling prophecy
• intelligence
• consistency with past performance
• assessors’ difficulty in providing ratings

Rating Techniques

1. Behaviour Checklists
• Reilly et al (1990): for each exercise, assessors note the ‘frequency of’ behaviours

2. Key Behaviours (Key Actions)
• Models of effectiveness for handling situations. Capture quality and quantity of actions. Are not independent across dimensions.

Recommendations
• Select fewer but more observable dimensions
• Cover the important domains, but don’t expect sharp differentiation of dimensions that are subtle variations on the same theme (e.g. leadership)
• Rate dimensions only AFTER you have enough behavioural evidence to do so
• Define dimensions clearly and unambiguously

Candidate Reactions: Anxiety

Fletcher, Lovatt & Baldry (1997): related state, trait and test anxiety to AC performance. 38 candidates at an AC (8 dimensions, 7 ‘exercises’).

Results
• State anxiety had a curvilinear relationship to several AC measures, with low and high anxiety related to poor performance
• Test anxiety was significantly negatively correlated with scores on a numerical test and a written exercise
• High trait anxiety was associated with better assessment ratings

Future of the AC technique

DeVries (1993): critique of the AC method (incl. that the AC model takes an outdated view of the manager). Bray and Associates (1992): evidence of changes in the dimensions assessed.

• AC exercises - reflect the empowerment of employees
• Global market (cultural differences)
• Technology (videotapes, computer simulations, CAT, etc.)
• Used increasingly for the selection of employees and executives
• Used for a variety of other functions: RJPs, placement, development, organisational development, career and succession planning

What is Biodata? (I)

‘autobiographical data which are objective or scorable items of information provided by an individual about previous experience (demographic, experiential, attitudinal) which can be presumed or demonstrated to be related to personality structure, personal adjustment or success in social, educational or occupational pursuits’

(Owens, 1976)

Drakeley (1989): classification of biodata: (i) background data; (ii) commitment data; (iii) achievement data

What is Biodata? (II)

Henry (1966): dimensions of biodata include:
• verifiable vs unverifiable
• historical vs futuristic
• actual vs hypothetical
• memory vs conjecture
• factual vs interpretive
• specific vs general
• response vs response tendency
• external events vs internal events
• strictly biographical vs attitudes

Use of Biodata

– 1894: Colonel Peters - Life Assurance company
– Goldsmith (1922): Weighted Application Blanks
– 1960s: introduction of Biographical Questionnaires
– 1970s: empirically derived biodata (based on the actual capacity of certain biodata to discriminate) vs rationally derived biodata (a conceptual model of biodata that ought to be related to the criterion)
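The empirical keying behind Weighted Application Blanks can be sketched as follows: each item response is weighted by how strongly it separates a high-criterion group (e.g. stayers) from a low-criterion group (leavers). The items, percentages and weights-in-tenths rule below are hypothetical illustrations, not data from the lecture.

```python
# Illustrative only: empirically keyed Weighted Application Blank (WAB)
# scoring. Item weights are derived from the percentage-point gap between
# criterion groups; all figures and the //10 weighting rule are hypothetical.

# Percentage of each criterion group endorsing each item (hypothetical)
stayers = {"lives_locally": 70, "previous_tenure_2yr_plus": 60}
leavers = {"lives_locally": 40, "previous_tenure_2yr_plus": 25}

def derive_weights(high_group, low_group):
    """Weight = criterion-group gap in percentage points, taken in tenths."""
    return {item: (high_group[item] - low_group[item]) // 10 for item in high_group}

def score_applicant(responses, weights):
    """Sum the weights of the items the applicant endorses."""
    return sum(weights[item] for item, endorsed in responses.items() if endorsed)

weights = derive_weights(stayers, leavers)
print(weights)  # {'lives_locally': 3, 'previous_tenure_2yr_plus': 3}

applicant = {"lives_locally": True, "previous_tenure_2yr_plus": False}
print(score_applicant(applicant, weights))  # 3
```

This is the ‘empirically derived’ route above: weights come from observed discrimination between groups rather than from a conceptual model.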

Biodata vs Personality

• Biodata allow a definite, unique answer (personality inventories often don’t)

• Personality inventories elicit ‘gut’ responses (biodata specify in some detail the exact information required)

• Personality inventories use fixed-key scoring, whereas biodata are re-keyed for each selection task.

Why should biodata be predictive of job performance ?

– Psychological theories of personality that emphasize how developmental experiences shape our personality

– One of the best predictors of future behaviour is past behaviour (hard biodata)

– Goal setting theory - intentions predictive of behaviour (soft biodata)

– WAB: the scoring key is invisible to candidates, therefore impression management is more easily detected

Strengths of Biodata

Reilly & Chao (1982): predictive validity can be high enough to consider biodata a legitimate alternative to standardised testing

– systematic, uniform way to gather information
– biodata questionnaires use a multiple-choice format, which allows rapid processing of candidates
– candidate reactions
– theoretically, biodata give a more accurate representation of an individual than that obtained by high-IM techniques such as the interview
– minorities can be identified and treated fairly
– biodata, once designed, are very cost effective

Weaknesses of Biodata

– Homogeneity vs heterogeneity - cloning the past
– atheoretical
– time consuming
– content validity
– biodata do not ‘travel’
– shrinkage over time