
A Review and Analysis of the Policy-Capturing Methodology in Organizational Research: Guidelines for Research and Practice. Ronald J. Karren and Melissa Woodard Barringer. Organizational Research Methods, 2002, 5: 337. DOI: 10.1177/109442802237115. Published by Sage Publications (http://www.sagepublications.com) on behalf of the Research Methods Division of the Academy of Management.



A Review and Analysis of the Policy-Capturing Methodology in Organizational Research: Guidelines for Research and Practice

RONALD J. KARREN
MELISSA WOODARD BARRINGER
University of Massachusetts, Amherst

Policy-capturing has been employed extensively in the past to examine how organizational decision makers use the information available to them when making evaluative judgments. The purpose of this article is to provide researchers with guidelines for enhancing the reliability and validity of their studies. More specifically, the authors identify issues researchers may want to consider when designing such studies and offer suggestions for effectively addressing them. They draw on a review of 37 articles from 5 major journals to identify best practice and discuss the advantages and disadvantages of alternative approaches to resolving the various issues. The key issues are (a) the realism of the approach and its effect on both internal and external validity, (b) the limits of the full factorial design, (c) the need for orthogonal cues, (d) sample size and statistical power, and (e) the assessment of reliability. The analysis also includes comparisons with conjoint analysis, a similar methodology used in the marketing research literature.

Policy-capturing is a method employed by researchers to assess how decision makers use available information when making evaluative judgments (Zedeck, 1977). The purpose of this methodology is to capture individual judges' decision-making policies, that is, how they "weight, combine, or integrate information" (Zedeck, 1977, p. 51). It involves asking decision makers to judge a series of scenarios describing various levels of the explanatory factors, or cues, and then regressing their responses on the cues. The estimated coefficients indicate the relative importance of the various cues and define the patterns or strategies for each decision maker. These results can also be used to explore the variability or individual differences among the decision makers (see, e.g., Graves & Karren, 1992). This involves employing cluster analytic methods to group individuals with similar cue weights or strategies. Thus, policy-capturing can be used not only to identify the extent of individual differences in strategies but also to group or cluster individuals with similar policies.
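The basic procedure just described (judges rate scenarios, and each judge's ratings are regressed on the cue values) can be sketched in a few lines. The cue names, the "true" policy weights, and the simulated ratings below are invented purely for illustration and are not drawn from any of the studies reviewed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cues shown in each scenario (standardized levels); the names
# and the "true" policy below are invented for this illustration.
cues = ["pay", "promotion", "location", "challenge"]
n_scenarios = 40
X = rng.standard_normal((n_scenarios, len(cues)))

# One simulated judge: weights pay most heavily, ignores challenge, and
# rates with some noise.
true_weights = np.array([0.6, 0.3, 0.1, 0.0])
ratings = X @ true_weights + 0.2 * rng.standard_normal(n_scenarios)

# Capture the policy: ordinary least squares of the judge's ratings on the
# cue values (intercept added as a leading column of ones).
X1 = np.column_stack([np.ones(n_scenarios), X])
coef, *_ = np.linalg.lstsq(X1, ratings, rcond=None)

for name, b in zip(cues, coef[1:]):
    print(f"{name:10s} captured weight = {b:+.2f}")
```

In a real study this regression would be run once per judge, yielding a separate weight profile for each decision maker.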

The policy-capturing approach has been used to assess judgments in a variety of content areas such as job search (Cable & Judge, 1994; Rynes & Lawler, 1983; Rynes, Schwab, & Heneman, 1983), compensation (Sherer, Schwab, & Heneman, 1987), employee discipline (Klaas & Wheeler, 1990), job analysis (Sanchez & Levine, 1989), sexual harassment (York, 1989), employment interviews (Dougherty, Ebert, & Callender, 1986; Graves & Karren, 1992), contract arbitration (Olson, Dell'Omo, & Jarley, 1992), and motivation (Zedeck, 1977). Most of these studies have employed an experimental policy-capturing design. That is, a survey questionnaire is utilized to elicit judges' likely responses to scenarios rather than field data being used to examine actual responses. A thorough review of the latter approach is available from Roehling (1993), and we therefore confine our analysis to experimental designs.

The popularity of the policy-capturing method stems from a number of advantages. First, the method overcomes many of the limitations inherent in other, more direct approaches to examining individuals' decision policies. For example, a simpler method involves asking individuals to rate or rank the variables of interest in order of importance (self-report attribute ratings). Concerns about the validity of this method have been raised by studies finding that such stated decision policies differ from actual (observed) decision policies (Hitt & Middlemist, 1979; Sherer et al., 1987; Stumpf & London, 1981). The discrepancy may stem from individuals not being candid in their responses because of a desire to be socially correct. Rating challenging work above high pay, for example, may be the socially desirable response to questions about the importance of job attributes. In fact, studies have shown that less socially desirable factors such as salary receive more weight and are seen as more important when they are derived through policy-capturing (see Feldman & Arnold, 1978). Policy-capturing purportedly weakens such social desirability effects by indirectly assessing the importance of explanatory variables and, for this reason, is considered preferable to the self-report attribute method (Arnold & Feldman, 1981; Judge & Bretz, 1992; Rynes et al., 1983).1 Moreover, rating or ranking individual attributes requires more self-insight than making overall judgments about multi-attribute scenarios. Finally, asking individuals to make overall judgments about multi-attribute scenarios is more similar to actual decision problems, and hence more realistic, than is a self-report attribute design (Rynes et al., 1983).

Another advantage of the methodology comes from the ability of the researcher to experimentally manipulate cue values. By minimizing variable intercorrelations, researchers avoid the problems of multicollinearity often found with field data and enhance the capacity to assess the independent effects of cues (e.g., Feldman & Arnold, 1978). Furthermore, policy-capturing is typically carried out at the individual level. This means that a separate model is generated for each decision maker, although aggregate analyses of groups of individuals can also be conducted. These separate, individual analyses allow for a more in-depth assessment of differences between individuals.

These advantages notwithstanding, there are several issues that researchers designing a policy-capturing study must address if they are to avoid questions about the validity of their results. A number of concerns have been raised about the effect of using simulated decision contexts on the external validity of results. Care must therefore be taken to create scenarios that include salient and realistically defined cues and to avoid unlikely cue combinations. Validity may also be compromised by a failure to consider respondent overload, statistical power, and reliability.

The purpose of this article is to provide researchers interested in using the policy-capturing approach with guidelines for enhancing the reliability and validity of their studies. More specifically, we identify the issues researchers may want to consider when designing a policy-capturing study and offer suggestions for effectively addressing these issues. The aim is to provide those unfamiliar with the methodology with accessible and comprehensible suggestions for designing their own studies. Our analysis of the issues is informed by a review of the approaches taken by researchers who have used the policy-capturing design in past research. We also explore how marketing researchers using conjoint analysis, a methodology for assessing the relative importance of product attributes in consumer decision making, have addressed some of the issues. Our review of all of these studies is aimed not at critiquing the research designs but, rather, at developing an understanding of what can be considered best practice. Where the best practice is unclear, that is, where researchers have used a number of different approaches to address a design issue, we discuss the advantages and disadvantages of each and offer recommendations for putting available techniques to the best use.

We distinguish our article from the Aiman-Smith, Scullen, and Barr study (2002 [this issue]) in two ways. First, Aiman-Smith et al. offer a tutorial that covers a broad range of topics including study design, execution, analysis, interpretation, and reporting of policy-capturing studies. Our article covers a narrower range of topics but generally goes into more depth on some of the key issues regarding design and execution. Second, our discussion of these issues incorporates our extensive review of prior studies using the policy-capturing methodology.

We begin our article with a brief review of studies that have used the policy-capturing approach. We have limited our selection of studies to those published over the past 25 years in five highly regarded management journals: the Journal of Applied Psychology, Personnel Psychology, the Journal of Management, the Academy of Management Journal, and Organizational Behavior and Human Decision Processes. We found 37 studies that used policy-capturing as the primary methodology. These studies are summarized in Table 1. As Table 1 indicates, we found differences across these studies in the types of decisions made, the samples used, the designs employed, the stimuli presented to the respondents, and the analyses employed by the researchers.

Job choice, ratings of job applicants, and performance evaluation decisions made up approximately one half of the types of decisions studied. Other popular types were compensation, disciplinary, and absence decisions. The remaining types of judgments include promotion, sexual harassment, job task importance, firm acquisition, media choice, organizational effectiveness, utility, transportation services, and changes in work attitudes.

The samples for these studies were drawn from a number of different groups. Undergraduate and graduate students were among the most used subjects in the studies. In most cases, they were used to simulate decisions that would typically involve them (e.g., job choice). Other studies included decisions made by managers, employees, and specialists such as interviewers and recruiters, executives, and faculty members. Again, in most cases, the decision makers were those who typically had experience with the decisions.

We grouped the designs into four categories: full factorials (16), fractional factorials (4), studies without a factorial design and in which the intercorrelations were zero or near zero (12), and studies in which the intercorrelations were significantly above or below zero (4). One study did not report the size of the intercorrelations. Thus, for most of the designs, the intercorrelation of the factors was either zero or near zero.


Table 1
Summary of Published Policy-Capturing Studies
(Each entry lists: decision context; sample; variables and levels; stimulus presentation)

Journal of Applied Psychology
- Allen & Muchinsky (1984): Transportation services; 47 undergraduates and 55 government employees; 4 variables (4 × 3 × 3 × 2); 36 written transportation scenarios
- Dougherty, Ebert, & Callender (1986): Interviewers' ratings of job applicants; one organization with 3 interviewers; 8 variables; 120 videotapes of actual applicant interviews
- Dunn, Mount, Barrick, & Ones (1995): Managers' evaluations of job applicants; 84 managers from multiple organizations; 6 variables, 2 levels each; 39 written personality profiles of hypothetical job applicants
- Feldman & Arnold (1978): Job choice; 62 graduate students from 2 universities; 6 variables, 2 levels each; 64 written paragraph descriptions
- Hitt & Barr (1989): Evaluating job applicants and starting salaries; 68 line and staff managers from multiple organizations; 6 applicant attributes, 2 levels each; information and videotapes of applicants
- Hollenbeck & Williams (1987): Changes in work attributes; 88 department store salespersons from one organization; 6 facets/variables, 4 levels; written scenarios with a subset of factors
- Judge & Bretz (1992): Job choice; 67 undergraduates from 2 universities; 7 variables, 2 levels each; written scenarios
- Orr, Sackett, & Mercer (1989): Estimates of the dollar value of performance; 17 managers from 1 organization; 13 variables, multiple values; 50 written profiles of hypothetical programmers
- Rynes & Lawler (1983): Job choice; 10 undergraduates; 4 variables (4 × 3 × 2 × 3); written job descriptions
- Sanchez & Levine (1989): Overall importance of job tasks; 60 employees from 4 jobs in 2 cities; 6 task factors, multiple levels; task inventories with descriptions of 6 task scales
- Viswesvaran & Barrick (1992): 2 compensation decisions; 35 compensation specialists; (a) 5 factors (2 × 4 × 4 × 5 × 5), (b) 5 factors (3 × 3 × 5 × 3 × 5); hypothetical firm descriptions
- Zedeck & Cascio (1982): Performance appraisal decisions; 130 undergraduates from 1 university; 5 performance factors, 3 levels each; written descriptions of performance

Personnel Psychology
- Cable & Judge (1994): Job search decisions; 171 undergraduate students seeking jobs from 1 university; 5 factors, 2 levels each; written descriptions of variables
- Graves & Karren (1992): Interviewers' evaluations of job candidates; 29 corporate interviewers from 1 organization; 6 factors, 2 levels each; written descriptions of variables
- Klaas & Dell'Omo (1991): Disciplinary decisions (substance abuse); 93 managers from 2 organizations; 6 factors, 2 levels each; written case scenarios
- Klaas & Wheeler (1990): Disciplinary decisions; 19 human resource managers and 28 line managers; 6 factors, 2 levels each; written descriptions of variables
- Madden (1981): Performance evaluations; 3 experiments with undergraduates (58, 70, and 43 subjects); 2 factors, gender and performance level (2 × 3); written profiles
- Sherer, Schwab, & Heneman (1987): Salary raise decisions; 11 supervisory personnel; 5 factors (2 × 2 × 3 × 2 × 2); written profiles of hypothetical employees
- Zhou & Martocchio (2001): Compensation decisions; 71 Chinese managers and 218 graduate student alumni/managers; 4 factors (2 × 2 × 2 × 2); written profiles of employees

Organizational Behavior and Human Decision Processes
- Brannick & Brannick (1989): Performance ratings; 13 supervisors and 10 faculty members; both had 16 variables with 5 values; written profiles
- Hobson, Mendel, & Gibson (1981): Performance rating behavior of supervisor/subordinate; 20 faculty members and chair of university; 14 performance factors, 3 levels each; written hypothetical profiles
- Martocchio & Judge (1994): Decisions to be absent; 138 workers; 6 factors (2 × 2 × 2 × 2 × 2 × 3); written scenarios
- Mazen (1990): Evaluation of applicant profiles; 118 recruiters at 1 university; 9 factors, 9 values; profiles presented on cards
- Rynes, Schwab, & Heneman (1983): Job application decisions; 10 college seniors; 5 factors (5 × 3 × 2 × 2 × 2); written scenarios
- Zedeck (1977): Job choice; 91 undergraduates and 233 MBAs; 6 factors, 5 levels each; written descriptions of hypothetical organizations
- Zedeck & Kafry (1977): Overall performance evaluations; 67 nursing personnel from multiple organizations; 9 performance factors, 3 levels each; written scenarios containing behaviors

Academy of Management Journal
- Arnold & Feldman (1981): Job organization choice; 86 graduate students; 6 factors, 2 levels each; written scenarios
- Hitt & Middlemist (1979): Organizational effectiveness judgments; 50 managers; 25 possible factors, 5 levels; simulated cases or work units
- Pablo (1994): Integration design decisions; 56 executives; 5 factors, 2 levels each; hypothetical scenarios
- Stahl & Zimmerer (1984): Firm acquisition decisions; 42 executives; 6 factors, 2 levels each (1/2 fraction of 2 × 2 × 2 × 2 × 2 × 2); written acquisition decision-making exercise
- Stumpf & London (1981): Promotion decisions; 43 managers and 51 students; 5 factors (2 × 2 × 3 × 2 × 2); written hypothetical candidates
- Webster & Trevino (1995): Media choice; (a) 197 employees, (b) 70 employees; (a) 5 factors (5 × 3 × 2 × 2 × 2), (b) 3 factors (3 × 2 × 4); written scenarios
- York (1989): Policies on sexual harassment; 15 Equal Employment Opportunity officers and 79 undergraduate and graduate students; 8 factors, most with 2 levels; written profiles

Journal of Management
- Beatty, McCune, & Beatty (1988): Compensation decisions; 41 Japanese and 63 U.S. managers from multiple organizations; 8 factors, multiple levels; written descriptions of hypothetical employees
- Bretz & Judge (1994): Job choice; 65 undergraduate and graduate students from 2 universities; 7 factors, 2 levels each; written scenarios
- Judge & Martocchio (1996): Attributions on the cause of absence; 138 employees in a university; 4 factors (2 × 2 × 2 × 3); written scenarios
- Martocchio & Judge (1995): Absence disciplinary decisions; 57 employees or 19 triads (supervisor and subordinates); 6 factors, 2 levels each; written scenarios

Most of the stimuli presented were written scenarios or profiles. Only a few were videotaped. Of those written scenarios, almost all were hypothetical situations.

The results of the vast majority of the studies included an assessment of the policies of individual judges. Among studies that examined both linear and nonlinear effects, the linear model contained most of the explained variance; thus, little was attributed to nonlinear or interaction effects. About a third of these studies considered some form of cluster analysis to group the relative weights. A large minority of the studies also used between-subjects analyses specifically to test various hypotheses of the study. Some of the studies included comparisons of the policy-capturing methodology with direct assessments. The results of these comparisons indicated that the correspondence was only moderate (e.g., Arnold & Feldman, 1981).
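Grouping judges with similar captured weights, as the cluster-analytic studies above did, can be illustrated with a small simulation. The weight profiles and the two "policy types" below are invented for the example, and a plain k-means routine stands in for whichever clustering method a researcher prefers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated captured policies: each row holds one judge's standardized cue
# weights. Two underlying "policy types" are built in for illustration:
# pay-driven judges and challenge-driven judges.
pay_driven = rng.normal([0.7, 0.1, 0.1], 0.05, size=(10, 3))
challenge_driven = rng.normal([0.1, 0.2, 0.6], 0.05, size=(10, 3))
W = np.vstack([pay_driven, challenge_driven])

def kmeans(X, k, n_iter=20):
    """Plain k-means with deterministic farthest-first initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # Next center: the point farthest from all centers chosen so far.
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign each judge to the nearest center, then recompute centers
        # as the mean weight profile of each cluster.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(W, k=2)
print(labels)
```

With real data, the number of clusters would itself be a substantive question rather than fixed at two as here.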

We next discuss the five key issues we have identified as important design considerations for researchers developing a policy-capturing study: (a) the realism of the approach and its effect on both internal and external validity, (b) the limits of the full factorial design, (c) the need for orthogonal cues, (d) sample size and statistical power, and (e) the assessment of reliability. We review how past researchers have addressed these issues and offer recommendations accordingly.
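Of the issues just listed, sample size and statistical power lend themselves to a quick a priori check. The sketch below computes approximate power for a single judge's regression from the number of scenarios and cues; the function name, defaults, and the use of Cohen's f² convention for the noncentrality parameter are our choices for illustration, not prescriptions from the article:

```python
from scipy.stats import f, ncf

def regression_power(n_scenarios, n_cues, f2=0.15, alpha=0.05):
    """Approximate power of the overall F test for one judge's regression.

    f2 is Cohen's effect size (0.02 small, 0.15 medium, 0.35 large); the
    noncentrality parameter lambda = f2 * (u + v + 1) follows Cohen's
    convention. This is a rough planning aid, not a full power analysis.
    """
    u = n_cues                      # numerator df: number of cues
    v = n_scenarios - n_cues - 1    # denominator df
    lam = f2 * (u + v + 1)
    f_crit = f.ppf(1 - alpha, u, v)
    return float(ncf.sf(f_crit, u, v, lam))

# Power grows with the number of scenarios each judge rates.
for n in (30, 50, 80):
    print(n, round(regression_power(n, n_cues=6), 2))
```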

Issues in the Design of Policy-Capturing Studies

    Realism

A recurrent concern about policy-capturing has been with the realism of the decision problems presented to participants and, hence, the external validity of the results. A realistic decision problem is one that is representative of the problems that occur naturally in the participants' environment, whereas an unrealistic problem is one that is unlikely to occur. If the decision problems used in a policy-capturing study are not realistic, then the results may be biased and cannot be generalized to nonexperimental settings (Klaas & Wheeler, 1990; Lane, Murphy, & Marques, 1982; Olson et al., 1992; York, 1989).

There are a number of inherent challenges to enhancing realism. First, in a policy-capturing design, individuals are asked to make judgments based on a limited amount of information (typically four to eight variables, as shown in Table 1), whereas they are likely to have more extensive information when making judgments of actual cases (Olson et al., 1992). There is evidence, however, that owing to cognitive limitations, individuals tend to base judgments on a relatively small number of criteria (Cooksey, 1996; Rossi & Anderson, 1982; Sanchez & Levine, 1989). Hence, if care is taken to identify, and include in the study, the decision criteria that are likely to be most salient to decision makers' judgments, then the realism of the decision problem, and the external validity of the study, can be enhanced. This objective may be difficult to achieve, however, as it is virtually impossible to ascertain a priori the factors to which individuals attend in making judgments (Viswesvaran & Barrick, 1992; York, 1989). One approach has been to survey or interview individuals involved in making the decisions of interest. This may involve focus groups and/or interviews with individuals similar to the study's sample (Graves & Karren, 1992; Rynes & Lawler, 1983; Viswesvaran & Barrick, 1992). The advantage of such an approach is that the information is provided by the decision makers themselves, who are arguably in the best position to know what they consider when making decisions (Cooksey, 1996). A potential disadvantage, on the other hand, is that the identification of important criteria is a subjective process, and the information obtained may be incomplete due to lapsed or reconstructed memories (Cooksey, 1996). To minimize these problems, researchers are advised to use multiple, carefully selected decision makers and look for consistent themes (Cooksey, 1996). An alternative approach is to examine company or other related records to ascertain important decision criteria. Hobson, Mendel, and Gibson (1981) examined the participating organizations' evaluation policies to identify salient performance criteria. Similarly, York (1989) analyzed 123 related court cases to identify variables for a study of judgments about sexual harassment complaints. This technique is more objective than the focus group/interview approach; however, it is limited by the quality and accuracy of the records (Cooksey, 1996). Still another approach is to review past theoretical and empirical research (Allen & Muchinsky, 1984; Hollenbeck & Williams, 1987; Pablo, 1994). Where available, such information may provide more objective support for criterion importance. This approach is likely to be of less use to researchers exploring new topics. Our recommendation is that researchers obtain information from all available sources (focus groups, company records, prior research) and use criteria for which there is consistent support.

Unrealistic decision problems may also be created when a full factorial design is used to set up decision problems (Klaas & Wheeler, 1990; Lane et al., 1982; York, 1989). This very common policy-capturing design involves creating hypothetical scenarios by completely crossing all of the variables. In this case, the variables are said to be orthogonal and their independent effects may be accurately assessed. As seen in Table 1, this full factorial approach has been used by many of the studies reviewed for this article (see, e.g., Bretz & Judge, 1994; Graves & Karren, 1992; Klaas & Dell'Omo, 1991; Rynes et al., 1983).
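Completely crossing the variables is mechanical to implement, and the orthogonality it buys is easy to verify. The cue names and levels below are hypothetical, chosen only to show the construction:

```python
import itertools
import numpy as np

# Hypothetical cue levels for a 2 x 2 x 3 design (names and values invented).
levels = {
    "pay":      [40_000, 60_000],
    "telework": [0, 1],
    "vacation": [10, 15, 20],
}

# Full factorial: every combination of levels becomes one scenario.
scenarios = np.array(list(itertools.product(*levels.values())), dtype=float)
print(scenarios.shape)  # (12, 3): 2 * 2 * 3 scenarios

# In a balanced full crossing the cues are orthogonal: every pairwise
# correlation between cue columns is zero.
r = np.corrcoef(scenarios, rowvar=False)
off_diag = r[~np.eye(len(levels), dtype=bool)]
print(np.allclose(off_diag, 0.0))
```

The same correlation check is worth running on fractional or hand-built designs, where orthogonality is not guaranteed by construction.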

    A potential problem with this approach is that if variables are truly correlated inthe environment, then the decision makers may be presented with unrealistic cases(York, 1989). Crossing employment status with retirement benefits, for example,would create one scenario in which a contingent worker received pension benefits.Two alternative approacheshave been developed to address thisconcern. In one,hypo-thetical scenarios are created in such a way as to enhance realism and minimize vari-able intercorrelations. Klaas and Wheeler (1990), for example, used an orthogonaldesign but selected factors that are not typically correlated in the real world. Klaasand DellOmo (1991) provided plausible explanations to study participants for theseemingly implausible combinations created by orthogonal manipulations. Beatty,McCune, and Beatty (1988) endeavored to replicate real-world correlations ratherthan crossing all explanatory variables; intercorrelations between the eight variablesincluded in thestudy were on theaverage .22. Realism andvalidity areenhanced usingthis approach; however, it is also possible that important criteria would be excludedbecause of problems with multicollinearity. A second approach involves using actual,rather than hypothetical, scenarios. Doughertyet al. (1986), forexample, asked partic-ipants to rate job applicants after listening to actual interviews that had been taperecorded. In another study, job incumbents rated the importance of tasks listed in their(actual) job task inventories (Sanchez & Levine, 1989). Intercorrelations between theexplanatory variables in both of these studies were high. The decision problems arethus realistic, but because the variables arenot orthogonal, assessing their independent

    Karren, Barringer / THE POLICY-CAPTURING METHODOLOGY 345


effects becomes more difficult. Thus, researchers must in many cases make trade-offs between realism, on one hand, and acceptable levels of variable intercorrelations, on the other. In these instances, "compromises will be required between the ideally desirable and the practically realizable features of the design" (Cooksey, 1996, p. 94).

As to the extent to which researchers should emphasize realism over low intercorrelations, we note that of all of the studies reviewed for this article, only two report correlations greater than .5 (Dougherty et al., 1986; Sanchez & Levine, 1989). Our recommendation is to adopt, wherever possible, the approach taken by the preponderance of studies accepted for publication, that is, ensuring that variable intercorrelations are 0 or near 0. Higher correlations have in many cases been acceptable and may be unavoidable in situations where the variables of interest are naturally correlated in the environment. Limited evidence that raw-score regression weights are similar under different intercorrelation levels (ranging from near 0 to .5) suggests that 0 correlations are not required to estimate variable importance (Lane et al., 1982). Nevertheless, the impact of higher correlations (i.e., greater than .5) on estimates of variable importance has not been examined, and our review suggests that levels near 0 or less than .20 are more likely to be favorably received.

The realism of decision problems can also be affected by the operationalization of explanatory variables. If the manipulated values of the explanatory variables are not representative of the values observed in the judges' environment, the external validity of the study could be limited (Judge & Bretz, 1992; Rynes et al., 1983). Rynes et al. (1983) found that differences in the variance in treatment levels can generate different importance weights. Specifically, the results suggested that the estimated effects of pay relative to other variables included in the study are higher when the difference between defined salary levels is wide than when it is narrow. Range effects were also obtained in a study conducted by Highhouse, Luong, and Sarkar-Barney (1999), although these authors reached the somewhat different conclusion that the direction of these effects may vary across decision contexts. That is, the effects of attribute range may depend on whether the decision maker is in the initial (prechoice screening) or ultimate (final) stage of job choice. Both studies suggest, however, that judges' responses to hypothetical scenarios are sensitive to attribute range. Hence, conclusions about the effects of explanatory variables that are based on responses to hypothetical scenarios cannot be generalized to actual decision contexts unless treatment levels are realistically defined.

Ensuring the realism of defined treatment levels, as well as variable combinations, typically involves obtaining input from individuals familiar with the decision context. Allen and Muchinsky (1984), for example, asked Department of Transportation officials to review hypothetical scenarios describing proposed transportation services for the physically handicapped. Staff reviews have also been sought to ensure the realism of descriptions of hypothetical job candidates (Graves & Karren, 1992), employee disciplinary problems (Klaas & Wheeler, 1990), and substance abuse violations (Klaas & Dell'Omo, 1991). Levels of variables examined in studies of students' job choices have been defined based on data from college career offices (Bretz & Judge, 1994; Cable & Judge, 1994; Judge & Bretz, 1992; Rynes et al., 1983). Field data were also used to create hypothetical applicant profiles (Mazen, 1990) and acquisitions (Pablo, 1994). Finally, pretests using respondents drawn from the same population as the study sample have helped researchers to identify and revise unrealistic scenarios (Pablo, 1994; Webster & Trevino, 1995). These methods have the advantage of

    346 ORGANIZATIONAL RESEARCH METHODS


enhancing the representativeness of scenarios; however, care should also be taken to ensure that sources are well informed and accurate (Cooksey, 1996). Asking recruiters to identify the factors that affect job choice, for example, may be less valid than asking the job applicants themselves.

Under some circumstances, defining treatment levels realistically may raise other methodological issues. First, the distance between realistically defined levels may vary across variables. For instance, the difference between core and contingent employment arrangements could be viewed as substantially greater than the difference between alternative health insurance plans (e.g., no premium versus a small or moderate premium). Prior research suggests that responses to variables with wider differentials between levels tend to be different from responses to those where the differentials are relatively narrow (Highhouse et al., 1999; Rynes et al., 1983). In this case, concerns may be raised about the creation of experimental conditions that set up certain variables to exhibit artificially high effects. Second, the difference between treatment levels may represent a gain for some variables (e.g., current versus higher pay) and a loss for other variables (e.g., current versus lower benefits). Because research suggests that responses to losses tend to be greater than responses to gains (Kahneman & Tversky, 1979), the concern again is that experimental conditions create artificially stronger responses to some variables. In other words, by enhancing external validity (by enhancing realism), researchers may be inadvertently compromising other forms of validity. The literature provides little guidance on this issue. Whereas realistically defining treatment levels is clearly important, the extent to which emphasis should also be placed on scaling variables consistently has not been explored. We suggest researchers focus first on enhancing realism and second on scaling variables as consistently as possible. That is, realistic treatment levels are preferable even if it means that the resultant scales are inconsistent. If emphasizing consistency in scaling results in unrealistically defined variables, then the results are of minimal value because they cannot be generalized to other settings. Moreover, even if the case could be made that inconsistent scaling results in unfair comparisons of variable effects, inferences can still be made about whether the effect of a variable is significant or nonsignificant (Cooper & Richardson, 1986). Hence, in situations where achieving both realism and consistent scaling is not possible, we recommend using the treatment levels that reflect real decision contexts.

To summarize, the use of hypothetical scenarios to examine judges' decisions can compromise the external validity of a study unless care is taken to ensure that the variables included are salient to the judges and that variable levels and combinations are representative of those observed in their environments. Realism can be enhanced by using actual scenarios or, if creating scenarios, by involving knowledgeable individuals in the creation of hypothetical scenarios, replicating actual correlations between explanatory variables, and/or selecting variables that are naturally orthogonal in the environment. Care should also be taken, however, to minimize variable intercorrelations and thereby enhance the ability to discern the relative importance of each variable.

    The Limits of a Full Factorial Design

As noted above, many of the policy-capturing studies reviewed in this article have employed a full factorial design in which all variables are completely crossed and balanced. Such an approach allows the assessment of the independent effects of each variable on an individual respondent's decision (Graham & Cable, 2001; Webster & Trevino, 1995). A full factorial design also allows the assessment of all main and higher order effects (Graham & Cable, 2001).

Depending on the number of variables included in the study, however, employing a full factorial design means that respondents could be asked to review an inordinately large number of scenarios. Completely crossing seven variables with two levels each, for example, generates 128 unique scenarios (2⁷ = 128). Employers and/or individuals are often reluctant to participate in such a time-consuming study, and procuring an adequate sample under these circumstances can be difficult. Moreover, even among those individuals willing to read this many scenarios, respondent overload, fatigue, and stress can affect responses (Graham & Cable, 2001; Webster & Trevino, 1995; York, 1989). Survey length has been found to be associated with significant differences in respondent stress and exhaustion (Graham & Cable, 2001). Consequently, researchers may want to limit the number of scenarios presented to participants (Rossi & Anderson, 1982; Webster & Trevino, 1995). This can be achieved by limiting the number of experimental variables, thereby minimizing the number of scenarios created when the variables are completely crossed. Limiting the study to a small number of variables, however, may require the researcher to exclude potentially important explanatory variables (Graham & Cable, 2001). Hence, researchers employing a full factorial design may find that they must either limit the scope of their study or risk compromising the quality of the data (Graham & Cable, 2001).

A common approach to this dilemma is to ask respondents to read a subset of the full factorial set. This is known as a confounded factorial design, which encompasses two popular designs: the incomplete block design and the fractional factorial design. These designs allow the researcher to examine a broader set of variables while avoiding respondent overload (Graham & Cable, 2001). Cue sets can be created whereby variables are orthogonal, thus allowing the assessment of each variable's independent effects but not higher order (three-way or higher) interaction effects (Cochran & Cox, 1957). If such interactions are not theoretically important, and enough respondents are available to evaluate all scenarios, then this type of design may be an appropriate method for reducing respondent boredom and strain (Graham & Cable, 2001; Klaas & Dell'Omo, 1991).

Cue sets for the incomplete block design and the fractional factorial design are created in similar fashion. Both involve systematically dividing the full factorial set into blocks and presenting each respondent with one of the blocks (Cochran & Cox, 1957; Graham & Cable, 2001; Webster & Trevino, 1995). Each block is composed of a unique set of scenarios and is created by dividing the full set into halves, quarters, eighths, and so on (Graham & Cable, 2001).

The major difference between the two types of confounded designs is that all of the subsets are used (i.e., subgroups of participants each receive a different subset of scenarios) with the incomplete block design, whereas only one subset is used (i.e., all participants receive the same subset of scenarios) with the fractional factorial design. Consequently, the number of participants required to conduct a study using the incomplete block design increases with the number of blocks used (Graham & Cable, 2001). That is, at least two participants are required for a one-half (two-block) design, at least four are needed for a one-quarter (four-block) design, and so on.
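The blocking logic can be sketched with a standard textbook device (not taken from the studies reviewed here): a 2⁵ full factorial is split into two half-blocks by confounding the five-way interaction, so each scenario's block is determined by the sign of the product of its +1/−1 cue codes. The cues here are unnamed placeholders.

```python
import math
from itertools import product

import numpy as np

# Split a 2^5 full factorial (32 scenarios) into two half-blocks by
# confounding the highest-order (five-way) interaction.
full_set = list(product([-1, 1], repeat=5))
block_a = [s for s in full_set if math.prod(s) == 1]    # 16 scenarios
block_b = [s for s in full_set if math.prod(s) == -1]   # 16 scenarios
print(len(block_a), len(block_b))  # 16 16

# Incomplete block design: half the respondents rate block_a and half rate
# block_b, so the full set is still covered across the sample.
# Fractional factorial design: every respondent rates the same single block.

# Main effects remain orthogonal within each half-block.
A = np.array(block_a, dtype=float)
print(np.allclose(np.corrcoef(A, rowvar=False), np.eye(5)))  # True
```

Because only the five-way interaction is sacrificed to form the blocks, each half-block still permits unconfounded estimation of the main effects, which is the property the text attributes to well-constructed cue subsets.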


Owing at least in part to the additional sample and procedural requirements of the incomplete block design, fractional designs have historically been the approach of choice among researchers using the confounded factorial approach (Graham & Cable, 2001). Another approach is to randomly select scenarios from the full factorial set, checking to ensure that variable intercorrelations are not high (Allen & Muchinsky, 1984; Klaas & Dell'Omo, 1991; Pablo, 1994; Viswesvaran & Barrick, 1992; York, 1989). Allen and Muchinsky (1984), for example, randomly selected 36 out of the 72 possible transportation service descriptions; intercorrelations between the variables were low (r < .25). Both of these fractional designs allow the researcher to examine the full range of important cues without creating respondent overload. Because they do not incorporate the full set of scenarios, however, the researcher cannot be sure that results are unaffected by the particular set of scenarios selected (Graham & Cable, 2001). Relative to the incomplete block design, which uses the full set of scenarios, the fractional design allows estimation of fewer effects and requires making more assumptions about which effects are unimportant (Graham & Cable, 2001).
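The random-selection-with-a-check approach can be sketched as follows. The cue structure is hypothetical (five cues crossed to give 72 scenarios, echoing only the counts in Allen and Muchinsky); the procedure draws a 36-scenario subset and redraws until no pairwise cue correlation reaches .25.

```python
import random
from itertools import product

import numpy as np

# Hypothetical full factorial: 2 x 2 x 2 x 3 x 3 = 72 scenarios.
full_set = list(product([0, 1], [0, 1], [0, 1], [0, 1, 2], [0, 1, 2]))

def max_abs_intercorrelation(subset):
    """Largest absolute pairwise correlation among the cue columns."""
    corr = np.corrcoef(np.array(subset, dtype=float), rowvar=False)
    off_diag = corr[~np.eye(len(corr), dtype=bool)]
    return np.abs(off_diag).max()

# Draw 36 scenarios at random; redraw until all intercorrelations are low.
rng = random.Random(0)
subset = rng.sample(full_set, 36)
while max_abs_intercorrelation(subset) >= 0.25:
    subset = rng.sample(full_set, 36)

print(len(subset), round(max_abs_intercorrelation(subset), 3))
```

Reporting the final maximum intercorrelation alongside the subset, as the published studies do, lets readers judge how far the sampled design departs from orthogonality.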

The confounded factorial design seems to offer researchers a method of studying how people make decisions without straining respondents or limiting the scope of the study. Nevertheless, the extent to which this method is a viable alternative to the more statistically rigorous full factorial design is unclear. We are aware of just one study that has examined the merits of a full versus a confounded factorial design. In a study of the effects of five firm attributes on job seekers' perceptions of firm reputation, Graham and Cable (2001) randomly assigned 108 college students to either a full (32 scenarios) or an incomplete block (8 scenarios) design. Their results indicated that the estimated effects of explanatory variables were substantially the same across the two designs and that a full factorial design generated significantly more stress, fatigue, and negative reactions to survey length among respondents than did an incomplete block design.

Given the limited empirical research on the viability of the confounded factorial design, it is probably prudent to use a full factorial design wherever possible. One concern is that a design with too few scenarios lacks sufficient power, an issue that is discussed later in the article. Furthermore, scenarios that contain many factors are cognitively complex and may require some practice with the response scale before study respondents begin to process the information reliably. In these situations, a confounded factorial design with a small number of scenarios may have insufficient reliability unless respondents are given several practice scenarios before beginning the task. Nevertheless, the Graham and Cable (2001) study suggested that this type of confounded factorial design is under certain circumstances an acceptable, and perhaps preferable, alternative. More specifically, these authors argued that where a confounded factorial design is indicated, researchers should give particular consideration to the incomplete block design. The primary advantage of this approach is that it reduces the number of scenarios study participants are asked to evaluate. Moreover, as noted above, estimation of cue effects using the incomplete block design is not limited by the exclusion of scenarios, as is the case with the fractional design. Hence, when the number of salient explanatory variables and/or treatment levels included in the study is such that a full factorial design would generate an inordinately large number of scenarios, researchers should give serious consideration to an incomplete block design.

Determining what constitutes "inordinately large" may be a matter of judgment at this point. Rossi and Anderson (1982) suggested an upper limit of 60 scenarios. Aiman-Smith et al. (2002) have recommended using no more than 80 written scenarios. Graham and Cable (2001) found that respondents reacted more negatively to 32 than to 8 scenarios, although the mean stress score for the larger survey was relatively moderate (3.28 on a 7-point scale). It may be that acceptable survey length varies across individuals and that pretesting may be needed to determine the maximum number of scenarios respondents can reasonably be expected to process. The optimal number may also vary according to the size of the scenario. Rossi and Anderson, for example, maintained that participants in their study could respond to 60 scenarios in 20 minutes. A smaller number might be indicated where the scenarios require more time to read and process.

Graham and Cable (2001) also suggested that the incomplete block design may be most appropriate where the examination of individual decision policies is not a central focus of the inquiry because individual regression equations must be interpreted with caution when study participants do not evaluate all possible scenarios. Nevertheless, we are aware of at least three published studies employing a confounded factorial design in which data analysis included the estimation of individual regression equations (Klaas & Dell'Omo, 1991; Pablo, 1994; Webster & Trevino, 1995). According to Graham and Cable, regression estimates under these circumstances are likely to be more useful if researchers employ larger fractions (e.g., one half as opposed to one quarter). Finally, a confounded factorial design is only appropriate where higher order interactions are expected to be unimportant to explanations of judgments (Graham & Cable, 2001; Klaas & Dell'Omo, 1991).

    Orthogonality of Cues

The chief theoretical advantage of orthogonality is that it facilitates the assessment of the independent effects of each of the explanatory variables (Martocchio & Judge, 1994; Zedeck & Kafry, 1977). That is, partitioning out a cue's unique contribution to variance in the dependent variable is most feasible when it does not covary or overlap with other cues (Darlington, 1968; Pedhazur, 1982). More specifically, if variation in one cue is associated with variation in a second cue, then determining which portion of the variation in the dependent variable can be attributed to the first cue, the second cue, or a combination of the two becomes difficult (Kennedy, 1989). Indeed, a number of researchers have suggested that the precise measurement of cue importance (beta weights) is very difficult in the absence of orthogonality (Darlington, 1968). Evidence also suggests that cluster analysis, a technique whereby groups of respondents with similar policies are identified, is more successful when variables are not intercorrelated (Zedeck, 1977). The problem is that intercorrelation results in unstable parameter estimates with higher variance, which in turn makes the identification of discrete patterns of policies more difficult (Kennedy, 1989).
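The partitioning point can be made concrete with simulated judgments. In this sketch (made-up data, not from any cited study), two orthogonal cues yield the same weight for a cue whether or not the other cue is in the model, whereas correlated cues do not.

```python
import numpy as np

rng = np.random.default_rng(0)

def weights(X, y):
    # Least-squares fit with an intercept; returns the cue weights only.
    Xd = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0][1:]

# Orthogonal cues: a balanced 2 x 2 crossing, replicated 8 times.
x1 = np.tile([0.0, 0.0, 1.0, 1.0], 8)
x2 = np.tile([0.0, 1.0, 0.0, 1.0], 8)
y = 2.0 * x1 + 1.0 * x2 + rng.normal(0, 0.1, x1.size)
same = np.isclose(weights(np.column_stack([x1, x2]), y)[0],
                  weights(x1.reshape(-1, 1), y)[0])
print(same)  # x1's weight is identical with or without x2 in the model

# Correlated cues: x2 now tracks x1, so x1's weight depends on the model.
x2c = x1 + rng.normal(0, 0.3, x1.size)
yc = 2.0 * x1 + 1.0 * x2c + rng.normal(0, 0.1, x1.size)
diff = np.isclose(weights(np.column_stack([x1, x2c]), yc)[0],
                  weights(x1.reshape(-1, 1), yc)[0])
print(diff)  # the two estimates of x1's weight now diverge
```

This instability of the estimates under intercorrelation is exactly what makes cue-importance measures, and downstream cluster analysis of policies, harder to interpret.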

It is for these reasons that most researchers employing the policy-capturing design create variable combinations to ensure that intercorrelations are 0. As Table 1 shows, variables are orthogonal in 22 of the 37 studies reviewed for this article. Among the sizeable minority of studies not taking this approach, variable intercorrelations ranged from a low of .02 to a high of .91 (Sanchez & Levine, 1989). Given the apparent acceptability of nonorthogonal designs in some cases, and questions about the realism of scenarios in which factors are forced to be orthogonal, we next consider the question of whether, or under what circumstances, researchers should take care to ensure that variables are completely uncorrelated.

To our knowledge, one study addresses this issue. Lane et al. (1982) compared estimates of variable importance under three different correlation structures and found that raw-score regression weights did not change across structures, whereas other measures of importance (simple correlations between explanatory and dependent variables, semipartial correlations, and standardized regression coefficients) did. The authors concluded that zero intercorrelations are not required to estimate the importance of explanatory variables and that raw-score regression weights are the most appropriate indicators to use when variables are not orthogonal. They also noted, however, that raw-score regression weights are not independent of the scale used and may not be ideal in studies where such independence is important. For example, the observed effect of a one-unit change in intelligence quotient (with a range of 100 or more) is likely to be considerably smaller than that of a one-unit change in years of experience (with a range of 25 or less), suggesting, perhaps erroneously, that years of experience is a more important determinant of the outcome of interest than intelligence. Hence, in decision problems where the variance of cues is dissimilar, the relative importance of these cues cannot be accurately assessed unless the regression weights are standardized (i.e., independent of the scale). In such cases, Lane et al. recommended using standardized regression coefficients, as the results of their study suggest that the change in these estimates across different correlation structures, although significant, is "relatively small" (p. 238).
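The scale-dependence point can be sketched with simulated judgments. The cue names and coefficients below are hypothetical, chosen only to echo the IQ-versus-years-of-experience example: the raw-score weight for the wide-range cue is the smaller of the two, yet the standardized weights (beta = b × sd_x / sd_y) reverse that ordering.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
iq = rng.normal(100, 15, n)      # wide-range cue
years = rng.normal(10, 4, n)     # narrow-range cue
# Hypothetical judgment policy: per standard deviation, IQ actually matters
# more (0.10 * 15 = 1.5) than years of experience (0.20 * 4 = 0.8).
y = 0.10 * iq + 0.20 * years + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), iq, years])
b = np.linalg.lstsq(X, y, rcond=None)[0]                    # raw-score weights
beta = b[1:] * np.array([iq.std(), years.std()]) / y.std()  # standardized

print(b[1] < b[2])        # raw weights suggest years matter more
print(beta[0] > beta[1])  # standardized weights reverse the ordering
```

The reversal is the erroneous inference the text warns about: comparing raw weights across cues with very different ranges conflates importance with units of measurement.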

Variable intercorrelations in the Lane et al. (1982) study were not very high, so it is not clear whether estimates of variable importance when intercorrelations are high can be interpreted with any confidence. Furthermore, use of a nonorthogonal design may limit the researcher's choice of importance measures and make the use of cluster analysis more difficult. Zero or near-zero variable intercorrelation structures would therefore seem to be the preferred design. If an orthogonal design is not possible (due, perhaps, to the creation of unrealistic scenarios), the researcher may want to consider a design in which variable intercorrelations are relatively low and raw-score regression weights are used as the measure of importance.

    Sample Size and Power

One way to increase the power of a research study is to increase the number of subjects or participants in the study (Cohen, 1988). When planning a research study, researchers will try to reduce Type II errors and increase power. Specifically, researchers prefer having a reasonably large chance of rejecting the null hypothesis if it, in fact, is false. However, when designing a policy-capturing study, the number of subjects may take a secondary role because the main focus typically is the individual analysis for each subject. In this case, the power of the individual analysis is not based on the number of subjects, because there is only one subject, but on the number of scenarios or judgments made by each subject. The number of scenarios determines the number of observations for each analysis. That is, whether the regression weights will be significant is likely to be related to the number of scenarios. In a study employing multiple regression techniques, the preferred ratio of the number of scenarios to factors is 10:1, but the minimum ratio is considered to be 5:1 (Cooksey, 1996). These ratios are guidelines, and other factors should be considered, such as the extent of the explained variation in judgments and the intercorrelations of the cues (see Cooksey, 1996, for further discussion). However, as discussed above, increasing the number of factors, and hence the number of scenarios, may also create problems such as stress and exhaustion (Graham & Cable, 2001). Thus, there seems to be a trade-off associated with a

more comprehensive survey. Although it offers higher power, a comprehensive survey may result in fatigue and, furthermore, the likelihood of reduced reliability. As discussed previously, researchers sometimes use a confounded factorial design

rather than a full factorial design to reduce fatigue. In some cases, this is done so the researcher can add additional cues into the judgment process without burdening the respondent with a large number of scenarios. That is, there are situations or contexts in which the decision maker generally considers a large number of cues before making a judgment, creating the need for fractionalization. However, fractionalization also results in fewer scenarios, and this, in turn, reduces the power of the design. Thus, it is important that researchers consider both power and fatigue when determining the number of scenarios to include in the final design. One approach may be to develop at least five scenarios for each cue. This would be the minimum ratio for studies employing regression techniques (Cooksey, 1996).

Although the size of the sample does not affect the power of the individual analysis, it can be an issue in studies using other types of analysis. For example, if the researcher is going to cluster subjects by strategy, large sample sizes offer a more comprehensive analysis of the grouping process. Large sample sizes are also likely to be more effective when analyzing individual differences between the respondents. For example, in the Graves and Karren (1992) study, 29 interviewers were used to analyze the different decision-making strategies. After doing a cluster analysis, 13 clusters were found to differentiate the various decision-making strategies among the interviewers. Although it was likely that most if not all of the decision strategies were found, it was rather difficult to estimate the relative popularity of each cluster because there were 13 clusters among 29 interviewers. Larger sample sizes allow better estimation of the relative popularity of the clusters. For instance, Klaas and Dell'Omo (1991) used a much larger sample size (93 managers). Their cluster analysis indicated 7 clusters, and with their much larger sample size, they were able to estimate the popularity of each cluster.
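The clustering step can be sketched as follows. This is a hedged illustration with simulated respondents, not the procedure of Graves and Karren or Klaas and Dell'Omo: each respondent is summarized by a vector of cue weights generated around three made-up policy prototypes, and Ward's hierarchical clustering (via SciPy) recovers the groups.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)

# Three hypothetical policy prototypes (standardized weights on three cues).
prototypes = np.array([
    [0.8, 0.1, 0.1],   # pay-dominated policy
    [0.1, 0.8, 0.1],   # security-dominated policy
    [0.3, 0.3, 0.4],   # balanced policy
])
# Ten simulated respondents per prototype, with small idiosyncratic noise.
weights = np.vstack([p + rng.normal(0, 0.05, (10, 3)) for p in prototypes])

# Ward's hierarchical clustering on the respondents' weight vectors.
Z = linkage(weights, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(sorted(set(labels)))  # three recovered policy clusters
```

With many respondents per cluster, as in the Klaas and Dell'Omo sample, the relative size of each recovered cluster becomes a meaningful estimate of how popular each policy is.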

Relatively small (less than 50 subjects) samples are not uncommon among the published studies using the policy-capturing approach. Rynes et al. (1983), for example, examined the job application decisions made by 10 college seniors. Because there were only 10 subjects, individual analyses were conducted, but subjects were not clustered. Having few subjects makes it difficult to do any form of clustering. Furthermore, it is unlikely that any kind of generalization can realistically be made regarding these students, as small samples may not be representative of the population. Obtaining representative samples that can be used to generalize results to the population is not possible without sufficiently large numbers of respondents and the use of probability-sampling techniques.

The smallest sample among the published studies reviewed for this article consisted of three subjects (Dougherty et al., 1986). However, in this case, the objective was not to discern the relative effects of each of the cues but to determine the validity of each of the three decision makers in making decisions about job candidates. Studies that use relatively few subjects probably have very different objectives; they are unlikely to make inferences regarding the external validity of their results. Thus, large sample sizes should not be a requirement for all policy-capturing studies. What is necessary is that the researcher have sufficient power when testing hypotheses.

    Reliability

Reliability is an important criterion in most research studies, as it is a necessary but not a sufficient condition for the validity of measures (Carmines & Zeller, 1979). Interestingly, few of the published studies shown in Table 1 have analyzed the reliability of their decision makers' judgments. This may seem to be somewhat disturbing because many of the studies use only single-item dependent measures. There have been some notable exceptions. Both Rynes et al. (1983) and Sherer et al. (1987) asked subjects to make replicate judgments on the set of scenarios, allowing them to estimate reliability. The correlation between the two sets of scenarios represents a test-retest check of the judgments. Rynes et al. found reliabilities between .75 and .90, averaging approximately .82 for their 10 subjects. Sherer et al. found the average reliability to be about .78 for the 11 subjects in their study. In a study by Hollenbeck and Williams (1987), a relatively small number of subjects (n = 11) were asked to perform the policy-capturing study a second time a month later. The median test-retest reliability for these 11 subjects was .72, which suggests some degree of stability over time.

Although the results from these three studies indicate reasonable estimates of reliability (greater than .70), it is noteworthy that very few of the published studies made reliability estimates. Furthermore, among those that did, relatively few subjects were asked to duplicate their judgments. It seems that researchers have difficulty asking subjects to duplicate their judgments when they are asked to process a large number of scenarios. In most cases, duplication would require an additional experimental session. A study by Cable and Judge (1994) asked subjects to replicate 4 of 32 full factorial scenarios. The authors calculated reliability on the 4 duplicated scenarios for all subjects and found an average correlation of .82. Although this process of estimating reliability does not include the full set of items, it still may be a reasonably good compromise when circumstances do not allow researchers to fully duplicate the set of scenarios. Thus, this strategy of limited duplication is recommended, as it is not likely to create fatigue and the researcher is still able to estimate the reliability of each sample. Furthermore, these duplicated scenarios may warm up participants to the task and thus lessen start-up effects, a problem discussed by Aiman-Smith et al. (2002).
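The limited-duplication check reduces to a simple correlation between first and second ratings of the repeated scenarios. The sketch below uses simulated judgments (not data from Cable and Judge, 1994) for eight duplicated scenarios.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated respondent: a stable underlying policy produces each scenario's
# "true" judgment; each rating occasion adds independent noise.
true_judgments = rng.uniform(1, 7, 8)            # 8 duplicated scenarios
first = true_judgments + rng.normal(0, 0.3, 8)   # original ratings
second = true_judgments + rng.normal(0, 0.3, 8)  # replicate ratings

# Pearson correlation between the two occasions = test-retest reliability.
reliability = np.corrcoef(first, second)[0, 1]
print(round(reliability, 2))
```

In practice the same correlation would be computed per respondent on the duplicated scenarios, and the average across respondents reported, as in the published examples.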

    Summary

Our analysis of key design issues has so far relied on a review of policy-capturing studies in the management literature. A number of the issues we discuss are also addressed by Aiman-Smith et al. (2002). Both articles consider the issue of realism in the design and presentation of scenarios to participants. Our article considers a variety of approaches and designs that may be utilized to create realism. We discuss our preference for zero or near-zero intercorrelations between the cues when designing a study, then consider the advantages and disadvantages of various alternative designs (e.g., fractional and incomplete block designs) relative to the full factorial design. Furthermore, in the next section, we discuss approaches used by conjoint analysis researchers to deal with the realism problem. We also propose that realism is more important than consistency when considering the range or difference between cue levels (values). This means that in some cases, some cues will have wider ranges than others. Aiman-Smith et al., on the other hand, propose that ranges should be about the same even if cue levels are less realistic. We are in agreement, however, about the importance of consistency in the number of levels across cues. We also discuss the advantages of orthogonality among cues and the use of raw regression weights and standardized weights when there are nonorthogonal designs.

Both articles also address the issue of fatigue and the cognitive limits of the decision maker. Aiman-Smith et al. (2002) advise the researcher not to use more than five cues. They also state that if written scenarios are used, the total number of scenarios should be between 50 and 80. We tend to be less prescriptive. Because there are many potential designs and approaches, we do not specify the number of cues or scenarios to use. We believe, however, that there should be an absolute minimum ratio of scenarios to cues (i.e., 5:1).

Finally, both we and Aiman-Smith et al. (2002) address the reliability issue and suggest that more estimates are desirable. Both advise using duplicate scenarios to calculate reliability. We suggest that these duplicates may also assist in lessening start-up effects, a problem that, as discussed by Aiman-Smith et al., occurs when subjects initially learn the task.

We next consider the insights offered by studies from the marketing literature that have utilized a very similar methodology: conjoint analysis.

    Conjoint Analysis

Conjoint analysis is a methodology that has been used extensively in marketing and consumer research to understand how consumers evaluate preferences for products or services (Green & Srinivasan, 1978, 1990). Like policy-capturing, conjoint analysis uses an individualized factorial survey approach to examine the effects of product or service attributes on evaluative judgments. An examination of this methodology may therefore yield useful guidelines for researchers employing a policy-capturing approach. We begin with a consideration of the methodological and computational similarities between the two approaches. Our comparison of these approaches is summarized in Table 2. We include in this discussion an examination of the approaches market researchers have taken to resolve some of the issues (e.g., information overload, orthogonality) associated with a factorial survey design. Finally, we discuss research on conjoint analysis that may be applicable to policy-capturing.

Table 2
A Comparison of Conjoint Analysis and Policy-Capturing

                           Conjoint Analysis                         Policy-Capturing
Type of analysis           Decompositional                           Decompositional
Cue presentation method    Profiles method, trade-off method,        Profiles method
                           pairwise combination, adaptive designs
Survey design              Full factorial design,                    Full factorial design,
                           fractional design                         fractional design
Level of analysis          Individual                                Individual
Aggregate analyses         Yes, including clustering                 Yes, including clustering
Evaluation of stimuli      Metric scales, nonmetric procedures       Metric scales

    Design of Conjoint Analysis Studies

Conjoint analysis is used to measure the relative importance consumers give to the attributes that describe the products or services of interest. Similar to the policy-capturing approach, conjoint analysis involves the construction of real or hypothetical products or services by combining the selected levels of each attribute (factor). These hypothetical products or services are then presented to the respondents, who provide an overall evaluation. This type of analysis has been called decompositional because it involves decomposing respondents' preferences, or ratings, to determine the relative value of each attribute (Hair, Anderson, Tatham, & Black, 1998). Policy-capturing is also decompositional, as the decision makers are asked for overall evaluations of the scenario rather than the factors that make up the scenario.

One issue for researchers designing a conjoint-analysis study is choosing a method for presenting product or service descriptions. Conjoint-analysis researchers have used a number of presentation methods. The full profile is the most popular presentation method, especially in studies examining fewer than six factors. Each stimulus (hypothetical product or service) is described separately, most often on a profile card, and defines varying levels of all of the factors included in the study. Respondents are asked to either rank-order the stimuli or rate each independently. The decision problem tends to be more complex than with some of the other methods because all the factors are included in each presentation. Furthermore, as the number of factors increases, so too does the possibility of information overload. Therefore, it is more likely to be used with six or fewer factors.

Our review of policy-capturing studies suggests that the full-profile method is the most popular approach to cue presentation. Those researchers wishing to expand the scope of their inquiries to a larger number of factors have tended to reduce the number of scenarios presented to participants (e.g., fractional factorial design) rather than the number of cues included in the scenarios. In contrast, marketing researchers have developed a number of alternatives to full-profile presentation. The trade-off method, for example, entails presenting attributes two at a time and asking respondents to rank-order the full set of combinations for each pair. It is less complex than the full-profile method and has the advantage of avoiding information overload by presenting only two attributes at a time. It has a number of limitations, however, and its use has decreased in recent years. Limitations include the large number of judgments necessary even where the number of factor levels is relatively small, fatigue, the sole use of nonmetric responses, and the inability to use fractional designs. An alternative to this method is basically a combination of the trade-off and full-profile methods (Hair et al., 1998). Referred to as the pairwise comparison method, it involves comparisons of two profiles containing a subset of the attributes. Respondents typically indicate the strength of their preference for one profile over another on a rating scale. It is similar to the trade-off method in that profiles contain a subset of the attributes, but in the pairwise comparison method, respondents never view more than two profiles at a time. It is similar to the full-profile method in that profiles include more than two attributes and metric response measures (ratings) are used. The advantage of the method is that it allows researchers to examine more than seven factors without creating the problems of respondent fatigue often associated with other presentation methods (Hair et al., 1998).
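The scale of the trade-off method's judgment burden is easy to see with a little arithmetic: for every pair of attributes, the respondent must rank a full table of level combinations. The sketch below counts those rankings for a hypothetical study with six attributes of three levels each; the numbers are illustrative, not drawn from any study cited here.

```python
from itertools import combinations

def tradeoff_judgments(levels_per_attribute):
    """Total cell rankings when attributes are shown two at a time:
    one m_i x m_j ranking table for every pair of attributes."""
    return sum(mi * mj for mi, mj in combinations(levels_per_attribute, 2))

# Hypothetical study: 6 attributes with 3 levels each
levels = [3] * 6
pairs = len(list(combinations(levels, 2)))  # 15 trade-off tables
cells = tradeoff_judgments(levels)          # 15 tables x 9 cells = 135 rankings
full_profile = 1
for m in levels:
    full_profile *= m                       # 3**6 = 729 possible full profiles
```

Even with only three levels per attribute, the respondent faces 135 rankings, which illustrates why fatigue is listed among the method's limitations despite each individual judgment being simple.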

As with policy-capturing studies, the problems associated with information overload are also concerns in the design of conjoint-measurement studies. Market researchers have addressed these concerns in much the same way as researchers using the policy-capturing approach: by employing a fractional factorial design. Using a fractional factorial design serves to reduce the number of scenarios to a manageable size while maintaining orthogonality (Green, 1974). As discussed earlier, these designs result in a reduction of interpretable interaction effects. This is not a problem, however, if the decision maker is using an additive model, as most of the interpretable variance is assigned to the main effects.
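As a concrete, hypothetical illustration of how a fraction can preserve orthogonality, the sketch below builds a standard half fraction of a two-level design with five cues: 16 of the 32 possible scenarios, chosen by a defining relation, leave every pair of cue columns exactly uncorrelated.

```python
from itertools import product

# Full 2^5 factorial with cue levels coded -1/+1 (hypothetical two-level cues)
full = list(product([-1, 1], repeat=5))

def row_product(row):
    """Product of all cue codes in a scenario."""
    p = 1
    for v in row:
        p *= v
    return p

# Half fraction defined by I = ABCDE: keep the 16 scenarios whose
# cue codes multiply to +1 (the classic 2^(5-1) design)
fraction = [row for row in full if row_product(row) == 1]

def dot(i, j):
    """Cross-product of cue columns i and j; 0 means orthogonal."""
    return sum(row[i] * row[j] for row in fraction)
```

In this half fraction, every main effect remains estimable with half the scenarios; the price, as noted above, is that each two-way interaction is aliased with a higher-order interaction.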

Researchers using the conjoint-analysis method have examined the effects of as many as 30 factors and have consequently developed a number of alternative approaches to the fractional factorial design not found in the policy-capturing literature. The hybrid or adaptive approaches, for example, involve a two-stage procedure in which respondents are first asked to rate the desirability of the full set of factors.2 The adaptive approach is the more popular of these methods, perhaps because software is available that allows the researcher to generate individualized scenarios, and it is probably most useful when examining 10 or more factors (Green & Srinivasan, 1990). In this approach, each respondent receives a set of scenarios that include only those factors designated as the most important in the first stage of the procedure. The scenarios are then evaluated in the same way as in the above methods (Hair et al., 1998).

A third issue for researchers using either the conjoint-analysis or policy-capturing approach is the impact of forced orthogonality on external validity. That is, creating a design in which the variables are orthogonal but are naturally correlated in the environment may produce profiles that are not representative of the environment familiar to the respondents (Green & Srinivasan, 1978, 1990). Where variables are substantially correlated, Green and Srinivasan (1978) have suggested the use of composite factors, which provide a summary measure of all correlated subfactors. For example, a medical-cost-sharing variable would summarize the overall level of deductible and coinsurance provisions, which tend to be highly correlated across health care plans. This approach avoids the problem of creating unrealistic profiles (e.g., high deductible and low coinsurance); however, it does not allow the researcher to partial out the effects of subfactors that may be of more interest to the study.

Alternatively, Steckel, DeSarbo, and Mahajan (1991) devised a new optimizing methodology that entails creating a survey to ensure that variables are as orthogonal as possible. A combinatorial optimization procedure is used to create a modified fractional factorial design by identifying and excluding nonrepresentative or unrealistic profiles (i.e., those very unlikely to occur in the environment). An algorithm then finds a subset of the realistic profiles that is as close as possible to being orthogonal. This is not unlike some policy-capturing designs wherein the researcher takes care to create stimulus sets in which variable intercorrelations are minimized and realism is enhanced (Beatty et al., 1988; Klaas & Dell'Omo, 1991; Klaas & Wheeler, 1990).
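A greatly simplified sketch of this idea, not Steckel et al.'s actual algorithm, appears below: implausible profiles are screened out first, and the remaining candidates are searched for the subset whose cue intercorrelations are smallest. The deductible/coinsurance screen is an assumption invented for the example.

```python
from itertools import combinations, product

def max_abs_corr(profiles):
    """Largest absolute pairwise correlation among the cue columns;
    a constant column makes the design degenerate, scored as 1.0."""
    n, k = len(profiles), len(profiles[0])
    cols = [[row[i] for row in profiles] for i in range(k)]
    worst = 0.0
    for i, j in combinations(range(k), 2):
        mi, mj = sum(cols[i]) / n, sum(cols[j]) / n
        cov = sum((a - mi) * (b - mj) for a, b in zip(cols[i], cols[j]))
        si = sum((a - mi) ** 2 for a in cols[i]) ** 0.5
        sj = sum((b - mj) ** 2 for b in cols[j]) ** 0.5
        if si == 0 or sj == 0:
            return 1.0
        worst = max(worst, abs(cov / (si * sj)))
    return worst

# Hypothetical screen: a profile pairing the high deductible (1) with the
# low coinsurance level (0) is treated as implausible and excluded
def realistic(profile):
    deductible, coinsurance = profile[0], profile[1]
    return not (deductible == 1 and coinsurance == 0)

candidates = [p for p in product([0, 1], repeat=3) if realistic(p)]

# Exhaustively pick the 4 realistic profiles closest to orthogonality
best = min(combinations(candidates, 4), key=max_abs_corr)
```

With realistic problem sizes, an exhaustive search is replaced by a heuristic (e.g., greedy or exchange-based) search, but the objective — minimize the worst cue intercorrelation over realistic profiles only — is the same.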

    Data Analysis

Similarities may also be found in the computational issues that arise in conjoint-analysis and policy-capturing studies. One issue is the level of analysis. In both approaches, analyses are typically carried out at the individual level, which means that the analyst generates a separate model for predicting preferences for each respondent. At the individual level, each respondent rates enough profiles that the analyses can be performed separately for each person. Predictive accuracy is calculated for each respondent rather than for the whole sample. One of the common uses of these analyses is to group individuals with similar importance values into segments. Researchers using either conjoint analysis or policy-capturing are interested in better understanding these segments and may combine this information with other variables such as demographics to derive respondent groupings that are similar in preference (Hair et al., 1998). Many times, however, an aggregate analysis is used to estimate the relative importance of the attributes for the whole sample. In some studies, this between-group analysis was used to test hypotheses about the average effects of attributes.
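The individual-level logic can be sketched as follows. Two hypothetical respondents with different (assumed) additive policies rate the same orthogonal scenario set, and a separate set of cue weights is recovered for each; with orthogonal, mean-zero cues, the regression weight for a cue reduces to a simple cross-product average. The respondent labels and weights are invented for illustration.

```python
from itertools import product

# All 16 scenarios of a 2^4 full factorial, cue levels coded -1/+1
scenarios = list(product([-1, 1], repeat=4))

# Assumed additive policies for two hypothetical respondents
policies = {"rater_A": [3.0, 1.0, 0.5, 0.0],
            "rater_B": [0.0, 0.5, 1.0, 3.0]}

def rate(weights, cues):
    """Additive judgment model around a baseline rating of 5.0."""
    return 5.0 + sum(w * c for w, c in zip(weights, cues))

# Capture each person's policy: with an orthogonal, mean-zero design,
# the weight for cue k is sum(x_k * y) / n, one model per respondent
captured = {}
for person, w in policies.items():
    ratings = [rate(w, s) for s in scenarios]
    n = len(scenarios)
    captured[person] = [sum(s[k] * y for s, y in zip(scenarios, ratings)) / n
                        for k in range(4)]
```

Comparing the `captured` weight vectors across respondents is exactly the raw material for the segment-building (clustering) step described above.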

A second computational issue involves the specification of the respondent's composition rule. The rule describes how the respondent combines the factors to obtain overall worth. The most common rule invokes the additive model, which assumes that the respondent simply adds up the values of each attribute to get a total value for the factor or attribute combination. As is the case in policy-capturing studies, the main effects account for most of the total variation in preferences, and hence, this model suffices for most consumer applications. Alternatively, a composition rule using interactive effects allows for the interaction of two or more attributes. The choice of a composition rule determines the types and number of stimuli that the decision maker must evaluate. More stimuli are required if the researcher is interested in evaluating interactions. Consider a study using 4 factors and 4 levels. If the factors are presented using a full-profile method, in which all factors are included in all scenarios, and the researcher is only interested in estimating main effects, then just 16 (4 factors × 4 levels) of the 256 (4^4) possible scenarios are needed (Hair et al., 1998). That is, it is possible to estimate main effects using a fractional factorial design. However, the 16 scenarios must be carefully constructed for orthogonality to arrive at the correct estimation of the main effects. If, on the other hand, interactions are specified as important, additional scenarios are required to assess these effects. In this case, a full factorial design, with the full set of 256 scenarios (240 more than are needed to assess main effects), would be required to assess the importance of all 11 interactions.
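One way such a 16-scenario set can be constructed is from the arithmetic of the finite field GF(4); the sketch below illustrates the general technique rather than the specific array in Hair et al. (1998). In the resulting design, any two factors are crossed so that each of their 16 level combinations appears exactly once, which is what makes the main effects estimable without the other 240 scenarios.

```python
# GF(4) arithmetic on levels {0, 1, 2, 3}: addition is bitwise XOR, and
# MUL2 is the field's multiply-by-2 table (2*0=0, 2*1=2, 2*2=3, 2*3=1)
MUL2 = [0, 2, 3, 1]

# 16 scenarios for 4 four-level factors; columns 3 and 4 are derived
# from the first two so that every pair of factors is fully crossed
design = [(a, b, a ^ b, a ^ MUL2[b]) for a in range(4) for b in range(4)]
```

Regressing ratings collected on these 16 scenarios against dummy-coded factor levels yields uncorrelated main-effect estimates, exactly the "carefully constructed for orthogonality" condition noted in the text.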

    Research Results

Because of the similarities between conjoint analysis and policy-capturing, research on either method can be informative in designing these studies. Research on conjoint analysis has shown that the relative importance of an attribute or factor increases as the number of levels on which it is defined increases. This occurs even though the minimum and maximum values of the attribute are held constant (Wittink, Krishnamurthi, & Nutter, 1982; Wittink, Krishnamurthi, & Reibstein, 1990). This result was observed in analyses of both rank-order and ratings data. It indicates a serious problem because the estimated regression coefficients are supposed to be unbiased. Applied to policy-capturing, this finding suggests that investigators should probably consider using the same number of levels for all factors when their intention is to compare the relative importance of these factors.

Over the past three decades, conjoint analysis has been an important tool for those conducting consumer and marketing research. Its popularity in evaluating consumer preferences has promoted a broader range of useful methodological techniques that can and should be used to expand the capabilities of the policy-capturing methodology. Specifically, conjoint analysis has offered more flexibility with methods (e.g., pairwise comparison and adaptive) that can be used when either few or many factors are utilized. In situations in which factors are substantially correlated and are causing potential interpretation problems, conjoint researchers have suggested the use of either composite factors or new optimizing methodologies that remove the unrealistic scenarios. We hope that these advances in conjoint research can help address some of the issues and problems plaguing policy-capturing researchers.

    Conclusion

Our analysis of some of the key issues related to the policy-capturing method has indicated not only that this method has been a very effective tool for understanding the processes by which individual decision makers integrate cues of information in making various types of organizational decisions but also that the method is highly flexible and able to adapt to many contexts. Our comparison with conjoint analysis has been quite beneficial, as research advances in that methodology can be of assistance in solving some of the key methodological problems of the policy-capturing method. Although we are both positive and quite hopeful regarding the scope and versatility of future research and practice, the researcher designing a study using this method should understand its limitations and constraints.

To design a sound and valid policy-capturing study, the researcher should first focus on the purpose of the research. Clearly, the researcher's intentions are critical in determining how to deal with problematic situations when no ideal solution is available. In our analysis, we encountered a number of clear trade-offs when evaluating the various policy-capturing studies. For example, if researchers are going to use an experimental design, they typically have to choose between a full factorial design and a confounded factorial design. In the former, the obvious advantages are the ability to assess the full model, both the linear and nonlinear components, and to find the relative contribution of all the variables. On the other hand, the investigator has to consider problems of stress and fatigue, especially if there are many variables, and also whether the additional scenarios will result in unreliable judgments. Using a confounded factorial design, the researcher may want to consider the loss of power with fewer scenarios and the limitations related to the inability to assess higher order interactions. These considerations, however, may become moot if and when the required number of cues is far too large for a full factorial design. In situations where there are far too many cues, researchers are likely to utilize statistical packages, which create scenarios with near-zero correlations between the cues. In cases where there are more than 10 factors, researchers should utilize the hybrid and adaptive techniques that conjoint researchers have used for many years.

Design decisions by researchers are not all based on some form of trade-off. We believe that researchers should be careful to avoid designing policy-capturing studies that lack realism and result in poor validity. In planning their studies, they may want to check their variables carefully to avoid cues that are correlated in real situations and would result in implausible combinations if forced to be uncorrelated in the experiment. They should ensure that levels of each cue are applicable to real settings. Furthermore, they may check to see if the constructed scenarios are likely to be interpreted appropriately within the context of the study. Once this is known, they can then take the necessary steps to enhance realism. Finally, most of the past studies have not included a reliability check. Because reliability is a necessary condition for validity, some form of replication should be included as part of the policy-capturing study.

    Notes

1. Evidence suggests that the more sophisticated judges see through the indirect approach, and eliminating social desirability effects altogether may not always be possible (Mazen, 1990).

2. This so-called self-explication procedure is similar to an approach used in policy-capturing studies in which researchers run focus groups or interview individuals familiar with the decision problems to identify salient decision criteria. In policy-capturing studies, factors are invariant across respondents, whereas in these alternative methods, the factors vary according to individual respondents' desirability ratings.

    References

Aiman-Smith, L., Scullen, S. E., & Barr, S. H. (2002). Conducting studies of decision making in organizational contexts: A tutorial for policy-capturing and other regression-based techniques. Organizational Research Methods, 5, 388-414.

Allen, J. S., & Muchinsky, P. M. (1984). Assessing raters' policies in evaluating proposed services for transporting the physically handicapped. Journal of Applied Psychology, 69, 3-11.

Arnold, H. J., & Feldman, D. C. (1981). Social desirability response bias in self-report choice situations. Academy of Management Journal, 24, 377-385.

Beatty, J. R., McCune, J. T., & Beatty, R. W. (1988). A policy-capturing approach to the study of United States and Japanese managers' compensation decisions. Journal of Management, 14, 465-474.

Brannick, M. T., & Brannick, J. P. (1989). Nonlinear and noncompensatory processes in performance evaluation. Organizational Behavior and Human Decision Processes, 44, 97-122.

Bretz, R. D., Jr., & Judge, T. A. (1994). The role of human resource systems in job applicant decision processes. Journal of Management, 20, 531-551.

Cable, D. M., & Judge, T. A. (1994). Pay preferences and job search decisions: A person-organization fit perspective. Personnel Psychology, 47, 317-348.

Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. Beverly Hills, CA: Sage.

Cochran, W. G., & Cox, G. M. (1957). Experimental designs. New York: John Wiley.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Cooksey, R. W. (1996). Judgment analysis: Theory, methods, and applications. San Diego, CA: Academic Press.

Cooper, W. H., & Richardson, A. J. (1986). Unfair comparisons. Journal of Applied Psychology, 71, 179-184.

Darlington, R. B. (1968). Multiple regression in psychological research and practice. Psychological Bulletin, 69, 161-182.

Dougherty, T. W., Ebert, R. J., & Callender, J. C. (1986). Policy capturing in the employment interview. Journal of Applied Psychology, 71, 9-15.

Dun