
PERSONNEL PSYCHOLOGY, 2008, 61, 727-759

DEVELOPMENT ENGAGEMENT WITHIN AND FOLLOWING DEVELOPMENTAL ASSESSMENT CENTERS: CONSIDERING FEEDBACK FAVORABILITY AND SELF-ASSESSOR AGREEMENT

SANG E. WOO
University of Illinois at Urbana-Champaign

CARRA S. SIMS
RAND Corporation

DEBORAH E. RUPP
University of Illinois at Urbana-Champaign

ALYSSA M. GIBBONS
Colorado State University

This study sought to understand employees' level of behavioral engagement in response to feedback received in developmental assessment center (DAC) programs. Hypotheses were drawn from theories of self-enhancement and self-consistency and from findings in the multisource feedback and assessment center literatures regarding recipients' perceptions of feedback. Data were gathered from 172 U.S. middle managers participating in a DAC program. Results suggested that more favorable feedback was related to higher behavioral engagement. When discrepancies between self- and assessor ratings were examined, overraters (participants whose overall self-ratings were higher than their assessor ratings) tended to show less engagement in the program compared to underraters. However, pattern agreement on the participants' dimension profile did not significantly correlate with behavioral engagement. Based on these findings, avenues for future research are presented and practical implications are discussed.

This research has been supported by the Douglas W. Bray and Ann Howard Award/SIOP Foundation, The State Farm Companies Foundation, the University of Illinois Campus Research Board, the University of Illinois at Urbana-Champaign Institute for Labor and Industrial Relations, and a National Science Foundation Graduate Research Fellowship. At the time of the study, Carra Sims was at the University of Illinois at Urbana-Champaign. She is now at RAND. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of these units/agencies.

We thank all of the members of both the Managerial Development Program and the Laboratory for the Study of Developmental Assessment Centers (DAC Lab) at the University of Illinois at Urbana-Champaign and Colorado State University for their assistance with this project.

Correspondence and requests for reprints should be addressed to Deborah E. Rupp.


Assessment centers, traditionally used as mechanisms for selecting or promoting employees, are becoming increasingly popular as methods for employee development (Thornton & Rupp, 2005). Although there is no clear consensus on what constitutes a developmental assessment center (DAC), it is often defined as "a collection of workplace simulation exercises and other assessments that provide individuals with practice, feedback, and developmental coaching on a set of developable behavioral dimensions found to be critical for their professional success" (Thornton & Rupp, 2005, p. 58). Whereas selection and diagnostic assessment centers are designed to assess an individual's competence, developmental assessment centers have both an assessment and a development component. In DAC programs, the goal is not to arrive at a single overall rating of potential but to provide the participant with detailed feedback regarding his or her strengths and developmental needs on a predetermined set of performance dimensions. This feedback is often collected from multiple sources, and at a minimum, employees conduct self-assessments and are provided feedback stemming from assessor ratings.

Feedback may be given at the conclusion of the program and used to plan subsequent development experiences, or it may be given at multiple points within the program and used as a basis for active learning and development as participants work through subsequent sets of exercises. In either case, the intent is that participants will respond to the feedback they receive by engaging in appropriate developmental activities. For this to occur, participants must not only accept the feedback that they receive (Kudisch, Lundquist, & Smith, 2002; Poteet & Kudisch, 2003; Smither, London, & Reilly, 2005), but they must also act on it. Dreher and Sackett (1983) have suggested that assessment results have psychological impacts on three different types of responses: cognitive responses, affective responses, and behavioral responses. Ilgen, Fisher, and Taylor's (1979) process model of the effects of feedback on recipients also suggests that although acceptance of and desire to respond to feedback are important outcomes, intended response (goals) and actual response are the ultimate outcomes for a feedback intervention. Although several studies in the past have looked at cognitive and affective responses to assessment ratings (whether multisource feedback or AC ratings), behavioral responses have received far less attention. The existing literature regarding feedback-seeking behaviors (e.g., Ashford, Blatt, & VandeWalle, 2003) and goal-regulation processes (e.g., Ilies & Judge, 2005) suggests many theoretical insights, but thus far little empirical research has been conducted to integrate underlying social psychological theories with feedback reactions and subsequent feedback-seeking processes in actual feedback


favorability, self-assessor agreement) impact participants' subsequent actions. In this study, we pull from the assessment center, performance appraisal, and multisource feedback literatures to form hypotheses about this phenomenon.

    Effects of Feedback Favorability on Participant Reactions

The majority of research on the effects of feedback has used participant reactions as a dependent variable. That is, immediately after receiving feedback, participants are asked to report their thoughts and feelings regarding the feedback just received. Participant reaction criteria have included perceptions of accuracy and usefulness, as well as feedback satisfaction. This research typically shows that participants react more positively to favorable feedback. For example, Kudisch (1997) found that assessment center participants tended to perceive favorable feedback (i.e., higher ratings) as more accurate than unfavorable feedback. Arnold (2002) also found that assessment center participants who received higher ratings tended to view the program more positively than those with lower ratings; in addition, lower-rated participants perceived fewer opportunities and less support for development following their experience in the program. Similar results have been found within the multisource feedback literature. In a study of 125 MBA students enrolled in a multisource feedback program on leadership, Brett and Atwater (2001) found that individuals receiving less favorable ratings developed beliefs that the feedback was less accurate and less useful (as compared to individuals receiving more favorable feedback). Bono and Colbert (2005) found similar results with a sample of 152 MBA students from a leadership program that incorporated multisource ratings. Results showed that favorable feedback was positively linked to feedback satisfaction measured immediately after the feedback was given.

These findings are consistent with self-enhancement theory (Schrauger, 1975), which posits that the favorability of feedback received from others affects recipients' reactions, such that people tend to react positively to more favorable feedback and negatively to less favorable feedback. In essence, this theory argues that reactions to feedback are driven by recipients' desire to feel positively about themselves, and they will be more amenable to feedback that is favorable. Whereas the research on the effects of feedback favorability on participants' evaluative reactions has been supportive of these predictions, far less research has tested the theory on behavioral criteria. Even less is known about the effects of feedback favorability on subsequent behavioral engagement in


The Effects of Self-Other Agreement Regarding Feedback Favorability

As we mentioned above, DAC programs often involve both self- and assessor ratings, and as such, the favorability of assessor feedback must be considered in light of its alignment with how DAC participants have rated themselves. Indeed, self-other agreement has been considered in past research, and multiple theories have shed light on this phenomenon. For example, both self-consistency theory and cognitive dissonance theory suggest that people are motivated to achieve a state of mind where beliefs, attitudes, and behaviors are consistent with one another. According to these theories, discrepancies between an individual's perceptions and evaluations by others can lead the individual either to disqualify the disconfirming feedback from others and search for more confirming feedback that is consistent with his or her own beliefs, or to try to change his or her behavior to be in line with others' expectations (Festinger, 1957; Korman, 1976). Likewise, control theory suggests that a discrepancy between a perceived state (i.e., self-perceptions) and a reference value (i.e., feedback from others) leads to conforming to the standard (i.e., others' perceptions; Carver & Scheier, 1982).

Together, these theories suggest that any discrepancy between an individual's self-perceptions and how they are perceived by others will be dissatisfying. However, the empirical research has shown the phenomenon to be somewhat more complicated than this. For example, Brett and Atwater (2001) examined the effect of self-other agreement on satisfaction with feedback by adding self-other interactions (i.e., self-other discrepancies) to a model containing feedback favorability. Self-other discrepancies did not explain significant additional variance in positive reactions to feedback in their study. However, they found that participants who underrated their own performance (their self-ratings were lower than ratings from others) were less likely to experience negative reactions, whereas those who overrated (their self-ratings were higher than ratings from others) were more likely to have negative reactions in some cases. These findings are also consistent with self-enhancement theory in that a discovery of underrating allows one to adjust one's self-evaluation upward, leading to a more positive self-evaluation in the end.

    Extending the Literature: The Uniqueness of DAC Feedback and the Need for Behavioral Criteria

To summarize, the research literature suggests that both self-enhancement and self-consistency affect participants' general reactions to


When self-other discrepancy is taken into account, the findings are less conclusive, although overraters seem more likely to believe that the feedback is not accurate and not useful. In the sections that follow, we more fully explore the issue of feedback favorability and self-other agreement within the unique context of DACs and argue for the inclusion of behavioral criteria in studying these phenomena.

The uniqueness of DAC feedback. The difference between developmental feedback and performance feedback has been well documented (Ryan, Brutus, Greguras, & Hakel, 2000). DAC feedback is purely developmental, and it is often repeatedly emphasized in the program that the evaluations made by DAC assessors will not be shared with management or be used to make personnel decisions of any kind. DAC participants are therefore expected to focus more on their personal development and be more receptive to developmental suggestions than participants in traditional AC or performance appraisal programs. Because such individuals are free from evaluation for the purpose of personnel decision making, they are less likely to deliberately adjust their self-ratings to compensate for the overall performance ratings of others (e.g., assessors, supervisors, peers, etc., as has been shown to occur in performance appraisal settings; Campbell & Lee, 1988). Ryan et al. (2000) have also suggested that although the purpose of job performance feedback is to enhance performance in specific, immediate job contexts, purely developmental feedback is provided in order to guide self-improvement on a set of competencies at a broader level.

In addition to differing from performance appraisal feedback, DAC feedback also differs in several ways from other types of developmental feedback, such as multisource feedback. First, in most multisource feedback programs, participants are easily able to determine both feedback favorability and agreement because they are provided with quantitative ratings (e.g., Atwater, Ostroff, Yammarino, & Fleenor, 1998; Atwater, Roush, & Fischthal, 1995). In a DAC, determination of favorability of and agreement with feedback becomes a more complex undertaking for participants, as the feedback provided is often both quantitative and qualitative (Latham & Marchbank, 1994). DAC feedback usually consists of a detailed discussion between a participant and an assessor, covering not only what was observed by assessors regarding both overall and dimension-level performance across the exercises but also the participant's own observations (Goodge, 1991; Latham & Marchbank, 1994). As a result, participants must infer the overall favorability of their feedback and the degree to which this information agrees with their self-assessments. It is not clear how this inference process may affect the effects of feedback


demonstrated by a participant on a set of behavioral dimensions (Ballantyne & Povah, 2004; Thornton & Rupp, 2005). The feedback discussion is often framed in terms of these relative strengths and weaknesses, and participants are encouraged to reflect on their own performance in this way. As a result, the degree to which the participant and assessors identify the same profile of strong and weak dimensions becomes more important in the context of DACs.

Behavioral engagement in response to feedback. As mentioned above, much of the existing research on multisource feedback interventions, as well as the available literature on DACs, has focused on feedback acceptance and satisfaction. This practice has been based on the assumption that feedback acceptance and satisfaction lead participants to engage in training and development activities (Kudisch, 1997; Smither et al., 2005). However, as Bono and Colbert (2005) recently reported, satisfaction with feedback does not necessarily lead to commitment to one's development goals. Further, a meta-analysis on the correlations among training criteria also revealed that affective reactions to training interventions do not correlate with actual learning or behavior change (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997). As DACs are intended to foster active learning behavior during the program, after the program, or both (Ballantyne & Povah, 2004), mere acceptance of feedback may not be the most appropriate criterion for measuring the impact of feedback in this context. Also, using self-report questionnaires to assess participants' goal commitment and engagement level in the program may be susceptible to socially desirable responding (Podsakoff & Organ, 1986). In other words, although participants may report high levels of commitment and engagement, their actual behavior may not necessarily indicate the same level of engagement.

Therefore, in this study, we focus on program engagement, which we define as the level of behavioral involvement in developmental activities that participants exhibit during and after the DAC program. There are unique aspects to a DAC program that allow for the measurement of program engagement. First, assessors can be trained to rate the degree to which participants are engaged in the feedback sessions. Indicators of this variable might include participant involvement in the discussion, asking of follow-up questions, and voluntary goal setting. Second, simulation exercises within the assessment center can be specifically designed to elicit engagement behaviors. For example, role players might be instructed to offer feedback in the middle of an exercise, giving participants an opportunity to react behaviorally. Third, because DAC programs are often embedded in broader development programs and involve regular


development activities can be explicitly documented. In this study, we collected data on all three of these program engagement variables (feedback engagement, exercise engagement, and follow-up activities).

    Hypotheses

Because feedback favorability and self-other agreement have not been explored as predictors of actual program engagement, and because little research has studied feedback mechanisms within DAC programs, this study sought to fill this void. That is, we sought to test whether DAC feedback favorability and self-other (i.e., self-assessor) agreement have influences on participants' subsequent behavioral engagement, measured in the three ways outlined above. We proposed three hypotheses based on the existing research literature. First, consistent with self-enhancement theory, we predicted that more favorable DAC feedback would be related to higher behavioral engagement. This is based on the findings that the favorability of feedback coming from others is positively related to satisfaction with the feedback (Bono & Colbert, 2005) and that the higher the favorability of feedback received from others, the more accurate the individual believes the feedback to be (Brett & Atwater, 2001; Kudisch, 1997). Specifically, drawing from self-enhancement theory, we argue that favorable feedback induces heightened self-efficacy (feeling good about the self), which leads one to be engaged in activities where one can potentially receive more positive feedback about oneself. As suggested by Anseel et al. (2007), such ego-based motives (i.e., desire to protect, defend, and maintain ego) integrated with the self-enhancement tendency provide a clear theoretical mechanism to link the evaluative criteria of satisfaction and affect to actual behavior. In further support, Maurer, Weiss, and Barbeite (2003) elucidated a theoretical link between affective/motivational constructs such as generalized self-efficacy and behavioral outcomes such as participating and engaging in development activities, and provided empirical support. Consistent with the existing literature, we defined feedback favorability as the absolute level of others' (in our case, assessors') ratings.

Hypothesis 1: More favorable assessor ratings will be related to higher participant engagement.

We also examined the level of engagement when there was disagreement between self-ratings and assessor ratings. When both consistency theories and self-enhancement theory are taken into account, one can expect that participants' engagement will be higher when they receive feedback that


In other words, when individuals overrate themselves as compared to assessors' ratings, this self-assessor inconsistency will be a detriment to subsequent engagement.

Hypothesis 2: To the extent that self-ratings exceed assessor ratings (overrating), participant engagement will be lower; also, to the extent that self-ratings are below assessor ratings (underrating), participant engagement will be higher.

Hypotheses 1 and 2 are concerned with overall rating agreement, across dimensions and exercises. However, the DAC feedback discussion is often framed in terms of these relative strengths and weaknesses, as discussed earlier. As a result, agreement between self- and assessor perceptions can be considered, at a more detailed level, as the degree to which the participant and assessors perceive the same profile of strong and weak dimensions (London & Smither, 1995; London & Wohlers, 1991; Wohlers & London, 1989). We therefore also propose a third hypothesis.

Hypothesis 3: Engagement will be higher when self- and assessor-rated dimension profiles (indicating the strengths and weaknesses of the individual) are similar than when they are dissimilar.

We propose to test Hypothesis 3 via profile correlation. Edwards (1994) pointed out that profile correlation only indicates similarity in profile shape, not the actual distance between profile scores. However, in this context, differences in actual magnitudes would not matter as much as the general pattern of profile agreement (i.e., the relative strengths and weaknesses within each person). For instance, consider an individual whose self-ratings on five dimensions are 2, 3, 4, 1, and 6, and whose assessor ratings are 3, 4, 5, 2, and 7. Although this person is a consistent underrater in terms of difference scores, she/he is in perfect agreement with assessor ratings in terms of the general pattern (i.e., profile) of the relative strengths and weaknesses. It is this general pattern that would become apparent in the feedback discussion between an assessor and program participant, given the way feedback is structured. Further, the polynomial regression approach, which Edwards proposed as an alternative to profile correlation, does not allow us to consider agreement on the patterns of ratings across dimensions. Therefore, we argue that profile correlation
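The contrast drawn in the worked example above, between the difference-score view and the pattern (profile) view of self-assessor agreement, can be sketched numerically. This is a minimal illustration of the two computations; the function and variable names are ours, not the authors'.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length rating profiles
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The paper's five-dimension example
self_ratings = [2, 3, 4, 1, 6]
assessor_ratings = [3, 4, 5, 2, 7]

# Difference-score view: a consistent underrater (self minus assessor is -1 on every dimension)
mean_diff = sum(s - a for s, a in zip(self_ratings, assessor_ratings)) / len(self_ratings)

# Profile view: the two profiles have identical shape, so pattern agreement is perfect
pattern_agreement = pearson_r(self_ratings, assessor_ratings)

print(mean_diff)                     # -1.0 (underrating by one scale point)
print(round(pattern_agreement, 3))   # 1.0 (perfect profile agreement)
```

The sketch makes the paper's point concrete: the same person is maximally discrepant by difference scores yet in perfect agreement by profile correlation, which is why Hypothesis 3 is tested with the latter.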


    Method

    Participants

Data were collected from 172 U.S. middle managers participating in a DAC program. Most participants (89%) were Caucasian, and slightly over half were women (55%). Participants were an average of 43.3 years of age and had an average of 20.1 years of experience in the workforce. Participants came from a variety of organizations and industries, including banking, manufacturing, county and city government, construction, research, and postsecondary education. Most (73%) described themselves as middle managers, the target audience for the program.

    DAC Procedure

The DAC used in this study was part of a managerial development program offered by an applied research center at a large midwestern university. Participants were from several regional organizations. Participating organizations were recruited through presentations to local human resource management groups, the program Web site, and through word of mouth. The DAC experience was provided free of charge to participants and their employers in exchange for the opportunity to use the data collected for research purposes. Data presented here were collected over a period of 2 years. The DAC was structured around six core behavioral dimensions identified as being essential for midlevel managers across jobs and amenable to development: information seeking, planning and organizing, problem solving, leadership, oral communication, and conflict management. These dimensions were identified through a systematic synthesis of the research literature (e.g., published managerial performance taxonomies, assessment center technical reports, O*NET, and performance models used by major I-O/HR consulting firms). The dimensions were further verified through more traditional job analytic techniques. The result was a list of important and developable dimensions relevant to middle management positions across organizations and industries.

Forty-eight assessors were involved in the program over the 2 years that program data were collected. Assessors were graduate students in human resources (HR) with bachelor's degrees in psychology (n = 40), industrial-organizational (I-O) psychology doctoral students (n = 5), or local Society for Human Resource Management (SHRM)-certified HR professionals (n = 3). The HR graduate program from which most


were female. Although we do not have data on their exact ages, we can report frequencies within age range categories. Forty assessors were 20-35, and eight were 35-45 years old. The assessor group participated in a 4-week training and certification program. Each trainee had to meet several criteria in order to be certified as an assessor. These criteria included participating in the DAC itself, extensive classroom training (which included process training, frame-of-reference training, and training to guard against observation and rating errors), and video-based practice exercises. Trainees were also required to shadow a senior assessor, be shadowed by a senior assessor, and pass a series of exams. Interrater agreement was established in training, prior to the certification of the assessors. The assessors' training process was described to DAC participants to ensure that they understood the extensive credentialing process that assessors went through (and to build credibility for the assessors).

Prior to attending the assessment center, participants were asked to complete a survey, in which they were asked to provide self-ratings on the six dimensions along with other attributes such as their attitude toward developmental experiences. Each dimension was divided into three subdimensions, each of which was rated on a 7-point scale. Subdimensions were labeled in clear behavioral terms in order to make it easier for participants to understand their meanings. Assessors used these same dimensions and subdimensions to make their ratings during the course of the assessment center. The dimensions, subdimensions, and their definitions are listed in Appendix A.

Six participants went through the DAC at a time. The DAC lasted for 1 full day and was run once per week. Participants completed two sets of three behavioral simulation exercises each (a leaderless group discussion, an interview simulation, and a case study/oral presentation). Throughout the paper, we refer to these as exercise blocks to be consistent with past research (Thornton & Rupp, 2005). Each dimension was assessed in two of the three exercises in each block. Each individual participant was observed by three different assessors over the course of the day (one assessor observed each exercise, but assessors observed the same participant in the same type of exercise in both blocks). After each exercise, assessors rated the participants on each dimension using behaviorally anchored rating scales (BARS). On the BARS, each dimension was divided into three subdimensions, each of which was rated on a 7-point scale. These subdimension ratings were averaged across exercises within each block to create the overall (assessor-rated) dimension ratings.

    Assessors engaged in a consensus integration discussion after eachblock to outline the feedback to be given to each participant. After the


feedback sessions in which they provided extensive feedback on each of the six dimensions. Assessors spoke in terms of strengths and developmental needs and focused on specific behavioral examples for each. No quantitative ratings were provided to participants. Participants were encouraged to reflect on their own self-evaluations and compare them with the evaluations provided by the assessors during the feedback session. The entire process was then repeated for the second block of exercises. During the first feedback session, assessors encouraged participants to set immediate goals for their performance in the second block. In the second feedback session, participants and assessors considered more long-term goals for improvement on the job. In the months following the DAC, participants were contacted by an assessor via telephone at a prescheduled time for a follow-up interview. During the interview, participants were encouraged to discuss their progress toward the goals they had set during the program.

    Measures

Self-ratings. As discussed above, the self-rating instrument was completed by participants prior to their arrival at the DAC. It consisted of 18 statements (three for each dimension) corresponding to the subdimensions used by the assessors (see Appendix A). Ratings were made on a 7-point scale, with higher ratings indicating higher (perceived) proficiency. Responses were averaged across the three statements for each dimension to create dimension scores. These three-item scales demonstrated high internal consistency, with coefficient alpha values ranging from α = .86 to α = .92 (average α = .89). These ratings were aggregated across dimensions to form a general self-rating score.
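The internal consistencies reported for these three-item scales follow the standard coefficient alpha formula, α = (k/(k-1))(1 - Σσ²ᵢ/σ²ₜ). A minimal sketch of that computation follows; the response data below are invented for illustration and are not taken from the study.

```python
def cronbach_alpha(items):
    # items: one list per scale item, each inner list holding one score per respondent
    k = len(items)

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical responses to the three statements of one dimension:
# five respondents on a 7-point scale (invented numbers)
item_scores = [
    [5, 6, 4, 7, 5],
    [5, 7, 4, 6, 5],
    [6, 6, 5, 7, 4],
]
alpha = cronbach_alpha(item_scores)
print(round(alpha, 2))  # prints 0.87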

    Attitude toward developmental experiences. A measure of attitude to-ward development experiences (ADE; Walter, 2004) was used as a controlvariable in our analyses in order to determine the contribution of self andDAC ratings over and above general willingness to develop. This mea-sure assessed participants general level of psychological and behavioralengagement in everyday developmental processes. The concept coversseeking feedback and suggestions for improvement, accepting feedbackfrom others in a nondefensive way, and expressing interest in personalgrowth and improvement.

    The measure contained 19 items, which are listed in Appendix B, andused a 5-point response scale. A factor analysis of the measure supportedits unidimensionality. Parallel analysis suggested a two-factor solution,but the rst factor explained more than 37% of the overall variance, and

  • 7/30/2019 DEVELOPMENT ENGAGEMENT WITHIN and Following Developmental Assesment

    12/34

    738 PERSONNEL PSYCHOLOGY

    keyed, suggesting that this second factor was artifactual rather than contentrelated. Internal consistency for this 19-item scale was = .82. As this wasa relatively new measure, we also present here some preliminary evidencefor its convergent and discriminant validity. Based on the available datafrom our current sample ( n = 92), we found that ADE was stronglycorrelated with mastery goal orientation ( r = .55, p < .001) but weaklycorrelated with performance goal orientation ( r = .115, ns), with bothtypes of goal orientation measured using a 16-item scale developed byButton, Mathieu, and Zajac (1996).

    Assessor ratings. As discussed above, assessors used BARS dividedinto subdimensions to rate each participants behavior in each exercise (seeAppendix A). Anchors for the BARS were derived from the dimensiondenitions, pilot testing, and previous experience. As with the self-ratings,subdimension ratings were averaged to create dimension ratings withineach exercise and across the two exercises measuring each dimensionin each block. Internal consistencies of the within-exercise dimensionratings ranged from = .78 to = .89, with average = .84. Internalconsistencies of across-exercise dimension ratings (i.e., based on a total of six subdimension ratings) ranged from = .69 to = .84, with average = .78. The across-exercise dimension ratings from the rst exerciseblock were used to calculate the pattern agreement scores described below.A general DAC rating score was also calculated by aggregating acrossdimensions.

    Pattern agreement. Following suggestions made by London and hiscolleagues (London & Smither, 1995; London & Wohlers, 1991; Wohlers& London, 1989), we calculated Pearson correlations across six dimen-sion ratings from two sources (participants themselves and assessors) for each individual. Such pattern agreement scores reect the degree to whichassessor ratings and self-ratings show similar rank ordering of dimensions,identifying relative strengths and weaknesses for each individual (London& Smither). This is consistent with traditional practices for quantify-ing prole agreement (or similarity) in the previous literature (e.g., Bem& Allen, 1974; Cable & Judge, 1997; OReilly, Chatman, & Caldwell,1991).

Program engagement. The DAC design offered a number of opportunities to evaluate participants' behavioral engagement with different aspects of the program. The level of engagement was rated by assessors immediately following both feedback sessions (feedback engagement), in one specially designed simulation exercise (exercise engagement), and upon conclusion of the follow-up interview (follow-up activities). Each measure highlights a unique aspect of behavioral engagement in the DAC program.


(a) Feedback engagement. Participants' engagement in the feedback sessions was measured via a readiness to develop (RTD) scale (Walter & Thornton, 2004) completed by assessors following each feedback session. The scale consisted of eight items (see Appendix B), with responses rated on a 5-point scale. The internal consistency of the scale was α = .94. RTD questionnaires were administered twice during the program (morning and afternoon). A variable indicating overall feedback engagement during the DAC program was calculated by averaging the morning and afternoon RTD scores.

(b) Exercise engagement. One specially designed simulation exercise was used as another measure of participants' engagement. In this case study/oral presentation exercise, participants were asked to give a 5-minute presentation to the assessor. Immediately after their presentation, during the exercise, they received brief feedback and suggestions focusing on oral communication skills and were given the opportunity to revise the presentation and present it again. Assessors rated participants' engagement (or readiness to develop) in the exercise, especially during and after receiving feedback from the assessor, using a behaviorally anchored rating scale. This BARS followed the same format as the BARS used to rate the other dimensions: Assessors used a 7-point scale to rate each of three subdimensions (i.e., feedback seeking, acceptance of feedback, and showing interest in development). Examples of relevant behaviors included discussing the feedback with the assessor, expressing and sharing ideas about how to improve, following the assessor's recommendations, and revising the presentation above and beyond the assessor's recommendations.

(c) Follow-up activities: Finally, participants' ongoing behavioral engagement in development was evaluated during the follow-up interview. That is, we documented the degree to which participants were actively implementing the goals and action plans from the DAC. The protocol for the interviews included questions regarding participants' progress on the action plans they had created in the DAC and whether participants had sought their supervisors' input and involvement in their development plans. Examples of these questions include "Have you been working towards the medium-term goals that you set with your development facilitator at the program?", "Have you had your meeting with your supervisor, and if so, what new goals did the two of you come up with?",


and "Have you recently met with your supervisor to discuss the information in the report, or the goals from the program?"

Participants' engagement levels were independently scored using a 5-point scale by two raters (study authors) based on the answers to these questions. Answers were scored in terms of the number and complexity of activities that participants reported. This separate scoring procedure was necessary because participants' progress on developmental activities was recorded only qualitatively during the follow-up interview. The scale was anchored as follows: A score of 1 indicated little engagement (no activity/effort reported, without a substantial excuse); a score of 3 indicated either moderate engagement (reporting some activity) or no activity/effort justified with specific and substantial excuses (e.g., being on vacation for a month); and a score of 5 indicated considerable engagement, with multiple development activities reported. All scale points were used in the ratings. The agreement between raters was computed using Cohen's kappa. For the three items, κ ranged from .42 to .79, with an average of .63, indicating an acceptable level of agreement (Von Eye & Mun, 2005).
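Cohen's kappa corrects the raters' observed percentage agreement for the agreement expected by chance given each rater's marginal score frequencies. A minimal sketch (the score vectors are hypothetical, not the study's ratings):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters scoring the same cases on a
    categorical (here 1-5) scale."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal score frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring ten follow-up interviews
r1 = [1, 3, 3, 5, 5, 3, 1, 5, 3, 5]
r2 = [1, 3, 5, 5, 5, 3, 3, 5, 3, 5]
kappa = cohens_kappa(r1, r2)  # ≈ .68: strong but imperfect agreement
```

Because the chance-agreement term is subtracted, kappa is more conservative than raw percentage agreement, which is why it is the preferred index for this kind of two-rater scoring.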

Due to the temporal nature of the follow-ups, fewer participants were available for analyses using this variable. Of the 172 participants, 89 were successfully contacted and participated in the follow-up session, 23 did not respond to the follow-up calls (5 of these individuals explicitly and actively declined to participate in the follow-up interview due to work-related complications), and 60 were not contacted due to a shortage of program staff at the time of data collection. However, the results of ANOVA and chi-square tests indicated that there were no significant differences among these three groups in terms of their engagement level, age, sex, education level, work experience, and position in their organization.

    Results

Means, standard deviations, and correlations among study variables are presented in Table 1. Hypothesis 1 states that feedback favorability should be positively related to participants' engagement in the DAC program. As can be seen from the zero-order correlations, DAC ratings were positively and significantly correlated with all three engagement criteria, whereas self-ratings were not significantly correlated with any of these criteria. We also conducted multivariate multiple regression analyses using self-ratings and DAC ratings as predictors of the three measures of engagement. The DAC ratings had a significant effect on the dependent variables (Wilks's Λ = .788, p < .01), whereas the self-ratings did not (Wilks's Λ = .925, ns).


TABLE 1
Correlations Among Predictor and Criterion Measures

Variable                  Mean    SD     1     2     3     4     5     6     7
1 DAC assessor rating     4.47    .86
2 Self-rating             4.73    .93   .27
3 ADE                     3.68    .55   .23   .32
4 Feedback engagement     3.61    .74   .38   .10   .13
5 Exercise engagement     4.48   1.15   .40   .03   .12   .33
6 Follow-up activities    3.18   1.05   .27   .03   .03   .30   .26
7 Pattern agreement        .08    .44   .18   .17   .13   .13   .09   .13

Note. ADE = attitude toward development experiences. Sample sizes for individual cells range from 76 to 172.

* p < .05.

Hypothesis 2 states that overraters will be less engaged than underraters. To test this hypothesis, both self-ratings and DAC ratings should be examined simultaneously. Much of the prior work in this area has used difference scores (e.g., Atwater & Yammarino, 1992); however, there are notable problems with difference score use, including ambiguities in interpretation, imposition of untested and undesired constraints, and the confounding of the effects of the component measures (e.g., Edwards, 1994; Edwards & Parry, 1993). More recent work has therefore utilized polynomial regression analysis (e.g., Atkins & Wood, 2002; Atwater, Waldman, Ostroff, Robie, & Johnson, 2005; Bono & Colbert, 2005). Benefits of this technique include the ability to examine the individual effects of self-ratings and DAC ratings as well as avoidance of unhypothesized constraints on the relationships between ratings and various criterion measures of engagement. We therefore utilized this approach.

ADE was entered as a control variable (Step 1) in the regression equations, based on the idea that prior attitude toward developmental activities should be controlled in order to determine the contribution of self- and DAC ratings over and above general willingness to develop. In Step 2, we entered the main effects of self-ratings and DAC ratings. In Step 3, we entered the squared self-rating term, the squared DAC-rating term, and their interaction. Thus, our final overall equation for each of the dependent variables is as follows:

Y = β0 + β1(ADE) + β2(Self-rating) + β3(DAC rating) + β4(Self-rating)² + β5(Self-rating × DAC rating) + β6(DAC rating)²
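The three-step hierarchy can be sketched with ordinary least squares, comparing R² across steps. The data below are simulated purely for illustration; the variable names mirror the equation above and are our assumptions, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 137
ade    = rng.normal(3.7, 0.6, n)   # attitude toward development experiences
self_r = rng.normal(4.7, 0.9, n)   # self-ratings
dac    = rng.normal(4.5, 0.9, n)   # DAC assessor ratings
# Simulated criterion (e.g., feedback engagement) driven mainly by DAC ratings
y = 3.4 + 0.34 * dac + rng.normal(0, 0.6, n)

def r_squared(predictors, y):
    """R^2 from an OLS fit with intercept; predictors is a list of 1-D arrays."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared([ade], y)
r2_step2 = r_squared([ade, self_r, dac], y)
r2_step3 = r_squared([ade, self_r, dac, self_r**2, self_r * dac, dac**2], y)
delta_r2_step2 = r2_step2 - r2_step1   # contribution of the main effects
delta_r2_step3 = r2_step3 - r2_step2   # contribution of curvature terms
```

Because the three models are nested, R² can only increase from step to step; the question tested in Table 2 is whether each increment (ΔR²) is statistically significant.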


TABLE 2
Results of Hierarchical Regression Analyses for All Three Criterion Measures

Criterion                             R²    ΔR²
Feedback engagement (n = 137)
  Step 1: Control (ADE)              .01
  Step 2: S and DAC                  .15    .14
  Step 3: S², S × DAC, & DAC²        .16    .01
Exercise engagement (n = 135)
  Step 1: Control (ADE)              .01
  Step 2: S and DAC                  .11    .10
  Step 3: S², S × DAC, & DAC²        .11    .01
Follow-up activities (n = 75)
  Step 1: Control (ADE)              .01
  Step 2: S and DAC                  .07    .06
  Step 3: S², S × DAC, & DAC²        .10    .03

Note. ADE = attitude toward development experiences; S = self-ratings; DAC = DAC assessor ratings.

* p < .05.

As shown in Table 2, adding the self- and DAC ratings (Step 2) to the equation produced a significant change in variance accounted for (ΔR²) for two criteria: feedback engagement and exercise engagement. The addition of the squared terms and the interaction between self- and DAC ratings (Step 3) did not explain significant additional variance in any case. Therefore, examination of Step 2, the main effects of self-ratings and DAC ratings, is most appropriate. Constraining ourselves to the first-order coefficients necessarily eliminates examination of curvilinear relationships, even though such relationships may in fact occur in our sample. However, the effect sizes for the amount of incremental variance explained by introducing second-order terms were very small (f² = .0167 for feedback engagement and f² = .0079 for exercise engagement). Such small effects are very hard to detect and rarely reach statistical significance unless the sample size is very large (a power analysis revealed that an n of at least 600 would have been needed to have a statistical power of .50 to detect f² = .01). Nonetheless, these analyses are more than adequate to allow us to test our hypotheses. Although the simpler form of the regression equation was used, we still present response surface graphs as they serve to illustrate the linear results very clearly. To create correct linear graphs, therefore, the second-order coefficients were set to 0 to eliminate curvilinear trends that the data did not support.

In testing Hypothesis 2, we examined the results for the criteria of feedback engagement and exercise engagement in more depth (see Table 3).


TABLE 3
Regression Coefficients for Feedback Engagement and Exercise Engagement

Criterion                           Unstandardized regression coefficient
Feedback engagement
  Constant                          3.370
  ADE                                .064
  Self-rating                       −.079
  DAC rating                         .343
  Slope along self = DAC line        .264
  Slope along self = −DAC line      −.422
Exercise engagement
  Constant                          4.174
  ADE                                .091
  Self-rating                        .001
  DAC rating                         .434
  Slope along self = DAC line        .435
  Slope along self = −DAC line      −.433

Note. ADE = attitude toward development experiences; DAC = DAC assessor ratings.

* p < .05.

The significant DAC rating coefficient indicates that exercise engagement is high when DAC ratings are likewise high.

These results do not offer a direct test of the hypotheses regarding the effects of self–assessor discrepancy, however. As the quadratic terms in the equation were not significant (see Table 2), it is possible to test the hypotheses simply by examination of the slope of the response surface along the lines of perfect agreement (self = DAC) and perfect disagreement (self = −DAC), without recourse to the curvature of the response surface (e.g., Atwater et al., 2005; Bono & Colbert, 2005). The coefficients required to examine these particular areas of the response surface are taken from linear, rather than curvilinear, portions of the regression equation and are still applicable for our simplified regression equations. More specifically, the slope along the self = DAC line of perfect agreement is given by b1 + b2, where b1 is the coefficient for self-ratings and b2 the coefficient for DAC ratings. The slope along the line of perfect disagreement, or self = −DAC, is given by b1 − b2 (for more discussion about testing slopes and curvatures along the lines of agreement and disagreement, see Bono & Colbert, 2005; Dabos & Rousseau, 2004; Edwards & Parry, 1993). As suggested by Atkins and Wood (2002), we restricted the response surface in our graphs to the range of our variables to avoid drawing spurious conclusions.
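A small sketch of these slope computations, using the exercise-engagement coefficients reported in Table 3 (a negative slope along the disagreement line means engagement falls as self-ratings exceed DAC ratings, i.e., overraters are less engaged):

```python
def surface_slopes(b_self, b_dac):
    """Slopes of a linear response surface along the line of perfect
    agreement (self = DAC) and perfect disagreement (self = -DAC)."""
    slope_agreement = b_self + b_dac      # b1 + b2
    slope_disagreement = b_self - b_dac   # b1 - b2
    return slope_agreement, slope_disagreement

# Exercise-engagement coefficients from Table 3
a1, a3 = surface_slopes(b_self=0.001, b_dac=0.434)
# a1 = 0.435: engagement rises as self- and DAC ratings rise together
# a3 = -0.433: along self = -DAC, engagement falls as self-ratings
#              exceed DAC ratings
```

The magnitudes match the slope rows of Table 3; only the first-order coefficients enter because the quadratic terms were dropped as nonsignificant.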


[Figure 1: Response Surface of Self–DAC Rating Agreement Predicting Feedback Engagement. Axes: Self Rating and DAC Rating (horizontal), Feedback Engagement (vertical).]

[Figure 2: Response Surface of Self–DAC Rating Agreement Predicting Exercise Engagement. Axes: Self Rating and DAC Rating (horizontal), Exercise Engagement (vertical).]

As is shown in Figures 1 and 2, the lines of perfect agreement extend from the back corner of the graph to the front corner; examination of these lines indicates that, for both feedback and exercise engagement, the slopes along the lines of perfect agreement


were significant in both cases. Most relevant to Hypothesis 2 is the line of perfect disagreement, which may be seen on the graphical display extending from the left corner to the right corner. Results indicated that, for both feedback and exercise engagement, engagement level was higher when individuals underrated themselves (i.e., when DAC ratings are higher than self-ratings) than when individuals overrated themselves (i.e., when DAC ratings are lower than self-ratings). The slopes for perfect disagreement were also significant for both feedback engagement and exercise engagement criteria. Therefore, Hypothesis 2 was supported.

Results regarding Hypothesis 3 may be seen in Table 1, where relevant correlations are displayed. As is shown, pattern agreement was not significantly correlated with any of the engagement criteria. Thus, Hypothesis 3, which stated that the general pattern of agreement would be related to engagement criteria, was not supported.

    Discussion

The purpose of this study was to investigate the factors that influence participants' behavioral (rather than perceptual) engagement in DAC programs following feedback. Our results confirmed that, for all three behavioral engagement criteria, higher assessor ratings were associated with higher engagement. One should note that this finding does not mean that those who received less favorable feedback were actively disengaged. Our data showed that although those who had unfavorable feedback consistently showed lower levels of engagement than those who had favorable feedback across the three engagement criteria, their engagement levels were still above neutral. We also found that for feedback engagement and exercise engagement, to the extent that participants rated themselves lower than did assessors, program engagement was higher. Although we expected dimension profile similarity to also predict program engagement, this hypothesis was not confirmed.

We feel our results make three major contributions. First, our findings bridge the literature on multisource feedback with that of DACs. Whereas feedback favorability, self–other agreement, and individuals' reactions to feedback in multisource feedback contexts have been relatively well researched, there is a paucity of research on these issues conducted in the context of DACs. Given the fact that multisource feedback ratings and DAC ratings are often fundamentally different in nature, research was needed to confirm the applicability of findings regarding one to the other. Second, whereas the currently available literature reveals a focus on immediate, perceptual reactions to feedback (i.e., accuracy and satisfaction),


self–assessor discrepancies in individuals' overall evaluations by examining how the holistic agreement pattern of relative strengths and weaknesses impacts behavioral engagement.

    Interpretation of Results

Although results on feedback favorability and self–assessor agreement were generally consistent with the findings from multisource feedback interventions and perceptual reaction criteria, our data showed no evidence for Hypothesis 3, which predicted a relationship between self–other dimension profile similarity (i.e., pattern agreement) and behavioral engagement. In the context of a DAC, inference for self–assessor agreement on dimension profiles (that is, agreement on relative strengths and weaknesses) should be as salient as favorability of feedback on overall performance, given that DAC feedback is typically designed specifically for communicating this pattern of relative strengths and weaknesses across dimensions (Thornton & Rupp, 2005). The lack of support for Hypothesis 3 is intriguing from a theoretical point of view. We have based our hypotheses on both self-enhancement theory and self-consistency theories (i.e., consistency, cognitive dissonance, and control theories). Self-enhancement theory argues that we are more responsive to favorable feedback because it allows us to elevate our self-images. The self-consistency theories, on the other hand, argue that in addition to a desire for favorable self- and other perceptions, it is also psychologically necessary to receive consistency in information from multiple sources (including ourselves). Although research has found some evidence for the joint effects of self-enhancement and self-consistency (e.g., Bono & Colbert, 2005), direct supporting evidence for self-consistency has been scarce at best in the previous literature. Whereas our tests of Hypothesis 1 tapped self-enhancement tendencies, and our tests of Hypothesis 2 tapped the interplay of self-enhancement and self-consistency theories, our test of Hypothesis 3 tapped self-consistency in isolation. That is, when correlating profiles, the effects of underrating and overrating cancel each other out at a group level. Consequently, this analysis tested the unique effect of self-consistency, which can only be tested via pattern agreement (i.e., profile similarity) across individual dimensions. Therefore, the lack of support for Hypothesis 3 in our study suggests that the impact of self-consistency in pattern agreement may not be as strong as the joint effects of self-enhancement and self-consistency in overall under/overratings and that self-enhancement is the stronger imperative. Future research is certainly needed to further tease apart these issues.


Though feedback favorability was significantly correlated with follow-up activities, this correlation was smaller than those for other types of engagement, and the more complex polynomial regression analyses did not support the existence of a meaningful effect. Although the failure to find a long-term relationship may simply be a function of the smaller sample size available for analysis, the smaller effect size for follow-up engagement suggests an intriguing direction for future research. DAC programs and other development interventions often hope to catalyze not only immediate engagement but also continued development over the long term. A common criticism of development programs in organizations is that the change they create is often short-lived. Although our results are far from conclusive, they are consistent with the intuitive notion that creating initial engagement in development activities and sustaining involvement over time may be very different things.

The fact that feedback favorability did not consistently predict follow-up engagement does not automatically dismiss its practical value, however. Note that (a) follow-up engagement was moderately correlated with feedback favorability (although to a lesser extent than the other two engagement criteria), and (b) follow-up engagement was also correlated with both feedback and exercise engagement during the program. Consequently, we propose that the effect of feedback on follow-up engagement may be mediated by the initial engagement level during the program (feedback and exercise engagement), which is directly influenced by feedback favorability (DAC ratings). As a post hoc analysis, we ran a preliminary test of this mediation hypothesis by running a series of regressions using DAC assessor ratings, during-DAC engagement (a composite of feedback and exercise engagement), and follow-up engagement. A significant Sobel test (z = 2.89, p < .01) suggested that the effect of assessor feedback favorability on follow-up engagement was fully mediated by the composite variable of feedback and exercise engagement. Certainly, future research should more thoroughly investigate this possibility.
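The Sobel statistic used in this post hoc test divides the indirect effect (the product of the two mediation paths) by its approximate standard error. A generic sketch; the path coefficients and standard errors below are hypothetical, since the study reports only the resulting statistic (2.89):

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test z for an indirect effect a*b: a is the predictor->mediator
    path, b the mediator->outcome path (controlling for the predictor),
    each with its standard error."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical paths: DAC rating -> during-DAC engagement (a),
# during-DAC engagement -> follow-up engagement (b)
z = sobel_z(a=0.35, se_a=0.08, b=0.50, se_b=0.12)  # ≈ 3.02
```

A |z| above 1.96 corresponds to p < .05 for the indirect path, which is the criterion the mediation test applies.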

    Practical Implications

Our findings showed that favorable feedback was related to higher behavioral engagement within the DAC program. However, it is hardly logical to recommend that feedback providers should simply elevate the level of feedback favorability in order to increase participants' engagement. Also, our data cannot completely rule out an alternative explanation: that better performers (i.e., those who receive higher ratings) have reached their level of competence as a result of a tendency to engage in development


program engagement; moreover, this was incorporated as a control variable in our analyses. It seems more useful at this point to suggest that DAC developers focus on ways of delivering assessor feedback to the participants that do not threaten positive self-perceptions, while still conveying the necessary information about areas for improvement. Consistent with this notion, McFarland and Miller (1994) found that people who focus on the positive aspects of feedback were more likely to accept and value the feedback, and were more confident that their performance could be improved.

Improving the accuracy of self-ratings. Our results also suggest that efforts to increase self–assessor agreement may be helpful in increasing behavioral engagement in the program. In our sample, the initial agreement between self-ratings and assessor ratings was negligible (r = .08). This is not surprising because one of the common findings in the multisource feedback literature is that although ratings from different sources tend to correlate with one another somewhat, self-ratings tend to correlate poorly with other ratings (e.g., Atkins & Wood, 2002; Conway & Huffcutt, 1997; Harris & Schaubroeck, 1988). One approach to explaining this phenomenon is that each rater has different opportunities to observe behaviors from a unique perspective, focusing on different facets of job performance, and therefore inconsistencies among different sources of ratings are to be expected (e.g., Borman, 1974; Klimoski & London, 1974; Murphy, Cleveland, & Mohler, 2001).

However, multisource feedback researchers have argued that comparisons between self-ratings and ratings by others (assuming that those others are accurate) indicate the self-rater's degree of self-awareness (e.g., Atwater & Yammarino, 1992; Wohlers & London, 1989), with high agreement indicating high self-awareness (e.g., Atwater et al., 1995; Yammarino & Atwater, 2001). Sosik (2001) defined self-awareness as a self-regulation mechanism that enables one to work cooperatively, seek feedback from others, and adjust one's attitudes and behaviors to adapt to organizational demands and contexts. Individuals who have high self-awareness have been shown to be more successful, effective, and promotable in organizations (e.g., Atwater & Yammarino, 1992; Bass & Yammarino, 1991; Carless, Mann, & Wearing, 1998; Church, 1997; Mabe & West, 1982). Conversely, low self-awareness may lead to disagreement between self- and other ratings, creating negative outcomes such as career derailment (McCall & Lombardo, 1983). Consequently, it may behoove practitioners to implement strategies to increase the self-awareness of employees in an effort to increase the accuracy of their self-ratings (and their agreement with others' ratings). In general, DAC feedback (and some multisource feedback) allows participants to examine their self-perceptions


and compare them with more objective evaluations given by others in a safe environment. Research has shown that it is possible that those with low self-awareness may eventually increase their self-awareness by participating in such intensive feedback interventions multiple times (Rogers, 2005).

Improving the accuracy of assessor ratings. Of course, assessor accuracy should not be ignored, in that feedback accuracy is essential to any development program (Brett & Atwater, 2001). Research has pointed to several strategies by which assessor accuracy and the validity of dimension ratings can be improved (Arthur, Woehr, & Maldegen, 2000; Lievens, 2001; Thornton & Rupp, 2005). These include using across-exercise ratings (which are based on multiple observations of behavior) rather than within-exercise ratings, using a small number of clearly defined behavioral dimensions, ensuring that there is ample opportunity to observe dimension-relevant behavior in exercises (also see Haaland & Christiansen, 2002), and creating a common frame of reference for assessors during the training process. This research also recommends limiting the number of dimensions, ensuring they are distinct from one another, and using assessors with psychology backgrounds (see also Gaugler, Rosenthal, Thornton, & Bentson, 1987). The DAC in which we collected our data was developed specifically to have each of these design characteristics. That is, the DAC used a reasonable number of dimensions that were well defined and distinct from one another. Further, assessors had backgrounds in psychology and human resource management and were trained in understanding the dimensions, their scaling (i.e., frame-of-reference training; Schleicher, Day, Mayes, & Riggio, 2002), and the avoidance of rating errors.

Perceptions of assessor accuracy. In addition to ensuring the actual accuracy of assessor ratings, it is equally critical to make certain that participants perceive assessor ratings to be accurate. If participants do not perceive assessor ratings as generally credible and objective, then any attempt to convince participants to align their ratings with the ratings of the assessors is likely to fail. In the context of performance appraisal, Taylor and colleagues (Taylor, Tracy, Renard, Harrison, & Carroll, 1995) found that employees who were presented with ratings in the context of a due-process (procedurally just) appraisal system were more satisfied with their ratings even though the overall average rating was lower than the average rating for the employees in the control condition. Thus, violations of due process in the presentation of DAC feedback may offer an alternative explanation for why participants might have disengaged from the assessment center over the long term. Unless the participants viewed their ratings as fair and credible,


they might view low ratings not as a reflection of their own performance but as a problem with the assessor or the system as a whole.

The DAC used in this study met all three essential features of the due process system described in Taylor et al. (1995): adequate notice, fair hearing, and judgments based on evidence. Participants had ample opportunity to learn about the dimensions that they would be assessed on before participating in the program. They received behavioral definitions of each dimension and had an opportunity to reflect on their proficiency on each dimension. Feedback sessions were explicitly designed to be highly interactive. Participants were strongly encouraged to share their self-ratings with assessors. Also, assessors encouraged participants to come up with their own action plans based on the feedback they received. Finally, the feedback given by assessors was well based in actual behavior, and detailed behavioral observations taken from each simulation exercise were shared with participants.

To further explore this issue, we looked to our program evaluation data. In order to obtain the most candid and accurate evaluations, this survey was administered at the end of the DAC day in a completely anonymous manner (i.e., disconnected from all other individually identifiable data from the program). We asked participants to indicate, using a 7-point Likert scale (1 = strongly disagree, 4 = neither agree nor disagree, 7 = strongly agree), their agreement with various statements about the quality of the program and the competence of the assessors. Participants generally indicated that they thought the program was well organized (M = 6.02, SD = 1.14) and that they viewed the assessors as qualified and professional (M = 5.96, SD = 1.07), helpful and caring (M = 6.08, SD = 1.02), and knowledgeable about the dimensions (M = 5.96, SD = 1.07). Hence, given that assessor feedback was likely as accurate as possible, due process was followed in the program, and assessor credibility was established, this leads us to conclude that the low correlation between self- and assessor ratings was likely due to low self-awareness among participants. Although the program from which the current data were derived strove to increase the self-awareness of participants (through coaching on the dimensions, multiple experiential simulation exercises, and multiple feedback sessions), perhaps this was not enough, and perhaps interventions specifically focused on enhancing self-awareness would be beneficial. This is certainly a potential avenue to be pursued by future research.

Boundary conditions: A caveat. Before concluding, we must include one caveat here. Our analysis revealed that unfavorable and inconsistent (with self-ratings) feedback led participants to be less engaged in the


program, even when the assessor ratings were generally perceived as credible and objective. Atwater and colleagues (1995) reported results that are somewhat different from these findings. Based on data from U.S. Naval Academy student leaders, these authors found that follower ratings of those leaders who received negative feedback improved after the receipt of this negative feedback, whereas the follower ratings for those who received positive feedback did not change. Although the Atwater et al. study is not entirely compatible with the design of this study, the inconsistency between these two sets of findings has an intriguing implication.

In order for a self–other discrepancy to lead to behavioral self-improvement, substantial effort is required, and situational and motivational factors will likely play an important role in the execution of such an endeavor. Atwater et al.'s (1995) participants were being explicitly trained to be military leaders. It may be said that the military represents an ideal test case for leadership interventions as "leadership and the military are practically inseparable" (Wong, Bliese, & McGurk, 2003, p. 657). The participants in their study were also a select, highly motivated group. The context of this training program was very different from our study, in that our participants were subject to the vagaries attending development in the nonmilitary workforce and in multiple corporate and governmental contexts. In our study, program participation was entirely voluntary, and participants often expressed a lack of organizational support for their development. Although a systematic investigation of organizational support is beyond the scope of our study, it certainly has promise as an important moderator in predicting whether inconsistent feedback for overraters (i.e., receiving assessor feedback that is lower than self-perceptions) leads to more positive outcomes in developmental programs and beyond.

In summary, findings from this study have three major practical implications. First, when DAC feedback is provided, framing the feedback in a positive way can lead to higher engagement in the program. Second, improving self-awareness (agreement between self- and assessor evaluations) is important for engagement in developmental interventions as well as for various organizational outcomes. Finally, it is critical to ensure that assessor ratings are accurate and perceived by participants as fair and credible in order to facilitate the process of increasing self-awareness and development.

    Limitations and Future Research

The ultimate goal of DAC feedback is to catalyze behavioral change over time. Although our study was developed to examine participants


and actual developmental outcomes (i.e., improvement in dimension performance on the job). We attempted to measure engagement beyond the DAC itself through the follow-up interviews, but our small sample and the variance in the amount of detail provided by respondents may have limited our ability to assess engagement accurately, which may in turn have obscured patterns that might otherwise be apparent.

Previous research has shown that assessor characteristics (e.g., educational and professional background, age, gender) may systematically influence feedback favorability and participants' reactions to feedback (e.g., Gaugler et al., 1987; Kudisch, 1997). These factors could have been studied as potential moderators of the relationship between feedback characteristics and participants' behavioral engagement. However, due to the complex nature of our program and the lack of diversity in our assessor group, it was impossible to conduct such an investigation in this DAC. We believe that future investigations will be needed to better clarify these issues.

Further, the time interval between program participation and the follow-up interviews varied substantially across participants (i.e., 1 to 4 months). One might speculate that the effects of the development program would attenuate over time (i.e., the longer the time interval between program participation and the follow-up interview, the less engagement in follow-up activities would be reported). Our post hoc analysis did find a modest, negative correlation (r = −.14) between the time interval (measured in days) and follow-up engagement, although this correlation was not statistically significant (p = .26). Such a nonsignificant finding might have been due to the small sample size and the limited range of follow-up questions in capturing participants' follow-up activities and engagement. This is certainly an important topic and deserves more systematic research. Further, our data were unable to speak to the longer-term results of participants' engagement in development. Future endeavors should expand the scope of this study by tracing participants' longer-term commitment to the development program as well as the outcomes (e.g., performance improvement, promotion) that result.
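The post hoc check above is an ordinary Pearson correlation between elapsed days and an engagement score. A minimal pure-Python sketch of that computation follows; the numbers are invented for illustration and do not reproduce the study's data or its (nonsignificant) result.

```python
# Sketch of a post hoc correlation between the interval since DAC
# participation (in days) and reported follow-up engagement.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: days elapsed and a 1-5 engagement rating per participant.
days_elapsed = [30, 45, 60, 75, 90, 100, 110, 120]
engagement = [4.2, 4.5, 3.8, 4.0, 3.5, 3.9, 3.2, 3.6]

r = pearson_r(days_elapsed, engagement)
print(f"r = {r:.2f}")  # negative here: engagement tends to decline as the interval grows
```

In practice one would also test the coefficient against zero (e.g., a t test with n − 2 degrees of freedom), which is how a modest r can fail to reach significance in a small sample.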

Finally, another limitation of our study was our relatively small sample size for certain types of analyses. However, given the difficulties in obtaining a larger sample of DAC participants (i.e., significant time investment on the part of both researchers/assessors and the participants themselves), we feel this study adds considerable value.

    Conclusion


potential to impact participants and facilitate behavioral change. Like all feedback interventions, however, their effectiveness may be limited by a number of factors, including characteristics of the feedback itself or of the feedback recipient. This study established that self-enhancement and self-consistency effects play a role in participants' responses to DAC feedback, which is more detailed and qualitative in nature than the feedback provided in multisource feedback interventions. Also, a direct link was established between feedback characteristics and participants' behavioral engagement. We argue that behavioral engagement is the most appropriate criterion for DAC research of this kind. We further suggest that it might also be a more appropriate criterion for multisource feedback intervention research than mere perceptions of feedback.

We found that DAC participants are more likely to engage behaviorally in subsequent aspects of a DAC program when they receive favorable feedback and when they receive feedback that is consistent with their own general self-evaluations. Indeed, DAC designers and feedback providers can benefit from an awareness of these effects. Though it would be counterproductive to manipulate feedback to conform to participants' expectations, strategies can be used during the feedback session, as well as throughout the DAC, to mitigate these tendencies and to bring participants into agreement with assessors. The benefit of such a process is of course dependent on the ability of assessors to make accurate ratings and participants' acceptance of such feedback.

    REFERENCES

Alliger GM, Tannenbaum SI, Bennett W Jr, Traver H, Shotland A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341–358.

Anseel F, Lievens F, Levy P. (2007). A self-motives perspective on feedback-seeking behavior: Linking organizational behavior and social psychology research. International Journal of Management Reviews, 9, 211–236.

Arnold J. (2002). Tensions between assessment, grading, and development in development centres: A case study. International Journal of Human Resource Management, 13, 975–991.

Arthur W, Woehr DJ, Maldegen R. (2000). Convergent and discriminant validity of assessment center dimensions: A conceptual and empirical reexamination of the assessment center construct-related validity paradox. Journal of Management, 26, 813–835.

Ashford SJ, Blatt R, VandeWalle D. (2003). Reflections on the looking glass: A review of research on feedback-seeking behavior in organizations. Journal of Management, 29, 773–799.

Atkins PW, Wood RE. (2002). Self- versus others' ratings as predictors of assessment center ratings: Validation evidence for 360-degree feedback programs. Personnel Psychology, 55, 871–904.



Atwater LE, Roush P, Fischthal A. (1995). The influence of upward feedback on self- and follower ratings of leadership. Personnel Psychology, 48, 35–59.

Atwater LE, Waldman D, Ostroff C, Robie C, Johnson KM. (2005). Self–other agreement: Comparing its relationship with performance in the U.S. and Europe. International Journal of Selection and Assessment, 13, 25–40.

Atwater LE, Yammarino FJ. (1992). Does self–other agreement on leadership perceptions moderate the validity of leadership and performance predictions? Personnel Psychology, 45, 141–164.

Ballantyne I, Povah N. (2004). Development centres. In Ballantyne I, Povah N (Eds.), Assessment and development centres (pp. 142–161). Aldershot, Hampshire, UK: Gower.

Bass BM, Yammarino FJ. (1991). Congruence of self and others' leadership ratings of naval officers for understanding successful performance. Applied Psychology: An International Review, 40, 437–454.

Bem DJ, Allen A. (1974). On predicting some of the people some of the time. Psychological Review, 81, 506–520.

Bono JE, Colbert AE. (2005). Understanding responses to multisource feedback: The role of core self-evaluations. Personnel Psychology, 58, 171–203.

Borman WC. (1974). The rating of individuals in organizations: An alternate approach. Organizational Behavior & Human Performance, 12, 105–124.

Brett JF, Atwater LE. (2001). 360° feedback: Accuracy, reactions, and perceptions of usefulness. Journal of Applied Psychology, 86, 930–942.

Button S, Mathieu J, Zajac D. (1996). Goal orientation in organizational research: A conceptual and empirical foundation. Organizational Behavior and Human Decision Processes, 67, 26–48.

Cable DM, Judge TA. (1997). Interviewers' perceptions of person–organization fit and organizational selection decisions. Journal of Applied Psychology, 82, 546–561.

Campbell D, Lee C. (1988). Self-appraisal in performance evaluation: Development versus evaluation. Academy of Management Review, 13, 302–314.

Carless SA, Mann L, Wearing AJ. (1998). Leadership, managerial performance and 360-degree feedback. Applied Psychology: An International Review, 47, 481–496.

Carver CS, Scheier MF. (1982). Control theory: A useful conceptual framework for personality–social, clinical, and health psychology. Psychological Bulletin, 92, 111–135.

Church A. (1997). Managerial self-awareness in high-performing individuals in organizations. Journal of Applied Psychology, 82, 281–292.

Conway JM, Huffcutt AI. (1997). Psychometric properties of multisource performance ratings. Human Performance, 10, 331–360.

Dabos GE, Rousseau DM. (2004). Mutuality and reciprocity in the psychological contracts of employees and employers. Journal of Applied Psychology, 89, 52–72.

Dreher GF, Sackett P. (1983). Perspectives on employee staffing and development: Readings and commentary. Homewood, IL: Irwin.

Edwards JR. (1994). The study of congruence in organizational behavior research: Critique and a proposed alternative. Organizational Behavior and Human Decision Processes, 58, 51–100.

Edwards JR, Parry ME. (1993). On the use of polynomial regression equations as an alternative to difference scores in organizational research. Academy of Management Journal, 36, 1577–1613.

Festinger L. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson.


Goodge P. (1991). Development centres: Guidelines for decision makers. Journal of Management Development, 10, 4–12.

Haaland S, Christiansen ND. (2002). Implications of trait-activation theory for evaluating the construct validity of assessment center ratings. Personnel Psychology, 55, 137–163.

Harris MH, Schaubroeck J. (1988). A meta-analysis of self-supervisor, self-peer, and peer-supervisor ratings. Personnel Psychology, 41, 43–62.

Ilies R, Judge TA. (2005). Goal regulation across time: The effects of feedback and affect. Journal of Applied Psychology, 90, 453–467.

Ilgen DR, Fisher CD, Taylor MS. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64, 349–371.

Klimoski RJ, London M. (1974). Role of the rater in performance appraisal. Journal of Applied Psychology, 59, 445–451.

Korman AK. (1976). Hypothesis of work behavior revisited and an extension. Academy of Management Review, 1, 50–63.

Kudisch J. (1997). Factors related to participants' acceptance of developmental assessment center feedback. Dissertation Abstracts International, 58(6-B), 3349 (UMI No. AAT9735329).

Kudisch J, Lundquist C, Smith AR. (2002, September). Reactions to dual-purpose assessment center feedback: What does it take to get participants to buy into and actually do something with their feedback? Presentation at the 29th International Congress on Assessment Center Methods, Frankfurt, Germany.

Latham C, Marchbank T. (1994). Feedback techniques. In Lee G, Bear D (Eds.), Development centers (pp. 156–179). New York: McGraw-Hill.

Lievens F. (2001). Assessor training strategies and their effects on accuracy, interrater reliability, and discriminant validity. Journal of Applied Psychology, 86, 255–264.

London M, Smither JW. (1995). Can multisource feedback change perceptions of goal accomplishment, self-evaluations, and performance-related outcomes? Theory-based applications and directions for research. Personnel Psychology, 48, 803–839.

London M, Wohlers AJ. (1991). Agreement between subordinate and self-ratings in upward feedback. Personnel Psychology, 44, 375–390.

Mabe P, West J. (1982). Validity of self-evaluation of ability: A review and meta-analysis. Journal of Applied Psychology, 67, 280–296.

Maurer T, Weiss EM, Barbeite FG. (2003). A model of involvement in work-related learning and development activity: The effects of individual, situational, motivational, and age variables. Journal of Applied Psychology, 88, 707–724.

McCall MW, Lombardo MM. (1983). Off the track: Why and how successful executives get derailed. Greensboro, NC: Center for Creative Leadership.

McFarland C, Miller DT. (1994). The framing of relative performance feedback: Seeing the glass as half empty or half full. Journal of Personality & Social Psychology, 66, 1061–1073.

Murphy KR, Cleveland JN, Mohler CJ. (2001). Reliability, validity, and meaningfulness of multisource ratings. In Bracken DW, Timmreck CW, Church AH (Eds.), Handbook of multisource feedback (pp. 130–148). San Francisco: Jossey-Bass.

O'Reilly CA, Chatman J, Caldwell DE. (1991). People and organizational culture: A profile comparison approach to assessing person–organization fit. Academy of Management Journal, 34, 487–516.

Podsakoff PM, Organ DW. (1986). Self-reports in organizational research: Problems and prospects. Journal of Management, 12, 531–544.


Rogers DA. (2005, April). Alpha, beta, and gamma change on assessees' understanding of DAC dimensions. Paper presented at the 20th Annual Conference of the Society for Industrial and Organizational Psychology, Los Angeles, CA.

Ryan A, Brutus S, Greguras G, Hakel M. (2000). Receptivity to assessment-based feedback for management development. Journal of Management Development, 19, 252–276.

Schleicher DJ, Day DV, Mayes BT, Riggio RE. (2002). A new frame for frame-of-reference training: Enhancing the construct validity of assessment centers. Journal of Applied Psychology, 87, 735–746.

Schrauger JS. (1975). Responses to evaluation as a function of initial self-perceptions. Psychological Bulletin, 82, 581–596.

Smither JW, London M, Reilly RR. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58, 33–66.

Sosik JJ. (2001). Self–other agreement on charismatic leadership: Relationships with work attitudes and managerial performance. Group & Organization Management, 26, 484–511.

Taylor S, Tracy K, Renard M, Harrison J, Carroll S. (1995). Due process in performance appraisal: A quasi-experiment in procedural justice. Administrative Science Quarterly, 40, 495–523.

Thornton GC, Rupp DE. (2005). Assessment centers in human resource management: Strategies for prediction, diagnosis, and development. Mahwah, NJ: Erlbaum.

Von Eye A, Mun EY. (2005). Analyzing rater agreement: Manifest variable methods. Mahwah, NJ: Erlbaum.

Walter M. (2004). Approach to development experiences questionnaire: Reliability and validity evidence. Unpublished manuscript.

Walter M, Thornton GC III. (2004). Measuring readiness to develop in a developmental assessment center. Unpublished manuscript.

Wohlers AJ, London M. (1989). Ratings of managerial characteristics: Evaluation difficulty, co-worker agreement, and self-awareness. Personnel Psychology, 42, 235–261.

Wong L, Bliese P, McGurk D. (2003). Military leadership: A context specific review. Leadership Quarterly, 14, 657–692.

Yammarino FJ, Atwater LE. (2001). Understanding agreement in multisource feedback. In Bracken DW, Timmreck CW, Church AH (Eds.), Handbook of multisource feedback (pp. 205–220). San Francisco: Jossey-Bass.

    APPENDIX A

Dimensions, Definitions, and Subdimensions

Information seeking. Actively seeks information from multiple sources, identifies and finds relevant and essential information needed to solve a problem, organizes data into meaningful patterns, gathers data, effectively analyzes, and uses data and information.

Use of multiple sources: Gets information from multiple sources.

Situational relevance: Finds all relevant information for the situation.


Problem solving. After gathering pertinent information, identifies problems and uses analysis to perceive logical relationships among problems or issues; develops and evaluates courses of action to determine costs and benefits of each; makes timely and logical decisions; and evaluates the outcomes of a problem solution.

Problem understanding: Identifies problems and perceives logical relationships among problems or issues.

Thinking solutions through: Develops courses of action to determine costs and benefits of each and evaluates the outcomes of a problem solution.

    Decisiveness: Makes timely and logical decisions.

Planning and organizing. Effectively schedules own work and time by handling multiple demands; establishes a system for monitoring tasks, activities, or responsibilities of self or others to assure accomplishment of specific objectives; determines priorities and allocates time and resources effectively by recognizing time limitations; makes effective short- and long-term plans; and handles administrative detail.

    Goal setting: Makes short- and long-term goals.

Allocation of time and resources: Determines priorities and allocates time and resources by recognizing time limitations.

Monitoring and conducting planned activities: Systematically monitors tasks and activities of self and/or others to assure accomplishment of specific objectives.

Conflict management. Recognizes and openly addresses conflict appropriately and arrives at constructive solutions while maintaining positive working relationships.

Effective strategies: Possesses an effective strategy for dealing with conflict.

Handling conflict: Recognizes and openly addresses conflict appropriately.

Constructive solutions: Arrives at constructive solutions while maintaining positive working relationships.

Leadership. Guides, directs, and motivates subordinates toward important and challenging work in line with their interests and abilities as well as the needs of the organization; gives regular, specific, and constructive feedback to subordinates in relation to their personal goals; commands attention and respect; and promotes positive change.


Guidance of others: Guides, directs, and motivates others using regular, specific, and constructive feedback.

Balance of needs: Balances the interests, abilities, goals, and priorities of self and others with the needs of the organization.

Personal and organizational effectiveness: Commands attention and respect, promotes positive change.

Oral communication. Expresses thoughts verbally and nonverbally in a clear, concise, and straightforward manner that is appropriate for the target audience, whether in a group or individual situation.

Verbal/nonverbal expression: Speaks (both verbally and nonverbally) with clarity in message, pitch, volume, and gesture.

Message clarity: Conveys a message that is straightforward and concise.

Appropriate communication style: Matches communication style with audience.

    APPENDIX B

    List of Unpublished Scale Items

Attitude toward developmental experiences

I seek feedback about my performance in training programs.
I participate in training programs even if they are not required.
I get upset when someone suggests how I could do things differently.
I take advantage of opportunities to better myself.
I look forward to new challenges.
I ask others to suggest ways I can improve myself.
I seldom try new ways to do things.
I am aware of my development needs.
I set highly competitive goals for myself.
I don't go to presentations or programs that help me improve.
I search for new ways to develop myself.
I regularly evaluate my development goals.
I read the latest materials in my field.
I actively search for ways to advance myself.
I take advantage of opportunities to improve my job-related skills.
I get angered when someone comments on my job performance.
I am interested in improving my job-related skills.


Feedback engagement (readiness-to-develop scale; Walter & Thornton, 2004)

What was the participant's first reaction to your initial feedback?
How many questions did the participant ask during the feedback session?
Which statement best describes the participant's level of interest in receiving feedback?
How engaged in the discussion was the participant throughout the feedback session?
How aware of his or her development needs was the participant?
To what extent did the participant express interest in his/her personal growth?
How do you predict this participant will act in the future with regard to this program and his or her development?
In summary, what level of readiness to develop did this participant demonstrate?
