
Measuring Violence-Related Attitudes, Behaviors, and Influences Among Youths: A Compendium of Assessment Tools
Second Edition



This compendium of assessment tools is a publication of the National Center for Injury Prevention and Control of the Centers for Disease Control and Prevention.

Centers for Disease Control and Prevention
Julie L. Gerberding, MD, MPH, Director

National Center for Injury Prevention and Control
Ileana Arias, PhD, Acting Director

Division of Violence Prevention
W. Rodney Hammond, PhD, Director

Graphic Design and Layout: Jeffrey C. Justice

Cover Design: Jeffrey C. Justice

Cover Photography: Kid’s World, James Carroll—Artville, LLC, 1997

Suggested Citation: Dahlberg LL, Toal SB, Swahn M, Behrens CB. Measuring Violence-Related Attitudes, Behaviors, and Influences Among Youths: A Compendium of Assessment Tools, 2nd ed. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control; 2005.


Measuring Violence-Related Attitudes, Behaviors, and Influences Among Youths: A Compendium of Assessment Tools
Second Edition

Compiled and Edited by

Linda L. Dahlberg, PhD
Susan B. Toal, MPH
Monica H. Swahn, PhD
Christopher B. Behrens, MD

Division of Violence Prevention

National Center for Injury Prevention and Control

Centers for Disease Control and Prevention

Atlanta, Georgia

2005


Acknowledgments

In 1992 and 1993, the Centers for Disease Control and Prevention funded 15 evaluation projects whose primary goal was to identify interventions that change violence-related attitudes, beliefs, and behaviors among children and youths. The investigators and program staff from these projects made invaluable contributions to the field of violence prevention and were instrumental to the development of the first edition of this compendium, published in 1998. Since that time, additional studies have been completed that further enrich our ability to evaluate outcomes of violence prevention efforts. A number of longitudinal studies conducted over the last two decades have also greatly enhanced our understanding of the factors that increase and decrease the risk for youth violence. We wish to acknowledge and thank all individuals who have contributed measures to this compendium and who have helped to move the field of violence prevention forward.


J. Lawrence Aber, Michael W. Arthur, Henry (Hank) Atha, Kris Bosworth, Richard Catalano, John Coie, Edward DeVos, Kenneth Dodge, Dennis D. Embry, Leonard Eron, Dorothy Espelage, Albert D. Farrell, David P. Farrington, Daniel J. Flannery, Robert L. Flewelling, Vangie A. Foshee, Roy M. Gabriel, Deborah Gorman-Smith, Nancy G. Guerra, Marshall Haskins, J. David Hawkins, David Henry, Tony Hopson, Arthur (Andy) M. Horne, Cynthia Hudley, L. Rowell Huesmann, Kenneth W. Jackson, Russell H. Jackson, Steven H. Kelder, Marvin D. Krohn, Molly Laird,

Gerry Landsberg, Jennifer Lansford, Linda Lausell-Bryant, Fletcher Linder, Alan J. Lizotte, Rolf Loeber, Christopher Maxwell, Aleta L. Meyer, Helen Nadel, Pamela Orpinas, Mallie J. Paschall, Pam K. Porter, David L. Rabiner, Christopher L. Ringwalt, Tom Roderick, Faith Samples, Robert J. Sampson, Michael Schoeny, John Slavik, Mark Spellmann, Carolyn A. Smith, David A. Stone, Magda Stouthamer-Loeber, Terence P. Thornberry, Patrick Tolan, Rick VanAcker, Welmoet B. Van Kammen, Alexander T. Vazsonyi, William H. Wiist


Contents

Acknowledgments
How To Use This Compendium
  How This Compendium Is Organized
  Choosing the Right Instrument
Introduction
  Why Outcome Evaluations Are So Important
  Components of Comprehensive Evaluations
  Ten Steps for Conducting Outcome Evaluations
  Future Considerations
Section I: Attitude and Belief Assessments
Section II: Psychosocial and Cognitive Assessments
Section III: Behavior Assessments
Section IV: Environmental Assessments
Index


How To Use This Compendium

This compendium provides researchers and prevention specialists with a set of tools to assess violence-related beliefs, behaviors, and influences, as well as to evaluate programs to prevent youth violence. If you are new to the field of youth violence prevention and unfamiliar with available measures, you may find this compendium to be particularly useful. If you are an experienced researcher, this compendium may serve as a resource to identify additional measures to assess the factors associated with violence among youths.

Although this compendium contains more than 170 measures, it is not an exhaustive listing of available measures. A few of the more widely used measures to assess aggression in children, for example, are copyrighted and could not be included here. Other measures being used in the field, but not known to the authors, are also not included. Many of the measures included in the first edition of the compendium focused on individual violence-related attitudes, beliefs, and behaviors. These types of measures are included in this edition as well and may be particularly useful if you are evaluating a school-based curriculum or a community-based program designed to reduce violence among youths. Several measures to assess peer, family, and community influences have been added to the compendium. Many of these measures are from the major longitudinal and prevention research studies of youth violence being conducted in the United States.

Most of the measures in this compendium are intended for use with youths between the ages of 11 and 24 years, to assess such factors as serious violent and delinquent behavior, conflict resolution strategies, social and emotional competencies, peer influences, parental monitoring and supervision, family relationships, exposure to violence, collective efficacy, and neighborhood characteristics. The compendium also contains a number of scales and assessments developed for use with children between the ages of 5 and 10 years, to measure factors such as aggressive fantasies, beliefs supportive of aggression, attributional biases, prosocial behavior, and aggressive behavior. When parent and teacher versions of assessments are available, they are included as well.

How This Compendium Is Organized

The Introduction provides information about why outcome evaluations are so important and includes some guidance on how to conduct such evaluations. Following the Introduction, you will find four sections, each focusing on a different category of assessments. Each section contains the following components:

• Description of Measures. This table summarizes key information about all of the assessments included in the section. Each assessment is given an alphanumeric identifier (e.g., A1, A2, A3) that is used repeatedly throughout the section, to guide you through the array of assessments provided. The table identifies the constructs being measured (appearing in alphabetical order down the left-hand column), provides details about the characteristics of the scale or assessment, identifies target groups that the assessment has been tested with, provides reliability and validity information where known, and identifies the persons responsible for developing the scale or assessment. When reviewing the Target Group information, keep in mind that we have included only those target groups we know of and that the reliability information pertains specifically to these groups and may not apply to other groups. When reviewing the Reliability/Validity information, you will notice that several measures are highly reliable (e.g., internal consistency > .80) whereas others are minimally reliable (e.g., internal consistency < .60). We included measures with minimal reliability because the reliability information is based, in some cases, on only one target group from one study; these measures may be more appropriate for a different target group. We also included measures with limited reliability with the hope that researchers will try to improve and refine them. Evidence of validity is available for only a few of the measures included in this compendium.

• Scales and Assessments. The items that make up each assessment are provided, along with response categories and some guidance to assist you with scoring and analysis. In the few instances where scales have been adapted, the most recent (modified) version is presented. We also have provided information on how to obtain permission to use copyrighted materials. In most cases, we have presented individual scales rather than the complete instruments, because instruments generally are composed of several scales. This approach increases the likelihood that the scales’ test properties will be altered. Nonetheless, we did this because the field has produced few standardized instruments with established population norms for a range of target audiences.

• References. This list includes citations for published and unpublished materials pertaining to original developments as well as any recent adaptations, modifications, or validations. In the few instances where scales have been adapted, references for the most recent (modified) version are provided. To obtain information about the original versions, please contact the developers and refer to any relevant references cited.

Choosing the Right Instrument

Developing instruments that are highly reliable, valid, and free of any bias is not always possible. Carefully choose among the measures included in this document. The criteria below may assist you in making this selection. As with any research effort, consider conducting a pilot test to minimize problems and to refine the instrument.


General Rating Criteria for Evaluating Scales

Inter-item correlation
  Exemplary: average of .30 or better
  Extensive: average of .20 to .29
  Moderate: average of .10 to .19
  Minimal: average below .10

Alpha-coefficient
  Exemplary: .80 or better
  Extensive: .70 to .79
  Moderate: .60 to .69
  Minimal: below .60

Test-retest reliability
  Exemplary: scores correlate more than .50 across a period of at least 1 year
  Extensive: scores correlate more than .40 across a period of 3-12 months
  Moderate: scores correlate more than .30 across a period of 1-3 months
  Minimal: scores correlate more than .20 across less than a 1-month period

Convergent validity
  Exemplary: highly significant correlations with more than two related measures
  Extensive: significant correlations with more than two related measures
  Moderate: significant correlations with two related measures
  Minimal: significant correlations with one related measure

Discriminant validity
  Exemplary: significantly different from four or more unrelated measures
  Extensive: significantly different from two or three unrelated measures
  Moderate: significantly different from one unrelated measure
  Minimal: different from one correlated measure

Source: Robinson JP, Shaver PR, Wrightsman LS. Measures of Personality and Social Psychological Attitudes. San Diego, CA: Academic Press, Inc., 1991.
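For readers who want to apply these thresholds programmatically, the alpha-coefficient row of the criteria above can be encoded as a small helper. This is an illustrative sketch, not part of the compendium; the function name `rate_alpha` is our own.

```python
def rate_alpha(alpha: float) -> str:
    """Rate a scale's internal consistency (Cronbach's alpha)
    against the Robinson, Shaver, and Wrightsman (1991) criteria."""
    if alpha >= 0.80:
        return "Exemplary"
    if alpha >= 0.70:
        return "Extensive"
    if alpha >= 0.60:
        return "Moderate"
    return "Minimal"

# A scale with alpha = .74 rates as "Extensive"
print(rate_alpha(0.74))
```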


Introduction

Youth violence is a serious global public health problem.[1] Despite a decline in homicide rates across the United States during the 1990s,[2] homicide rates are again rising and continue to claim the lives of many young people. The human and economic toll of violence on young people, their families, and society is high. Homicide is the second leading cause of death for persons 15-24 years of age and has been the leading cause of death for African-Americans in this age group for over a decade.[2] The economic cost associated with violence-related illness, disability, and premature death is estimated to be in the billions of dollars each year.[1]

Researchers and prevention specialists are under pressure to identify the factors that place young people at risk for violence, to find out which interventions are working, and to design more effective prevention programs. Across the country, primary prevention efforts involving families, schools, neighborhoods, and communities appear to be essential to stemming the tide of violence, and many promising and effective programs have been identified.[3-6] Identifying effective programs rests, in part, on the availability of reliable and valid measures to assess change in violence-related attitudes, beliefs, behaviors, and other influences. Monitoring and documenting proven strategies will go a long way toward reducing youth violence and creating peaceful, healthier communities.

Why Outcome Evaluations Are So Important

In their desire to be responsive to constituents’ concerns about violence, schools and communities often are so involved with prevention activities that they rarely make outcome evaluations a priority. Such evaluations, however, are necessary if we want to know what works in preventing aggression and violence. In the area of youth violence, it is not enough to simply examine how a program is being implemented or delivered, or to provide testimonials about the success of an intervention or program. Programs must be able to show measurable change in behavioral patterns or change in some of the mediating or moderating factors associated with aggression and violence. To demonstrate these changes or to show that a program made a difference, researchers and prevention specialists must conduct an outcome evaluation.

Components of Comprehensive Evaluations

Evaluation is a dynamic process. It is useful for developing, modifying, and redesigning programs; monitoring the delivery of program components to participants; and assessing program outcomes. Each of these activities represents a type of evaluation. Together, these activities compose the key components of a comprehensive evaluation.

• Formative Evaluation activities are those undertaken during the design and pretesting of programs.[7] Such activities are useful if you want to develop a program or pilot test all or part of an intervention program prior to implementing it routinely. You can also use formative evaluation to structure or tailor an intervention to a particular target group, or use it to help you anticipate possible problems and identify ways to overcome them.

• Process Evaluation activities are those undertaken to monitor program implementation and coverage.[7] Such activities are useful if you want to assess whether the program is being delivered in a manner consistent with program objectives; for determining dose, or the extent to which your target population participates in the program; and for determining whether the delivery of the program has been uniform or variable across participants. Process or monitoring data can provide you with important information for improving programs and are also critical for later program diffusion and replication.

• Outcome Evaluation activities are those undertaken to assess the impact of a program or intervention on participants.[7] Such activities are useful if you want to determine if the program achieved its objectives or intended effects—in other words, if the program worked. Outcome evaluations can also help you decide whether a program should be continued, implemented on a wider scale, or replicated in other sites.

Ten Steps for Conducting Outcome Evaluations

Outcome evaluations are not simple to conduct and require a considerable amount of resources and expertise. If you are interested in conducting an outcome evaluation, you will need to incorporate both formative and process evaluation activities and take the following steps:

• Clearly define the problem being addressed by your program.
• Specify the outcomes your program is designed to achieve.
• Specify the research questions you want the evaluation to answer.
• Select an appropriate evaluation design and carefully consider sample selection, size, and equivalency between groups.
• Select reliable and valid measures to assess changes in program outcomes.
• Address issues related to human subjects, such as informed consent and confidentiality.
• Collect relevant process, outcome, and record data.
• Analyze and interpret the data.
• Disseminate your findings, using an effective format and reaching the right audience.
• Anticipate and prepare for obstacles.

Define the problem. What problem is your program trying to address? Who is the target population? What are the key risk factors to be addressed? Youth violence is a complex problem with many causes. Begin by focusing on a specific target group and defining the key risk factors your program is expected to address within this group. Draw evidence from the research literature showing the potential benefit of addressing the identified risk factors. Given the complexity of the problem of youth violence, no program by itself can reasonably be expected to change the larger problem.

Specify the outcomes. What outcome is your program trying to achieve? For example, are you trying to reduce aggression, improve parenting skills, or increase awareness of violence in the community? Determine which outcomes are desired and ensure that the desired outcomes match your program objectives. A program designed to improve conflict resolution skills among youths is not likely to lead to an increased awareness of violence in the community. Likewise, a program designed to improve parenting skills probably will not change the interactions of peer groups from negative to prosocial. When specifying outcomes, make sure you indicate both the nature and the level of desired change. Is your program expected to increase awareness or skills? Do you expect your program to decrease negative behaviors and increase prosocial behaviors? What level of change can you reasonably expect to achieve? If possible, use evidence from the literature for similar programs and target groups to help you determine reasonable expectations of change.

Specify the questions to be answered. Research questions are useful for guiding the evaluation. When conducting an outcome evaluation of a youth violence prevention program, you may want to determine the answers to three questions: Has the program reduced aggressive or violent behavior among participants? Has the program reduced some of the intermediate outcomes or mediating factors associated with violence? Has the program been equally effective for all participants, or has it worked better for some participants than for others? If multiple components of a program are being evaluated, then you also may want to ask: Have all components of the program been equally effective in achieving desired outcomes, or has one component been more effective than another?

Select an appropriate evaluation design. Choose an evaluation design that addresses your evaluation questions. Your choice in design will determine the inferences you can make about your program’s effects on participants and the effectiveness of the evaluation’s various components. Evaluation designs range from simple one-group pretest/posttest comparisons to nonequivalent control/comparison group designs to complex multifactorial designs. Learn about the various designs used in evaluation research and know their strengths and weaknesses.

Special consideration should be given to sample selection, size, and equivalency between groups as part of your evaluation plan. Outcome evaluations are, by definition, comparative. Determining the impact of a program requires comparing persons who have participated in a program with equivalent persons who have experienced no program or an alternative program.[7] The manner in which participants are selected is important for the interpretation and generalizability of the results. Sample size is important for detecting group differences. When estimating the sample size, ensure the sample is large enough to detect group differences, and anticipate a certain level of attrition, which will vary depending on the length of the program and the evaluation. Before the program is implemented, make sure that the treatment and control/comparison groups are similar in terms of demographic characteristics and outcome measures of interest. Establishing equivalency at baseline is important because it helps you attribute change to the program rather than to an extraneous factor.
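As one hedged illustration of the sample-size point (a sketch, not a method prescribed by the compendium), the standard normal-approximation formula for comparing two group means can be computed in a few lines; the function name `n_per_group` and its defaults are our own.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a standardized
    mean difference (Cohen's d) between two equal-sized groups,
    using n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided critical value
    z_beta = z(power)           # quantile for desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 participants per group
# before accounting for attrition; add headroom for expected dropout.
print(n_per_group(0.5))
```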

Choose reliable and valid measures to assess program outcomes. Selecting appropriate measurement instruments—ones that you know how to administer and that will produce findings that you will be able to analyze and interpret—is an important step in any research effort. When selecting measures and developing instruments, consider the developmental and cultural appropriateness of the measure as well as the reading level, native language, and attention span of respondents. Make sure that the response burden is not too great, because you want respondents to be able to complete the assessment with ease. Questions or items that are difficult to comprehend or offensive to participants will lead to guessing or non-responses. Subjects with a short attention span or an inability to concentrate will have difficulty completing a lengthy questionnaire.

Also consider the reliability and validity of the instrument. Reliable measures are those that have stability and consistency. The higher the correlation coefficient (i.e., the closer it is to 1.00), the better the reliability. A measure that is highly reliable may not be valid. An instrument is considered valid if it measures what it is intended to measure. Evidence of validity, according to most measurement specialists, is the most important consideration in judging the adequacy of measurement instruments.
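To make the internal-consistency idea concrete, here is a minimal sketch of Cronbach's alpha in plain Python. The toy data and function name are illustrative assumptions, not drawn from any measure in the compendium.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score lists, each
    holding one item's scores across the same n respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Toy data: 3 items rated by 5 respondents (hypothetical scores)
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
]
print(round(cronbach_alpha(items), 2))
```

By the criteria table above, an alpha of .80 or better would rate as exemplary internal consistency.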

Address issues related to human subjects. Before data collection begins, take steps to ensure that participants understand the nature of their involvement in the project and any potential risks associated with participation. Obtaining informed consent is necessary to protect participants and researchers. Obtaining permission from participants eliminates the possibility that individuals will unknowingly serve as subjects in an evaluation. You may choose to use active informed consent, in which case you would obtain a written statement from each participant indicating their willingness to participate in the project. In some cases, you may decide to use passive informed consent, in which case you would ask individuals to return permission forms only if they are not willing to participate in the project. Become familiar with the advantages and disadvantages of both approaches. Once you have secured informed consent, you also must take steps to ensure participants’ anonymity and confidentiality during data collection, management, and analysis.

Collect relevant data. Various types of data can be collected to assess your program’s effects. The outcome battery may be used to assess attitudinal, psychosocial, or behavioral changes associated with participation in an intervention or program. Administering an outcome battery alone, however, will not allow you to draw conclusions about the effectiveness of your program. You also must collect process data (i.e., information about the materials and activities of the intervention or program). For example, if a curriculum is being implemented, you may want to track the number of sessions offered to participants and the number of sessions attended by participants, as well as monitor the extent to which program objectives were covered and the manner in which information was delivered. Process data allow you to determine how well a particular intervention is being implemented, as well as to interpret outcome findings. Interventions that are poorly delivered or implemented are not likely to have an effect on participants.

In addition to collecting data from participants, you may want to obtain data from parents, teachers, other program officials, or records. Multiple sources of data are useful for determining your program’s effects and strengthening assertions that the program worked. The use of multiple sources of data, however, also presents a challenge if conflicting information is obtained. Data from records (i.e., hospital, school, or police reports), for example, are usually collected for purposes other than the evaluation. Thus, they are subject to variable record-keeping procedures that, in turn, may produce inconsistencies in the data. Take advantage of multiple data sources, but keep in mind that these sources have limitations.

Analyze and interpret the data. You can use both descriptive and inferential statistical techniques to analyze evaluation data. Use descriptive analyses to tabulate, average, or summarize results. Such analyses would be useful, for example, if you want to indicate the percentage of students in the treatment and comparison groups who engaged in physical fighting in the previous 30 days or the percentage of students who reported carrying a weapon for self-defense. You also could use descriptive analyses to compute gain scores or change scores in knowledge or attitudes by subtracting the score on the pretest from the score on the posttest. You could extend the descriptive analyses to examine the relationship between variables by using cross-tabulations or correlations. For example, you might want to determine what percentage of students with beliefs supportive of violence also report engaging in physical fights.
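The descriptive computations above (gain scores and a simple cross-tabulation) are straightforward to sketch directly; all of the data below are invented for illustration.

```python
# Hypothetical pretest/posttest attitude scores for five students
pretest  = [12, 15, 9, 14, 11]
posttest = [10, 12, 9, 11, 10]

# Gain (change) scores: posttest minus pretest for each student
gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)

# Cross-tabulation: of students with beliefs supportive of violence,
# what percentage also reported physical fighting? (hypothetical flags)
supportive = [True, True, False, True, False]
fought     = [True, False, False, True, False]
both = sum(s and f for s, f in zip(supportive, fought))
pct = 100 * both / sum(supportive)
```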

Inferential analyses are more difficult to conduct than descriptive analyses, but they yield more information about program effects. For example, you could use an inferential analysis to show whether differences in outcomes between treatment and comparison groups are statistically significant or whether the differences are likely due to chance. Knowing the change scores of the treatment or comparison groups is not as useful as knowing if the change scores are statistically different. With inferential statistical techniques, evaluators can also take into account (i.e., statistically control for or hold constant) background characteristics or other factors (e.g., attrition, program dose, pretest score) between the treatment and comparison groups when assessing changes in behavior or other program outcomes. Regardless of the statistical technique you use, always keep in mind that statistical significance does not always equate with practical, meaningful significance. Use caution and common sense when interpreting results.

Many statistical techniques used by researchers to assess program effects (e.g., analysis of variance or covariance, structural equation modeling, or hierarchical linear modeling) require a considerable amount of knowledge in statistics and measurement. You should have a good understanding of statistics and choose techniques that are appropriate for the evaluation design, research questions, and available data sources.
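As one hedged illustration of an inferential comparison (a sketch under our own assumptions, not the compendium's prescribed method), a two-group t statistic can be computed with the standard library; packages such as scipy (`scipy.stats.ttest_ind` with `equal_var=False`) additionally return the p-value.

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for the difference between two group means
    (does not assume equal variances). Compare |t| against a critical
    value (~2.0 for moderate samples at the .05 level), or use
    scipy.stats.ttest_ind(equal_var=False) for an exact p-value."""
    ma, mb = mean(group_a), mean(group_b)
    se2 = variance(group_a) / len(group_a) + variance(group_b) / len(group_b)
    return (ma - mb) / se2 ** 0.5

# Hypothetical change scores: treatment vs. comparison group
treatment  = [-3, -2, -4, -1, -3, -2]
comparison = [0, -1, 1, 0, -1, 1]
t = welch_t(treatment, comparison)
```

Here |t| well above 2 would suggest the groups' change scores differ by more than chance alone would explain, subject to the design caveats above.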

Disseminate your findings. This is one of the most important steps in the evaluation process. You must always keep program officials abreast of the evaluation findings, because such information is vitally important for improving intervention programs or services. Also communicate your findings to research and prevention specialists working in the field. Keep in mind that the traditional avenues for disseminating information, such as journal articles, are known and accessible to researchers but not always to prevention specialists working in community-based organizations or schools.

When preparing reports, be sure to present the results in a manner that is understandable to the target audience. School, community, and policy officials are not likely to understand complex statistical presentations. Reports should be brief and written with clarity and objectivity. They should summarize the program, evaluation methods, key findings, limitations, conclusions, and recommendations.

Anticipate obstacles. Evaluation studies rarely proceed as planned. Be prepared to encounter a number of obstacles—some related to resources and project staffing and others related to the field investigation itself (e.g., tension between scientific and programmatic interests, enrollment of control groups, subject mobility, analytic complexities, and unforeseeable and disruptive external events).[8] Multiple collaborating organizations with competing interests may result in struggles over resources, goals, and strategies that are likely to complicate evaluation efforts. Tension also may exist between scientists, who must rigorously document intervention activities, and program staff, who must be flexible in providing services or implementing intervention activities. During the planning phases of the evaluation, scientific and program staffers must have clear communication and consensus about the evaluation goals and objectives, and throughout the evaluation, they must have mechanisms to maintain this open communication.

Future Considerations

The field of violence prevention needs reliable, valid measurement tools in the quest to determine the effectiveness of interventions. In past years, researchers in violence prevention have looked to the literature for established measures and have modified them accordingly to assess violence-related attitudes and behaviors. These adaptations have sometimes yielded satisfactory results, but in other cases, the measures have not yet proven to be very reliable. Researchers have also tried to develop new measures to gauge skill and behavior changes resulting from violence prevention interventions. Many of these measures also require further refinement and validation.

To ensure that the instruments we use are culturally appropriate, we must involve a wide range of target groups. Violence cuts across all racial and ethnic groups and is especially prevalent among African-American and Hispanic youths. Some of the more standardized instruments that have been adapted for use in violence prevention efforts, however, were not developed specifically for use with minority populations. Thus, the items they contain may not be culturally or linguistically appropriate for minority populations.

One final problem we must continue to address is the lack of time-framed measures that can be used for evaluation research. To assess the effectiveness of an intervention, we must be able to assess how a particular construct (e.g., attitudes toward violence or aggressive behavior) changes from one point in time to another following an intervention. Instruments that instruct respondents to indicate “usual behavior,” or to “describe or characterize the behavior of a child or teenager,” are not likely to measure behavior change precisely. Instruments that instruct respondents to consider behavior “now or in the last six months” are likewise not precise enough to measure behavior change.
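The pre/post logic behind time-framed measurement can be sketched in a few lines. This is a minimal illustration only: the scale, score values, and function name below are invented for the example and do not correspond to any instrument in this compendium.

```python
def mean_change(baseline_scores, followup_scores):
    """Mean within-person change on a time-framed measure (follow-up minus baseline)."""
    if len(baseline_scores) != len(followup_scores):
        raise ValueError("Each participant needs both a baseline and a follow-up score.")
    diffs = [post - pre for pre, post in zip(baseline_scores, followup_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical attitude-toward-violence scores (lower = less accepting of violence),
# collected once before and once after an intervention for the same five participants.
baseline = [14, 12, 15, 10, 13]
followup = [11, 12, 13, 9, 12]
print(mean_change(baseline, followup))  # a negative mean indicates a decline
```

The point of the sketch is simply that change can only be computed when the same construct is measured with the same time frame at two defined points; an instrument anchored to “usual behavior” gives no baseline or follow-up value to difference.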

Much progress has been made over the last decade in understanding the factors that place young people at risk for violence and in identifying promising and effective approaches to reduce youth violence. Still, more work remains to be done. New tools must be developed and existing tools need to be improved. More importantly, researchers and prevention specialists dedicated to the prevention of youth violence must have access to the many measurement tools that have been developed. We hope that increased use of and experience with these measures will help to validate them and will expand our knowledge about effective strategies to prevent youth violence.


References

1. Krug EG, Dahlberg LL, Mercy JA, Zwi AB, Lozano R (eds.). World report on violence and health. Geneva, Switzerland: World Health Organization, 2002.

2. Centers for Disease Control and Prevention. Web-based Injury Statistics Query and Reporting System – WISQARS. Available on the Internet: http://www.cdc.gov/ncipc/wisqars/default.htm.

3. United States Department of Health and Human Services. Youth violence: a report of the Surgeon General. Washington, DC: US Government Printing Office, 2001.

4. Thornton TN, Craft CA, Dahlberg LL, Lynch BS, Baer K. Best practices of youth violence prevention: a sourcebook for community action. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, 2000.

5. Mihalic S, Irwin K, Elliott D, Fagan A, Hansen D. Blueprints for violence prevention. Juvenile Justice Bulletin. Washington, DC: US Department of Justice, Office of Juvenile Justice and Delinquency Prevention, 2001 (July).

6. Lipsey MW, Wilson DB. Effective interventions for serious juvenile offenders: a synthesis of research. In: Loeber R, Farrington DP (eds.). Serious and violent juvenile offenders: risk factors and successful interventions. Thousand Oaks, CA: Sage, 1998:313–345.

7. Rossi PH, Freeman HE. Evaluation: a systematic approach. 5th ed. Newbury Park, CA: Sage Publications, 1993.

8. Powell KE, Hawkins DF. Youth violence prevention: descriptions and baseline data from 13 evaluation projects. American Journal of Preventive Medicine 1996;12(5 Suppl).
