Monitoring School Performance: A Guide for Educators

by

J. Douglas Willms

Centre for Policy Studies in Education, University of British Columbia
and
Centre for Educational Sociology, University of Edinburgh

The Falmer Press
(A member of the Taylor & Francis Group)
Washington, D.C. • London




Chapter 1

Introduction

Educators and administrators have dramatically increased their efforts to collect data describing the performance of their educational systems. Many countries are establishing programs to collect ‘indicators’ of school quality for monitoring performance at national, regional, and local levels. The quest for more and better data is widespread. Nearly every country in Europe is developing a monitoring system based on performance indicators. The UK Government established a national curriculum of core and foundation subjects, and mounted an ambitious testing program to assess pupils’ attainment of the curricular objectives. The US Department of Education, through its Center for Education Statistics, collects a variety of indicators describing the ‘health’ of the elementary and secondary schooling systems. From these data, it publishes the Secretary of Education’s ‘wall chart’, which includes state-by-state comparisons for a number of performance indicators (Smith, 1988). Currently the National Governors’ Association Panel to Monitor the National Education Goals is developing a ‘national report card’ for monitoring progress towards the national educational goals recently established by President Bush and the Governors (Lewis, 1991; Pipho, 1991). Most states have established monitoring systems based on performance indicators (Selden, 1988), and many school districts are following their lead.

An ‘indicator’ is simply a statistic describing some feature of the schooling system associated with its performance, such as the average test score of a school, the proportion of drop-outs, or the pupil-teacher ratio. Like most statistics, an indicator derives its meaning from its trend over time, from its variation within a sample, or from comparison to some standard. The standard can be the average value for a set of schools, a predefined goal, or a socially determined standard (Oakes, 1986; Shavelson, McDonnell, Oakes, Carey and Picus, 1987).
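The three sources of meaning can be made concrete with a small sketch. All of the figures below are invented for illustration and do not come from this book:

```python
# Purely illustrative: invented figures showing the three ways an
# indicator derives its meaning. None of these numbers are real data.

# 1. Trend over time: a district drop-out rate across three years.
drop_out_rate = {1988: 0.071, 1989: 0.066, 1990: 0.061}
trend = drop_out_rate[1990] - drop_out_rate[1988]  # negative => improving

# 2. Variation within a sample: mean test scores for five schools.
school_means = [48.2, 51.0, 53.5, 55.1, 49.7]
spread = max(school_means) - min(school_means)

# 3. Comparison to a standard: here the standard is a predefined goal.
district_standard = 52.0
schools_meeting_standard = [m for m in school_means if m >= district_standard]
```

Read in isolation, the number 53.5 says little; read against the trend, the spread, or the standard, it begins to carry meaning.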

The collection of performance indicators is not a new phenomenon. There is a long history of using national data to monitor long-term trends in educational performance; to examine inequalities in attainment between racial, ethnic, and social-class groups; to make inter-regional comparisons; and to assess the impact of major educational reforms (Chartered Institute of Public Finance and Accountancy, 1986; Koretz, 1986; McPherson and Willms, 1987; Murnane, 1987; Powell and Steelman, 1984; Stern, 1986; Welch, 1987).


Schools and school districts¹ have collected data for planning and decision-making, for assessing strengths and weaknesses in various curricular areas, and for selecting and allocating pupils to different types of programs.

What distinguishes the current work on indicators from earlier evaluation efforts is the amount and kind of data that are being collected, and the way data collection is being institutionalized at all levels. Previously, programs to monitor performance included easy-to-measure indicators, such as graduation rates, pupil-teacher ratios, and achievement test scores. Now administrators are acquiring information on a much wider range of outcomes, including not only cognitive measures, but also affective measures such as self-concept and attitudes to school and work. Many programs include measures describing pupils’ family background, and various indicators of school ‘processes’ believed to be related to schooling outcomes. Before the recent interest in performance indicators, data of this kind were collected only in evaluations or research studies addressing particular questions. Many school districts now collect and analyze these data as part of an institutionalized routine. The routine includes the production of annual reports to school boards, parents, teachers, and administrators.

The Impetus for Monitoring Systems

School boards, administrators, and teachers rely heavily on their working knowledge to make decisions (Sproull and Zubrow, 1981; Williams and Bank, 1984). The ‘working knowledge’ of decision-makers includes the large body of facts, principles, and perceptions that determine their particular view of the world. It is influenced by their values, attitudes, and beliefs, and by the steady stream of information gathered from friends, colleagues, clients, and the public. But decision-makers often view their working knowledge as inadequate for confronting certain issues: it is too subjective or shallow, or biased because it does not represent the opinions of a wide constituency. Thus they regularly seek ‘specific knowledge’. They appoint committees, hire consultants, conduct evaluations, commission research, and attend courses to obtain information focused on specific problems or activities. Although specific knowledge is often more objective and relevant, obtaining it requires time and resources. Also, specific knowledge is sometimes at odds with working knowledge, and does not always fulfil decision-makers’ needs (Husén and Kogan, 1984).

Monitoring information can fill some of the gaps between working and specific knowledge. If data are collected routinely from a variety of sources, then many regular information needs can be met more quickly. Monitoring data on some topics can be more objective than those obtained through informal observations and conversations. Monitoring data tend to cover a wider variety of topics than those obtained through evaluations, consultancies, or commissioned research. In some cases monitoring information is insufficient for addressing particular problems, but it can provide a basis for the kind of specific knowledge required.

1 The administrative units for governing schools at the regional level are generally called local education authorities (LEAs) in England and Wales, education authorities (EAs) in Scotland, and school districts in North America. For the sake of brevity I use the term ‘school districts’ to cover all of these, unless I am referring specifically to administrative units in the UK.

The collection of monitoring data also can serve several functions directly pertinent to improving schooling and reducing inequities. The information can be used to identify problem areas in the schooling system, so that corrective action can be taken. It can assist administrators in determining the best allocation of resources. It can be used to diagnose strengths and weaknesses in pupils’ mastery of curricular objectives, and therefore guide curriculum and instruction. It can be used to assess the effects of interventions implemented at the state, district, or school level. It can stimulate discussion about the goals of schooling, and give rise to new ideas that affect policy and practice.

Monitoring data can also motivate administrators and teachers to improve performance and reduce inequities. This function is not necessarily achieved through top-down accountability tactics. In many cases the provision of information itself stimulates the self-regulatory mechanisms already existing in most schools. Research on school and teacher effectiveness suggests that effective schools generally have some system for monitoring performance (e.g., Cohen, 1981; Odden, 1982; Purkey and Smith, 1983), and that effective teachers frequently test their pupils and conduct weekly and monthly reviews (Rosenshine and Stevens, 1986).

However, the movement to collect performance indicators is driven not just by a desire for better information. Many educators believe that the explicit intention of monitoring systems is to make schools accountable through market forces. Throughout Europe and North America a faith in market mechanisms pervades government offices. One of its doctrines is that publicly funded organizations should be held accountable by having to report regularly on their performance. In some public services, such as transportation, postal services, media services, and public utilities, performance is equated with profits or losses. Balance sheets of the public services are compared to those of private companies offering comparable services. Public services that are unprofitable become candidates for closure or privatization. But in areas such as education, health, social welfare, corrections, and the environment, the outcomes are less tangible. Value for money is more difficult, perhaps impossible, to ascertain. In these areas the push has been towards measuring performance on a number of criteria, and comparing results across organizations. The belief is that such comparisons will stimulate competition and motivate organizations to provide higher levels of service.

Some administrators and educators believe that the introduction of market mechanisms to education will significantly improve schooling. The view is that inter-school or inter-regional comparisons will bring pressures to bear on schools, particularly those performing below the average. These pressures will induce schools to perform better, and if not, the data will constitute objective grounds for closing schools or appointing new staff. Also, if indicators can be used to accurately and fairly assess the performance of individual teachers or principals, then they might be used as an objective basis on which to decide promotions, merit pay, and dismissals. Indicators of school performance have been used in school award programs in California, Florida, and South Carolina (Mandeville and Anderson, 1987; Wynne, 1984). A few states have proposed that teachers be awarded cash bonuses for superior results based on indicator data.

Educators do not unanimously accept the view that market mechanisms will improve the education system. Landsberger, Carlson, and Campbell (1988) surveyed approximately 6000 administrators in England and Wales, West Germany, and the US to determine the most important policy issues facing educational administrators. The primary concern of these administrators was whether ‘market mechanisms’ should be built into the educational system. Opponents of monitoring argue that there is no consensus about the goals of education, the characteristics of an effective school or teacher, or the nature of educational achievement. They believe that monitoring restricts what is taught in schools, displaces valuable teaching time, and reduces the autonomy of teachers.

Even if administrators do not intend to use monitoring data explicitly for purposes of accountability, the collection of data by itself unleashes subtle and indirect market forces. For example, the results of inter-school comparisons might be used only to supplement other evaluative data, and to support schools in their self-evaluations. But schools directly or indirectly compete for the best teachers and pupils, and monitoring results affect schools’ reputations, which eventually influence teachers’ decisions about where to teach, and parents’ decisions about the best area in which to live. These market forces are supported by policies that promote greater choice and diversity in schooling, such as open enrolment plans that allow parents to choose schools outside their designated catchment areas. Parents sometimes use monitoring results to exert pressures through their locally elected school boards. The pressure can be considerable in areas with declining enrolments, where some schools are threatened with closure.

Some of the impetus for monitoring has come from the fear that monitoring data collected at national or state levels are inadequate or will be used inappropriately. Smith (1988) suggested that the widespread interest in performance monitoring in the US stemmed from the discontent of analysts and policy-makers with existing national data, and a determination on the part of the federal government to use performance indicators for purposes of accountability. The government’s intention to continue publishing its ‘wall chart’ of state-by-state comparisons, and the criticism the document received from some of the states, induced the Council of Chief State School Officers (CCSSO) to develop a more comprehensive and valid system for monitoring performance at the state level. In turn, some school districts have created monitoring systems as a means of protection against criticisms that might stem from state assessments on a narrow set of schooling outcomes.

Finally, the collection of indicator data is consistent with a more general trend amongst governments and other administrative bodies to amass extensive amounts of data and to compile descriptive statistics. This trend has been supported by rapid advances in the technology for collecting, storing, and analyzing data. These activities serve not only an administrative function, but also a political one. Statistics describing the health of education systems can be used to demonstrate the need for reform arising from the poor management of a previous administration, or to demonstrate improvements stemming from reforms of the administration in power. Some critics contend that analysts choose to report, depending on their political purposes, statistics describing absolute levels of performance, changes in levels of performance, levels or changes for a particular subsample of the population, or comparisons with other districts, states, or countries. Porter (1988) argues that performance indicators are merely a political tool designed to strengthen the hand of those favouring centralized control of the process and products of teaching.

Purpose of the Book

This book is intended to guide the many decisions entailed in developing a monitoring system. Its purpose is to specify the kind of data that might be routinely collected by a school district or by individual schools, and to describe appropriate methods for analyzing and reporting data. No single design for a monitoring system could be appropriate across all districts or schools, and a district-level design would not necessarily serve the requirements for individual schools. The guide begins therefore with a more general discussion of the main issues pertaining to performance monitoring, and sets forth some general principles concerning the design of a monitoring system.

The guide does not describe qualitative approaches to educational evaluation, such as those proposed by Eisner (1985), Fetterman (1988), Hammersley and Atkinson (1983), Lincoln and Guba (1985), and Patton (1980). This decision was not intended to disparage these methods; it was made simply to limit the scope of the book. The multilevel models and methods proposed in this book provide a framework for describing the variability in schooling outcomes between and within schools. This framework is potentially useful for guiding qualitative study in that it invites one to think about how policies and practices at different levels of the system affect schooling outcomes, and whether their effect varies for pupils with differing backgrounds. The framework also serves to contextualize the findings of qualitative studies (e.g., see Raffe and Willms, 1989).

The development of a system for monitoring school or district performance is not an easy task. If a monitoring system is to be useful for planning and decision-making, it must be based on a sound theory about how schools achieve their effects, and it must have a clearly defined purpose which is understood by educators and policy-makers at all levels of the schooling system. It must cover a wide range of educational goals, and be responsive to changes in the educational system. Yet it cannot be too costly in terms of pupil, teacher, or administrative resources.

Several technical issues concerning the measurement of schooling outcomes and the assessment of school effects also must be addressed. Perhaps the most difficult issue concerns the identification of goals that are common across schools in a system. This is complicated by the fact that schools often have different goals for pupils with differing interests and levels of ability. In addition, test developers find it difficult to construct measures that span the entire range of achievement for pupils at a particular grade level, particularly in the later years of schooling, and yet cover the curriculum in enough detail to be useful for diagnostic purposes.


Even with adequate solutions to the measurement problems, the task of separating the effects of school practices and policies from factors that lie outside the school is complicated and requires extensive data. Research on school effectiveness at the Centre for Educational Sociology (University of Edinburgh) showed that pupil attainment at the secondary level is related to pupils’ socioeconomic status (SES), their prior level of ability, the composition of their family, the type of neighbourhood they live in, the overall ability and socioeconomic composition of their school, and the level and structure of local employment (Garner and Raudenbush, 1991; McPherson and Willms, 1986; Raffe and Willms, 1989; Willms, 1986). The multilevel modelling techniques discussed later in this book allow one to make statistical adjustments to the school means on an outcome measure to take account of factors that lie outside the school. Estimates of the adjusted means provide a better basis for making comparisons between schools; however, the accuracy of the estimates depends on the amount and type of data available. The accuracy depends also on the assumptions made about the relationships between outcome measures, policy and practice variables, and measures describing outside influences.
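The logic of such an adjustment can be sketched in a deliberately simplified, single-level form. The multilevel techniques this book actually proposes are more general; the following is only an illustrative ANCOVA-style adjustment for a single intake measure (pupil SES), with invented data and a hypothetical `adjusted_school_means` function:

```python
# Hypothetical illustration: shift each school's raw mean outcome to
# what it would be if the school had an average intake, using a pooled
# regression slope for one intake measure (SES). This single-level
# sketch is NOT the book's multilevel method; data are invented.

def pooled_slope(xs, ys):
    """Ordinary least-squares slope of y on x, pooled across pupils."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def adjusted_school_means(pupils):
    """pupils: list of (school, ses, score) tuples.
    Returns {school: adjusted mean score}."""
    xs = [p[1] for p in pupils]
    ys = [p[2] for p in pupils]
    b = pooled_slope(xs, ys)
    grand_ses = sum(xs) / len(xs)
    by_school = {}
    for school, ses, score in pupils:
        by_school.setdefault(school, []).append((ses, score))
    adjusted = {}
    for school, rows in by_school.items():
        mean_ses = sum(r[0] for r in rows) / len(rows)
        mean_score = sum(r[1] for r in rows) / len(rows)
        # Remove the part of the raw mean attributable to intake.
        adjusted[school] = mean_score - b * (mean_ses - grand_ses)
    return adjusted

# Invented data: school A has a more favourable intake than school B.
pupils = [
    ("A", 1.0, 60), ("A", 1.2, 64), ("A", 0.8, 58),
    ("B", -1.0, 50), ("B", -0.8, 54), ("B", -1.2, 49),
]
adj = adjusted_school_means(pupils)
```

With these invented figures, school A's raw mean far exceeds school B's, but after adjusting for intake the two schools are nearly indistinguishable, which is exactly the sense in which adjusted means provide a fairer basis for comparison.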

All of these theoretical, administrative, and technical issues are inextricably tied to political issues concerning the professional autonomy of teachers, the nature of the curriculum, and the control of resources. The development of a monitoring system requires hundreds of little decisions about what kind of data should be collected, how data will be analyzed, who will have access to data, and what actions will be taken on the basis of findings. These decisions are affected by how those developing the system view the purposes of monitoring, and the amount of resources that can be devoted to the enterprise. If a guide is to establish a standard for performance monitoring, it must either attempt to take the competing interests of several groups into account, or set forth its own biases.

Initial Premises

One could feasibly write an entire volume discussing the political issues concerning performance monitoring. This is not my purpose. However, I will not skirt the central issue of whether such a book should be written at all. Those who decry monitoring may view the book as an attempt to accelerate it. They oppose its acceleration because they believe monitoring may curb educators from questioning the purposes of schooling, and from critically examining what they teach and how they teach it. They would argue also that monitoring systems help institutionalize organizational structures and practices aimed at goals that have not been justified or accepted by the educational community. The opponents of monitoring include hundreds of teachers and administrators who feel that monitoring places unrealistic demands on their time and resources, and that ultimately it will reduce their authority and be used against them.

Advocates of monitoring would counter that both administrators and teachers need objective information to make sound educational decisions. They would argue that market mechanisms have a positive effect on schooling, or at least that monitoring motivates teachers and administrators. They would point to Gallup Poll results for the US that suggest over 80 percent of parents with children in school want standardized national testing programs to measure the academic achievement of pupils attending public schools in their community (Elam and Gallup, 1989). The advocates of monitoring might concede that data derived from standardized tests and questionnaires have limitations, but would maintain that such data are better than no data at all.

The debate in the UK has taken a different turn. The Government strived to involve teachers in setting standards and constructing tests. The standardization of the tests has been based on teachers’ judgements of what a child should be able to accomplish at various stages, rather than on statistical criteria pertaining to how well test items discriminate amongst pupils. The tests also incorporate several different types of assessment tasks, including performance-based tasks that require pupils to use higher-order thinking skills. Thus, in many respects, the national testing program increases the professional autonomy of teachers. Despite these efforts, however, the pilot testing of the national tests has met with widespread resistance. Many teachers are unhappy because the Government has not made it clear how the results will be used. In particular, it has not specified whether school comparisons will be made, and if so, on what basis. A number of parents are sceptical too. They are unsure whether they want their children tested because they fear the results may be used to make decisions about the type of school program suitable for their children. More generally, there is mistrust of the Government’s political agenda connected with national tests and monitoring.

My own position on monitoring is that the benefits can outweigh any negative consequences. I believe also that some of the dangers of monitoring can be avoided, or at least minimized. This position has several antecedents. First, it is derived from my interest in studying the causes and consequences of inequities in schooling. Through my research on public and private schools in the US (Willms, 1985a), and my research with McPherson on the effects of comprehensive schooling in Scotland (McPherson and Willms, 1987), I saw the potential for using systematically collected data for examining questions about equity between racial, ethnic, and social-class groups, and between the sexes. I also learned about some of the limitations of monitoring data. If educators are to argue for equality of outcomes, they must be prepared to specify and defend some set of outcomes. My position on monitoring stems as well from having witnessed inappropriate uses of performance indicators. In many cases comparisons between schools are made without making statistical adjustments for the types of pupils entering schools. Thus, the findings frequently suggest that the best-performing schools are those with the most favourable pupil intakes. These conclusions are often unwarranted. The third antecedent to my position on monitoring is a genuine curiosity about how schools work. As an educational researcher I see great potential in using monitoring data to further our understanding about the relationships between schooling processes and pupil outcomes.

Monitoring systems have acquired some momentum; to a large extent I take them as a fait accompli. I hope that this guide will help administrators and teachers use indicator data more fairly, and remind them of the limitations of performance indicators. I believe that monitoring can further our understanding of the effects of educational policies and practices on schooling outcomes, and can help determine whether particular educational interventions have a desirable effect.

However, my endorsement of monitoring is not without qualifications. I also make some assumptions about what could be, even though its realization in practice is difficult. To begin, therefore, I am setting out a list of premises as a means to clarify my position. They are presented in the spirit of Cronbach et al.’s ninety-five theses of evaluation (Cronbach et al., 1980); that is, I hope they will provoke further discussion of the issues:

• Monitoring systems can contribute to the working knowledge of both teachers and administrators. They can serve a number of functions relevant to improving schooling and reducing inequities.

• Monitoring data are not a substitute for other kinds of data. Monitoring data should be used in conjunction with data collected both from discussions with staff and pupils, and through detailed observations of school and teacher practice.

• Monitoring systems can induce debate about school policies and practices. Their usefulness in raising questions can be as great as it is in answering them.

• Monitoring systems will not turn the social control and management of schools into a technology. The fear that this will occur presumes that administrators are ignorant of the complexities of schooling and the limitations of monitoring data, and that teachers and pupils are fully submissive to administrative authority.

• One of the dangers of monitoring is that it can restrict the goals of education to a set of objectives defined centrally rather than locally. This can be alleviated by devolving the design and control of most aspects of monitoring to the level at which decisions about policy and instruction are made.

• Administrators must make decisions concerning accountability, such as decisions about school closures, the dismissal of teachers, or staff promotion. If monitoring data are available, they will inevitably be used, either formally or informally, in these decisions. Some of the anxiety concerning accountability can be lessened if administrators use monitoring data to provide constructive feedback about policy and practice. Those monitoring school performance need to specify clearly how monitoring data will be used in summative decisions: what analyses will be undertaken, how findings will be reported, and who will have access to the findings.

In writing this book I continually encountered two difficulties. I wanted to describe what I consider to be best practice of monitoring school performance, rather than describe the benefits of monitoring or belabour its pitfalls as it is currently practiced. The problem is that the degree to which best practice can be accomplished is always circumscribed by political considerations, and these tended to temper my view of the ideal. Thus, I was continually caught in having to decide between what constitutes ideal practice and what may be practical. Generally, I attempted to prescribe standards for the ideal and discuss attendant practical considerations, rather than attempting to assess what may be practical in most situations, and discussing limitations because the practical was less than ideal. The second difficulty, related to the first, is that even the best practice of monitoring has limitations. In attempting to set a standard for good practice I wanted also to delineate the shortcomings of monitoring. However, the requirements for best practice may be overwhelming to administrators wanting to get started, such that they decide monitoring is too costly given their resources. There is the danger too that by providing detailed descriptions of the shortcomings of monitoring, I would give the impression that there are so many problems with it that it may be best not to do it at all. I chose to present an optimistic picture of what could be accomplished through monitoring, but have not ignored the many limitations. I hope that the reader will view the work as simply a guide for better assessment, and not prematurely judge the benefits or limitations of monitoring.

Overview of the Book

The next chapter examines the above premises in the light of current reforms and policy initiatives in the UK and the US. The two systems are very different, especially at the secondary level, in part because of the long history of national examinations in the UK. These examinations to a large extent drive the curriculum, and ensure a degree of uniformity. The US curriculum is characterized by diversity more than uniformity, which poses special problems for the development of monitoring programs. In both schooling systems there is an explicit agenda to develop indicators for purposes of accountability.

Chapter 3 describes the input-process-output model, the theoretical model on which systems of monitoring performance are based. It also discusses how three different types of monitoring systems, defined by their purposes, are related to this model. I suggest three ways in which the model can be strengthened. These concepts underlie many of the arguments in the chapters that follow regarding the kind of data to be collected and the approach to analysis.

Chapter 4 describes four ways that researchers and educators use the term ‘school effects’. Using the definition relevant to the comparison of schools, I suggest there are two types of school effects that should be considered. These are defined and a model for their estimation is presented. I also discuss some of the technical issues raised by the estimation of school effects and the comparison of schools.

Chapters 5, 6 and 7 outline the substantive and technical issues concerning the measurement of schooling inputs, processes, and outcomes, respectively. In the first two of these chapters I distinguish between schooling inputs and processes. ‘Schooling inputs’ is used to refer to factors exogenous to the schooling system; that is, factors associated with pupils’ family backgrounds, and the social, economic, and political factors that affect schooling outcomes but lie outside the control of teachers and educational administrators. ‘Schooling processes’ is used to refer to factors directly related to school policies and practices. This distinction is not always a comfortable one because many factors are related to school policy and practice and are influenced also by forces outside of schooling. The distinction is important mainly for the statistical modelling of school effects; Chapters 5 and 6 discuss the reasons for this distinction in detail.

The purpose of Chapter 5 is to discuss the role of input measures in analysis and to make recommendations for their measurement. I accomplish this to some degree by employing data describing pupils’ family backgrounds, cognitive abilities, and schooling outcomes for a large sample of pupils that attended primary and secondary schools in Fife. The chapter also discusses strategies for handling missing data.

Chapter 6 attempts to specify a ‘best set’ of process indicators. But here I attempt to accomplish this by reviewing the literature and proposing some criteria associated with their coverage of the domain, their usefulness, and their measurability. The measurement of schooling processes is in many ways more difficult than the measurement of inputs or outcomes, and therefore the chapter includes considerable discussion about the problems entailed in measuring and interpreting school process data.

Chapter 7 describes some of the outcome measures that can be included in a monitoring system. I argue that monitoring systems should be based on a wide range of outcome measures. The chapter discusses issues concerning the identification of the goals of schooling, and outlines the major considerations in selecting appropriate tests and constructing indicators. One important issue concerns whether indicators emphasize equity or excellence in schooling. For example, an indicator of the percentage of pupils that achieved some minimum level of competency underscores the importance of equity more than an indicator of the percentage of pupils that achieved outstanding results on a statewide achievement test. A discussion of validity and reliability of outcome measures is included here. Readers unfamiliar with these terms may wish to read Chapter 7 before Chapters 5 and 6.

The discussion in Chapters 4 to 7 suggests that it is impossible to specify a set of definitive principles on how to develop a system for monitoring school or district performance. The development of a system requires many interrelated decisions, most of which have political ramifications. At the end of each of these chapters I offer a set of guidelines for the development of a monitoring system. These should be considered guidelines, not definitive principles.

Chapter 8 presents a design of a system for monitoring schools at the district or EA level. This design is based on earlier designs set out for a school district in Canada and an educational authority in Scotland. Variants of this design are now being implemented in these settings. The purpose of the chapter is not to present a fully comprehensive design, but rather to provide some starting points for a school district or education authority beginning the process. The chapter proposes the kind of data to be collected at various levels, and discusses the problem of confidentiality. It also specifies the stages for developing a system and a time line.

Chapter 9 does two things. It delineates the types of analyses that could be included in an annual report, and describes the statistical and graphical techniques connected with each type of analysis. These techniques could be used for describing the performance of individual schools or entire school districts. Most of the analyses can be done with commercially available statistical software packages such as SPSS/PC or SYSTAT, and graphical packages such as Harvard Graphics.

Chapter 10 discusses how information from a monitoring program can be used to develop a district-level research program aimed at answering research questions relevant to a district’s needs. I begin by specifying four basic questions that pertain to nearly all research on school effectiveness. These questions provide a framework for discussing the strengths and limitations of various designs. The first type of design discussed requires only cross-sectional data; it could be employed after the first year of operation of a program. After two or three years of operating a monitoring system, longitudinal data could be used to assess the effects of particular interventions, or to examine whether certain policies or practices improve outcomes or reduce inequalities between high- and low-status groups. The chapter includes examples from the research programs in Scotland and Canada. This is the most technically demanding chapter. I have strived to make it easier for the reader who prefers words over equations by moving the technical description of multilevel modelling to an appendix.

The final chapter provides an executive summary of the material covered in the first ten chapters. It also suggests how administrators and policy-makers might tackle some of the political issues concerning accountability, reduction of the curriculum, and teachers’ professional autonomy.
