The evaluation and measurement of library services - Vol. 30 (2008)
The evaluation and measurement of library services. By Joseph R. Matthews. Westport, CT: Libraries Unlimited, 2007. 372 pp. $50.00. ISBN 978-1-59158-532-9.
Library directors and managers are increasingly expected to provide evidence of the effectiveness and efficiency of their library programs and services. They are challenged to demonstrate their managerial leadership in creating agile and high-performing library organizations, in which evidence-based decisions are the norm. Success also requires them to understand whom their library serves, what these clients expect and value, and how to communicate to their various stakeholders the benefits their library offers in terms that are clear, concise, and powerful enough to gain engagement. To meet these demands, library administrators need to be knowledgeable in evaluation and measurement, and be versed in research tools to gather information about library services for the purpose of making informed decisions about them.
In a fraction of a second, a Google search of “library evaluation” identifies more than 2.7 million hits. This might suggest that there is an overwhelming amount of information on systematically gathering data about libraries in order to improve them. And yet, we are reminded even as we read the foreword to The Evaluation and Measurement of Library Services that librarians generally are not well equipped to make evidence-based decisions about services and the impact of libraries in their communities. Matthews addresses this shortfall with an extensive familiarity with published evaluations of library services, research methodologies, and theories addressing the management of service quality and library performance. Intended for library directors, managers, and library school students, the stated purpose of the book is “to provide a set of tools that will assist any library in evaluating a particular library service” and “to remove some of the mysteries surrounding the process of evaluation so that many librarians will see the value of performing evaluation in their libraries” (p. xx). The book's scope applies to all types of libraries, although most examples come from academic and public libraries.
The book's content is organized into four parts. Part I, “Evaluation: Process and Models,” discusses issues surrounding the process of conducting an evaluation, including the basic steps from identifying the problem to reporting and using the results for service improvement. The reader does not need prior familiarity with evaluation research or measurement techniques to follow the many different approaches Matthews describes in this section of the book. His review may also stimulate new approaches to conceiving and conducting evaluations by those experienced in traditional approaches of counting and reporting library inputs and outputs.
Part II, “Methodology Concerns,” consists of four chapters that cover tools for activity-based costing, statistical process control, and process improvement; qualitative and quantitative tools; and analysis of data, including a review of descriptive and inferential statistics, as well as presentation of data and preparing the evaluation report. Only 64 pages long, this section offers a cursory introduction to basic concepts and techniques but may not provide a novice adequate guidance to gather and analyze data systematically, confidently, and accurately. The discussion of focus groups (p. 54), for example, begins with the imprecise shorthand that a group equates with a specific interview methodology; continues to suggest that participants are a representative sample; and asserts that interview recordings are transcribed manually and can be analyzed through computer software, acknowledging neither voice-recognition software that offers automated transcripts nor the time-consuming work of interpreting comments to draw themes. The discussion of statistical analyses is not always clear. For example, the basic description of measures of central tendency follows a simple example, but a boxed illustration with library data about “what is an average?” (p. 89) offers little guidance for following how the results were calculated. The description of calculating a more complex statistic, the standard deviation, is misleading when the second step calls for a doubling rather than a squaring of the variation of each score from the mean (p. 92). Numerous other reference books and Web sites, which the author cites in his lists of references, provide detailed descriptions of how to select and conduct data-gathering methodologies.
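For readers who want to see why the reviewer flags this step, a minimal sketch of the standard-deviation calculation (using made-up scores, not data from the book) shows that each deviation from the mean must be squared, not doubled:

```python
# Population standard deviation, step by step, with hypothetical scores.
scores = [3, 5, 7, 9, 11]

mean = sum(scores) / len(scores)                 # step 1: the mean (7.0)
squared = [(x - mean) ** 2 for x in scores]      # step 2: SQUARE each deviation
variance = sum(squared) / len(scores)            # step 3: average the squares (8.0)
std_dev = variance ** 0.5                        # step 4: square root of the variance

# Doubling instead of squaring fails outright: the raw deviations from the
# mean always sum to zero, so doubling them also sums to zero.
doubled_sum = sum(2 * (x - mean) for x in scores)
```

The doubled deviations cancel to zero for any data set, which is precisely why the squaring step is essential and why the book's misprint is misleading.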
Part III, “Evaluation of Library Services,” constitutes the major portion of the book and perhaps is its best feature. Here, each of ten chapters systematically provides the reader with a review of service definitions, a description of evaluation questions and methods, and a very good literature review describing prior evaluation and research. Collectively, this section identifies issues addressed to date as well as future research topics for the following service evaluation areas: library users and nonusers; physical collections; electronic resources; reference services; technical services; interlibrary loan; online systems; bibliographic and library instruction, and information literacy; and customer services. The author purposely excluded several topics because they would require “substantial discussion” (p. xx). Examples include evaluations of the information retrieval process, automated systems, and library buildings, as well as catalog use studies; the latter, however, is addressed briefly (pp. 222–223). He gives brief coverage of issues related to evaluating services offering electronic resources, which are better addressed in monographs dedicated to the topic (e.g., White & Kamal, 2006).
Part IV, “Evaluation of the Library,” provides ideas for how to evaluate and present the value of the collective services and functions of a library for the purpose of communicating to funding stakeholders. Five chapters focus on the broader perspective of evaluations by type of library; evaluation of individual accomplishments in the library's role in teaching, learning, and research; models for valuing the economic impact of libraries, with illustrated applications in specific libraries; and methods for evaluating the social impacts of libraries. The section closes with a descriptive review of methods of presenting evaluation data to communicate the value of the library, including presentation of performance measures, the balanced scorecard, the performance prism, and the Three R's of Performance (resources, reach, and results). These methods are covered in sufficient detail for readers to apply them.
Two tools are reproduced in appendices. The Raward Library Usability Analysis Tool offers helpful checklists of categories of information to consider in evaluating Web sites. The core twenty-two statements used in the LibQUAL+ survey questionnaire are replicated, but with little guidance in the text on how to administer or use the three categories of ratings noted in the instructions. Both author/title and subject indices are included at the end of the book.
The author writes from practical experience, teaching, and research. Currently an instructor at the San Jose State University School of Library and Information Science, he continues his long career as a consultant and speaker with expertise in strategic planning, assessment and evaluation of library services, performance measures, and library automation. Educated in business management, Matthews worked for many years in the commercial library automation sector, including holding administrative positions with Geac Computers and EOS International.
This is a highly readable book. The writing is clear and concise and is accompanied by bulleted lists, figures, and tables that summarize key points. The references at the end of each chapter provide the reader with a set of excellent bibliographies. The book can serve as a textbook for those teaching introductory library evaluation and management, a selectable review for those seeking ideas for conducting an evaluation of specific services, or a reference tool for identifying sources and brief descriptions in response to questions about library evaluation and measurement. This book complements numerous other books and articles the author has written on measuring value and effectiveness (Matthews, 2002, 2003, 2005, 2007), as well as the work of others that describe specific cases of evaluations (Wallace & Van Fleet, 2001) and more detailed descriptions of methodologies, available both online¹ and in print.²
References
Matthews, J. R. (2002). The bottom line: Measuring the value of the special library or information center. Englewood, CO: Libraries Unlimited.
Matthews, J. R. (2003). Measuring for results: The dimensions of public library effectiveness. Westport, CT: Libraries Unlimited.
Matthews, J. R. (2005). Strategic planning and management for library managers. Westport, CT: Libraries Unlimited.
Matthews, J. R. (2007). Library assessment in higher education. Westport, CT: Libraries Unlimited.
U.S. General Accounting Office, Program Evaluation and Methodology Division. (1991). Designing evaluations. Washington, DC: General Accounting Office. Retrieved April 29, 2008, from http://archive.gao.gov/t2pbat7/144040.pdf
Wallace, D. P., & Van Fleet, C. (2001). Library evaluation: A casebook and can-do guide. Westport, CT: Libraries Unlimited.
White, A., & Kamal, E. D. (2006). E-metrics for library and information professionals. New York: Neal-Schuman Publishers.
Danuta A. Nitecki
Yale University Library, P.O. Box 208240,
New Haven, CT 06520-8240, USA
E-mail address: [email protected]
doi:10.1016/j.lisr.2008.05.002
¹ See, for example, the online series of program evaluations provided by the U.S. General Accounting Office (now called the Government Accountability Office), Program Evaluation and Methodology Division, including, for example, Designing Evaluations (May 1991). Retrieved April 29, 2008, from http://archive.gao.gov/t2pbat7/144040.pdf.
² See, for example, the numerous Sage reference books on research methods and evaluation, http://www.sagepub.com.
International Journal of Internet Research Ethics. Edited by Elizabeth A. Buchanan and Charles Ess. Milwaukee, WI: Center for Information Policy Research, School of Information Studies, University of Wisconsin-Milwaukee, 2008. $0.00/Free/Open Access. ISSN: 1936-492X. Available at http://www.uwm.edu/Dept/SLIS/cipr/ijire/index.html
The rapid development and adoption of Internet applications have afforded researchers opportunities for conducting inquiries in new ways and into new areas. There are a number of outlets for work on the ethical dimensions of the Internet (e.g., Ethics and Information Technology and the Journal of Information Ethics), but until now there was no recognized journal on Internet research ethics (IRE). This is the unique selling point of the International Journal of Internet Research Ethics (IJIRE). There will be relief among members of research communities that this journal has come to fruition.
That IJIRE is under the stewardship of Elizabeth Buchanan and Charles Ess, as editors-in-chief, will assure readers of standards, and I was encouraged by the variety of contributions to the inaugural issue. Gove Allen, Dan Burk, and Charles Ess (“Ethical approaches to data gathering in academic research”) provide an extensive discussion of the ethics of using automated data collection tools (“bots”). Without a body of literature on the ethics of using bots, the authors have to clarify the problem, settling on the phenomenon of the researcher or programmer rather than the bot itself. Identifying the conflict between utilitarian and deontological approaches to deploying bots is a useful contribution and will initiate further debate.
Annamaria Carusi (“Data as representation: Beyond anonymity in e-research ethics”) anticipates ethical problems for Internet-based research beyond issues of anonymity, confidentiality, and privacy.
Carusi's paper turns on her appropriation of the distinction between “thick” and “thin” identities, already an arbitrary dualism in the research literature that does not help Carusi move “beyond anonymity”: as the “Mass Society” studies of the 1950s and 1960s showed, anonymity does not guarantee that individuals cannot be identified definitively.
Sara Grimes (“Researching the researchers: Market researchers, child subjects and the problem of ‘informed’ consent”) examines the contradictions between the regulation of online data collection regarding children for (ostensibly) market research purposes and the academic study of marketing practices that collect and use children's personal information. Grimes' paper is a thought-provoking series of accounts that will be of interest to various research communities, e.g., those focused on childhood, marketing, media studies, and research methods, as well as Internet studies.
Perhaps the most useful contribution for readers designing research projects is Tomas Lipinski's discussion (“Emerging legal issues in the collection and dissemination of Internet-sourced research data: Part I, basic tort law issues and negligence”) of the legal liability of researchers who use posts (on blogs, in chat rooms, etc.) as data. This is the first of four articles by Lipinski on legal issues arising from Internet research and, on this evidence, it promises to build into an essential reference source for those who use the Internet for data collection and dissemination.
Erin Hvizdak's paper (“Creating a web of attribution in the feminist blogosphere”) stands out and is appropriately positioned after Lipinski's paper. The question of ownership (who owns knowledge: the researcher or the informant?) has been a controversial issue for ethnologists and cultural anthropologists ever since Paul Radin's early fieldwork with the Winnebago tribe; Hvizdak should be invited to write a follow-up paper broadening these concerns. Given that blogs constitute her source of data, I expect that Hvizdak's paper will generate plenty of comment.
The overall standard of articles is compromised by “Small Talk,” a conversation “with a leading voice in Internet research ethics” (“Editorial,” p. 3); this is to be a regular feature, apparently, scheduled to be with members of the editorial board. (Why do editors of new journals not learn that providing space to board members really does not give a good impression?) I think this feature is a good idea, but the initial entry does not bode well: not only does this format unbalance the regularity of papers; Annette Markham's contribution is less sophisticated than the editorial led me to expect. Her discussion of identities exhibits the exaggerated analytic claims for Internet research exploded elsewhere (Greiffenhagen & Watson, 2005), and regards interviews as unproblematic conduits for information gathering. Online interviews are subject to the same “methodological troubles” as traditionally administered interviews: emphasizing, e.g., non-verbal communication distracts from the fundamental distortions that interview questions impose upon responses.
Reviewing a journal from a single issue is difficult, but there are features of the IJIRE that are open to supportive comment. The search for a brand identity must continue. The journal homepage could also link to other resources, such as a rolling bibliography of IRE papers that are “scattered” within discipline-specific journals, which would build upon the bibliographies at the Center for Information Policy Research (http://www.uwm.edu/Dept/SOIS/cipr/ire_bibs.html), and relevant bibliographic databases and portals for academics and practitioners (e.g., the Kennedy Institute of Ethics (http://bioethics.georgetown.edu) and the Chartered Institute of Library and Information Professionals (CILIP) information ethics site (http://www.infoethics.org.uk/CILIP/admin/index.htm)). Greater flexibility of the homepage would add value to the journal and to its forthcoming features, such as the “practical scenario or ethical dilemma” promised in the editorial (p. 3). Currently, the IJIRE is a set of inert pages that fails to take advantage of the capabilities which the Internet provides over paper-