Evaluation of NCknows, a Statewide Collaborative Chat Reference Service: What Users and Others Told Us
Charles R. McClure, Francis Eppes Professor, and Director, Information Institute
www.ii.fsu.edu
School of Information Studies, Florida State University
cmcclure@lis.fsu.edu
Background
Evaluation: September 2003 – February 2005
NCknows launched: 16 February 2004
Funded by an LSTA grant, overseen by the State Library of North Carolina (SLNC) Library Development Section
18 libraries during pilot phase
Stakeholders: users, libraries, NCknows, SLNC
Check it out: www.ncknows.org
Evaluation question
Is collaborative virtual reference an effective way to meet the information needs of North Carolinians?
Secondary evaluation questions
What is required of a library that wishes to offer virtual reference?
What is the value added if different types of institutions work together?
What is the impact on libraries that provide virtual reference service?
Is virtual reference expandable to the whole state?
How will this project increase our knowledge of effective organizational models?
Secondary evaluation questions (continued)
How can the quality of the reference service provided be measured?
Can staff from different types of libraries provide quality reference service to users from other types of libraries?
How will the project encourage greater use of existing resources such as NC LIVE?
What partnering or leveraging opportunities exist?
Are users satisfied with the service?
Mixed-method evaluation
1. Service: statistical analysis
2. Chat sessions: peer review of transcripts
3. Patrons: exit surveys & follow-up phone interviews (a tallying sketch follows this list)
4. Librarians: phone interviews
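As an illustration of how the exit-survey piece of the mixed-method design might be tallied, here is a minimal Python sketch. The file name, column names, and the 1-5 rating scale are hypothetical assumptions for illustration, not the actual NCknows survey instrument or data layout.

```python
# Minimal sketch: tallying satisfaction and question motivation from an
# exit-survey export. File name, column names, and the 1-5 rating scale
# are hypothetical; the real NCknows instrument may differ.
import csv
from collections import Counter

def summarize_exit_surveys(path="exit_surveys.csv"):
    ratings = []             # numeric satisfaction ratings (assumed 1-5)
    motivations = Counter()  # why the patron asked the question
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings.append(int(row["satisfaction"]))
            motivations[row["motivation"]] += 1
    if not ratings:
        print("No survey responses found.")
        return
    satisfied = sum(1 for r in ratings if r >= 4)
    print(f"Sessions surveyed: {len(ratings)}")
    print(f"Satisfied (rating >= 4): {satisfied / len(ratings):.0%}")
    for motive, count in motivations.most_common():
        print(f"  {motive}: {count / len(ratings):.0%}")

if __name__ == "__main__":
    summarize_exit_surveys()
```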
Results: patrons
Satisfaction was high
Question motivation: for a work or school task, personal curiosity, known-item search (⅓ each)
Use of the info provided: personal, home, office, school, business, and more
Discovered NCknows: recommended by teacher/professor (10%), search engine (20%), library materials (70%)
Results: transcript evaluation
NCknows librarians perceived by users as providing better service than 24/7 staff
NCknows and 24/7 librarians perceived as equally knowledgeable about sources
Public librarians answer questions, academic librarians provide resources
Public librarians’ sessions got better evaluations
Results: librarian interviews
Policies & procedures: scheduling, handling email follow-ups, quality control
Thoughts on chat: inferior to desk reference, but good for quick answers to well-defined questions
Additional research
Cost/benefit analysis
Sustainability
Scalability to the entire state
Situational and contextual factors unique to specific libraries that affect quality of chat reference
MISs & databases to relate reference statistics to other library statistics (see the sketch after this list)
Longitudinal data & the need for ongoing evaluation
User logs
Funding Models
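As a rough illustration of the MIS/database item above (relating chat-reference statistics to other library statistics), here is a minimal Python sketch that joins two hypothetical count files by library and month. The file names, column names, and the circulation comparison are assumptions made for illustration only, not the project's actual data model.

```python
# Minimal sketch: joining chat-reference counts to another library statistic
# (here, circulation) by library and month. File names and column names are
# hypothetical; an actual MIS would supply its own schema.
import csv
from collections import defaultdict

def load_counts(path, key_cols=("library", "month"), value_col="count"):
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[tuple(row[c] for c in key_cols)] += int(row[value_col])
    return counts

def relate(chat_path="chat_sessions.csv", circ_path="circulation.csv"):
    chat = load_counts(chat_path)
    circ = load_counts(circ_path)
    for key in sorted(chat.keys() & circ.keys()):
        library, month = key
        ratio = chat[key] / circ[key] if circ[key] else float("nan")
        print(f"{library} {month}: {chat[key]} chats, "
              f"{circ[key]} circulations, ratio {ratio:.4f}")

if __name__ == "__main__":
    relate()
```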
Key evaluation issues
Understanding the importance of evaluation, and the impacts and applications of evaluation
Importance of a statewide initiative in digital reference
Ongoing funding/support for the evaluation effort
Ongoing evaluation & longitudinal data collection
How will evaluation data be used?
Quality of data: both the data reported here, and in other data collection activities
What Do We Think We Learned?
Virtual reference was somewhat of a hard sell for some libraries, and for some users as well
Older, more “professional” folks tend to be users
Technology and software still leave much to be desired
Current funding models may not sustain current services and delivery approaches
Need for Champions
Lots of services competing against virtual reference
Is interactive real-time video the next step?
Next steps
Initial evaluation effort ended February 2005; Phase II will start in September 2005
Establishing a “culture of assessment” in NCknows
Importance of meta-analysis across states & institution types
More first-hand user-based information and assessment
Alternative funding models
Chuck’s Parting Shots
Obtaining accurate user perspectives about digital reference is difficult for a host of reasons
Assessing the “quality,” “usefulness,” and “impact” of, and “need” for, digital reference is complicated
Users want an easy way to get answers; it is not clear whether digital reference is perceived as “easy”
Training… training… training
To some degree for digital reference, “we have met the enemy and they are us!”
The jury is still out on this one
Additional Information
McClure, C. R., et al. (2002). Statistics, Measures, and Quality Standards for Assessing Digital Reference Library Services: Guidelines and Procedures. Syracuse, NY: Syracuse University Information Institute. Available: http://quartz.syr.edu/quality/
Pomerantz, J., & McClure, C. R. (2004). Evaluation of a Statewide Collaborative Chat-based Reference Service: Approaches and Directions. In Proceedings of the American Society for Information Science and Technology. Medford, NJ: Information Today, pp. 102-106.
Pomerantz, J., Luo, L., & McClure, C. R. (in press). Peer Review of Chat Reference Transcripts: Approaches and Strategies. Library & Information Science Research, 27.
Pomerantz, J., Luo, L., & McClure, C. R. (2005). Evaluation of the NCknows Statewide Virtual Reference Project: Final Report. Chapel Hill: University of North Carolina, School of Information and Library Science [for the North Carolina State Library].
www.ncknows.org
Questions and Comments?
Charles R. McClure, Francis Eppes Professor, and
Director, Information Institute
www.ii.fsu.edu
School of Information Studies
Florida State University
cmcclure@lis.fsu.edu
Acknowledgement:
Jeff Pomerantz
University of North Carolina at Chapel Hill