
QUT Digital Repository: http://eprints.qut.edu.au/

Shaw, Kate and Spink, Amanda H. (2009) University library virtual reference services : best practices and continuous improvement. Australian Academic & Research Libraries, 40(3). pp. 192-205.

Copyright 2009 Australian Library and Information Association Ltd and Authors

UNIVERSITY LIBRARY VIRTUAL REFERENCE SERVICES: BEST PRACTICES AND CONTINUOUS IMPROVEMENT
Kate Shaw, Amanda Spink
Whether or not to include chat services within Virtual Reference (VR) is an important question for university libraries. Increasingly, email supported by a Frequently Asked Questions (FAQ) database is suggested in the scholarly literature as the preferred, cost-effective means for providing university VR services. This paper examines these issues and identifies some best practices for university library VR services relating to chat and email service, collaborative service provision, service staffing, and staff training. Further studies are required to more completely identify best practices for the complete range of VR services. AARL September 2009 vol 40 no 3 pp 192-205.

Kate Shaw, Queensland University of Technology Library, Gardens Point Campus, Brisbane, Qld, 4001. email: k2.shaw@qut.edu.au
Amanda Spink, Research Capacity Building Professor of Information Science, Queensland University of Technology, Gardens Point Campus, Brisbane, Qld, 4001. email: ah.spink@qut.edu.au

INTRODUCTION

This paper discusses the concepts of Virtual Reference (VR) and best practice, outlines VR service tools and service provision, and examines the evaluation of VR services. It provides a definition of best practice in email and chat service provision, compares independent and collaborative VR approaches, discusses staffing and staff training, and analyses methods for evaluating and continuously improving VR services. Much of the international scholarly literature on university library VR best practice analysed within this paper has been published in the last eight years. Of the papers examined, twenty-three were identified as empirical papers, and sixteen were opinion pieces.


VIRTUAL REFERENCE SERVICES AND BEST PRACTICE

Virtual Reference has been defined as a reference service initiated electronically, often in real-time, where patrons employ computers or other Internet technology to communicate with reference staff without being physically present.[1] Virtual reference is also called digital reference, e-reference, electronic reference, remote reference, Internet information services, live reference, and real-time reference.[2] This paper focuses on email, online chat services, and instant messaging as the predominant VR service tools in use and thus the most discussed in the literature.

Morin suggests that best practices and guidelines outline a process, practice, or method that can improve effectiveness and efficiency in several situations.[3] Although best practice is not easy to define, Morin believes library best practices should be task specific.[4] Best practices and guidelines provide opportunities to define quality services and introduce new assessment measures, assist with task delineation, save staff time by outlining scenarios and options, set expectation levels clearly, smooth staffing transitions, and identify gaps in understanding.[5] Lankes suggests that establishing best practice within VR services is advantageous to fostering better linkage between research and practice.[6] Lankes also raises the concern that money is being invested in services, software, and planning without the benefit of clear empirical research pointing to best practices, and without a clear understanding of the advantages and shortcomings of these reference services.[7]
Morin suggests that VR service best practice can be used as a guide to ensure consistency during the implementation of a VR service, and recommends that best practices and guidelines can be developed by:

- reviewing digital reference guidelines for comparable reference services;
- assessing existing reference policies and guidelines to identify what is being emphasised;
- examining current practices by drawing on librarians' tacit knowledge;
- comparing the organisation's mission and goals to patrons' needs; and
- utilising any new software or recent organisational changes to improve processes.[8]

Kresh supports the need to establish standards based on best practice when developing a VR service. She also emphasises that the service's vision or mission, staffing needs, and patron needs should be used to assess what is required for it to be effective.[9] Strong proposes that there are likely no formulaic answers to delivering quality VR services and encourages people to think creatively about solutions in the virtual space.[10] Strong also suggests that an evolving digital reference services model should be adopted.[11] Additionally, Wasik provides a structured six-step process that organisations should follow in order to implement a successful VR service:

- informing, where preliminary research into areas of expertise and existing service areas is conducted;

- planning, where procedures, methods, and policies that reflect the overall organisational goals are developed;
- training, including developing a training plan to prepare staff for the service;
- prototyping, where the service is tested and modified before launch;
- contributing, which involves publicity and resource development for service support; and
- evaluating, which includes regular service evaluation to identify improvement opportunities.[12]

Prior to establishing a chat service, Radford and Kern advocate conducting a user needs assessment to determine the optimal hours and days of service. They also recommend evaluating software needs, vendor options, and institutional technology needs, and suggest speaking to similar institutions that have already introduced a VR service. They go on to suggest that it is not unreasonable to allow a minimum of two years to pilot or establish a new virtual chat service before determining its success or otherwise.[13]

Currently, further research is required to determine the most effective tools a VR service should provide and how a VR service should operate in order to achieve best practice. Research is needed to determine the types of questions best suited to different media and the questions that can be adequately answered without human intervention.[14] Gross, McClure, and Lankes state that the literature indicates that email reference works best for ready reference or simple factual questions that require few resources and can be stated in an unambiguous way.[15] However, they note that Short Messaging Services (SMS), video conferencing, chat, and other applications from the commercial sector still require evaluation from libraries in order to determine their usefulness across the range of user questions. They go on to propose that new VR service models are needed for state-of-the-art reference service and improvement in quality across the range of VR services.[16]

VIRTUAL REFERENCE SERVICE TOOLS

Email and chat

Controversy exists within the literature regarding the viability of chat services in cost/benefit terms. Increasingly, email supported by a Frequently Asked Questions (FAQ) database is suggested as a preferred and more cost-effective means of VR service provision.
There is also discussion regarding the number of support hours a VR service should provide,[17] as patrons grow to expect that email and chat VR services will be available over extended hours of operation.[18] The inclusion of a chat facility within the VR service has been proposed by Zanin-Yost, who suggests chat and email patrons should:

- be asked questions to clarify the librarian's understanding of the patron's needs;



- be provided with a variety of accessible information;
- be asked if their question was answered;
- be referred to a library subject specialist if required; and
- for chat patrons, be sent messages which summarise the librarian's activities.[19]

Ciccone and VanScoy suggest patrons should be sent scripted 'please hold' messages, and that instant messaging should be utilised to ask other librarians for back-up assistance during busy chat periods.[20] Barbier and Ward suggest that 'best chat of the month' transcripts could be used to highlight good chat techniques and to monitor quality. They suggest that chat transcripts that indicate software difficulties, difficulty with chat techniques, or rudeness should be referred to the service coordinator, who can assist reference staff to handle such problems.[21]

Coffman and Arret note that there are ways to reduce the cost of chat reference services and that alternative services may be preferred. Chat reference service costs may be reduced by staffing the service from the regular reference desk, contracting out staff, joining consortia, or by using free or low-cost systems like AOL, MSN, or Yahoo! Instant Messaging to provide the service.[22] Similarly, Radford and Kern suggest using free instant messaging software to reduce chat service cost.[23] Coffman and Arret point out that telephone reference services, better email services, and self-service capabilities exist as alternatives to a chat reference service.[24] In contrast, Radford and Kern recommend providing telephone, email, and chat within the VR service, and encouraging overlap and referrals between the range of VR service tools.[25]

An FAQ database has been proposed as a way of improving VR services. Allowing patrons to self-serve can reduce the staff numbers and hours required for a chat or email VR service, and reduce the number of repetitive or straightforward operational questions handled by staff. Pomerantz, Nicholson, Belanger, and Lankes studied the paths VR services take in a general process model of asynchronous VR. They found that 35% of respondents wanted the FAQ to be automatically searched when a question was received, and 15% of respondents wanted access to stored FAQs.[26]
Although Ciccone and VanScoy state that developing a knowledge base of reference answers is useful for answering questions and making referrals, they suggest that it is not useful if staff are primarily conducting reference interviews and instructing patrons. While Gross, McClure, and Lankes suggest that providing canned question responses and developing FAQ pages may meet user needs in specific situations, these approaches need to be assessed for their overall effectiveness and positive impact on the service more generally.[27]

VIRTUAL REFERENCE SERVICE PROVISION

The literature identifies a range of staff attributes, service levels, and training that are necessary to provide a successful VR service, which may be delivered as a stand-alone operation or in cooperation with other agencies.

Independent versus collaborative service provision

Virtual reference can be provided as an independent operation or as a collaborative venture with others. A collaborative VR service comprises an online network of libraries using their cumulative local knowledge and collections to provide VR services to patrons from any of the member libraries.[28] Independent chat service provision refers to the provision of a service within a single location.

Radford and Kern's case study of six independent and three collaborative chat reference services identified that the major reasons for chat service discontinuation are funding problems and low volumes of patronage.[29] Bailey-Hainer argues that collaborative chat services provide increased chat service hours for a lower cost and are thus more likely to remain viable.[30] However, Gross, McClure, and Lankes report that collaborative chat services present their own cost issues. One example is the matter of how the cost of developing, managing, and providing a collaborative chat service is to be shared amongst partners.[31] In terms of what is required to provide a successful collaborative chat service, Radford and Kern suggest that a full-time coordinator, a variety of libraries, and a strong commitment from all libraries within the service are necessary. They go on to report that libraries within a collaborative service need to encourage ongoing communication and support amongst themselves; have aligned institutional missions, staffing patterns, and user groups; and share a collaborative chat service vision.[32]

Staffing and staff training

In any service, Strong suggests that staff should be proactive, knowledgeable, well-trained active listeners, able to respond to all inquiries, and neutral in their opinions about the information provided or, in cases where personal opinions are provided, willing to acknowledge them as personal opinions.[33]
Stormont, Boyer, and Francoeur suggest that staff demonstrate keyboarding skills, a multi-tasking ability, good searching skills, an ability to write concise messages, and an ability to deal with stress and demanding users.[34] Radford recommends staff also possess interpersonal skills, such as techniques for rapport building, compensation for the lack of nonverbal cues, strategies for relationship development, evidence of deference and respect, face-saving tactics, and greeting and closing rituals.[35]

Virtual Reference staff service models have also been proposed. Chat is often performed away from the physical reference desk, with additional staff hired for service provision due to the demands of chat services.[36] McClennen and Memmot suggest the VR service be staffed using a specified role model for information professionals, including filterer, answerer, administrator, and coordinator. Different staff are employed for each of the four roles: the filterer responding to repeated and trivial questions; the answerer concentrating on more complicated questions; the administrator ensuring smooth service operation, adding answers to the FAQ database, creating user accounts, and performing other technical tasks; and

the coordinator defining and implementing policies and procedures, and initiating service improvement.[37]

Similarly, Strong proposes a triage model for VR service staffing, with virtual reference questions grouped into three broad categories: directional and inquiry; strategy and tutorial; and research assistance. Directional and inquiry questions are answered by library assistants at the first level, strategy and tutorial questions are answered by second-level librarians, while questions that require subject expertise are answered by third-level subject specialists.[38] All of these approaches require considerable management, staffing resources, and administrative effort in order to ensure they can be implemented in a practical and cost-effective manner.

VR service staff training recommendations have also been described. Cheat sheets, online tutorials, the provision of an on-call person to assist with technical difficulties, flexible exercise completion times, and a variety of training techniques to cater for different learning styles have been identified as effective approaches for conducting online VR training.[39] Traditional presentations, hands-on practice, written training documentation, and one-on-one training sessions have also been recommended.[40] Ciccone and VanScoy suggest practising responses to VR queries, discussing strategies for dealing with multiple patrons at once, and considering ways of handling problem patrons.[41] Furthermore, the provision of centralised and similar training levels for all VR service staff, and staff training that includes train-the-trainer sessions, ongoing training opportunities, transcript review, and high levels of staff communication, have been recommended.[42]

EVALUATING VIRTUAL REFERENCE SERVICES

Identifying VR service best practice requires service evaluation. Findings on VR service tools, service provision, staffing, and staff training highlight some best practice recommendations.
Service evaluation methods include techniques borrowed from traditional reference, evaluation guidelines, new approaches, and assessment that moves beyond traditional conceptions of reference services.[43] Gross, McClure, and Lankes propose Murfin and Bunge's four methods for assessing cost effectiveness in academic libraries as a starting point for VR service evaluation focusing on costs.[44] These methods include:

1. measuring the full cost of reference transactions;
2. using a reference service cost-effectiveness index based on success, helpfulness, accessibility, and time or cost;
3. determining the cost or time taken for each successful question; and
4. developing a cost-benefit formula.[45]

Other traditional reference service measures include the input/output model that measures inputs and service utilisation in small libraries, and functional cost analysis to define and allocate the costs of service provision.[46]
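The cost-focused measures described above reduce to simple arithmetic once the underlying figures have been collected. The sketch below is a hypothetical illustration only: the function names, the 0-1 rating scales, and the averaging used for the index are this editor's assumptions, not Murfin and Bunge's actual formula, which is not reproduced in this paper.

```python
# Illustrative sketch of two cost-focused measures: cost per successful
# question (method 3) and a simple composite cost-effectiveness index
# (method 2). The index weighting here is a hypothetical stand-in.

def cost_per_successful_question(total_cost: float, successful: int) -> float:
    """Method 3: total service cost divided by successfully answered questions."""
    if successful == 0:
        raise ValueError("no successful questions recorded")
    return total_cost / successful

def effectiveness_index(success_rate: float, helpfulness: float,
                        accessibility: float, cost_per_question: float) -> float:
    """Method 2 (illustrative only): average the quality ratings (0-1 scales)
    and divide by cost per question, so higher quality at lower cost scores higher."""
    quality = (success_rate + helpfulness + accessibility) / 3
    return quality / cost_per_question

# Example figures (invented for illustration):
cpq = cost_per_successful_question(total_cost=12_000.0, successful=1_500)
print(cpq)                                      # 8.0 dollars per successful question
print(effectiveness_index(0.9, 0.8, 0.7, cpq))  # 0.1
```

Whatever weighting a library adopts, the value of such an index lies in tracking it over time or comparing services, not in its absolute magnitude.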

In addition, Morin, Nilsen, and Shachaf and Horowitz propose using the International Federation of Library Associations (IFLA) digital reference guidelines, and the Reference and User Services Association (RUSA) guidelines for implementing and maintaining VR services, in order to assess service effectiveness.[47] The IFLA digital reference guidelines cover service administration and the practice of VR, and are designed to promote international best practice.[48] These guidelines include general principles for the practice of VR, content guidelines for the service, chat guidelines, and specific guidelines for conducting chat sessions. The RUSA guidelines for implementing and maintaining VR services are also designed to assist libraries and consortia to establish VR operations. The RUSA guidelines advise on preparing, providing, and organising VR services, including privacy issues.[49] According to Morin, although the transition from implementation to a sustainable digital reference service can take years, best practices and guidelines can help ensure consistent service by reflecting practical customs, procedures, and habits during the implementation period.[50]

McClure and Lankes propose four types of measurement to understand in order to set VR service benchmarks.
These include outcome measures, process measures, economic measures, and user satisfaction:

- outcome measures are described as the quality of the answers, and should be assessed based on response accuracy, response appropriateness to the user audience, the opportunities for interactivity, and the level of instruction provided in the response;
- process measures address process effectiveness and efficiency, based upon service accessibility, response timeliness, clarity of service procedures, percentage of questions answered, staff training and review methods, and the service's review and evaluation methods;
- economic measures assess cost effectiveness, taking into account VR session costs, the infrastructure required to support quality VR services, and the impact on other library expenditures; and
- user satisfaction is the degree to which users of a VR service are satisfied with the process and results, which can be assessed using indicators such as accuracy, timeliness, staff behaviour, technical considerations, and physical facilities.[51]

In their consideration of cost effectiveness, Lankes, Gross, and McClure suggest both utilisation and technical standards be considered alongside costing, statistics, and measurement for VR services.[52] They recommend embedding quality standards and assessment data into software and infrastructure, and go on to describe utilisation standards that deal with VR service use, delivery, and success.[53] Technical standards include the use of 'hard tools', such as software, hardware, protocols, and other standards, as well as 'soft tools', such as metadata and organisational schema. They propose six quality standards, including courtesy, accuracy of responses, users' satisfaction with their interaction with the service, percentage of repeat users, user groups' awareness of the service, and cost per service transaction. They also propose five types of performance success measures for quality standards,

including descriptive statistics and measures to determine service scale and scope, log analysis, user satisfaction measures, ongoing service cost analysis, and staff time expended.[54]

Rigby, Smithers, and Zhang propose additional methods of VR service evaluation. They outline three key performance indicators (KPIs) to enable evaluation and comparison of operators in the New South Wales (Unison) and Council of Australian University Librarians (CAUL) libraries' collaborative VR services. The KPIs include the attributes, analytical skills, support, and staffing required to run the service, as well as the knowledge and ability to match resources to user needs. They identify new standardised service measures, including a client satisfaction survey, the Rodski survey, service staff training methods, and weekly statistical surveys to measure the KPIs.[55]

In addition, Lou recommends using a chat reference evaluation framework to evaluate the service and its reception by users (see Appendix 1). He suggests chat reference be evaluated from the service's perspective using statistics and the content of chat reference transactions, and proposes that user acceptance be evaluated by measuring users' awareness, preferences, status, and feedback perceptions. Lou also recommends using chat reference evaluation manuals to supplement this general framework.[56]

Continuous improvement of Virtual Reference services

Although significant work has been done in this area, further research is needed to determine the most effective models for service evaluation and continuous improvement. Gross, McClure, and Lankes highlight that most VR literature is anecdotal or editorial in nature, includes weak methods, and provides limited analysis of services. They suggest that case studies often do not provide measures or assess services, propose that evaluation should be approached from a conceptual point of view, and indicate that few people are actually evaluating services.
They also found that the scholarly literature includes few economic models, few ways to account for VR services, and little participation from users in service design, development, and implementation. In addition, they note that VR services need to do better in assigning costs and assessing the most efficient way of providing services. For collaborative approaches, they stress the importance of an awareness of how costs are shared in setting up the service, and emphasise the importance of using input from users to improve service delivery, assess acceptable turnaround times, and inform the design of VR service marketing.[57]

Radford found that most studies focus on question accuracy and efficiency. She proposes that more research is needed to understand and improve chat encounter quality, and that more empirical studies are needed to address unanswered or partially answered research questions. Future research should involve online surveys, face-to-face interviews, and focus groups with librarians, clients, and non-users to provide more definitive answers about the relationship between content and the quality of responses.[58]


DISCUSSION

While some best practices have been identified within the scholarly literature for chat and email service staffing, staff training, and independent and collaborative provision of services, further evaluation of services is needed to build a clearer picture of VR service best practice. Debate exists within the literature about the use of chat and whether chat services are viable in cost-benefit terms. Although other suggestions exist for reducing chat service cost and improving chat service effectiveness, increasingly, email supported by an FAQ database is seen as the preferred and most cost-effective means for providing this service. This approach can be argued to be more cost effective than providing a chat service because patrons are able to self-serve, and email supported by an FAQ database encourages patrons to search for information within the database before contacting library staff. More time is generally spent answering a chat enquiry than an email enquiry because chat enquiries are often less clearly defined and more interactive, resulting in a prolonged encounter; chat enquiries often require time to be spent communicating back and forth with patrons in order to clarify the question before a comprehensive answer can be provided. Further research is required in order to determine whether or not chat services are an effective and economical option for resource-constrained libraries. University libraries interested in providing chat services should consider utilising free chat service providers as one option to help reduce their VR service costs.

RECOMMENDATIONS

Based on this review, further VR service evaluation needs to be undertaken in order to better understand the most effective approaches to adopt.
Specifically, online chat service evaluation should be undertaken in order to determine the cost effectiveness of such services. This review of the literature also suggests that an email service supported by a Frequently Asked Questions database is seen as the most cost-effective approach to VR service. Using the IFLA and RUSA guidelines is a useful approach to adopt in order to evaluate service provision. While traditional reference evaluation methods, the use of guidelines, and the development of new evaluation methods are suggested as tools for assessing service effectiveness, these methods have been criticised. Further research into appropriate methods for evaluating VR services is recommended.


NOTES

1. American Library Association Reference and User Services Association, Guidelines for Implementing and Maintaining Virtual Reference Services 2004 at http://www.ala.org/ala/mgrps/divs/rusa/resources/guidelines/ALA_print_layout_1_512081_512081.cfm viewed 21 August 2008.
2. K Nilsen & C Ross Evaluating virtual reference from the users' perspective Reference Librarian vol 46 no 95/96 2006 pp53-79 at http://www.haworthpress.com.ezp01.library.qut.edu.au/store/E-Text/View_EText.asp?sid=WP45GTWEF8TR8PDW9EPNECWH0QG3D146&a=3&s=J120&v=46&i=95%2F96&fn=J120v46n95%5F05 viewed 13 August 2008.
3. A C Morin Chapter 9: Approaching Best Practices and Guidelines for Digital Reference in R Lankes, J Janes, L C Smith & C M Finneran (eds) The Virtual Reference Experience: Integrating Theory into Practice London Neal-Schuman Publishers Inc 2004.
4. Morin ibid.
5. Morin loc cit.
6. R D Lankes Digital Reference Research: Fusing Research and Practice Reference & User Services Quarterly vol 44 no 4 2005 pp320-326.
7. Lankes ibid., p320.
8. Morin loc cit.
9. D Kresh Virtually Yours: Thoughts on Where We Have Been and Where We Are Going with Virtual Reference Services in Libraries in B Katz (ed) Digital Reference Services USA The Haworth Information Press 2002/2003.

10. G E Strong If we change it will they come? Reference Services Review vol 24 no 3 2006 p339.
11. Strong ibid., pp333-339.
12. J M Wasik Building and maintaining digital reference services Eric Digest 2000 at http://www.ericdigests.org/1999-4/digital.htm viewed 6 November 2008.
13. M L Radford & K M Kern A multiple-case study investigation of the discontinuation of nine chat reference services Library and Information Science Research vol 28 no 4 2006 pp521-547.
14. M Gross, C R McClure & R D Lankes Assessing Quality in Digital Reference Services: An Overview of the Key Literature on Digital Reference in R D Lankes, C R McClure, M Gross & J Pomerantz (eds) Implementing Digital Reference Services: setting standards and making it real London Facet Publishing 2003.
15. Gross, McClure & Lankes op cit., p178.
16. Gross, McClure & Lankes loc cit.
17. Radford & Kern loc cit.
18. M Ruppel & J C Fagan Instant messaging reference: Users' evaluation of library chat Reference Services Review vol 30 no 3 2002 pp183-197.

19. A Zanin-Yost Digital Reference: What the past has taught us and what the future will hold Library Philosophy and Practice vol 7 no 1 2004 pp1-16.
20. K Ciccone & A VanScoy Managing an established virtual reference service Internet Reference Services Quarterly vol 8 no 1/2 2003 pp95-105.
21. P Barbier & J Ward Ensuring quality in a Virtual Reference environment Community and Junior College Libraries vol 13 no 1 2004 pp55-71.
22. Coffman & Arret To chat or not chat: Taking yet another look at virtual reference loc cit; Tenopir Chat's positive side loc cit.
23. Radford & Kern loc cit.
24. Coffman & Arret To chat or not chat: Taking yet another look at virtual reference loc cit.
25. Radford & Kern loc cit.
26. Coffman & Arret To chat or not chat: Taking yet another look at virtual reference loc cit; G Chowdhury & S Margariti Digital reference services: a snapshot of the current practices in Scottish libraries Library Review vol 53 no 1 2004 pp50-60.
27. M Gross, C R McClure & R D Lankes loc cit.
28. Ciccone & VanScoy loc cit.
29. Radford & Kern loc cit.
30. B Bailey-Hainer Virtual Reference: Alive and well Library Journal vol 130 no 1 2005 pp46-47.
31. Gross, McClure & Lankes loc cit.
32. Radford & Kern loc cit.
33. Strong loc cit.
34. S Stormont Going where the users are: Live digital reference Information Technology and Libraries vol 20 no 3 2001 pp129-134; J Boyer Virtual reference at North Carolina State: The first one hundred days Information Technology and Libraries vol 20 no 3 2001 pp122-129; S Francoeur An analytical survey of chat reference services Reference Services Review vol 29 no 3 2001 pp198-203.
35. M L Radford Encountering Virtual Users: A Qualitative Investigation of Interpersonal Communication in Chat Reference Journal of the American Society for Information Science and Technology vol 57 no 8 2006 pp1046-1059.
36. Boyer loc cit; S Coffman We'll take it from here: further developments we'd like to see in virtual reference software Information Technology and Libraries vol 20 no 3 2001 pp149-153; M Marsteller & P Neuhaus The chat reference experience at Carnegie Mellon University 2001 at http://www.contrib.andrew.cmu.edu/~matthewm/ALA_2001_chat.html viewed 5 November 2008; Stormont loc cit.
37. M McClennen & P Memmot Roles in digital reference Information Technology and Libraries vol 20 no 3 2001 pp143-148.
38. Strong loc cit.

39. E G Abels, P B Kantor & T Saracevic Studying the cost and value of library and information science: Applying functional cost analysis to the library in transition Journal of the American Society for Information Science vol 47 no 3 1996 pp217-228.
40. A Kawakami & P Swartz Digital reference: training and assessment for service improvement Reference Services Review vol 31 no 3 2003 pp227-236.
41. Ciccone & VanScoy loc cit.
42. T A Stewart Mystified by training? Here are some clues Fortune vol 143 no 7 2001 p184; Kawakami & Swartz loc cit.
43. Radford & Kern loc cit.
44. A Kasowitz, B A Bennett & R D Lankes Quality standards for digital reference consortia Reference and User Services Quarterly vol 39 no 4 2000 pp355-363; P Shachaf & S Horowitz Virtual reference service evaluation: Adherence to RUSA behavioural guidelines and IFLA digital reference guidelines Library and Information Science Research vol 30 2008 pp122-137; T A Peters Current opportunities for the effective meta-assessment of online reference services Library Trends vol 49 no 2 2000 pp334-349; Morin loc cit; K Nilsen Comparing users' perspectives of in-person and virtual reference New Library World vol 107 no 3/4 2006 pp91-104; T Saracevic & L Covi Challenges for digital library evaluation in N K Roderer & D H Kraft (eds) Proceedings of the 37th American Society for Information Science Annual Meeting Washington DC American Society for Information Science 2000.
45. Gross, McClure & Lankes loc cit.
46. M Murfin & C Bunge A cost effectiveness formula for reference service in academic libraries Washington DC Council of Library Resources 1989.
47. Morin loc cit; Nilsen loc cit; Shachaf & Horowitz loc cit.
48. International Federation of Library Associations Reference and Information Services Section, IFLA Digital Reference Guidelines 2005 at http://www.ifla.org/VII/s36/pubs/drg03.htm viewed 21 August 2008.
49. American Library Association loc cit.
50. Morin op cit., p185.
51. C R McClure & R D Lankes Assessing quality in digital reference services: A research prospectus 2001 at http://quartz.syr.edu/quality/Overview.htm viewed 5 November 2008.
52. R D Lankes, M Gross & C R McClure Cost, statistics, measures, and standards for digital reference services: A preliminary view Library Trends vol 51 no 3 2003 pp401-413.
53. Lankes, Gross & McClure ibid.
54. Lankes, Gross & McClure loc cit.
55. A Rigby, A Smithers & S Zhang Unison Digital Reference Key Performance Indicators Project Report 2006 at http://www.caul.edu.au/best-practice/digitalreference2006.doc viewed 8 August 2008.

Australian Academic & Research Libraries

September 2009

203

Kate Shaw, Amanda Spink

56. L Luo Chat reference evaluation: a framework of perspectives and measures Reference Services Review vol 36 no 1 2008 pp71-85.

57. Gross, McClure & Lankes loc cit.

58. Radford loc cit.



APPENDIX 1: Chat reference evaluation framework


The framework is organised by evaluation perspective, evaluation angle and variable; the measurement method for each variable is shown in parentheses.

Service perspective

Statistics of chat reference transactions
- Total number of chat sessions within the period of time that was examined (system logs or manual archiving)
- Number of successfully and unsuccessfully connected chat sessions (system logs or manual archiving)
- Frequency of chat sessions (system logs or manual archiving)
- Chat session length (system logs or manual archiving)
- Turns taken in chat session (analysis of transcripts)
- Word number in user's question and librarian's response (analysis of transcripts)
- Referral webpage (system logs)

Content of chat reference transactions
- Type of users' questions (analysis of transcripts)
- Quality of chat reference transactions (analysis of transcripts):
  1. Quality of the answer to the user's question: completeness of the answer; correctness of the answer; appropriateness of referral; delivery of the answer; effectiveness of communication; tones and attitudes exhibited by the librarian
  2. Evidence of reference interview
  3. Completeness of chat session

User perspective

Users' awareness and preferences
- Users' preferences of reference service venues (survey)
- Users' awareness of the availability of chat reference service (survey)

User status
- The role in which users ask their questions to chat reference service (survey)
- Users' motivation to use chat reference service (survey)

User perceptions or feedback
- User perceptions on the interface of chat reference service (user studies)
- User satisfaction with chat reference service (survey and analysis of transcripts)
- Users' willingness to return (survey)
- User perceptions on the advantages of chat reference service (survey)
- User perceptions on the disadvantages of chat reference service (survey)
- Users' use of information received from chat reference service (follow-up survey or interview)

