Usage Statistics

Jian Wang, Portland State University Library

Hannah McKelvey, Montana State University Library

The purpose of this document is to meet the charge of the SPARC Data Analysis Working Group (DAWG) and provide a high-level overview of why libraries collect online resource usage statistics, what types of usage statistics they can collect, and ways they collect this usage data. 

Usage statistics in libraries

Usage statistics deal with collecting, analyzing, and presenting data (Mood, 2018). Libraries collect various statistics for planning, developing, and evaluating their services. Some examples include circulation, visits, collection, acquisitions, electronic resource usage, reference/chat transactions, and library instruction sessions. This document focuses on electronic resources usage statistics in relation to budget allocation, collection development, deselection and cancellation decisions, and negotiations with vendors. In this context, library usage statistics can be defined as a “specified set of data elements to help libraries measure the use of their electronic resources” (Baker & Read, 2008) to evaluate subscriptions and to determine value and return on investment (ROI) for libraries.

Why do we need to collect usage statistics?

Libraries collect usage statistics for a variety of reasons, from helping them understand the value of their subscription resources to requirements for high-level reporting. Below are common scenarios where usage statistics would be of use, examples of standard reports that libraries are asked to complete, and a non-exhaustive list of metrics that can be calculated with usage statistics.

Common reporting scenarios

Often, libraries are required to report usage statistics at the state and university levels. For example, Portland State University (PSU) Library has a separate, off-site storage facility which houses a significant number of books. The university’s Office of Risk Management requests that the PSU Library send an annual storage inventory report for insurance purposes (End-of-Year Insurance Evaluation Stats). If the storage facility were to “go up in flames,” the University would need as much detail as possible about what was lost in order to file an insurance claim. In addition to insurance purposes, library usage statistics can be used:

  • for negotiations with publishers
  • to view trends over time
  • to inform collection management decisions
  • to measure turnaways for renewal, deselection, and package-swap decisions
  • to help justify library materials budget allocations
  • to advance university enrollment, retention, and recruiting efforts

Standard reports

Each year, libraries are asked to complete standard reports so that library data can be aggregated and shared nationally. The two most common national reports come from ACRL and IPEDS, both of which request data elements such as:

  • Total Digital/Electronic Circulation or Usage

Usage statistics calculations and uses

Usage statistics can be used to calculate specific metrics or generate visuals that help libraries better understand and assess how their online resources are being used. Some examples include:

  • Data visualization: term used to describe all of the ways people transform data into visual representations (Duke University Libraries, n.d.).
  • Cost Per Use: the total subscription cost divided by the number of times a resource was used yields a cost-per-use figure. One limitation of cost per use is that it does not account for variability in the nature of usage. Unsub is an alternative data-analysis tool that helps libraries analyze and estimate the cost and value of serials subscriptions (Hinchliffe, 2020; SPARC, 2020; Kendrick, 2019; Arthur, 2018; Bucknall et al., 2014; Glasser, 2018; Huffine, 2015; Hulbert et al., 2011; King & Tenopir, 2013; Smulewitz et al., 2013).
  • Turnaways: usage data for turnaways can help libraries identify gaps or an “unmet need”. There are some caveats concerning turnaway counts. Please review the Turnaways article for more information about key factors to consider when reviewing vendors’ turnaway reports.
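As a concrete illustration of the cost-per-use calculation described above, the sketch below divides each subscription's cost by its use count. The subscription names and figures are invented for illustration; in practice the usage counts would typically come from COUNTER reports.

```python
# Cost-per-use sketch with made-up subscription figures; real usage counts
# would come from COUNTER reports (e.g., Total_Item_Requests).
subscriptions = {
    "Journal Package A": {"cost": 12500.00, "uses": 4300},
    "Database B": {"cost": 8000.00, "uses": 150},
}

# Total subscription cost divided by number of uses, per resource.
cost_per_use = {
    name: round(s["cost"] / s["uses"], 2) for name, s in subscriptions.items()
}

for name, cpu in cost_per_use.items():
    print(f"{name}: ${cpu:.2f} per use")
```

A side-by-side comparison like this makes low-use, high-cost resources (candidates for cancellation review) stand out immediately, though, as noted above, the figure says nothing about the nature of each use.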

Please review the Data Analysis Glossary for further examples or metric definitions.

What types of usage statistics do we (and can we) collect?

There are several specific metrics that libraries can collect to measure the usage of their online resources.

COUNTER usage metrics

COUNTER (Counting Online Usage of NeTworked Electronic Resources) provides the standard that enables vendors and publishers to supply their library customers with consistent, credible, and comparable usage data. Using COUNTER reports, libraries can get statistics about the number of downloads, turnaways, and more. For more information about COUNTER reports, please review the COUNTER 4 to 5 Overview.
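As a rough illustration of working with COUNTER data, the sketch below sums usage by metric type from rows shaped like a COUNTER 5 journal report. The titles and counts are invented, and real tabular reports carry a multi-row header block before the data table, which is omitted here.

```python
import csv
import io
from collections import Counter

# Simplified rows in the shape of a COUNTER 5 journal report; real files
# include a multi-row header section before the table.
sample = io.StringIO(
    "Title,Metric_Type,Reporting_Period_Total\n"
    "Journal of Examples,Total_Item_Requests,120\n"
    "Journal of Examples,Unique_Item_Requests,95\n"
    "Annals of Samples,Total_Item_Requests,40\n"
)

# Sum the reporting-period totals for each metric type across titles.
totals = Counter()
for row in csv.DictReader(sample):
    totals[row["Metric_Type"]] += int(row["Reporting_Period_Total"])

print(totals["Total_Item_Requests"])  # combined requests across all titles
```

Aggregating by metric type like this is the first step toward the cost-per-use and trend calculations discussed earlier.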

Link resolver

Libraries using a link resolver may be able to obtain and utilize link resolver usage data. This data can provide valuable insight into how patrons are accessing and using library resources, including which sources patrons use to reach the link resolver.

Proxy server logs

A proxy server is a service that libraries use to authenticate their users in order to provide access to many online databases and publisher websites (Day, 2017). Libraries can utilize proxy server logs to learn more about how their users log in to their systems, show how library resources contribute to student success, spot excessive downloads, or identify the most accessed journals. OCLC, which provides EZproxy, offers EZproxy Analytics software (for an additional fee) to libraries that use EZproxy. Additionally, Brian Erb, of the Florida Academic Library Services Cooperative, has developed a LibGuide that explains how to use EZproxy login data for library assessment.
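As an illustration of mining proxy logs, the sketch below counts the most-requested resource hosts from a few invented log lines in an Apache-style format. EZproxy's actual log layout depends on the LogFormat directive in its configuration, so the parsing pattern here is an assumption, not the definitive format.

```python
import re
from collections import Counter

# Hypothetical log lines in an Apache-style format; real EZproxy output
# depends on the LogFormat directive in its configuration.
lines = [
    '10.0.0.1 - user1 [01/Mar/2021:10:00:00 -0700] "GET https://www.jstor.org/stable/123 HTTP/1.1" 200 5120',
    '10.0.0.2 - user2 [01/Mar/2021:10:05:00 -0700] "GET https://www.jstor.org/stable/456 HTTP/1.1" 200 2048',
    '10.0.0.3 - user3 [01/Mar/2021:10:07:00 -0700] "GET https://link.springer.com/article/789 HTTP/1.1" 200 4096',
]

# Extract the host portion of each requested URL and tally requests per host.
host_pattern = re.compile(r'"GET https?://([^/]+)/')
hosts = Counter(m.group(1) for line in lines if (m := host_pattern.search(line)))

print(hosts.most_common(2))  # most-requested resource platforms first
```

The same tallying idea extends to counting logins per user group or flagging unusually high download volumes from a single session.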

Google Analytics

Google Analytics was originally created for commercial websites. However, libraries now use it to track strategic benchmarks, also known as key performance indicators (KPIs), that show how users interact with resources through the library website. For example, KPIs can help libraries monitor user activity in research databases by looking at bounce rates, visits, selections per page view, average time on page, and visit depth (Yeager, 2017). More information can be found at the Google website.
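For instance, bounce rate is simply the share of sessions that viewed a single page, and visit depth is the average number of pages per session. A toy calculation over hypothetical per-session page counts:

```python
# Hypothetical page-view counts per session for a library database page.
pages_per_session = [1, 4, 1, 2, 7, 1, 3]

# A "bounce" is a session that viewed exactly one page.
bounces = sum(1 for p in pages_per_session if p == 1)
bounce_rate = bounces / len(pages_per_session)

# Visit depth: average pages viewed per session.
avg_depth = sum(pages_per_session) / len(pages_per_session)

print(f"bounce rate: {bounce_rate:.0%}, average visit depth: {avg_depth:.1f} pages")
```

Analytics platforms report these figures directly, but computing them by hand clarifies what the dashboard numbers actually measure.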

How do we gather collection usage statistics?

There are several ways that libraries can access their usage statistics: through openly available resources, subscription tools that aggregate usage data, or their library management systems.

Third-party administrative sites

Most publishers (ProQuest, Springer Nature, IEEE, Elsevier, etc.) offer an administrative console from which libraries can pull their own usage statistics, and several third-party platforms provide similar reporting capabilities.

Usage statistics aggregators (*requires a subscription)

There are several subscription and open source products that libraries can utilize to gather their usage statistics in one location.

Library management systems

Many library management systems (LMSs) include access to analytics software that libraries can utilize.

To see a full list and descriptions of LMSs, please review Marshall Breeding’s 2021 Library Systems Report.

Protocols and standards

Lastly, there are protocols and standards in place that guide and help define how library usage statistics should be generated, gathered, and reported. Among the most well-known is this standard from NISO (National Information Standards Organization):

  • SUSHI (Standardized Usage Statistics Harvesting Initiative): a standard protocol (ANSI/NISO Z39.93) that electronic resource management (ERM) systems, among others, can use to automate the transport of COUNTER-formatted usage statistics. It can also be used to retrieve non-COUNTER reports that meet the specified requirements for retrieval by SUSHI.
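Under COUNTER Release 5, SUSHI is implemented as a RESTful API: a harvester issues an HTTP GET for a report path with credential and date parameters. The sketch below only constructs such a request URL; the base URL and credential values are placeholders, and some platforms additionally require an api_key parameter.

```python
from urllib.parse import urlencode

def sushi_report_url(base_url, report, customer_id, requestor_id,
                     begin_date, end_date):
    """Build a COUNTER_SUSHI (Release 5) report request URL.

    All argument values here are placeholders; each platform publishes
    its own SUSHI base URL and credential requirements.
    """
    params = urlencode({
        "customer_id": customer_id,
        "requestor_id": requestor_id,
        "begin_date": begin_date,
        "end_date": end_date,
    })
    return f"{base_url}/reports/{report}?{params}"

# Placeholder endpoint; a real harvest would GET this URL and parse the
# COUNTER-formatted JSON response.
url = sushi_report_url("https://sushi.example.org/r5", "tr_j1",
                       "CUST1", "REQ1", "2021-01-01", "2021-12-31")
print(url)
```

ERM systems automate exactly this kind of request on a schedule, which is what removes the need to log in to each publisher's administrative site by hand.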

References & resources

Arthur, M. (2018). Managing a comprehensive cost-per-use project in a large academic library. Serials Review, 44(4). https://doi.org/10.1080/00987913.2018.1558936

Association of College & Research Libraries. (2020). ACRL Academic Library Trends and Statistics: 2020 Survey Information. https://acrl.libguides.com/stats/surveyhelp

Baker, G., & Read, E. (2008). Vendor‐supplied usage data for electronic resources: a survey of academic libraries. Learned Publishing, 21, 48–57. Retrieved 15 September 2020, from https://onlinelibrary.wiley.com/doi/epdf/10.1087/095315108X247276.

Bernhardt, B. (2017). Leveraging vendor assessments of usage data. Serials Review, 43(3–4). https://doi.org/10.1080/00987913.2017.1369610

Bergstrom, T., Uhrig, R., & Antelman, K. (2018). Looking under the COUNTER for overcounted downloads. UC Santa Barbara: Department of Economics. Retrieved from https://escholarship.org/uc/item/0vf2k2p0

Blanchat, K. M. (2018). Beyond COUNTER-compliant: Ways to assess e-resources reporting tools. Serials Librarian, 74(1–4). https://doi.org/10.1080/0361526X.2018.1428464

Boukacem, C., & Schöpfel, J. (2012). Statistics usage by French academic libraries: A survey. Learned Publishing, 25, 271–278. Retrieved 15 September 2020, from https://onlinelibrary.wiley.com/doi/epdf/10.1087/20120406.

Bucknall, T., Bernhardt, B., & Johnson, A. (2014). Using cost per use to assess big deals. Serials Review, 40, 194–196. https://doi.org/10.1080/00987913.2014.949398

Conyers, A. (2010). Usage statistics and online behaviour. The E-Resources Management Handbook – UKSG. http://dx.doi.org/10.1629/9552448-0-3.2.2

Day, J. (2017). Proxy servers: Basics and resources. https://libtechlaunchpad.com/2017/04/25/proxy-servers-basics-and-resources/#:~:text=A%20proxy%20server%20is%20a%20service%20that%20libraries%20use%20to,online%20databases%20and%20publisher%20websites.

Duke University Libraries. (n.d.). Data visualization. https://library.duke.edu/data/data-visualization#:~:text=Data%20visualization%20is%20the%20term,our%20Duke%20Data%20Visualization%20LibGuide.

Glasser, S. (2018). Judging big deals—take two. Journal of Electronic Resources Librarianship, 30(1), 27–33. https://doi.org/10.1080/1941126X.2018.1443904.

Jamali, H. R., Nicholas, D., & Huntington, P. (2005). The use and users of scholarly e-journals: A review of log analysis studies. Aslib Proceedings: New Information Perspectives, 57(6), 554–571. https://doi.org/10.1108/00012530510634271.

Hinchliffe, L. J. (2020). Taking a big bite out of the big deal. The Scholarly Kitchen. Retrieved 15 September 2020, from https://scholarlykitchen.sspnet.org/2020/05/19/taking-a-big-bite-out-of-the-big-deal/

Huffine, R. (2015). Going beyond cost-per-use: Assessing the value of purchased resources. Online Searcher, 39, 54–58.

Hulbert, L., Roach, D., & Julian, G. (2011). Integrating usage statistics into collection development decisions. The Serials Librarian, 60, 158–163. http://doi.org/10.1080/0361526X.2011.556027

Integrated Postsecondary Education Data System. (n.d.). Academic Libraries Information Center. https://nces.ed.gov/ipeds/report-your-data/resource-center-academic-libraries

Kaufman, P., & Watstein, S. B. (2008). Library value (return on investment, ROI) and the challenge of placing a value on public services. Reference Services Review, 36(3), 226–231. https://doi.org/10.1108/00907320810895314

Kelly, B., Hamasu, C., & Jones, B. (2012). Applying return on investment (ROI) in libraries. Journal of Library Administration, 52(8), 656–671. https://doi.org/10.1080/01930826.2012.747383

Kendrick, C. (2019). Cost per use overvalues journal subscriptions. The Scholarly Kitchen. Retrieved from https://scholarlykitchen.sspnet.org/2019/09/05/guest-post-cost-per-use-overvalues-journal-subscriptions

King, D., & Tenopir, C. (2013). Linking information seeking patterns with purpose, use, value, and return on investment of academic library journals. Evidence Based Library and Information Practice, 8, 153–162. https://doi.org/10.18438/B8B02M

Lamothe, A. (2014). The importance of identifying and accommodating e-resource usage data for the presence of outliers. Information Technology & Libraries, 33(2), 31–44. https://doi.org/10.6017/ital.v33i2.5341

Luther, J. (2008). University investment in the library: What’s the return? A case study at the University of Illinois at Urbana-Champaign. Library Connect White Paper. http://hdl.handle.net/2142/3587

Malapela, T., & De Jager, K. (2018). Theories of value and demonstrating their practical implementation in academic library services. The Journal of Academic Librarianship. 44(6), 775-780. https://doi.org/10.1016/j.acalib.2018.09.018

Mezick, E. M. (2007). Return on investment: Libraries and student retention. The Journal of Academic Librarianship, 33(5), 561–566. https://doi.org/10.1016/j.acalib.2007.05.002

Mood, A. (2018). Statistics. Access Science. https://doi.org/10.1036/1097-8542.652400

Nicholas, D., & Huntington, P. (2003). Micro-mining and segmented log file analysis: A method for enriching the data yield from Internet log files. Journal of Information Science, 29(5), 391–404. https://doi.org/10.1177%2F01655515030295005

Pan, D., Ferrer-Vinent, I. J., & Bruehl, M. (2014). Library value in the classroom: Assessing student learning outcomes from instruction and collections. The Journal of Academic Librarianship, 40(3–4), 332–338. https://doi.org/10.1016/j.acalib.2014.04.011

Pan, D., Wiersma, G., Williams, L., & Fong, Y. S. (2013). More than a number: Unexpected benefits of return on investment analysis. The Journal of Academic Librarianship, 39(6), 566–572. https://doi.org/10.1016/j.acalib.2013.05.002

Peters, T. (2002). What’s the use? The value of e-resource usage statistics. New Library World, 103, 39-47. https://doi.org/10.1108/03074800210415050.

Tay, A. (2017). 4 different ways of measuring library eresource usage. http://musingsaboutlibrarianship.blogspot.com/2017/02/4-different-ways-of-measuring-library.html

Schufreider, B., & Romaine, S. (2008). Making sense of your usage statistics. The Serials Librarian, 54(3–4), 223–227. https://doi.org/10.1080/03615260801974164

Smith, K., & Arneson, J. (2017). Determining usage when vendors do not provide data. Serials Review, 43(1), 46–50. https://doi.org/10.1080/00987913.2017.1281788.

Smith, M. M., & Smith, J. A. (2016). What’s the use? A cost-per-use study of selected business databases. The International Information & Library Review, 48(1), 11–20. https://doi.org/10.1080/10572317.2016.1146037

Smulewitz, G., Celano, D., Andrade, J., & Lesher, M. (2013). ROI or bust: A glimpse into how librarians, publishers, and agents create value for survival. The Serials Librarian, 64, 216–223. https://doi.org/10.1080/0361526X.2013.761064

SPARC. (2020). Unsub gives libraries powerful evidence to walk away from big deals. https://sparcopen.org/news/2020/unsub-gives-libraries-powerful-evidence-to-walk-away-from-big-deals

Tucker, J.C. (2009). Collection assessment of monograph purchases at the University of Nevada, Las Vegas Libraries. Collection Management, 34(3), 157–181. https://doi.org/10.1080/01462670902962959

Watson, K., Evans, J., Karvonen, A., & Whitley, T. (2016). Capturing the social value of buildings: The promise of Social Return on Investment (SROI). Building and Environment, 103, 289–301. https://doi.org/10.1016/j.buildenv.2016.04.007

Yeager, H. J. (2017). Using EZproxy and Google Analytics to evaluate electronic serials usage. Serials Review: North Carolina Serials Conference, 43(3–4), 208–215. https://doi.org/10.1080/00987913.2017.1350312
