Interview with the author: David Stuart on Web Metrics for Librarians

Lauren Hays

January 02, 2024

David Stuart wrote the second edition of Web Metrics for Library and Information Professionals, which will be available from ALA Publishing. My interview with him is below.

Please introduce yourself to our readers.

My name is David Stuart and I am currently a bibliometrics officer at the University of St Andrews in Scotland. I previously worked as a Research Fellow in the Department of Digital Humanities at King’s College London and in the Statistical Cybermetrics Research Group at the University of Wolverhampton, where I also completed my PhD in webometrics (2004-2008).

Briefly summarize Web Metrics for Library and Information Professionals, Second Edition.

Web Metrics for Library and Information Professionals is an introduction to the wide range of web metrics that might provide useful insights to library and information professionals and their users. From exploring the impact of a website to investigating the networks on GitHub, it shows how we can gain evaluative and relational insights from a wide range of tools that are often freely available.

Why did you decide to write the first edition in 2014 and update it now?

When I wrote the first edition of Web Metrics for Library and Information Professionals, altmetrics was beginning to gain a lot of attention in the library community following the publication of the altmetrics manifesto (https://altmetrics.org/manifesto/). The rise of social media had provided large quantities of similarly structured content that could be mined for a wide range of text and network analyses, and this was beginning to be used for filtering and evaluative purposes. I wanted to place those altmetric discussions within the wider context of other web metric applications and methodologies that had come before and made use of different kinds of data.

Over the past ten years, many sites and services have changed, and new technologies have emerged, so a new edition was already overdue from a technical perspective.

What are two things that have changed the most with web metrics in the last 10 years?

For me, two of the biggest differences are the increased emphasis on metrics within organizations and the amount of code that is now openly available.

There has been growing interest in the use of metrics within organizations over the past decade, and web metrics are inevitably part of that. If an organization has taken the time to publish on the web, whether through a blog, a Twitter/X account, or whitepapers deposited in an online repository, it inevitably wants to know the impact that content is having and whether it is investing its time wisely. There is also a desire to tap into the insights available from the vast quantities of data online, to gain a competitive advantage by understanding what everyone is talking about or searching for. This means that whereas ten years ago the focus was primarily on promoting the potential of web metrics, now it is equally important to temper expectations around metrics and what they can show, and to make sure people don't merely reach for the metrics that are most readily available.

On the technical side, the open sharing of code and software packages has made a huge difference to the variety of data that is available, the ease with which it can be collected, and the types of analysis that are possible. Building on the work of others in the open code community means that the number of lines of code you need to write is vastly reduced, and the types of analysis possible have become increasingly sophisticated. A machine learning technique such as topic modelling can now be achieved with just a dozen or so short lines of code, which means that it is now open to a wider range of people, and it is possible to include short, useful R examples in the book.
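[Editor's note: to give a sense of scale, a minimal topic-modelling sketch in R might look something like the following. This is not an example from the book; it uses the tm and topicmodels packages, and the sample documents and the choice of two topics are placeholder assumptions for illustration.]

# Minimal LDA topic-modelling sketch in R (illustrative only).
# Assumes the tm and topicmodels packages are installed.
library(tm)
library(topicmodels)

# A tiny placeholder corpus; in practice this would be real web content.
docs <- c("libraries share research data on the open web",
          "web metrics help measure the impact of research online",
          "social media networks spread links to research outputs")

corpus <- VCorpus(VectorSource(docs))
dtm <- DocumentTermMatrix(corpus)                  # document-term matrix of word counts
lda <- LDA(dtm, k = 2, control = list(seed = 42))  # fit a two-topic model
terms(lda, 3)                                      # top three terms per topic

Swapping in a larger corpus of scraped or exported web text is largely a matter of changing the docs vector; the modelling code itself stays about this short.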

Why is understanding web metrics important for librarians?

Web metrics are a natural extension of the bibliometrics that are already part of a librarian's expertise. Whether helping to demonstrate the impact of research or discovering new content, web metrics can provide additional insights. Just as importantly, in organizations that are increasingly focused on metrics, a greater understanding of metrics allows the librarian to argue against their misuse.

What is one area you discuss as the future of web metrics in the last section of the book?

One of the areas covered in the last section of the book is the impact of generative artificial intelligence on web metrics. While artificial intelligence provides the basis for a host of increasingly sophisticated analyses, it also has the potential to undermine many web metrics. As an increasing quantity of online content is automatically generated, we will need to revisit many of our assumptions about what we are counting and why it is important.

Is there anything else you would like to share?

AI is just the latest in a long line of technological disruptions facing web metrics. In the twenty years since I started studying the web, there has never really been a period of stability: technologies have come and gone, and the web metrics community has always adapted. To adapt, however, it is important to have an overview of as much of the terrain as possible and not be tied to a single technology. That is the sort of overview Web Metrics for Library and Information Professionals is designed to provide.

Lauren Hays

Dr. Lauren Hays is an Assistant Professor of Instructional Technology at the University of Central Missouri, and a frequent presenter and interviewer on topics related to libraries and librarianship. Please read Lauren’s other posts relevant to special librarians. Take a look at Lucidea’s powerful integrated library systems, SydneyEnterprise, and GeniePlus, used daily by innovative special librarians in libraries of all types, sizes and budgets.
