Interview with the Authors: Demystifying Scholarly Metrics

Lauren Hays

January 31, 2023

Marc W. Vinyard and Jaimie Beth Colvin co-wrote Demystifying Scholarly Metrics: A Practical Guide, published in 2022 by ABC-CLIO. My interview with them is below.

Lauren: Please introduce yourself to our readers:

I’m Marc Vinyard. I have worked at Pepperdine University for 24 years as the subject liaison for the business, economics, and history divisions. I was recently promoted to Associate University Librarian for Graduate Campus Libraries. As a liberal arts major who became a liaison to the business department, I taught myself how to create SWOT analyses and interpret financial ratios; I became an expert in the topics I avoided during my undergraduate education but now love. When faculty began asking me to locate journal impact factors and citation metrics, my interest was sparked, and with the same tenacity it took to learn about financial statements, I set out on the daunting task of understanding scholarly metrics.

I’m Jaimie Beth Colvin. After working at Pepperdine for nine years as a sociology, theater arts, and women’s studies liaison, I recently started a position at Lipscomb University as the health sciences librarian. User experience is a professional interest of mine, and I enjoy making information more accessible. When I assist people with scholarly metrics, I think about how someone who is new to all of the jargon and specialized resources would view this data. I like to employ user-friendly analogies to make this information less intimidating.

Lauren: Briefly summarize Demystifying Scholarly Metrics: A Practical Guide.

Marc and Jaimie: In our experience, many talented librarians and faculty find scholarly metrics confusing. We introduce readers to the various types of bibliometric and altmetric indicators and, most importantly, provide advice on interpreting scholarly metrics by placing the data in context. Raw numbers aren’t meaningful without accounting for information such as the academic discipline, the type of paper, and the year it was written.

We provide practical examples and scenarios of how to use scholarly metrics, based on our experiences assisting faculty with tasks such as selecting a journal for publication, identifying predatory publishers, and assessing research impact. Additionally, we introduce important scholarly metrics resources and highlight their opportunities and obstacles. While the topics we cover are based on practical experience, the book is also evidence-based, and our recommendations are supported by scholarship.

Lauren: Why did you decide to write this book?

Marc and Jaimie: We began our journey into scholarly metrics after receiving questions from faculty about impact factors for journals and assessing research impact. Initially, these questions were difficult because our library didn’t have access to the Journal Impact Factor (JIF), so we had to explore freely available sources for evaluating journals. This search led us into the world of scholarly metrics beyond the JIF, which in turn led to creating a resource guide, addressing faculty, conducting research, revising the guide, uncovering common misconceptions about metrics, and writing an article.

We wrote the book we wished we had when we began assisting faculty with their citation questions; our hope is to help librarians and faculty avoid misunderstanding and misusing scholarly metrics. 

Lauren: In your work, what have you found most people find difficult about scholarly metrics? How does this book address that difficulty? 

Marc and Jaimie:

Language & Structure  

Scholarly metrics involve a completely new language that is understandably intimidating. Our lengthiest chapter, titled “H-Indexes, Altmetrics, and Impact Factors - Oh My!”, introduces users to this vocabulary in a user-friendly tone. On top of that, the jargon differs according to which platform you are consulting. When faculty reach out to librarians for help, they often mention the terminology used by a specific vendor. A professor inquiring about journal metrics will usually ask about the JIF, often without realizing that it is both a brand and a measurement. Much like Kleenex, the JIF name has become ubiquitous even though competing products exist. These competing platforms also create confusion about which ones are more authoritative.

Misuse

Navigating the scholarly metrics maze is particularly confusing if you aren’t familiar with the categories of scholarly metrics, and that unfamiliarity leads to misuse. One common example is the practice of using journal-level metrics to evaluate the scholarly output of individual researchers. Our book lists the categories of metrics and their appropriate uses.

Interpreting/Understanding

Another barrier is the difficulty of interpreting metrics. A researcher finds a number, but they aren’t sure what it means. What does it mean if I have an h-index of five? A major goal of our book is helping users understand the data and place it in context. Our hope is that readers will be equipped with the skills and confidence to make sense of the scholarly metrics they locate.
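As a concrete illustration of that h-index question (our own example, not taken from the book): a researcher has an h-index of h when h of their papers have each been cited at least h times. A minimal Python sketch of that calculation might look like this:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# An h-index of five means five papers have been cited at least five times each.
print(h_index([25, 12, 9, 7, 5, 3, 1]))  # -> 5
```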

Lauren: How have scholarly metrics changed? Do you see them changing in the future? 

Marc and Jaimie:

Increase in free resources:

In the past, researchers needed subscription products that only larger libraries could afford. There are a growing number of free sources for evaluating journals like Scimago and Scopus’ journal list (which doesn’t require a subscription). However, users will run into limitations if they are completely dependent on free sources because the subscription products have powerful features for filtering results and obtaining large datasets.

Identifying predatory journals:

Identifying predatory journals has evolved since Jeffrey Beall’s List. Beall made an important contribution when he coined the phrase “predatory publishers” and brought attention to a serious problem. However, there is a move away from lists of predatory journals toward both approved lists of journals and educating researchers to assess the credibility of a journal on their own, without the aid of lists, the same way students are taught to evaluate the credibility of sources. Our book provides criteria for evaluating journals.

Open Peer Review 

An emerging trend is transparent peer review, the practice of making peer reviewers’ comments open to the public. If transparent peer review is integrated into the traditional review process, it could benefit journals, authors, and reviewers. A journal’s reputation for rigor can be monitored, preventing established journals from skirting by on name recognition alone while their standards slip. Quality peer reviewers can receive credit for their contribution to the scholarly conversation, while petty reviewers could be exposed and, hopefully, removed. All of this transparency would help authors. In terms of scholarly metrics, reviews and comments from sources like the F1000Research platform could add qualitative metrics to an overwhelmingly quantitative field. Transparent peer review is a fascinating development and definitely a trend to watch.

Lauren: What is one thing you hope readers take away?

Marc and Jaimie: We want readers to understand that researchers can never be reduced to a single number. One of the biggest misuses of scholarly metrics has been judging faculty on the JIF ranking of the journals they publish in. Each researcher will need to find the scholarly metrics that help them explain their scholarly output and those metrics will vary from researcher to researcher. There aren’t any “one size fits all” metrics that every scholar should be mandated to produce or achieve. Metrics should only be one of the tools a scholar uses to demonstrate their impact in the scholarly community.

Lauren: Is there anything else you would like to share?

Marc and Jaimie: When most librarians think of the value of scholarly metrics, they primarily think of helping faculty measure their scholarly impact. We have discovered that learning more about scholarly metrics has also improved our own research skills. We can use scholarly metrics resources to help researchers identify highly cited papers in their disciplines and steer them clear of predatory or low-quality journals.

Lauren Hays, PhD, is an Assistant Professor of Instructional Technology at the University of Central Missouri, and a frequent presenter and interviewer on topics related to libraries and librarianship. Her expertise includes information literacy, educational technology, and library and information science education.  Please read Lauren’s other posts relevant to special librarians. And take a look at Lucidea’s powerful integrated library systems, SydneyEnterprise, and GeniePlus, used daily by innovative special librarians in libraries of all types, sizes and budgets.
