
How should bibliometrics be used in assessing research quality?

DEC 22, 2014
Essays in Science and Nature present diametrically contrasting answers.

DOI: 10.1063/PT.5.8088

Within the space of a few days in early December, two starkly opposed, high-visibility commentaries unintentionally combined to spotlight the vexing nature of bibliometrics, the statistical approach to judging research via data about publications.

At Science, editor-in-chief Marcia McNutt wrote less than a month after the Alexander von Humboldt Foundation held a conference in Berlin called “Beyond Bibliometrics—Identifying the Best.” A conference web page summarizes the issue:

Against the background of tighter budgets all over the world, there is increasing pressure for research funding organizations to make the right funding decisions, and for research performing institutions to recruit the right researchers. Yet there are no generally agreed criteria for evaluating scientific output and for evaluating the people performing research. Driven by a desire to employ empirically sound and easy-to-apply methods, decision-makers throughout the world tend to use bibliometrics as a putatively accurate instrument. However, while the reduction of complexity through bibliometrics might appear attractive at first sight, there is an increasing awareness of the shortcomings: With new forms of scientific output entering the scene, and with an increase in expectations considering the contribution of research to society as a whole, research impact and the identification of the best research(ers) has become an increasingly complex multidimensional affair that goes beyond citation impact. And perhaps even more importantly, bibliometrics might inadvertently have normative and regulatory, and not necessarily positive, effects on the way scientific research itself is done.

McNutt argues that it’s “time to remedy a flawed bibliometric-based assessment for young scientists.” She both begins and ends by emphasizing that since research is a trillion-dollar enterprise worldwide, research quality assessment matters hugely. “Current assessment,” she writes, “is largely based on counting publications, counting citations, taking note of the impact factor of the journals where researchers publish, and derivatives of these such as the h-index.” She reports that at the Berlin conference, these “approaches were severely criticized for numerous reasons, with shortcomings particularly apparent when assessing young scientists for prestigious, interdisciplinary awards.” It’s time, she asserts, “to develop more appropriate measures and to use the scientific method itself to help in this endeavor.”
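For readers unfamiliar with the h-index that McNutt mentions, here is a minimal sketch, in Python, of how that metric is conventionally computed from a researcher's per-paper citation counts. The counts used in the example are hypothetical and purely illustrative; nothing below comes from either commentary.

```python
def h_index(citations):
    """Return the largest h such that at least h papers have h or more citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    # Hypothetical citation counts, for illustration only.
    print(h_index([25, 17, 12, 9, 5, 3, 1]))  # prints 5
```

The simplicity of the calculation is part of what critics object to: a single integer derived from citation counts inherits all the distortions of citation counting itself.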

She spurns journal impact factors, observes that citation counting isn’t much better, and questions the use of altmetrics, which she summarizes as “measuring downloads, page views, tweets, and other social media attention to published research.” Then she challenges readers: “Consider a rather outrageous proposal.” She continues:

Perhaps there has been too much emphasis on bibliometric measures that either distort the process or minimally distinguish between qualified candidates. What if, instead, we assess young scientists according to their willingness to take risks, ability to work as part of a diverse team, creativity in complex problem-solving, and work ethic? There may be other attributes like these that separate the superstars from the merely successful. It could be quite insightful to commission a retrospective analysis of former awardees with some career track record since their awards, to improve our understanding of what constitutes good selection criteria. One could then ascertain whether those qualities were apparent in their backgrounds when they were candidates for their awards.

At Nature less than a week later, University of Southampton geography professor Peter M. Atkinson wrote not about assessing the work of young scientists, but about the UK’s Research Excellence Framework. He explains the REF:

Run every five years or so, the REF system grades the quality of research in dozens of fields across more than 100 institutions, and allocates government grant money accordingly. The winners enjoy high-quality ratings for their academic departments and the guarantee of a hefty chunk of cash to support their research. A poor rating can see a department starved of money or even closed down.

His target is not only the “heavy cost” in time and effort invested in preparing REF submissions, but also the complexity, imprecision, and uncertainty involved. His solution? He advocates the direct opposite of McNutt’s thinking: “More of the process could be automated, using ‘big data’ and bibliometric and machine-learning approaches.”

Machines, he stipulates, “cannot yet judge the quality of research output.” (“Yet”? Will future machines judge research quality?) Atkinson asserts that at present, however, “there are surrogates” and that for “many subjects, bibliometric analysis can leverage the peer-review process that already occurs through publication, as well as the peer assessment implicit in citation data.”

---

Steven T. Corneliussen, a media analyst for the American Institute of Physics, monitors three national newspapers, the weeklies Nature and Science, and occasionally other publications. He has published op-eds in the Washington Post and other newspapers, has written for NASA’s history program, and is a science writer at a particle-accelerator laboratory.

