Distributed peer review passes test for allocating telescope slots
Demand far outpaces availability for use of the Multi Unit Spectroscopic Explorer, a visible-light spectrograph on the European Southern Observatory’s Very Large Telescope in Chile. (Credit: ESO)
Each year in March and September, the European Southern Observatory (ESO) in Garching, Germany, is flooded with applications for observing time at its telescopes. Demand runs around 3 to 4 times the available time for most ESO instruments, and 8 to 10 times for those in high demand, such as the Very Large Telescope’s Multi Unit Spectroscopic Explorer. Each member of the peer review panel appointed to sift through the applications often ends up reading around 70 proposals, which makes the process arduous and somewhat unreliable.
But a different way of allocating telescope slots, one that could also be applied to grant funding, seems to have made life easier for reviewers. Dubbed distributed peer review, the process puts the onus on applicants to review one another’s proposals as a condition for having their own applications considered. As a result, the refereeing burden is spread evenly across a larger group of academics.
In a study published in Nature Astronomy, researchers report that an ESO trial of distributed peer review produced proposal rankings that did not differ significantly from those of the observatory’s conventional panels.
The study’s results echo those of a 2016 trial of distributed peer review at the US National Institute of Food and Agriculture (NIFA). That trial, too, found no significant differences in the rankings of proposals that were put through both distributed peer review and the agency’s standard practice.
One benefit of distributed peer review is that it can be scaled up quite easily, regardless of the number of applications, says Wolfgang Kerzendorf, an astronomer at Michigan State University who coauthored the analysis. “The number of reviewers grows automatically with the number of proposals,” he says.
In a 2009 manuscript, astronomer Michael Merrifield and mathematician Donald Saari proposed distributed peer review as a way to cut reviewers’ workloads.
Priyamvada Natarajan, an astrophysicist at Yale University who was not involved in the study, says she likes the idea of “communal accountability” that distributed peer review may offer. It also could help candidates understand what it takes to be successful, she adds. Additionally, it could help train researchers in refereeing; Kerzendorf notes that students and postdoctoral researchers often lead telescope time proposals.
Despite the promising results of the Nature Astronomy study, ESO has not committed to implementing distributed peer review. The Gemini Observatory does use the practice, allocating 10% of its observing time through what it calls the Fast Turnaround program.
Outside of astronomy, NIFA has not rolled out distributed peer review to its programs since its 2016 trial, though chief of staff William Hoffman says the agency sees it as an option going forward. In 2013 the US National Science Foundation tested the idea in a pilot for grant funding but didn’t release the results.
Kerzendorf hypothesizes that distributed peer review isn’t already used widely in part because people may have reservations about their competitors judging them. “But the same can be said about any panel,” he says.
To lessen bias in the reviewing process, such as bias stemming from differences in gender, race, institution, and career stage, Kerzendorf and his colleagues used an algorithm that matched referees with proposals fitting their expertise. The algorithm compared the words in each proposal with the language in studies the referee had authored.
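The study’s exact text model isn’t described here, but the matching idea can be sketched with off-the-shelf tools. The snippet below is a minimal illustration rather than the authors’ implementation: it represents proposals and each referee’s publications as TF-IDF vectors and ranks referees for each proposal by cosine similarity. All texts and names are placeholders.

```python
# Sketch: match referees to proposals by comparing proposal text with
# the language of each referee's publications (TF-IDF + cosine similarity).
# Illustrative only; the study's actual text model may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

proposals = {
    "P1": "integral field spectroscopy of star-forming galaxies at high redshift",
    "P2": "radial-velocity follow-up of transiting exoplanet candidates",
}

# One document per referee, e.g., the concatenated abstracts of their papers.
referees = {
    "R1": "we present MUSE integral field spectroscopy of nearby galaxies",
    "R2": "we measure precise radial velocities of exoplanet host stars",
}

# Fit one shared vocabulary so proposal and referee vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(proposals.values()) + list(referees.values()))
prop_vecs = matrix[: len(proposals)]
ref_vecs = matrix[len(proposals) :]

# similarity[i, j] scores how well proposal i matches referee j's expertise.
similarity = cosine_similarity(prop_vecs, ref_vecs)

for i, pid in enumerate(proposals):
    ranked = sorted(zip(referees, similarity[i]), key=lambda pair: -pair[1])
    print(pid, "->", [(rid, round(score, 2)) for rid, score in ranked])
```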
To test the efficacy of the algorithm, the researchers gave each reviewer four proposals that the algorithm found to match their expertise closely, another two that fell far outside their expertise, and a final two somewhere in the middle. The idea was to see whether the system could distinguish experts from nonexperts and to measure how much expertise in an area matters when judging applications, Kerzendorf says. The researchers found that the algorithm could identify experts with a high degree of accuracy and that referees with little or no expertise in the area they judged agreed with one another at a high rate, around 80% of the time.
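As a rough sketch of that test design, one could pick each referee’s closest, middling, and farthest matches from a row of the similarity matrix. The 4/2/2 counts come from the study; the particular selection rule shown here is an assumption, not the authors’ code.

```python
# Sketch of the validation setup: send each referee proposals at three
# levels of match quality. The counts (4 close, 2 middle, 2 far) follow
# the study; how the middle tier is chosen here is an assumption.
import numpy as np

def assign_test_set(scores, n_close=4, n_mid=2, n_far=2):
    """Pick proposal indices for one referee from their match scores."""
    order = np.argsort(scores)[::-1]                 # best match first
    close = order[:n_close]                          # strongest matches
    far = order[-n_far:]                             # weakest matches
    start = len(order) // 2 - n_mid // 2
    mid = order[start : start + n_mid]               # around the median
    return np.concatenate([close, mid, far])

rng = np.random.default_rng(0)
scores = rng.random(50)  # stand-in for one row of the similarity matrix
print(assign_test_set(scores))
```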
Natarajan says choosing referees with algorithms will introduce more objectivity into the reviewing process. Though she warns that algorithms “are never quite perfect,” she says they do help prevent biases from creeping in when deciding who is competent enough to review. A previous study of ESO’s conventional allocation process, for example, found that proposals led by women had systematically lower success rates.
Last year Natarajan played an integral part in the rollout of double-blind peer review, in which applicants’ identities are hidden from referees, for proposals seeking time on the Hubble Space Telescope.
Overall, Natarajan says the new study is a step in the right direction in “very, very consciously trying to reduce unconscious biases in the allocation of scarce resources.”