
Distributed peer review passes test for allocating telescope slots

MAY 13, 2020
A trial at the European Southern Observatory suggests that applicants can effectively referee one another’s research proposals.

DOI: 10.1063/PT.6.2.20200513a

Dalmeet Singh Chawla

Demand far outpaces availability for use of the Multi Unit Spectroscopic Explorer, a visible-light spectrograph on the European Southern Observatory’s Very Large Telescope in Chile.

ESO

Each year in March and September, the European Southern Observatory (ESO) in Garching, Germany, is flooded with applications for observing time at its telescopes. Demand runs around 3 to 4 times the available time for most ESO instruments, and 8 to 10 times for those in high demand, such as the Very Large Telescope’s Multi Unit Spectroscopic Explorer. Each member of the peer review panel appointed to sift through the applications often ends up looking at around 70 proposals, which makes the process arduous and somewhat unreliable.

But a different way of allocating telescope slots—one that potentially can also be applied to grant funding—seems to have made life easier for reviewers. Dubbed distributed peer review, the process puts the onus on applicants to review one another’s proposals, making that a condition for having their own applications considered. As a result, the refereeing burden gets evenly spread among a larger group of academics.

In a study published last month in Nature Astronomy, researchers reported no significant difference between traditional time-allocation committees and distributed peer reviewers in terms of how often they agreed with one another on given proposals. For the study, 167 ESO applicants evaluated eight other proposals that were also going through traditional peer review. To reduce opportunities for bias, the referees were assigned proposals using an algorithm.

The study’s results echo those of a 2016 trial of distributed peer review at the US National Institute of Food and Agriculture (NIFA). That study too found no significant differences in the ranking of proposals that went through both distributed peer review and the agency’s standard practice.

One benefit of distributed peer review is that it can be scaled up quite easily, regardless of the number of applications, says Wolfgang Kerzendorf, an astronomer at Michigan State University who coauthored the analysis. “The number of reviewers grows automatically with the number of proposals,” he says.

In a 2009 manuscript, Michael Merrifield, an astronomer at the University of Nottingham, UK, who wasn’t involved in the new study, suggested allocating telescope time via distributed peer review. “This is a problem that’s not going to go away—in fact, it’s going to get progressively worse,” he says of the reviewing burden. In the near future there is likely to be extreme competition for use of the world’s largest telescopes, he says, and for the upcoming James Webb Space Telescope, which has a limited lifetime. “The pressure on these resources is getting higher and higher, and we need to get better and more efficient at giving out the time,” Merrifield says.

Priyamvada Natarajan, an astrophysicist at Yale University who was not involved in the study, says she likes the idea of “communal accountability” that distributed peer review may offer. It also could help candidates understand what it takes to be successful, she adds. Additionally, it could help train researchers in refereeing; Kerzendorf notes that students and postdoctoral researchers often lead telescope time proposals.

Despite the promising results of the Nature Astronomy study, ESO has not committed to implementing distributed peer review. The Gemini Observatory does use the practice, allocating 10% of its observing time through what it calls the Fast Turnaround program.

Outside of astronomy, NIFA has not rolled out distributed peer review to its programs since its 2016 trial, though chief of staff William Hoffman says the agency sees it as an option going forward. In 2013 NSF tested the idea in a pilot for grant funding but didn’t release the results.

Kerzendorf hypothesizes that distributed peer review isn’t already used widely in part because people may have reservations about their competitors judging them. “But the same can be said about any panel,” he says.

To lessen the amount of bias in the reviewing process—such as that stemming from differences in gender, race, institution, and career stage—Kerzendorf and his colleagues used an algorithm that matched researchers with proposals that fit their expertise. The algorithm compared the words in proposals with the language in studies authored by the referee.
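As a rough illustration of that kind of text matching, here is a minimal Python sketch, not ESO’s actual code: it builds a bag-of-words profile from a referee’s past papers and ranks proposals by cosine similarity to that profile. All names and data structures here are hypothetical, and the study’s algorithm likely uses more sophisticated text processing than simple word counts.

```python
# Illustrative sketch only (not the study's implementation): rank proposals
# for a referee by comparing proposal wording with the referee's own papers,
# using plain bag-of-words cosine similarity.
import math
import re
from collections import Counter

def word_vector(text):
    """Lowercased word counts for a block of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_proposals_for_referee(referee_papers, proposals):
    """Rank proposal IDs from best to worst match for one referee.

    referee_papers: list of texts from the referee's publications
    proposals: dict mapping proposal ID to proposal text
    """
    profile = word_vector(" ".join(referee_papers))
    scores = {pid: cosine_similarity(profile, word_vector(text))
              for pid, text in proposals.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A working system would also exclude a referee’s own proposals before assignment and balance the reviewing load across all applicants.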

To test the efficacy of the algorithm, the researchers gave reviewers four proposals that the algorithm found to match their expertise closely, two that fell far outside it, and two somewhere in between. The idea was to see if the system could distinguish experts from nonexperts and to measure the importance of having expertise in an area when judging applications, Kerzendorf says. The researchers found that the algorithm could identify experts with a high degree of accuracy and that referees with little or no expertise in the area they judged agreed with one another at a high rate, around 80% of the time.
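Continuing the hypothetical sketch above, the four-close, two-middle, two-far assignment described here could be drawn from each referee’s similarity ranking along these lines; the study’s actual selection procedure may differ.

```python
# Hypothetical continuation of the sketch above: pick four closely matched
# proposals, two mid-ranked ones, and two far from the referee's expertise,
# never including the referee's own submissions. Assumes the eligible list is
# long enough that the three groups do not overlap.
def assign_by_expertise(ranked_ids, own_ids, n_close=4, n_middle=2, n_far=2):
    eligible = [pid for pid in ranked_ids if pid not in own_ids]
    mid_start = len(eligible) // 2 - n_middle // 2
    close = eligible[:n_close]                          # best-matched proposals
    middle = eligible[mid_start:mid_start + n_middle]   # mid-ranked proposals
    far = eligible[-n_far:]                             # least-matched proposals
    return close + middle + far
```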

Natarajan says choosing referees with algorithms would introduce more objectivity into the reviewing process. Though she warns that algorithms “are never quite perfect,” she says they do help prevent biases from creeping in when deciding who is competent enough to review. A previous study indicated that ESO’s traditional peer-review process may be prone to gender biases.

Last year Natarajan played an integral part in the rollout of double-blind peer review for allocating observing time on the Hubble Space Telescope; for the first time since record keeping began 19 years ago, more women principal investigators obtained telescope time than men. The new distributed peer review study didn’t blind the applicants’ identities from the reviewers, but the researchers did shuffle the order of the applicants’ names and move them to the end of the proposals. One reason for not blinding was to give referees the chance to declare a conflict of interest, Kerzendorf says.

Overall, Natarajan says the new study is a step in the right direction in “very, very consciously trying to reduce unconscious biases in the allocation of scarce resources.”
