Unexpected consequences of journal rank
DOI: 10.1063/PT.5.010209
In October 2008 I attended an international workshop on iron-based superconductors at the Institute of Physics (IOP) in Beijing.
Despite the new superconductor’s modest critical temperature, the discovery was surprising. At first glance, iron’s magnetism should disrupt, rather than promote, the electron pairing that underlies superconductivity. The discovery was also exciting because it soon looked likely that the superconductivity was not of the common, low-temperature sort that Heike Kamerlingh Onnes had discovered in metals in 1911. Rather, it appeared to resemble the exotic, high-temperature sort that Alex Müller and Georg Bednorz had discovered in copper oxides in 1986.
Figure 1 from my August 2009 Physics Today article, “Iron-based superconductors.”
By the time of the IOP workshop in October, various groups around the world, notably in China, had been feverishly searching for new iron-based superconductors.
Where to publish?
Physicists who were working on iron-based superconductors in those early months faced a dilemma when it came to publishing their results. Most of them opted to post their results on the arXiv e-print server before or just after submitting their manuscripts to journals, for reasons both selfish and unselfish: Not only would arXiv stamp the e-prints with a priority-establishing date, but the e-prints would also become available to everyone working in the field.
But if researchers submitted their papers to Nature or Science, they risked losing priority. At the time, the two high-impact journals forbade prepublication on arXiv. Science still does. Researchers would therefore have to delay posting on arXiv until their paper had been published or, worse, rejected.
The dilemma was not hypothetical. In his talk at the workshop, IOP’s Xing-Jiang Zhou lamented with wry humor the publishing fate of his study of potassium-doped SrFe2As2 using angle-resolved photoemission spectroscopy (ARPES). Rejected first by Nature and then by Physical Review Letters, the paper appeared in Physical Review B in November, five months after he’d originally submitted it.
That outcome is hardly bad. Phys. Rev. B bills itself as the world’s largest, most comprehensive journal in condensed-matter physics. But the resubmissions cost Zhou time and, perhaps, citations. As he observed in his talk, his IOP colleague, Hong Ding, sent his ARPES study of potassium-doped BaFe2As2 to Europhysics Letters, which published it on 14 July, nine days after the journal had received it. Whereas Zhou’s paper has garnered 37 citations, Ding’s has garnered 664.
Deep impact
Zhou’s cautionary tale reentered my mind last week when I encountered an e-print by Björn Brembs and Marcus Munafò that takes aim at journal rank.
Impact factors apply to journals, not individual papers. It is not justifiable, Brembs and Munafò claim, to transfer the cachet of a journal’s truly high-impact papers to the journal itself and, by association, to all the journal’s papers. What’s more, the correlation between a journal’s impact factor and a paper’s citations, which was never strong, is currently weakening.
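To see why the transfer is unjustified, it helps to recall what the impact factor actually measures. A sketch of the standard two-year definition (the precise rules for what counts as a "citable item" are Thomson Reuters's, and the details below are a simplification):

```latex
% Two-year impact factor of journal J in year Y:
% citations received in year Y to items J published in the
% previous two years, divided by the number of citable items
% (articles and reviews) J published in those two years.
\[
  \mathrm{IF}_J(Y) \;=\;
  \frac{C_Y(Y-1) + C_Y(Y-2)}{N(Y-1) + N(Y-2)}
\]
```

Because citation distributions are heavily skewed, this mean is dominated by a journal's few most-cited papers; a typical paper in the same journal may sit far below it, which is exactly the mismatch Brembs and Munafò object to.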
The perceived value of journal rank amplifies a general tendency among researchers: the urge to publish results that, while purporting to advance science significantly, are supported by uncertain empirical evidence. That urge, brought on by competition for prestige and funding, could also be behind what Brembs and Munafò’s study uncovered: that journal rank correlates with the rates of both retractions and fraud.
Brembs and Munafò also object to the additional expense, in time and money, of getting papers into high-impact journals and to what they decry as the negotiated—and therefore unscientific—nature of the impact factor. According to Brembs and Munafò, Thomson Reuters, the keeper of the impact factor, is willing to accede to publishers’ requests to alter how a journal’s impact factor is calculated—to the publishers’ advantage, of course.
At face value, Brembs and Munafò’s study, which is 5300 words long and has 120 references, amounts to a crushing indictment of journal rank and its influence on science. Whether or not you agree with their conclusions, the study is certainly thought-provoking and worth reading. Indeed, for me the most interesting aspect is not their assault on journal rank, but their discussion of alternatives.
Surprisingly perhaps, Brembs and Munafò are not fans of either flavor of open access: gold, in which authors pay journals to make papers freely available, or green, in which authors self-archive their papers.
Rather than reform journals or make them open access, Brembs and Munafò advocate dispensing with them altogether. In place of journals, they favor
bringing scholarly communication back to the research institutions in an archival publication system in which both software, raw data and their text descriptions are archived and made accessible, after peer-review and with scientifically-tested metrics accruing reputation in a constantly improving reputation system. This reputation system would be subjected to the same standards of scientific scrutiny as are commonly applied to all scientific matters and evolve to minimize gaming and maximize the alignment of researchers’ interests with those of science (which are currently misaligned).