
Does peer review do more harm than good?

Peer review may be a central tenet of academic life, but Luc Rinaldi explains why it’s being compromised by profit-driven predators

Photo Illustration by Lauren Cattermole and Richard Redditt

On March 24, Mark Maslin, like the other members of the Scientific Reports editorial board, received an email with huge ramifications. The message—from the academic journal’s publisher, Nature Publishing Group—told Maslin that his publication was running a pilot project for a new article-evaluation process. For $750, authors could now fast-track papers through peer review and get a yay-or-nay verdict from a paid pool of third-party reviewers within three weeks.

Maslin, a climatology professor at University College London, was taken aback, not because of the short time span—peer review, an anonymous and voluntary inevitability of academic life, is a notoriously protracted procedure—but because of the plan’s implications.

“This wasn’t how I thought the journal, or any journal, should operate,” he says, arguing that fast-tracking would exacerbate existing inequality: Well-funded labs could buy their way into the express lane to get published sooner (and, with more titles to their names, increase their odds of securing funding and grants), while cash-strapped universities and poorer researchers in low-income countries, particularly in Asia, would have to wait. Moreover, Maslin thought that tapping a limited group of reviewers—rather than being able to seek out the most qualified people worldwide—would diminish the quality of the review. So, he quit.

Then, roughly 150 other Scientific Reports editors threatened to do the same (the journal has more than 2,700 editorial board members) if concerns were not addressed. Two followed through. The month-long pilot, now complete, was intended to fast-track just 40 biology papers. Instead, it ignited a firestorm.

Peer reviewing is the academic equivalent of an oil change: a necessary annoyance. When an author submits an article to a journal, an editor finds a leading expert and asks him to scour its methodology and findings for flaws. Because reviewing is done by busy people for no pay, reviewers often stretch deadlines. That quickly turns into piles of manuscripts waiting to be reviewed.

In a 2014 Nature Publishing Group survey of 30,000 researchers, 70 per cent said they were frustrated with peer-review wait times, and almost as many thought it was time to try new methods. The backlog was at its worst at Scientific Reports, a generalist journal with a mandate to publish anything that is scientifically sound. (The broad scope attracted heaps of articles from researchers in non-Western scientific communities, eager to tack Nature’s respectable name onto their papers.)

The fast-track trial was supposed to solve the problem. Without consulting editorial staff, the publisher approved the pilot, claiming that “standard service provided by Scientific Reports will be unaffected.” But, even for change-hungry editors, it was a step too far.

The debacle must have seemed like déjà vu to Alex Holcombe, a University of Sydney psychology professor. In 2011, he and a small group of academics wrote a protest letter to seven journals that employ paid fast-track peer review; a few later dropped the policy. “It ran contrary to many of the scientific values that I hold dear,” says Holcombe, “which is: What appears in scientific journals is determined not by money, but rather the merit of the actual science.” He says fast-tracking is a formula for taking shortcuts—such tight timelines may force reviewers and editors to make decisions without proper scrutiny—and worries it will jeopardize reviewers’ neutrality. “I’m in psychology,” he says, “so I’ve got research suggesting people are influenced by money, even when they implicitly think money doesn’t inform decisions.”

Fast-tracking is just the latest problem with peer review to be identified. There’s no shortage of scathing criticisms—peer reviews of the peer-review system, if you will—claiming that, behind the symbol of scientific integrity, there is a flawed system that can do more harm than good. An infamous example: the peer-reviewed, since-retracted 1998 Lancet article that suggested the measles, mumps and rubella vaccine could cause autism.

“We have no convincing evidence of [peer review’s] benefits, but a lot of evidence of its flaws,” Richard Smith, a journal editor and champion of radical publishing reform, wrote in a 2010 paper. He argues that peer review is inherently opposed to originality, ineffective at catching errors and open to abuse (reviewers can steal others’ ideas), and that it should be done away with entirely. The audience, he believes, can sort the good science from the bad. But Smith’s “publish-then-filter” solution, which may overestimate the layperson’s scientific literacy, isn’t widely supported.

Like democracy, peer review is often considered the least bad option available: imperfect, but crucial to the reliability of academic publishing. The field itself is in an awkward transition between tradition—where journals make money from subscribers and libraries, not authors—and the rise of peer-reviewed, open-access journals, such as those in the respected Public Library of Science (PLoS) family, legitimate free-to-read publications that charge authors to publish their papers (many waive the fee for scientists who can’t afford to pay). In this Wild West of academic publishing, new publications pop up daily, and bad science—and disreputable journals—are sneaking in.

Late last year, Mark Shrime, a researcher at Harvard University, created a nonsensical paper titled “Cuckoo for cocoa puffs? The surgical and neoplastic role of cacao extract in breakfast cereals.” Its authors were Pinkerton A. LeBrain and Orson Welles, and its pages were filled with content from a random-text generator. (“In an intention dependent on questions on elsewhere, we betrayed possible jointure in throwing cocoa,” the opening line reads.) Shrime submitted the paper to three dozen journals. Eighteen accepted it; they would publish it, they said, as soon as he paid a publishing fee.

Shrime’s 18 suitors are predatory publishers—a term coined by the University of Colorado at Denver’s Jeffrey Beall to describe profit-driven journals that accept anything, as long as they can make money doing it. Many feign legitimacy by falsely claiming to be peer reviewed and by exploiting the accepted author-pays model of open access. Beall maintains a list of them; he estimates that one in four open-access journals is now predatory.

Like most academics, Arthur Caplan, a bioethics professor at New York University School of Medicine, receives daily emails from predatory journals soliciting articles from him. “It can be anything . . . from bridge engineering to cancer therapy,” he says with a laugh. “They seem to think I’m very accomplished.” To Caplan, the emails are obvious fakes—a bothersome click of the delete button. He knows the difference between legitimate and counterfeit journals. But, he warns, journalists and politicians may not, allowing for poor reporting and bad policies that create “opportunities for the continued power of crackpot views that corrode many areas of public life.” If these journals continue unchecked, he wrote in a Mayo Clinic paper, “the trustworthiness, utility and value of science and medicine will be irreparably damaged.”

Just as problematic as these rogue publishers are the scientists who use them. Some researchers, trying to pad the number of publications on their CVs, happily pay several hundred dollars for the service. That is a pressing concern, Caplan says, given the increasing volume of papers—some more scientifically robust than others—coming out of countries such as China, India and Vietnam.

That influx is causing problems even for legitimate publishers. Since last November, open-access publisher BioMed Central has retracted 43 papers, most by Chinese doctors, after discovering that authors tampered with peer review and created fictitious reviewers. “The good news is they’ve learned the [academic] techniques,” Caplan says. “The bad news is they haven’t necessarily learned the ethics infrastructure to go with them.”

Caplan believes the solution starts with revamping peer review. “We don’t train people to do it,” he explains; new academics are simply told, “Here’s your first paper. Review it.” He’d like reviewers to be taught proper evaluation methods, what predatory journals look like and that papers published in them won’t count. Right now, he says, “You kind of learn about that world in your inbox.”

Maslin, the former Scientific Reports editor, says peer review would also benefit from a credit system, in which quality reviews delivered on time would be recognized with a universal metric. It would take a global effort, he admits, but he says it’s time for the U.K.’s Royal Society and the American Association for the Advancement of Science to take action—not just on fast-tracking, but on the entire field. “As a scientific community, we need to really think about the ethics of publishing.”

Until then, Maslin says he’d be happy to take his old job back if Scientific Reports dropped fast-tracking. He’s still nursing two final papers anyway, and they’re likely to need major revisions. Done right, peer reviewing improves papers, he says, even when that means months of extra work. “Reviews like that, at first you look at them and go, ‘Oh, my word.’ But then you go through and say, ‘Yeah, but they’re right.’ ”
