Science is science, right? Rigorous peer review and the scientific method ensure that what gets published in scientific journals is true or at least as close to truth as we can get.
Unfortunately, that just doesn’t seem to be the case. Peer review often fails to detect major problems with research, and even the most stringent efforts to control variables often can’t account for random chance.
In a fascinating article in the New Yorker, neuroscientist Jonah Lehrer explores something called the “decline effect.” When scientists repeat experiments, they frequently find that their results are harder and harder to replicate and that differences which once seemed stark become smaller and smaller. Lehrer points to several experiments that were published in prestigious journals and adhered to rigorous use of controls in their design, yet can no longer be replicated.
In early December, NASA announced that some of its scientists had discovered a bacterium that grew using arsenic instead of phosphorus, an element all other known life depends on. The results were peer reviewed and published in the journal Science.
But within days other scientists were criticizing the study.
University of British Columbia professor Rosie Redfield, who studies “the ability of many bacteria to take up DNA from their surroundings,” told Slate that she “was outraged at how bad the science was.”
In the same article, University of Colorado molecular, cellular and developmental biology professor Shelley Copley said “this paper should not have been published.”
Earlier today, the British medical journal BMJ announced that a retracted study linking vaccines to autism wasn’t just bad science, it was outright fraud. Despite this, the discredited study passed peer review and was originally published in the Lancet.
This isn’t the first time peer review has failed to prevent the publication of bad science. In 2001, the Journal of Reproductive Medicine published a study that claimed “women were twice as likely to get pregnant when Christians prayed for them.” In 2004, the lead author took his name off the study, saying he hadn’t actually participated in the research. One of the other two authors wasn’t even a scientist and was incarcerated in an American federal prison for (unrelated) fraud.
I’m not a scientist, so I’m not going to try to suggest a better way to ensure that bad, or fraudulent, science doesn’t get published. But these problems are serious and need to be addressed by the scientific community.
It’s important to remember that bad science has real-world effects: after the study linking vaccines to autism was published, vaccination rates dropped and cases of childhood measles and whooping cough rose.
But instead of accepting these problems and working to find solutions, some scientists are trying to bury their heads in the sand. In a follow-up blog post to his story, Lehrer quotes from a letter that criticizes him for exposing the “decline effect” in a public forum because it will provide fodder for creationists and climate-change skeptics. I think the opposite is true: attitudes like this will do more to undermine confidence in science than openly confronting these problems and trying to solve them ever could.
Serious science, aimed at finding the truth, should have nothing to fear from greater scrutiny; in fact, scrutiny will only make it better.