Last week at the Harvard School of Public Health, Dr. John Ioannidis – a Stanford professor and Science-ish hero – told a room filled with Harvard doctors (and one journalist) that they can’t trust most of the research findings science has to offer. “In science, we are very eager to make big stories, big claims,” he opened his lecture, with a mischievous grin. “The question is: are those claims accurate?”
According to Ioannidis, the answer – at least most of the time – is an unequivocal ‘no.’
A compact man in his 40s with stooped shoulders and thinning brown hair, Ioannidis has made a career researching research – or “meta-research” – examining not just single studies but many studies across fields as diverse as disease prevention, neuroscience and genomics. His boyish nerdiness and good nature belie the thorn in the side of science that he has become. For the last 20 years, he has amassed an internationally regarded body of research about all the ways science isn’t actually science-based. For this work, he’s considered “one of the most influential scientists alive.”
At a time when scientific knowledge is being produced at an unprecedented rate and global spending on life sciences research alone has topped $240 billion US, the need for people like Ioannidis – who can take a step back and examine trends, gaps, biases, waste and flaws – is more urgent than ever. And where science fails to self-correct, Ioannidis is the closest thing the field has to a one-man self-correction machine.
In the Harvard class, he gave students an overview of his work and all the ways research goes off the rails. Here are some highlights:
1) Why every diet supposedly causes cancer:
In one of his studies – appropriately titled “Is everything we eat associated with cancer?” – Ioannidis and a co-author randomly selected 50 ingredients from recipes in The Boston Cooking-School Cook Book. They then looked at whether those ingredients were associated with an increased or decreased risk of cancer. At least one study was identified for 40 of the ingredients – from bacon and bread to sherry and sugar – and most of the claims made in the studies contradicted each other or were based on weak evidence. “Most of the ingredients had results on both sides, positive and negative,” he said, making the point that many studies about cancer and nutrition are poorly designed. There were studies to support just about every claim on the popular topic – and many of them were too good to be true. “With one more serving of tomatoes,” he told his class with a smirk, “half the burden of cancer in the world would go away.”
2) Why most published research findings are false:
For Ioannidis, the key reason for this exaggeration and misrepresentation in research can be summed up in one word: bias. “This can be conscious, subconscious, or unconscious,” he said of these deviations from the truth – beyond chance or error – that pervert science. His favourite offender is ‘publication bias,’ which gives a falsely exaggerated impression of the science on a subject: not all the studies that get conducted get published, and the ones that do tend to have extreme results. It’s like running a bunch of tests to see whether your new vacuum works and, even though most of them fail, reporting only the one time the vacuum turned on.
Ioannidis is well known for taking on the entire research enterprise in an essay entitled ‘Why Most Published Research Findings Are False.’ In the paper, he described how a combination of uncertainty (no scientific finding is ever final) and publication bias creates a maelstrom of spurious findings that don’t hold up to scrutiny over the long term.
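The essay’s core argument is, at bottom, arithmetic. In the bias-free version of his model, the probability that a “statistically significant” finding is actually true – the positive predictive value, or PPV – depends on the pre-study odds that the probed relationship is real, the significance threshold, and the study’s power. A minimal sketch of that calculation (the example numbers below are illustrative assumptions, not figures from the paper):

```python
# A minimal sketch of the bias-free PPV formula from 'Why Most Published
# Research Findings Are False'. The scenarios below are illustrative
# assumptions, not figures from the paper.
def positive_predictive_value(R, alpha=0.05, beta=0.2):
    """Probability a statistically significant claim is actually true.

    R     -- pre-study odds that a probed relationship is real
    alpha -- false-positive rate (significance threshold)
    beta  -- false-negative rate (1 - statistical power)
    """
    return ((1 - beta) * R) / (R - beta * R + alpha)

# A well-powered field where 1 in 10 probed hypotheses is true (odds 1:9):
print(round(positive_predictive_value(R=1/9), 2))             # 0.64

# An exploratory field: long odds (1:100) and weak power (50%):
print(round(positive_predictive_value(R=0.01, beta=0.5), 2))  # 0.09
```

Even with no bias at all, a “significant” result in the exploratory scenario is true less than one time in ten – and once the paper adds bias and multiple teams racing on the same question, the numbers only get worse.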
3) Why you need to be cautious about early studies with big claims:
For another paper on the twists and turns in research, Ioannidis examined the reliability of findings in highly cited original studies, focusing in particular on those which had been contradicted by later, more rigorous research. These influential studies were not about cold and abstract issues; many focused on the very questions that we all grapple with every day, such as whether or not to take supplements, and whether common interventions – like low-dose aspirin for heart health – really work.
Here, he concluded, “Contradicted and potentially exaggerated findings are not uncommon in the most visible and most influential original clinical research.” In other words, splashy early studies with big effects were often found to be exaggerated or completely wrong. He also found that the original research continued to be cited, sometimes with complete silence on the more recent, contradictory evidence. For example, an early observational study revealed a supposed link between vitamin A supplementation and breast cancer, only to be overturned by a later, much higher-quality randomized controlled trial – yet the debunked observational study remained more highly cited and influential.
In another study, Ioannidis looked at six highly cited journals between 1979 and 1983, combing for papers in which researchers claimed their basic scientific findings would lead to useful treatments. Out of 25,190 studies he identified, 101 made such claims. Yet the vast majority of those claims were never tested in randomized controlled trials. Of the 27 that were, only five resulted in technologies licensed for clinical use by 2003, and only one was widely used for the purposes for which it was licensed. This means the chances that someone promising a breakthrough will actually deliver one are about as slim as the chances of winning the lottery.
4) How to make science less science-ish:
At the end of the course, Ioannidis shared a few ideas about how to improve the status quo in science. He suggested first that researchers need to learn to live with small effects in their studies. “Having worked in different fields, most of the effects that are of interest are small,” he said. Most effects of a big magnitude – like the link between smoking and lung cancer – have already been recognized. To separate signal from noise, he said, scientists need to design their studies to account for the fact that the effect sizes they are chasing may be tiny.
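The reason small effects demand bigger, better-designed studies is standard power arithmetic: the sample size needed to detect an effect grows with the inverse square of its size. This sketch uses the textbook normal-approximation formula for comparing two group means – an illustration of the general point, not a formula from Ioannidis’s lecture:

```python
# A hedged illustration of why small effects need big studies: the
# textbook normal-approximation sample-size formula for comparing two
# group means (not a formula from Ioannidis's lecture).
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate participants per arm to detect effect size d (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # power requirement
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

print(round(n_per_group(0.8)))  # large effect: about 25 per group
print(round(n_per_group(0.2)))  # small effect: about 392 per group
```

Halving the effect size you are chasing quadruples the required sample – which is why studies designed around big-effect expectations end up badly underpowered for the small effects that actually remain.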
He also suggested that even if studies aren’t going to be fully replicated, researchers should at least test the reproducibility of their findings by having an independent investigator vet their raw data sets. Other fixes for science, which Ioannidis outlined in a new Lancet series on reducing inefficiency in research, include revamping the reward system for research and making data publicly available.
5) Why science, if flawed, is still the best alternative:
At the end of his week-long visit to Harvard, Science-ish asked Ioannidis whether he ever tired of poking holes in science, whether all his work has caused him to lose faith in the scientific process. With wide eyes, he exclaimed, “I remain as enthusiastic about science as ever!” He went on to describe all the benefits of science, why it is “the best thing that can happen to humans”: the value of rational thinking, of evidence over ideology, religious belief and dogma. “We have effective treatments and interventions and useful tests we can apply. We have both theoretical and empirical evidence that science is beneficial to humans and it’s a wonderful construct of thinking... Science is beautiful because it’s falsifiable.”
“There’s plenty of room to apply the very same (scientific) tools to the way science is done,” he added. “The question is: can we get there faster and more efficiently without wasting effort?”
Science-ish is a joint project of Maclean’s, the Medical Post and the McMaster Health Forum. Julia Belluz is senior editor at the Medical Post. She is currently on a Knight Science Journalism Fellowship at the Massachusetts Institute of Technology. Reach her at email@example.com or on Twitter @juliaoftoronto
Friday, January 17, 2014