
The Oprah effect and why not all scientific evidence is valuable

Some studies are more equal than others

One of the inspirations behind Science-ish was the seemingly endless barrage of complaints from friends in medicine about the “Oprah effect” on their offices and hospital wards: patients making important decisions about a lifestyle choice or treatment option based on something they had seen on the Queen of daytime talk.

Now, the Oprah Winfrey Show is off the air, but the after-effects of her work on childhood vaccination and menopause will surely haunt doctors’ visits for years to come. Of course, other media—before and after Oprah—have a powerful sway over patient decisions. Every day, newspapers dole out advice on how much alcohol and coffee to consume, how best to manage your diabetes, and the benefits of probiotics. New media play a big role in purveying health knowledge, too. In research into YouTube as a source of information on immunization, investigators found that about half of the videos posted carried anti-immunization messages, and that these negative videos were more highly rated and more widely viewed than those backed by science.

Understanding one simple concept can help protect you from bad health advice: not all evidence is created equal, so not all evidence should be given equal weight.

There are “evidence hierarchies” in science, and scores of very smart people around the world work out how to appraise evidence and separate the good-quality stuff from the bad. In this useful (and short) article, “How to read a paper,” Dr. Trisha Greenhalgh, a professor at the London School of Medicine and Dentistry in the U.K., explains why a lot of scientific research is rejected by peer-reviewed journals (“a significant conflict of interest,” “the study was uncontrolled or inadequately controlled,” etc.) and, for the work that does get published, how to tell whether a study is relevant or valuable.

According to Greenhalgh, there are three questions to ask when assessing a paper:

1. Why was the study done, and what clinical question were the authors addressing?
2. What type of study was done?
3. Was the [study] design appropriate to the research?

On the question of type, it’s important to differentiate between primary research (such as controlled studies and clinical trials) and secondary research (meta-analyses and systematic reviews). In the media, you often read about primary research, like this jewel from earlier this week: “Study touts new way to spot babies at risk for obesity.” Greenhalgh points to a useful “evidence hierarchy” that ranks the relative weight of research from highest to lowest (a quick numerical sketch of what “definitive results” means follows the list):

1. Systematic reviews and meta-analyses
2. Randomised controlled trials with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
3. Randomised controlled trials with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect)
4. Cohort studies
5. Case-control studies
6. Cross sectional surveys
7. Case reports
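To make items 2 and 3 concrete, here is a minimal numerical sketch, with made-up trial numbers rather than data from any real study, of the distinction between “definitive” and “non-definitive” results. The question is whether the 95 per cent confidence interval for the treatment effect stays clear of the threshold for a clinically significant effect.

```python
# A minimal sketch with hypothetical numbers (not from any real trial) of what
# "definitive" means in items 2 and 3 above: does the 95% confidence interval for
# the treatment effect stay clear of the clinically significant threshold?
import math

def risk_difference_ci(events_t, n_t, events_c, n_c, z=1.96):
    """95% CI for the difference in event risk between treatment and control arms."""
    p_t, p_c = events_t / n_t, events_c / n_c
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return diff, diff - z * se, diff + z * se

# Hypothetical trial: 30/200 events on treatment vs. 60/200 on control.
diff, lo, hi = risk_difference_ci(30, 200, 60, 200)

# Suppose clinicians consider a 5-percentage-point reduction the smallest effect worth acting on.
threshold = -0.05
print(f"risk difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
if hi < threshold:
    print("definitive: the whole CI lies beyond the clinically significant threshold")
elif diff < threshold:
    print("non-definitive: the point estimate looks clinically significant, but the CI overlaps the threshold")
else:
    print("no clinically significant effect suggested")
```

In this invented example the entire interval lies beyond the threshold, so the result would count as definitive; shrink the trial (and widen the interval) and the same point estimate becomes non-definitive.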

The basic principle here is that syntheses of evidence (papers that apply a systematic approach to summarizing the research on a given topic in its totality) generally tell us more about the efficacy of a treatment or intervention than one-off studies, which are more prone to bias. This is why syntheses are regarded as the highest form of evidence. One example that illustrates why this type of research can be more revealing than single studies is the evidence on vitamin supplements. In the book Testing Treatments (which you can download for free), Science-ish patron saint Dr. Ben Goldacre points out that, despite all the single studies concluding that taking a daily antioxidant vitamin was good for your health, systematic reviews on the subject revealed there was no evidence to support the use of these pills for prolonging life. In fact, when the research was taken as a whole, vitamin A, beta-carotene, and vitamin E were shown to possibly increase mortality! (Goldacre goes on to note that the systematic review “is quietly one of the most important innovations in medicine over the past 30 years.”)
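For readers curious about what a synthesis actually does with the numbers, here is a rough sketch of one common meta-analysis calculation (fixed-effect, inverse-variance pooling). The three study estimates below are invented for illustration; they are not the vitamin data Goldacre describes.

```python
# A rough sketch of the arithmetic behind one common meta-analysis approach
# (fixed-effect, inverse-variance weighting). The study estimates below are
# made up for illustration only.
import math

def pool_fixed_effect(estimates, std_errors):
    """Combine per-study effect estimates, weighting each by 1 / variance."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Three hypothetical studies: log risk ratios and their standard errors.
estimates = [-0.20, 0.05, 0.10]
std_errors = [0.15, 0.08, 0.12]

pooled, lo, hi = pool_fixed_effect(estimates, std_errors)
print(f"pooled log risk ratio = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# The small first study hints at a benefit; the pooled estimate, dominated by
# the more precise studies, sits near zero.
```

The point of the weighting is that a large, precise study counts for more than a small, noisy one, which is part of why a pooled estimate can contradict an eye-catching single study.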

Not so fast, though: some types of research that rank lower on the hierarchy are still useful in answering certain clinical questions, sometimes even more so than the more highly ranked types of evidence. Greenhalgh gives the example of case reports about thalidomide, a drug pregnant women used to combat morning sickness, which was withdrawn in the 1960s because it caused birth defects and nerve damage. She writes: “A doctor notices that two newborn babies in his hospital have absent limbs (phocomelia). Both mothers had taken a new drug (thalidomide) in early pregnancy. The doctor wishes to alert his colleagues worldwide to the possibility of drug-related damage as quickly as possible.” Here, a case report about the possible side effects of the drug conveyed urgent clinical information that a trial would have taken years to uncover.

In fact, while randomized controlled trials rank high on the evidence hierarchy, they can have flaws too. Things that make for a weak trial include too few participants to detect a clinically relevant effect, failure to blind participants or assessors, and imperfect randomization. External forces can bias a trial in many ways as well. One obvious one: drug industry funding. Dr. Joel Lexchin of York University has been studying this subject for decades, and a key finding from his research is that industry-funded studies are four times more likely to have a positive result for the sponsor’s drug than independently funded trials. Also, journals tend to feature trials with positive outcomes or big scientific breakthroughs, which means we don’t often hear about failed drugs or interventions.

You’ll notice a lot of reporting on health is generated on the basis of cohort studies, a type of observational study that follows a group of people over a long period to test for a possible correlation between a lifestyle factor (diet, exercise) and a health outcome. (“Do people who live in Mediterranean countries and consume olive oil live longer than people who do not?”) While these studies rank lower on the evidence hierarchy, they are a very useful type of research: observational studies on tobacco are what led to the realization that smoking is linked to lung cancer. But the problem with the reportage on observational studies is that correlation is often reported as causation: “eating olive oil will make you live longer.”
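To see how a correlation can appear without any causal effect, here is a toy simulation with entirely invented numbers (not real nutrition data) in which a hidden confounder, wealth, drives both olive-oil consumption and lifespan. Olive oil does nothing in this made-up model, yet the two variables still end up correlated.

```python
# A toy simulation (entirely invented) of why observational correlations can mislead:
# "wealth" drives both olive-oil consumption and lifespan, so the two are correlated
# even though olive oil has no effect at all in this made-up model.
import random
random.seed(0)

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

wealth = [random.gauss(0, 1) for _ in range(5000)]            # hidden confounder
olive_oil = [w + random.gauss(0, 1) for w in wealth]          # wealthier people buy more olive oil
lifespan = [75 + 2 * w + random.gauss(0, 3) for w in wealth]  # wealth, not oil, extends life

print(f"correlation(olive oil, lifespan) = {correlation(olive_oil, lifespan):.2f}")
```

A cohort study sees only the correlation; untangling it from the confounder is exactly what careful analysis, and ideally a randomized trial, is for.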

Research also tends to be reported without context, in isolation from other similar studies. “If people lived by the prescriptions of study headlines alone, they would likely be eating lots of chocolate one day and then none the next,” Steven Hoffman, assistant professor at McMaster University (and Science-ish colleague), said. “Individual studies are only helpful to a limit.”

So science is messy, and reporting on science is messier still. But Science-ish would like to leave you with a useful how-to from the Harvard School of Public Health for deciphering media stories about health. It lays out a few questions to ask yourself when reading a story about a study:

• Are they simply reporting the results of a single study? If so, where does it fit in with other studies on the topic?
• How large is the study?
• Was the study done in animals or humans?
• Did the study look at real disease endpoints, like heart disease or osteoporosis?
• How was diet assessed?

Science-ish would add two questions: What type of study are you looking at? Who funded the research?

These simple questions can go a long way in critically appraising the unprecedented pile of evidence (studies, research) being generated and reported on. And remember: a lot of research out there is simply junk, despite the import it is given in the 24/7 news cycle. Another of the esteemed academics who advise on this blog, Dr. Brian Haynes, helped to create a free service sponsored by the British Medical Journal called EvidenceUpdates, which collects articles from clinical journals and critically appraises them for relevance and newsworthiness. The aim of the service is to help clinicians and researchers sift through the junk science out there. Of the 50,000 articles from 120 premier clinical journals reviewed by EvidenceUpdates each year, only 3,000 (or 6 per cent) measure up. That means 94 per cent of those 50,000 articles are rejected. That’s a whole lot of crap.

*Thank you to Donna Ciliska, Maureen Dobbins, Gordon Guyatt, Brian Haynes, Steven Hoffman, John Lavis, and Michael Wilson at McMaster University for their invaluable guidance on Science-ish and helping me wade through the evidence every week.

Science-ish is a joint project of Maclean’s, The Medical Post, and the McMaster Health Forum. Julia Belluz is the associate editor at The Medical Post. Got a tip? Seen something that’s Science-ish? Message her at [email protected] or on Twitter @juliaoftoronto
