Good science vs. bad science

How do you tell the difference? Science-ish has six red flags to watch for


What has been the real driver of violent crime in America? Not unemployment, or guns, or wealth disparities, or lack of access to education. According to a fascinating new Mother Jones article, it’s exposure to lead.

The piece builds a case around this thesis: “Gasoline lead is responsible for a good share of the rise and fall of violent crime over the past half century.” This isn’t totally crazy, since we know that lead is a destructive neurotoxin. But any skeptic out there would immediately wonder about the evidence behind such an encompassing claim, mainly because it rests mostly on population-level observational studies, which look at links between lead exposure in the environment and crime rates. As Dr. David Juurlink, a physician and researcher at the Sunnybrook Health Sciences Centre, told Science-ish, while lead could be the missing element in violent crime, “Many, many other factors also could, particularly in concert. Perhaps lead is one contributing factor, but it’s an abuse of the basic tenets of epidemiology to ascribe so much of the blame to lead.”

In other words, he added, a correlation between the two phenomena does not equal cause-and-effect.

Almost as soon as the article was published, debunkers were out in full force, picking apart the evidence behind the story. Even the author of the article, Kevin Drum, followed up with a blog post outlining the different types of evidence in the piece and the need for further research. But many a reader may have left the article thinking that the lead theory is more than just a theory; that it’s the little-known cause of criminal activity.

And it won’t be the last, or even the most egregious, article to advance an incredible thesis about new science. So how do you avoid falling prey to bad science in health stories? This is a question Science-ish gets asked a lot, most recently in relation to an article published in Slate about Dr. Oz’s dubious use of medical evidence. Understanding evidence-based medicine and statistics takes years of practice and study, but the good news is there are a few very basic red flags to keep in mind when you’re reading about the health sciences.

1. Is the sample representative?

The population that’s studied is not always generalizable to you, despite how it’s framed by the journalist writing about the study. For example, while animal studies are important, and have added a lot to our understanding of treatments and disease, they are only a starting point. As this fine paper [PDF] on writing health stories points out, “Findings in animals might suggest health effects in human beings, but they seldom prove them.” So any extrapolation on effects in people should be read with a critical gaze. Dr. Victor Montori, evidence-based medicine guru at the Mayo Clinic in Minnesota, told Science-ish that the next question to ask is: Were the right humans involved? “A study on healthy volunteers is not the same as a study on sick people. A study on people with a single disease is not the same as a study on people with multiple morbidities.” As well, the sample population might not be representative in other ways. The experiment may have been done in adults, so results may not apply to children, or it may have been done in men, and results don’t apply to women. “Participants in the study become important. Are they the same as the ones inferences are being applied to?” asked Dr. Montori. For these reasons, you always want to ask who was studied.

2. How would this study square in the real world?

The next thing you want to ask is whether the treatment or exposure that the study looked at is consistent with the way it would exist in the real world. Dr. Montori used the example of studying whether drinking soda gives people gas. “If the study exposed people to 29 cans of Coke drunk within a minute, and they got gas, that’s different than if someone drinks them over a year.” In studies of diet, weight-loss results may be staggering over four weeks but that may not be long enough to know how the diet will play out in real life. In studies of pharmaceuticals, drugs are often compared against placebo to establish that they work, but that’s not necessarily a fair comparison when there are other similar treatments available and clinicians and patients want to know whether the new drug is better than the best-available drug.

3. Who funded the research?

Thinking about the vested interests behind the science is a key way to distinguish good science from bad. You always want to, as Deep Throat told Bob Woodward, follow the money. Doing so will help you root out checkbook science, secret spokespeople, and industry-influenced policies. For example, many news outlets reported on a Lancet study that showed that Weight Watchers works. There was little made of the fact that the study was sponsored by the weight-loss company, and that its design was flawed. (The key problem: If you compare people on a rigorous weight-loss program to folks who get standard care from their doctors, it seems intuitive that the program group will win. That’s not really a fair comparison.) Studies of pharmaceutical research have, time and again, demonstrated what this new systematic review on the subject sums up: “that industry sponsored drug and device studies are more often favorable to the sponsor’s products than non-industry sponsored drug and device studies.”

4. Was the report based on an experiment or observational science?

The reason skeptics immediately attacked the lead story in Mother Jones was because the hypothesis partly hinged on observational science, not controlled experiments. The key difference between the two is that controlled experiments involve exposing one group to an intervention (a pill or treatment) and another group to a placebo, and then seeing whether the two fare differently. If the two groups were randomly assigned to the treatment or intervention, and they have different outcomes, you can be reasonably confident that the outcomes were the result of exposure to that treatment or intervention. This is because outside variables that the scientists don’t control for—or confounding factors—would be equally distributed between the two groups.

For this reason, scientists generally place more confidence in experiments—especially randomized ones—than in observational studies. In the latter, researchers can never control for all confounding factors because no intervention is introduced and researchers are simply looking for links between an exposure to something that’s already occurring and a certain outcome. This isn’t to say observational science is bad. It’s not. It can even tell us more about how something plays out in the real world than a controlled experiment. It’s just important to be aware of the limitations of different types of studies and the claims that can be made about them.
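
Why does random assignment neutralize confounders? A minimal simulation (not from the article; all names and numbers here are hypothetical) makes the point: even a variable the researchers never measured, such as age, ends up nearly identical in the two groups once assignment is random.

```python
import random

random.seed(1)

# Hypothetical population with a hidden confounder (age) the
# researchers never measure or control for.
population = [{"age": random.gauss(50, 10)} for _ in range(10_000)]

# Random assignment: shuffle, then split into treatment and control.
random.shuffle(population)
treatment = population[:5_000]
control = population[5_000:]

def mean_age(group):
    return sum(person["age"] for person in group) / len(group)

# The two groups' average ages come out nearly equal, so any
# difference in outcomes can't easily be blamed on age.
print(round(mean_age(treatment), 1), round(mean_age(control), 1))
```

In an observational study there is no shuffle step: people "assign themselves" to exposures for reasons that may also drive the outcome, which is exactly the problem the critics raised about the lead hypothesis.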

5. How big is the study?

Larger studies involving more people at different sites are generally better than smaller studies, especially when it comes to changing clinical practice or altering your behaviour. That’s because big studies are usually more representative of the broader population and less influenced by extreme cases than small studies. As Steven Hoffman, assistant professor at McMaster University, put it: “Results from a randomized controlled trial of 10 conveniently-chosen people, for example, are very likely to be influenced by extreme cases and outliers, whereas a randomly-selected representative sample of 10,000 people is not.”
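
You can see the outlier effect directly in a toy simulation (again, a hypothetical sketch, not data from any real study): repeat a 10-person "study" and a 10,000-person "study" many times over, drawing from the same population that contains a few extreme values, and compare how far each study's result can swing.

```python
import random

random.seed(0)

def sample_mean(n):
    # Population: mostly values near 100, with a rare (1%) extreme
    # outlier of 1000 mixed in.
    draws = [random.gauss(100, 15) if random.random() > 0.01 else 1000
             for _ in range(n)]
    return sum(draws) / n

# Re-run each "study" 1,000 times.
small = [sample_mean(10) for _ in range(1_000)]
large = [sample_mean(10_000) for _ in range(1_000)]

def spread(results):
    return max(results) - min(results)

# The 10-person studies swing wildly depending on whether an outlier
# landed in the sample; the 10,000-person studies barely move.
print(round(spread(small)), round(spread(large)))
```

A single outlier shifts a 10-person average by roughly a tenth of its magnitude, but is diluted ten-thousand-fold in the large sample, which is the intuition behind Hoffman's point.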

6. What about the other evidence?

Our function in media is to bring you the news. But reporting on new, single studies actually isn’t the best way to tell you about what’s happening in science. In fact, this kind of reporting often misrepresents the state of the evidence and can mislead readers. A single study will not change clinical practice or thinking about health. Many studies, in different contexts, using different methods, on different populations, will.

Dr. Montori gave this example: “Imagine one of my colleagues, sitting here, playing with a data set, finds an interesting association, and decides to write an abstract. The Mayo Clinic puts out a press release, and the abstract gets important coverage in the New York Times.” His imaginary colleague is interviewed about his contribution to science, but the reader has no idea about all the other similar papers that may have had different outcomes that didn’t get any coverage. They don’t know about the other, perhaps less sexy, studies that the researcher undertook with different results that were never published. “The reader is looking only at a select set of research that makes a headline and that’s not a good way of finding out what’s going on,” he added. For this reason, evidence-based medicine practitioners like Dr. Montori, and evidence-based blogs like Science-ish, advocate for seeking out syntheses of evidence on clinical questions. They minimize bias, they sum up all the research on a subject, and they will get you closer to the truth than any single study can.
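
The distortion Dr. Montori describes can be sketched with one more hypothetical simulation: suppose the true effect of some exposure is zero, run many small noisy studies of it, and then "publish" only the exciting positive results. Averaging everything recovers the truth; averaging only what made headlines manufactures an effect that isn't there.

```python
import random

random.seed(42)

def study(n=50, true_effect=0.0):
    # One small study: a noisy estimate of a true effect of zero.
    return sum(random.gauss(true_effect, 1) for _ in range(n)) / n

# 500 research groups each run one study.
results = [study() for _ in range(500)]

# Publication bias (hypothetical rule): only large positive
# estimates are deemed newsworthy.
published = [r for r in results if r > 0.2]

def avg(values):
    return sum(values) / len(values)

# All studies together hover near the true effect of zero; the
# published subset alone suggests a real positive effect.
print(round(avg(results), 2), round(avg(published), 2))
```

This is why a synthesis that hunts down all the studies, published or not, gets closer to the truth than any single headline-grabbing result.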

Links to further reading about how to tell good science from bad:
How to Read a Paper
Testing Treatments
10 Questions To Distinguish Real From Fake Science
A Journalists’ Guide to Writing Health Stories
Nutrition Research and Mass Media: An Introduction
The Oprah effect and why not all scientific evidence is valuable
Survival of the Wrongest
Tips for Understanding Studies
Bad Science

Have any other favourite resources for weeding out pseudoscience? Comment below or message Julia at julia.belluz@medicalpost.rogers.com or on Twitter @juliaoftoronto


  1. Is it also a coincidence that violent crime rates dropped 18 years after abortion became legal?

    • Considering the amount of egg that would be on people’s faces if you suggest that reducing the number of disenfranchised persons reduces crime rates and the implication criminal behavior isn’t simply a function of Rational Actor Theory… Maybe.

      • I wonder how well the Rational Actor Theory functions for abandoned, abused, addicted, or neglected adolescents and young people?

    • In order for that theory in “Freakonomics” to work, you would have to have poor and marginalized people have children at a reduced birth rate compared to wealthier people. Which doesn’t seem to be the case.

      Correlation does not mean causation, though most of the social scientists seem to forget that when it is convenient.

      • No you wouldn’t. If the assumption is that poor and marginalized people are more likely to commit crimes, what their birth rate is *in comparison to* the birth rate of any other group is irrelevant. All that matters is their birth rate.

  2. The problem is, most things in life aren’t that simplistic. So a lot of these theories are somewhat interesting to read, at one level, but with each individual person, there are so many variables which makes it rather difficult to be so absolute. As well as, it is so very easy to make something sound one way, when it is really not. For instance, I was listening to CBC talk to one person who claimed, “Dementia is on the rise.” That very well might be true. However, the percentage of seniors is also on the rise. My final thought? I have no idea if dementia is truly on the rise.

  3. It would be only fair to make an announcement with this critical advice on CBC’s classical music radio program. I love the music but I hear “scientific?” results with the most outlandish information seriously proclaimed with authoritative voices.
    Surely this practice must dumb down some listeners, at a time when we are trying to help citizens become more savvy about such claims.

  4. Well written.

    But if those six points were truly applied to our thinking, no one could possibly believe in a god, or an allah, or so called holy books, or prophecies, or saviours, heavens, hells, and all the rest of the superstitious nonsense that humans burden themselves with. And, tragically, indoctrinate their children in.
    I can only hope we are slowly moving towards species enlightenment, the end of superstition, and a world wide secular humanism, based on human and civil rights, social democracy and evidence based science.
    And I hope we have long enough, as a species, to have this become a reality. It’s not looking good.

    • This message brought to you by the fundamentalist church of ‘Science is God’. To be a member of this church simply:

      -Dismiss all religions, at any given time, for any reason.
      -Believe ‘evidence based science’ is infallible and resistant to human fault.
      -Believe you’re better than most other people because you have faith in something that isn’t organised religion.
      -Think that you have enough knowledge about religion to dismiss it as being nonsense.

      -Assume that human and civil rights movements come from science, logic and reason.
      -Boast about your beliefs whenever you can.

      • Even so, it’s preferable to actual religion, no?

      • Religion is disappearing, so you’d better get used to it.

  5. I like your balanced comments about observational studies, Julia. Environmental lead exposure is one of those areas where we can’t ethically do experiments (we can’t expose people to lead on purpose), so we need to rely on observational studies.

    Can we conclude from this one study that lead plays a role in crime rates? No. But this is not the first study to suggest this link. The level of lead exposure considered harmful by experts has been decreasing over the years, and the associations with decreased IQ and behaviour strengthening. All that has happened through observational studies.

    To not act to reduce lead exposure because there is no definitive proof, no experimental data, would be irresponsible. The default in that case would be allowing lead exposure despite no experimental data showing that exposure to be safe enough. We need to take preventative action when we have enough evidence, even if it can’t be experimental, rather than dismiss it as “only correlational.” The question is, and will always be: how much is enough evidence?

  6. Interesting how nasty the attacks on religion can be. Those who belong to the community of faith know the difference it makes in our lives. If you choose not to join us in our faith and beliefs please have the courtesy to not attack something which you obviously do not understand.

    • People understand abuse and negligence: dogmatic belief in one god versus the 100+ others, and condemnation for heresy for questioning dogma. There is also the other side: the subjugation of women, religious wars, the dogmatic belief that God won’t let us destabilize our planet’s climate or commit ecocide, the abuse of aboriginal people around the world in the name of one god or another, and the Catholic Church’s cover-up of sexual abuse.

  7. Why do discussions about science seem to always devolve into religious bickering? Science is about what is observable in the real world and religion is about how we ought to live in the real world. apples and oranges
