
A Science-ish Q&A: Dr. Ben Goldacre

The ‘Bad Science’ columnist on quacks, scaremongering journalists, and the importance of good research

Photograph by Rhys Stacker

With his “Bad Science” column in the Guardian newspaper and a best-selling book of the same title, U.K. physician Ben Goldacre has been leading the international charge in quack-busting, unpicking dubious scientific claims made by everyone from politicians to alternative-medicine practitioners and nutritionists. But Dr. Goldacre doesn’t scrutinize only the most obvious quacks among us. As he told an audience of health professionals, policy-makers, and researchers at the Evidence2011 evidence-based medicine conference in London, “We’re on a quack continuum and our work here today is unpicking the details of evidence to make sure we stay at the saintly end of that continuum rather than the dodgy one.”

As of this fall, Dr. Goldacre was on a break from the bedside to work as a research fellow on clinical trials and publication bias at the London School of Hygiene and Tropical Medicine. (He’s also the Science-ish patron saint.) Julia Belluz sat down with him in London to learn about how other doctors can undertake similar quack-busting work, about his forthcoming book on the pharmaceutical industry, and why understanding the mechanics of bad science is the best way to arrive at good science.

Q: In a presentation here, you said we can put all evidence on a “quack continuum.” Can you explain what that is?

A: I write about misuses of evidence in plenty of different spheres: scaremongering journalists, obvious quacks and naturopaths, and flaws in the way that evidence is used in mainstream academia, medicine and in (government) policy. One of the things I've always found interesting is that the same tricks are used to distort evidence in all of those domains.

There are tricks used to distort evidence in medical academia that are more sophisticated, and they are more sophisticated because they are used to bamboozle and confuse a more sophisticated and more evidence-savvy audience, but they are nonetheless in the same basic category as the distortion of evidence from outright quacks.

Q: So what are some examples of obvious quackery compared with more covert quackery?

A: One example is how you can make your treatment look better by comparing it with something that’s really rubbish. We laugh at acupuncturists who do trials where acupuncture comes out as being brilliantly effective when you compare it with no treatment at all. But we have the same problem in mainstream medicine. It’s common to compare your drug with placebo, which is basically the same as comparing your drug against nothing. And that’s justifiable if there’s no currently available treatment for the disease your tablet is there to address. But if there is a currently available treatment, we don’t care if your new treatment is better than nothing; we care whether it’s better than the currently best available treatment.

And yet, there have been several studies on this, the most recent published a couple of months ago, which showed that about one-third of all treatments approved by the U.S. Food and Drug Administration have evidence only that they are better than placebo, even when you’re looking at tablets where we already have a currently available treatment. That’s one problem, as dramatic in its prevalence in the world of mainstream academia as it is in the world of quackery.

Q: Your focus has shifted over the years, from the low-hanging fruit of alternative medicine to the more intricate and complex dealings of pharmaceutical companies. Why the change?

A: I think the trajectory is probably from easy aspects of research methodology to more complicated aspects of research methodology. There are some obvious things—particularly in the U.K.—such as quacks and scaremongering journalists who were getting away with making extraordinary howlers and being treated with the utmost respect and credulousness. The first things I wrote about were the basics of how to do a trial. How do you know if something works? How do you know if something is good or bad for you?

So something like homeopathy, where you’re getting a dummy sugar pill that has no medicine in it, is a perfect teaching tool for evidence-based medicine, because when homeopaths run a trial showing that their dummy sugar pill works better than placebo, that’s exactly the problem you’re trying to avoid in real evidence-based medicine. You’re trying to avoid seeing a positive treatment effect where there clearly is none.

By going through the ways a trial can be flawed by design—by not being properly randomized, by not being properly blinded—you can use homeopathy as a brilliant teaching tool for how crap studies can get. It’s also a very good teaching tool for more complicated topics, such as cherry-picking results.

So once you cover the basics of how trials work, you can move on to how trials can be badly designed, how trial outcomes can be selectively reported, and all the fascinating areas of how people can set out claiming that they’re measuring one outcome as their primary outcome and then suddenly a completely different outcome gets reported as the primary outcome when the paper is published.

Q: You’ve worked to explain evidence in that systematic way because you’ve said you always tried to get away from arguing from a position of authority. Why do you find authority so offensive?

A: The thing that interests me is not whether something is wrong but whether something is interestingly wrong, whether there is an aspect of research methodology that can be explained using somebody getting something wrong and being an idiot as a kind of emotional hook for making that quirk of research methodology relevant and interesting to people’s lives.

Because of that, I have never felt comfortable charging in and saying, “You know, here are some drugs that don’t work.” I’m not really interested in the answers of research, I’m interested in the methods and the structures of it. How do you know if something is good for you or bad for you? Unless you explain all the evidence, all you’re left with is an authority play.

Q: Who or what is your next target?

A: I’ve already written a lot about problems in the information architecture of academic medicine, the most extreme end of that being publication bias. So I’m writing a book about how the pharmaceutical industry distorts its evidence, and more relevantly, how doctors, academics, regulators and governments have acquiesced in the face of that, and how we’ve failed to address some very obvious problems. (This book, The Drug Pushers, is to be published in Canada by the end of the year.)

Q: Any findings from your upcoming book that you can share?

A: One of the things that is so interesting about writing in this area is that the outcomes that you have, the information that you have, is always three to five years behind the curve because it takes time for a drug to be widely adopted, (to) kill people if necessary, and for that signal to be detected with the very imperfect, post-marketing pharmaceutical company vigilance strategies.

Also, (it takes time) to try to get clues from outside an organization that there was bad behaviour within an organization which could have exacerbated the harm.

And then you have a long process of going to court. And finally, only in a small select number of cases, (finding) some internal documentation (suggesting distortion of evidence). Because that’s five years behind the curve, you always have people saying, “That’s an isolated incident,” or, more likely, “That’s an old problem.”

One thing I’ve done in the book is document how in the past people have said, “Oh that’s an old problem which we have fixed now.” Each time people say it’s fixed, it’s not. It keeps happening even now.

Q: Do you have examples of apparent solutions to real, live problems with the pharmaceutical evidence base?

A: Two years ago, I was on a BBC program up against a chap who previously worked for Merck in the U.K. I was explaining the problems around publication bias. And he said this problem of negative trials going missing in action had been fixed because you now have to register your study.

That sounds really good, but there’s a paper from 2009 which goes through every single trial published in the top 10 journals in 2008, looking at whether the trials were properly registered before they started and on completion. About one-third of them weren’t. You’ve got journal editors saying they won’t publish unregistered trials anymore, but when you look at it, demonstrably, journal editors failed in their role as gatekeepers. So the history of the distortion of evidence in medicine is littered with these failed solutions.

Q: You’ve been quite outspoken as a physician, raising your voice when you see misreported studies or politicians perpetuating bad science, at a time when many doctors are afraid to speak out. What advice would you give to your colleagues who want to stand up about the distortion of science and evidence?

A: Firstly, nobody should feel under pressure. There’s no obligation to stand up and communicate. But if you want to, it’s very easy and more people could and should do it.

I got my (Guardian) column by ringing up a switchboard number on the letters page of the newspaper. A lot of times, editors are very pleased to hear from people who know about epidemiology or evidence-based medicine or medical statistics or medicine.

You also don’t really need to worry about whether you can write or not. This is one of the great untold secrets of journalism: a lot of copy by people who self-identify as professional writers is complete rubbish and it gets knocked into shape by very good editors on magazine and news desks.

Or you can set up a blog. People can be snotty about blogs but really, 1,000 blogs getting 400 views each is 400,000 views in total. And that compares very favourably with the mainstream media.

So a vast army of nerds, each working on their own issues and reaching the small corner of the world interested in those issues, is in total every bit as powerful a resource as the mainstream media.

Q: Has your finger-pointing ever got you into trouble?

A: Sir Iain Chalmers (a physician and one of the founders of the Cochrane Collaboration, a non-profit group that produces systematic reviews on health-care interventions), who has been very outspoken for a long time about flaws in evidence-based medicine, describes what he has as “terminal candour.” Toward the end of his career, he said he can risk saying literally whatever he wants.

I have been doing this since I was 29. I’m 37 now. Nothing bad has happened to me. You get homeopaths and anti-vaccination campaigners and conspiracy theory bullies who bizarrely assert that I am somehow a servant of Big Pharma when in reality, if you’ve read my stuff, you couldn’t find a bigger critic.

I hope I have never missed out on research funding or on a job just from standing up and communicating about what the real evidence shows. The adverse outcomes that people fear from writing sensible stuff about evidence, even when it involves being critical, aren’t as bad as people say, and I think they are outweighed by the benefits.

Q: Any final words of advice to mobilize an army of nerdy, would-be quack-busters?

A: Doctors need to grow a bit of oomph about setting out evidence clearly in the way I think their patients would expect them to.

It’s quite common in a one-to-one medical consultation for there to be a conflict between what the doctor and the patient want. For example, patients come wanting benzodiazepines to get to sleep. The doctors won’t want to prescribe them because they think it’s not in the long-term interest of their patients—they think it will cause more harm than good.

We have an obligation to stand up not just to patients but administrators and legislators. Not in a pompous, childish, warfare way. But to stand up and set out the facts clearly and not let issues of values and evidence get confused, as they so often do.

This article first appeared in The Medical Post.

Science-ish is a joint project of Maclean’s, The Medical Post, and the McMaster Health Forum. Julia Belluz is the associate editor at The Medical Post. Do you have a burning question about science or a health claim you’ve seen this year that seems dubious? Message Julia at [email protected] or on Twitter @juliaoftoronto by December 13 to participate in a year-in-review Science-ish column.
