Logging off this mortal coil: Will social media decide who lives and who dies?

Julia Belluz on life and death in the intersection of health and social media

This is the final part of a series of articles adapted from the 2012 Hancock Lecture, “Who Lives and Who Dies, Will Social Media Decide?” delivered at the University of Toronto by Julia Belluz. Read parts one, two, and three.

In many ways, social media is already determining who lives and who dies. Arijit Guha might have been another statistic demonstrating the brutality of the U.S. health system without his serendipitous Twitter exchange. For better or worse, YouTube videos have prompted MS sufferers to travel the world for Liberation Therapy, and parents to shield their kids from vaccines. Facebook campaigns are driving the research agenda around organ donation, and social reporting may change the trajectory of the next pandemic.

So the question becomes: How can we emphasize the upsides of this intersection of social media and health, and minimize the harms? As I reflected on this problem, I realized it isn’t a new one at all. We’re talking about credibility: who and what to trust, and which evidence should shape us and our society. This is an age-old question, but it may be more urgent now, given the volume of information we are dealing with and the speed at which it reaches us.

While the internet may have changed the nature and speed of our social interactions, our neighbours, friends, and networks have always influenced how we live and how we think about health, long before social media, all the way back to the African savannah. Information circulated around the village, with elders passing wisdom about treatments and cures down through the generations.

For a long time, this is how medicine worked, too. Doctors didn’t always practice in an evidence-based manner. Their actions were informed by outdated information they received in medical school, the supposed wisdom of elders. Up until pretty recently, doctors and treatments killed more people than they helped.

Only in the last 30 years did the idea that doctors ought to practise evidence-based medicine take hold and get put into action. Now, we rigorously test new treatments, usually with double-blind randomized controlled trials, to make sure we know they work. The expectation is that doctors will use the best, most up-to-date evidence conscientiously to make decisions about patients and the delivery of health services.

This principle is based on a simple idea: Not all evidence is created equal, so not all types should be acted on in the same manner, or acted upon at all. For example, anecdotes and opinions are powerfully persuasive; they are the stuff that moves us, that goes viral on social media, and that even changes policy directions. But that power can be problematic. Stories can be cherry-picked or manipulated to promote a particular point of view, they can be biased, and they can sometimes be driven by interest groups of which we aren’t aware. They aren’t necessarily the whole story, or even part of the story, and while heartwarming or moving, they can lead us astray.

Take the Frasier star Kelsey Grammer and his then-wife’s confessions about her life with irritable bowel syndrome a few years ago.* They made the media rounds, including an appearance on Oprah, talking about her discomfort. While it appeared to viewers that the couple was speaking independently, they were actually being paid to deliver these charming anecdotes by the pharmaceutical giant GlaxoSmithKline, as part of a well-timed public-awareness campaign.

Not by accident, shortly after the Grammers hit the airwaves with their bowel tales, the company released the IBS drug Lotronex. Unfortunately for everyone involved, the drug was later linked to serious adverse events, including severe gastrointestinal side effects and death. It was eventually pulled from pharmacy shelves.

Going back a bit further, consider why end-stage renal dialysis is covered by Medicare in the US. In 1971, a kidney failure patient testified before Congress while attached to a dialysis machine, and told his story. He said if they didn’t cover the cost of his treatment, he would die within a week, and thousands of patients like him would die, too.

It was a moving testimony, and sure enough, a year later, in 1972, Congress passed the special Medicare dialysis entitlement. The program is still in place today, and it is widely seen as badly supervised and not cost-effective.

This man’s anecdote was not the best evidence on which to base the allocation of billions of dollars of health funding, in the same way Grammer should not be the font of wisdom for information about IBS.

There are many other examples like this. Anecdotes feed our human desire to connect with real people and real stories, with names and faces rather than numbers or statistics. They move us to act. But too often we rely on anecdotes instead of evidence that may be more credible, and more applicable to our own circumstances.

Because of the limitations of single cases or studies, the scientific community began synthesizing evidence, integrating and summarizing multiple sources of information coming from different contexts, settings and methods in an explicit and systematic way, to answer research questions.

Cochrane systematic reviews are the highest-quality example of these research syntheses, and they have been deemed one of the greatest scientific inventions of our age. Multiple researchers tackle one review according to predefined protocols, and their separate conclusions are brought together and weighed against one another to limit bias and random error.

These reviews reflect the iterative and incremental nature of scientific discovery: answers aren’t final, the evidence is evolving and studies need to be put in context.

Perhaps most importantly: breakthroughs are rare. A recent review of systematic reviews demonstrated that the “too-good-to-be-true” big effects and breakthroughs of initial studies often melt away as further research is done.

The Cochrane Collaboration, a non-profit collective of researchers who do systematic reviews, was founded in 1993, named after a Scottish doctor called Archie Cochrane. He had a simple but profound idea: health resources would always be limited, so we should invest in those that have been shown—through careful and systematic study—to be effective.

This thinking should inform our health decisions. Instead of relying on what the woman at the health-food store tells us about which supplements to buy, or taking the health advice of celebrity doctors, we should be skeptical about the advice that informs crucial decisions about our health.

And this idea ought to spread beyond the medicine cabinet and bedside to other areas that affect our health. Right now, there’s an international chorus arguing that the culture of policy development ought to move in a more evidence-based direction. At present, there’s a discrepancy between how we decide which drugs work and how we decide which policies and programs work.

With drugs, we use science, testing them with randomized controlled trials to try to get unbiased answers about their effectiveness. This is not always the case with policies, which affect the health of thousands, even millions, of people at a time.

The evidence policymakers use is usually indirect, because we don’t rigorously evaluate how existing policies are working. One way to make headway is to gradually test new programs and policies, one by one, in the same way we began testing treatments. That way, we learn, and we know whether something is effective or not.

To be sure, there are flaws in the evidence-based process. Reports of scientific fraud and misconduct abound. There are cases of ghostwriting in academia, where pharma companies pay academics to put their names on scientific papers. The perverting influence of industry on science is well documented, as are the huge limitations of the evidence doctors use to prescribe drugs, owing to hidden and missing data from clinical trials.

But as these problems are uncovered and identified, there’s hope that they will be addressed and that reforms will be undertaken, especially if we hold accountable those who are responsible. Aiming for a healthy skepticism about health information, and to think in an evidence-based manner, is a worthwhile goal. And it’s better than the alternative.

In low- and middle-income countries, this is already becoming the norm, perhaps because they don’t have resources to squander. I saw it first-hand this summer, when I visited Ethiopia for a World Health Organization meeting of researchers, journalists, and policymakers from across the developing world.

While there, I met Dr. Lely Solari, a physician-researcher from Peru, who told me about a big public health problem in her country: there are areas where anemia in children reaches 70 per cent prevalence. If you’re anemic, your body does not have enough healthy red blood cells, which carry oxygen to your tissues. Anemia makes people feel weak and dizzy. They can’t focus in the classroom or at work, and anemic kids may suffer developmental delays.

Fortunately, we have an effective treatment for anemia: micronutrient powder. It can decrease the prevalence of anemia in places where the problem is widespread. Yet in Peru, a pilot with micronutrient powder had to be stopped because it seemed it wasn’t working. Dr. Solari was asked to go in and find out why. She studied the program and discovered that the micronutrient powders weren’t the problem. It was how people were taking them.

The sachets of powder were being delivered by the ministry of health without any counselling, and people didn’t know what to do with them. No one had been evaluating the impact of the program, and there was little community buy-in. In other words, the system by which an effective treatment was being delivered was broken, so the treatment didn’t work.

Dr. Solari took all these issues and her proposed solutions to the ministry. Now, because of her work, the micronutrient program in Peru has not only been saved; it’s being scaled up to 14 regions of the country. Lives will likely be improved, even saved. The program will continue to be evaluated, and that data can inform other programs in the future.

This was not an easy problem to solve. The real problem wasn’t even visible at first glance. It certainly couldn’t be condensed into a sound bite. It required careful study and careful consideration. With Dr. Solari’s research in hand, policymakers were able to weigh their options carefully before proceeding.

This story is a reminder that we must also carefully consider the evidence we use to make health decisions, whether it comes from a tweet or the mouths of celebrity doctors. We have to proceed with caution, with a healthy skepticism, at both the individual and societal level. We have to be evidence nerds. If we want to protect our health, there is simply no alternative.

*Correction: An earlier version of this article suggested Kelsey Grammer, and not his wife, had IBS.

Science-ish is a joint project of Maclean’s, the Medical Post and the McMaster Health Forum. Julia Belluz is the associate editor at the Medical Post. Got a tip? Seen something that’s Science-ish? Message her at julia.belluz@medicalpost.rogers.com or on Twitter @juliaoftoronto



