In his 2004 Massey Lectures, the novelist, historian and essayist Ronald Wright warned us about the “myth of progress”—the idea that change, especially technological change, always moves in the direction of improvement. Although technology generally has improved human well-being over time, some specific technologies bring change without improvement. Some technologies lead us into what Wright calls “progress traps.” They create problems that can be solved only with additional technology. We end up spending time and resources on problems that we could have avoided altogether had we reflected on the foreseeable implications of a technology before unleashing it on the world.
Facebook Live is falling into a progress trap. Available for nearly two years, it allows people to use their smartphones to broadcast videos of virtually any content they want in real time. Users can livestream their children’s birthday parties, soccer games and dance recitals for grandparents in other cities, or community meetings and political debates to public audiences, without editorial intervention by conventional media. Facebook markets the service as allowing you to “tell your story, your way”—a message designed to resonate with a creative, libertarian ethos.
The problem is that some of the “stories” being broadcast amount to the worst kinds of human behavior. Facebook Live has been used to livestream murders, assaults, rapes and suicides—largely because there is no built-in, real-time review and censoring process. Review and removal of objectionable content depends on user reports, which means that the decision to censor comes only after content has been streamed and seen by others. Just weeks ago, in one of the most horrific cases, Facebook Live was used to broadcast the murder of an 11-month-old girl by her father. By the time Facebook censors removed the video, an estimated 370,000 people had viewed it.
Facebook maintains that the service has value. Users seem to like it, and it can be used for beneficial social, political and humanitarian purposes. Livestreamed interactions with law enforcement officials might encourage better adherence to proper procedures. Broadcasts of the aftermath of natural disasters can increase public sympathy and assistance to survivors. Facebook acknowledges that violent broadcasts occur, but notes that they are rare and that it is taking steps to address them. Last week it announced that it will hire 3,000 people to review reports of objectionable material, adding to the 4,500 people it already has in these roles. But review will still occur only after users make reports. Objectionable content might be removed faster, but countless users will still be able to view the live broadcasts.
A key ethical question, then, is whether Facebook Live generates harms that would not otherwise exist and whether the benefits, such as they are, outweigh these harms. Murder, assault, rape and suicide have always existed, but easier access to videos of these activities could normalize them and contribute to more. Does watching a live suicide, for example, increase or decrease the likelihood of additional copycat suicides? Will viewers watching live suicides and other violent broadcasts experience psychological trauma? These are questions that Facebook should have addressed before introducing Facebook Live.
It is very likely that many of Facebook’s 7,500 reviewers—whose full-time job is to view and censor objectionable content—will experience psychological harm. To be sure, Facebook could, and likely will, offer support to these people, but this sounds like precisely the kind of progress trap Wright encourages us to avoid. All things considered, a better option might be to press pause on Facebook Live and reintroduce it only after steps to minimize its harmful effects have been identified.
Many will balk at this suggestion. Skeptics will say that technology is neutral and should not be blamed for the ill-conceived uses people make of it. Others will say that taking Facebook Live offline would violate free speech. Neither of these arguments hits the mark. Technology may be neutral in a strict sense, but if its foreseeable consequences are negative and preventable, then introducing it constitutes a lapse in ethical judgment. Putting Facebook Live back in the box might reduce opportunities to broadcast messages, but that’s not a limit on speech, only a limit on its reach. Free speech does not entail a corresponding right to an audience of a given size.
We live in a culture that values speed and discounts ethical reflection. Facebook encourages its employees to think about impact and social value, but evidently prioritizes being bold, open and fast. Few companies actively set out to behave unethically. But too few engage in sustained, rigorous reflection on the ethical implications of their products prior to introducing them. The urge to be first—first to market, first to adopt, first to share—is understandable, but dangerous. Our tendency to make poor decisions when pressed for time is precisely why we have introduced mechanisms like pre-market evaluation and editorial review. They slow us down, but slowness might be exactly what we need to make better ethical decisions and avoid progress traps.
Listen to Daniel Munro on The Ethics Lab on Ottawa Today with Mark Sutcliffe, Thursdays at 11 EST. http://www.1310news.com/ottawa-today-live/