The ethics of restricting speech on social media

There are good reasons for tech companies to prohibit hate speech on their platforms. But their status as private organizations is not one of them.

Marchers at a white-supremacy rally encircle counter-protesters at the base of a statue of Thomas Jefferson after marching through the University of Virginia campus with torches in Charlottesville, Va., on August 11, 2017. (Shay Horse/NurPhoto via Getty Images)

As we mourn the death of Heather Heyer, who was run down by a car driven by an alleged white supremacist in Charlottesville, Va., last weekend, many tech companies are reflecting on the role they may have played in enabling white supremacists and other far-right extremists to organize, promote hate and advocate violence. While a number of companies have taken steps to censor the messages of, and refuse service to, extremists, some argue that more needs to be done. Critics, meanwhile, worry about the effects of limiting speech and the harm restrictions could do to public political discourse. Is there a case for banning extremist groups from social media and internet platforms? And how should we think about one?

A common approach to thinking about the permissibility of limiting speech focuses on the kind of organization considering the restriction and the opportunities and barriers it faces in doing so. For governments, legal and constitutional protections of speech create a narrow space for restriction. Reasonable limits are sometimes imposed—such as Canada’s prohibitions on hate propaganda in the Criminal Code (specifically, sections 318, 319 and 320)—but the standard of justification for those limits is quite high.

By contrast, when businesses and other private actors consider restrictions, they have more room to act. The flip side of Silicon Valley’s libertarian, content-neutral ethos—which permits all kinds of offensive content—is a libertarian, property-rights-protecting disposition—which, arguably, allows companies to impose restrictions on content arbitrarily. A case in point is the justification Cloudflare CEO Matthew Prince offered in a memo to employees for his decision to boot the extremist site the Daily Stormer from the company’s platform. “The people behind the Daily Stormer are assholes and I’d had enough,” he wrote. “I woke up this morning in a bad mood… It was a decision I could make because I’m the CEO of a major Internet infrastructure company.”

For some, the fact that Prince and other technology CEOs face fewer barriers to banning extremists is reason enough to do so. But focusing on the nature of the organization considering limits gets us only so far. The challenge is determining where to draw the line between offensive speech and hate speech, and what reasons might be offered for banning the latter while permitting the former. That, in turn, means we need to think about why speech matters and confront both the benefits and risks of restrictions.

Consider John Stuart Mill’s classic defence of what he called the liberty of thought and discussion. Mill argued that we ought to tolerate offensive opinions because of the benefits such opinions produce for human and social progress. In some cases, an opinion may be offensive but true—in which case silencing it would rob us of access to the admittedly uncomfortable truth of the matter. In other cases, an offensive opinion may be partly true and, when combined with the partly true popular opinion, produce a greater understanding of the whole truth. Unrestricted speech creates an opportunity to learn and correct errors.

But even in cases where an offensive opinion is false, Mill says that hearing and responding to it can help us better understand the reasons why our current views might be correct and justified. If a strongly held opinion or conviction is not “fully, frequently, and fearlessly discussed, it will be held as a dead dogma, not a living truth.” For example, we might develop a better understanding of exactly why slavery, racism and sexism are wrong—and thereby put ourselves in a stronger position to recognize and dismiss arguments for them—by occasionally grappling with the opinions of jerks who think otherwise.

But notice that if the value of liberty of thought and discussion resides in its contribution to individual and social development, then we should be just as conflicted about restrictions on speech imposed by private individuals and organizations as about those imposed by governments and political institutions. The silencing of an opinion is itself the primary issue, according to Mill, not the status or identity of the censor. “Protection, therefore, against the tyranny of the magistrate is not enough,” he writes. “There needs protection also against the tyranny of the prevailing opinion and feeling; against the tendency of society to impose, by other means than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them.” On what grounds, then, can speech be restricted?

Mill famously argued that liberty—and not just liberty of thought, but of action as well—ought to be restricted only when its exercise harms others. “The sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number,” he says, “is self-protection…[T]he only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” To restrict speech, in Mill’s view, it is not enough that an opinion is offensive; there must be a reasonable expectation that the message, left unrestricted, could lead to tangible harm.

The challenge is determining exactly what constitutes harm and, therefore, what might justify limits on speech. I take offence when someone calls me an idiot. But while that happens more often than I’d like, I’m not harmed in any meaningful sense. When someone encourages someone else to kill me (which happens less frequently, but still more often than I’d like), there is a case for censoring that message—because it threatens substantial harm. But there is a wide open field between those two cases. Exposure to relentless racist, sexist or other vile messages—short of encouraging violence—might constitute a kind of harm. Or it might not. Distinguishing between cases that are merely offensive and those that constitute harm requires thoughtful deliberation and judgment. Mill offers a useful principle, but not an algorithm, for sorting through them.

In thinking about restrictions, then, both advocates and critics should pay less attention to what private organizations are permitted to ban, and more to what they may or may not have reason to ban. If there is something to worry about when tech companies think about speech and censorship, it is that they have enormous power to shape discourse, but little corresponding obligation to provide a formal account of how and why they shape it. In banning hate speech, they’re on solid ground. But what’s less clear is whether tech companies are equipped to distinguish between genuinely harmful and merely offensive speech. Tech companies and their CEOs need to think about good, defensible reasons for acting, and not simply about how they feel when they wake up in the morning.

Dan Munro is Visiting Scholar and Director of Policy Projects in the Innovation Policy Lab at the Munk School of Global Affairs at the University of Toronto. Listen to The Ethics Lab on Ottawa Today with Mark Sutcliffe, Thursdays at 11 a.m. EST. @dk_munro
