Elizabeth Dubois is an assistant professor at the University of Ottawa. Her work examines digital media, influence and political engagement. Find her on Twitter @lizdubois, or at www.elizabethdubois.ca.
Politics, the old expression goes, is the art of the possible. It is also about constantly testing how far you can bend the rules of the game.
Take, for example, the case of a Twitter account that parodied Liberal environment minister Catherine McKenna. The account, previously called “Catherinne McKennna” (using the handle @CatheMcKennna), was banned from Twitter after complaints were submitted by various users, including McKenna’s office. There was outrage: did a government request cause a Twitter account to be deactivated? What are the implications for free speech? But then the account returned, this time as “CatheMckennnnnna,” under the handle @CatheeMcKennnna.
While whoever filed the complaint is likely not happy to see the account back, the change has evidently been enough to satisfy Twitter. That’s because the account can no longer be mistaken for the minister’s own, which is used to share policy updates and other information. It seems a bit silly, but the visible pile-up of those extra n’s is enough to make most Twitter users think twice before assuming the account belongs to the actual minister.
There has been outrage, in the wake of the ban, about government intervention in social media. But what is perhaps more chilling is what this episode has shown us about how the legitimacy of political tactics is determined, and how much power social-media networks are being allowed to wield.
Political tactics have always intentionally tested the boundaries of acceptability, and people from all sides want to make the most effective use of the tools they have to reach and mobilize supporters. With ever-changing digital communication tools and platforms, political players have to test out the best ways to share their messages, to learn what works and what doesn’t, and to find out what the public finds charming and what they find creepy. This negotiation builds consensus around what is fair play in our democratic system.
The rapid rate at which technology is developing—especially artificial intelligence—means we will see many more negotiations around political norms in the years and decades ahead. But that speed is making it nearly impossible for existing law and the judiciary to keep up. Journalism, too, can shine a light on emerging tactics, but there are limited resources and a lack of access to international platform companies like Twitter. That leaves three key players to decide what the line is between a legitimate and illegitimate political tactic: government, tech companies, and people.
Like it or not, governments have long played a role in deciding what is an acceptable political tactic. They help develop and enforce laws and policies related to hate speech, online bullying, and voter suppression, to name a few.
The question of what counts as an acceptable political tactic comes back to the ever-challenging task of balancing free speech with other rights and freedoms. We have agreed that some things must be weighed against our right to free expression. Enforcement is not easy, but it is important to protecting the rights and freedoms of Canadians.
Tech giants like Facebook, Google and Twitter lobby government, try to drum up sales in political settings, and depend on cultivating audiences for advertisers in order to make money. Twitter, specifically, also serves a purpose in the lives of some Canadians and has become particularly important for political insiders. But the specific Canadian context, laws, and interests are not ultimately these companies’ main motivation when they decide what crosses the line and what does not.
In fact, other governments have only recently begun to put pressure on these platforms, prompting some wide-scale progress in curbing problems such as harassment and hate speech. And because these measures have been late to arrive, are only voluntary in Canada, and were not necessarily designed with the Canadian context in mind, we cannot trust that they would survive if the business case for them dried up.
It has been far more common for these companies to offer government offices and political parties a case-by-case approach. Instead of designing robust systems to identify and deal with voter suppression, the hacking of a politician’s account and, yes, impersonation, these companies train politicians and bureaucrats to use their tools and then give them a number to call or an address to email, telling them, in effect: we will deal with any infringements you find.
But this approach encourages cherry-picking—say, when the parody account of a federal cabinet minister confuses someone with a lot of followers. The approach is not proactive, and it makes for a hostile environment where it is politically risky to report anything, as evidenced by the blowback against the government officials who reported the account to Twitter.
More concerning, the approach gives the platforms a lot of power with no accountability. For example, many questions remain unanswered in the case of the McKenna parody account. Was it her office’s complaint that prompted the account takedown? How many others complained? What was the decision-making process? We do not know the answers, because answering would be risky for Twitter, and no government has yet compelled the company to be more transparent.
That’s where the third party—the people—should come in. The parody-account debacle could have been avoided if people had simply clicked through to read the account’s bio. But there are two big problems with this argument. First, clicking through is simply not how most people use Twitter, and a good political strategist knows you need to think about how people already use a channel of communication in order to design the best messaging approach. Second, it is increasingly difficult to distinguish between real and fake as technology evolves.
We need to increase our media and digital literacy efforts, and they need to be embedded in everything. From platform design to journalistic coverage to government statements, people need to see how decisions about what makes the cut are made. The kind of literacy required can only exist with the contribution of all political players. This is a lot of work and will take resources and commitment, and it will still be only a partial fix.
Ultimately, the problem we face is not whether a few Twitter users were confused by an insufficiently obvious parody account, or even whether the government overstepped in responding to some social-media satire. The problem is figuring out whom we are willing to let decide what counts as a legitimate political tactic and what does not. International platform companies like Twitter, Facebook and Google are being left to make up the rules as they go, with no transparency or accountability; and frankly, like governments, platform companies likely do not want all of that power, or the blame that comes with it.