Imagine this scenario: It’s the 2019 Canadian federal election, and millions of people are sharing an article on Facebook that makes explosive claims about one of the federal party leaders. The article is spreading so quickly that someone at one of the federal parties has contacted Facebook using a dedicated crisis “hotline” and requested that the article, which they claim contains fake information, be taken down.
A team of Facebook employees calls a meeting at the company’s Menlo Park headquarters to discuss the possibility of taking down the offending article, deleting the Facebook page that published it, and even altering the platform’s News Feed algorithm to suppress stories containing similar content. Finally, a group of “content moderation” contractors stationed in Manila receives instructions from Facebook HQ to scrub the platform of all instances of the offending article before anyone else sees or shares it.
Is the article true, but inconvenient for one of the parties? Is it false, but from a reputable outlet? Or is it completely made-up with malicious intent? There’s no way for Canadians to be sure, because this entire process takes place behind closed doors.
This might sound terrifying, like a piece of bad science fiction. But this is a potential future based on a reading of Facebook’s Canadian Election Integrity Initiative, launched in Ottawa on Oct. 19—a future where the people who helped bring us fake news, 1960s-style racist redlining, live-streams of murder and suicide, and the largest global censorship operation in history are now also in charge of maintaining the integrity of our democratic institutions. And indeed, after months of publicly denying the idea that it could have any influence over an election at all, Facebook is now ready to appoint itself guardian of our democracies.
“At Facebook, we take our responsibilities seriously and are committed to do our part to guard against these potential cyber threats,” said Facebook’s Canadian public policy head Kevin Chan at the Ottawa launch event for the initiative, which appears to be based on similar efforts launched during France and Germany’s recent elections. There, Facebook shut down tens of thousands of fake accounts and set up “dedicated reporting channels” with authorities for flagging fake or misleading information. Minister of Democratic Institutions Karina Gould was also on hand, claiming that social platforms like Facebook “must begin to view themselves as actors in shaping the democratic discourse.”
But far from trusting Facebook, Canadians should be increasingly wary of the extent to which the platform has paralyzed Canada’s ability to protect its own democratic institutions. “[I think] the mechanisms that we’ve put in place to secure the integrity of the election are actually undermined and probably unenforceable in the platform ecosystem,” says University of British Columbia journalism professor Taylor Owen, who wrote in a recent op-ed in the Globe and Mail that the biggest foreign threat to Canada’s democratic institutions might not be Russian bots or hackers—but Facebook itself.
“Does Facebook threaten the integrity of Canadian democracy?” Owen wrote in that op-ed. “It is increasingly apparent that the answer is yes.”
Facebook will, of course, play some role in the next federal election, whether Canadians like it or not. Nearly three-quarters of all internet traffic today filters through Facebook and Google. A Pew Research Center survey in September found that 45 per cent of American adults today get their news from Facebook—and that percentage is probably higher in Canada, which has the most active Facebook population in the world. By the next election in 2019, the eyeballs of a majority of Canadian voters will be accessible to anyone in the world with a Facebook page and a credit card.
“We’ve gotten to a place where we have one social media platform for most Canadians. The reason why that’s so problematic is that you’re expecting one company to fix everything,” says Fenwick McKelvey, an assistant professor of information and technology policy at Concordia University.
While Facebook says it wants to build “a global community,” as its founder Mark Zuckerberg wrote in a long essay in February, the company’s track record says otherwise. A recent ProPublica investigation found that Facebook had not fixed illegal racial discrimination in its ad-targeting systems, despite the company’s claims to the contrary. Similar investigations by journalists like Gizmodo’s Kashmir Hill have found Facebook employees themselves often have trouble understanding Facebook’s vast, complex operation. And in 2016, the platform was instrumental in the birth of the “fake news” genre—sensational, fictitious news stories that make outlandish claims about prominent political figures—promoting fake stories at a higher rate than most real news from major outlets like the New York Times and the Washington Post. Despite efforts to fight fake news on the platform, false stories and conspiracies masquerading as news continue to constitute a large portion of the editorial content circulating on Facebook.
The company, too, has shown itself uninterested in protecting its users from false information. “Facebook took the same approach to this investigation as the one I observed during my tenure,” Sandy Parakilas, a former employee of the company, wrote in the New York Times as Facebook was being investigated as part of a probe into Russian interference in America’s 2016 election. “React only when the press or regulators make something an issue, and avoid any changes that would hurt the business of collecting and selling data.”
“They are not a public good,” says Owen. “They are a private company that is publicly traded, that has a board of directors that has a fiduciary responsibility to make more than the $26 billion they made off ad sales last year, next year. That is what they are. That might not be what they want to be, that might not be what the identity of the people who work there is—they might legitimately believe in this narrative around ‘community’ and ‘global connectivity.’ But from our standpoint, looking at it objectively, looking at it from a distance as a country who is impacted by their activities during an election, I don’t think we can look at them as anything other than that.”
As writer John Lanchester pointed out earlier this year, Facebook today treats its users not as its customers, but as the product. The only way Facebook can keep such a large user base engaged with such a relatively small number of employees—one engineer per one million users, as the company likes to brag—is by algorithmically promoting whatever holds users’ attention, with as little regard for the actual content as possible.
The more that experts like University of Ottawa communications professor Elizabeth Dubois look at the issue, the clearer it becomes that letting Facebook serve as the world’s de facto election observer, without any complementary regulation from governments, is irresponsible.
“I think it’s very dangerous for us as Canadians to expect that that kind of self-regulation would be enough, and would be consistent,” says Dubois. “Facebook needs to figure out its policies in the context of a global marketplace, and what’s good for Canadian democracy is not necessarily what’s good for many other countries.”
So what does a regulated Facebook look like? For Dubois, the first step is simply getting hold of the appropriate data. Facebook is a notoriously closed-off platform, and is extremely protective of its data—even for a tech company—releasing it to authorities and researchers only in exceptional circumstances.
“I think the most important thing is that we—either as researchers, the general public, or at the very least Elections Canada—have access to the right kind of data so that we can enforce our laws around foreign interference and advertising spend. Right now that information is not made available by Facebook to Elections Canada, or anyone else.”
Concordia University’s McKelvey similarly imagines a more transparent “auditable Facebook.”
“Where is the role of government? I think it’s partially in setting up ways to make sure some of the claims that Facebook and other companies are making are true.”
To some extent, governments are encouraging Facebook’s worst habits by giving it, and Silicon Valley more broadly, the benefit of the doubt and special treatment, from an agreement with Netflix to spend just $100 million a year on Canadian content, to starstruck rapprochement with companies like Uber and Airbnb.
But to truly protect its delicate democratic systems, Canada will need to see past the narratives and its own tendency to treat Silicon Valley companies as magical entities beyond regulation. In the election ahead, it will need to take steps to protect itself. We cannot, despite Minister Gould’s suggestion, rely on an American software company to shape our democratic discourse. That is our responsibility.
CORRECTION, Dec. 5, 2017: A previous version of this post erroneously listed Kevin Chan’s job title. We regret the error.