Thursday evening, Facebook CEO Mark Zuckerberg posted a video in which he committed his company to providing more transparency around political advertising in the future. Zuckerberg spun it as a move that would make Facebook even more transparent than other media. “When someone buys political ads on TV or other media, they’re required by law to disclose who paid for them,” Zuckerberg explained. “But you still don’t know if you’re seeing the same messages as everyone else. So we’re going to bring Facebook to an even higher standard of transparency.”
Going forward, he said, not only will Facebook force disclosure of which page paid for an ad that users see, but the company will “also make it so you can visit an advertiser’s page and see the ads they’re currently running to any audience on Facebook.” Should that prove workable, it might be significant. Micro-targeted messaging on social media platforms like Facebook has evolved to a point where users can never be quite sure whether the ad they’re seeing is the same as the one their neighbours or family members see, because data such as personal browsing history has built a profile of them that advertisers, including political parties, can use to target slightly different messaging.
But while its effectiveness remains to be seen, it’s clear already why Facebook has chosen to make this move: the walls are closing in.
Facebook’s reluctance to divulge information about political advertisements was always rooted in self-imposed rules: rules the company simply chose, for business reasons, not to break. For a little while, that wasn’t a problem. Facebook would, in fact, frequently point to political campaigns as case studies of its audience-targeting capabilities.
Political advertising, to Facebook, was no different in effect from any other kind of advertising. The 2016 U.S. election changed that.
A few hours before Zuckerberg posted his video and message about political ad transparency, The New York Times reported that Facebook is preparing to reveal to congressional investigators the more than 3,000 ads a Russian outfit paid to have displayed on Facebook during the 2016 presidential campaign. That news followed Facebook’s own admission earlier this month that a Russian “troll farm”—a literal building in Russia that reportedly houses employees tasked with influencing online political discussion—had purchased $100,000 worth of advertising between June 2015 and March of this year.
Incredibly, the social media platform may have faced an even more uncomfortable headline this week: a BuzzFeed report from Myanmar, which suggested a correlation between the rise of anti-Muslim sentiment (which has boiled over into what some have called a genocide) and an explosion in internet adoption since 2014, or, more specifically, a rise in Facebook use over that period. “Its domination is so complete that people in Myanmar use ‘internet’ and ‘Facebook’ interchangeably,” BuzzFeed’s Sheera Frenkel wrote. “According to Amara Digital, a Yangon-based marketing agency, Facebook has doubled its local base in the last year to 9.7 million monthly users.” That’s even before Facebook’s Free Basics program—that is, affordable access to a select range of online services (the main one being Facebook) for nations with poor web infrastructure—kicks in.
Ironically, at the heart of all Facebook’s problems are its best intentions. In a lengthy manifesto posted earlier this year, Zuckerberg reasserted that Facebook wants to connect the world. The vision Facebook has always attached to that lofty goal is one of barrier-free cultural and economic connections, a kind of globalization 2.0. But just as with the first iteration of globalization—that which promoted free trade between states, rather than individuals—many people around the world are discovering that the product does not live up to the promise. The world Zuckerberg hopes Facebook can help create has started to appear more dystopian than he likely intended.
And Facebook is not merely trying to rehabilitate a darkening public image: ahead of it lies not just uncomfortable public relations but potentially a fight for its survival as a corporation. Among those now openly musing about treating Facebook as a public utility is none other than Steve Bannon, former adviser to Donald Trump and still, reportedly, an influential voice in the president’s ear. That might be a bridge too far even for Trump, but as European governments attempt to hold Facebook and other tech companies more responsible for the content that appears on their platforms, the threat still looms in the U.S. that measures like the Communications Decency Act might be clarified to make corporations like Facebook more accountable.
Which is why Zuckerberg moved first. Accountability is something Facebook doesn’t necessarily want to accept, certainly not in full. All of the negative headlines that have been attached to Facebook recently were generated by the content the platform allows its users to upload. At the end of the day, it wants to be responsible for none of that, or at least for as little of it as possible. And so the company shifts. Its push for transparency in this case puts the onus on political parties and organizations to reveal what else they are advertising, sidestepping most of the responsibility Facebook might have to reveal much about how the ads got placed, or, most crucially, the information that played a role in a certain ad being shown to a certain person at a certain time.
In the end, that kind of stuff—the data it has on all of us, and that we give it—is worth too much to reveal.