Is Facebook a 'con' or not? - Macleans.ca
 

In trying to downplay the impact of Russian-planted divisive political posts on its platform, Facebook prompts questions about its business model


Image courtesy Facebook.


In late October, Facebook (along with its platform contemporaries, Twitter and Google) sent a representative to testify before the U.S. Senate Judiciary Subcommittee on Crime and Terrorism. The topic at hand was the possible Russian influence during the 2016 U.S. presidential election via misinformation spread online. For Facebook, that meant fraudulent posts disseminated on its platform. And, in describing the impact of that misinformation, Facebook may have unwittingly forced a serious question about its entire business model.

MORE: The walls are closing in on Facebook

In its written statement presented to the committee, Facebook added context to its earlier revelations that a Russian troll farm, the Internet Research Agency, had spent $100,000 on roughly 3,000 ads that focused on “amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.” According to CNN, Facebook’s testimony this week clarified that “29 million people were served content directly from the Internet Research Agency, and that after sharing among users is accounted for, a total of ‘approximately 126 million people’ may have seen it.”

However, Facebook also sought to downplay the implications of that figure — roughly the same total number of people who voted in the 2016 election.

“This equals about four-thousandths of one percent (0.004%) of content in News Feed, or approximately 1 out of 23,000 pieces of content,” Facebook said in its statement. “Put another way, if each of these posts were a commercial on television, you’d have to watch more than 600 hours of television to see something from the IRA.”

RELATED: Google uncovers ads placed by Russian operatives, says report

There are at least two issues with framing the debate in this way.

First, as any casual user knows, the algorithms that control what appears in Facebook’s News Feed (the stream of posts that greets us when we open the app or sign in) are at least partially geared toward prioritizing content based on what we’ve previously seen, clicked on, or shared. In other words, the issue is less how many people in total may have seen material generated or paid for by foreign influencers than how many users were exposed repeatedly to similar content once they’d initially engaged with a fraudulent post.

That is to say, it’s the repetition for each user that counts, not the overall percentage of Russian posts present across the entire platform. Further, it likely matters where those users lived – in swing states, for instance.

The second problematic facet of Facebook’s defence—that few people overall may have actually seen the Russian troll farm content—is what it implies about Facebook’s impact on its users. Facebook boasts that, by using its platform and targeting people based on its data — which is, by all accounts, quite rich — advertisers and political parties can deliver their messages effectively. Yet squaring that with how little impact Facebook suggests the Internet Research Agency’s posts had on users creates an interesting quandary.

MORE: What will happen when we fall out of love with tech?

As Dylan Byers at CNN tweeted: “FACEBOOK timeline: didn’t happen — happened, but was small — ok, semi-big — ok, it reached 126 million, but no evidence it influenced them”. Pithy though that assessment may be, Facebook arguing that $100,000 worth of ads had little effect on its intended targets implies that its vaunted targeting abilities might not amount to much. Which should be a wake-up call for legitimate advertisers.

In a talk she delivered last month at TEDGlobal in New York City, techno-sociologist Zeynep Tufekci described the effects algorithms are having on our lives. In doing so, she inadvertently summarized the current dilemma: “Either Facebook is a giant con of a half trillion dollars and ads don’t work on the site – that it doesn’t work as a persuasion architecture — or its power of influence is of great concern. It’s either one or the other.”



Is Facebook a ‘con’ or not?

  1. Good question, IS Facebook a (giant) con? … mm, yes.

  2. Facebook ads are an utter con.

    Typically they use cookies to determine what sort of e-commerce sites a user already visits and shops at, and then advertise those sites to the user. But this creates no value for the advertiser.

    Typical timeline: I buy a product online.
    For the next two weeks, I see ads for that company in my news feed.
    Did the ads influence my decision to buy? No.
    Do they stimulate me to buy more? No.
    I’m sure that in convincing advertisers to purchase its services, Facebook makes all sorts of grandiose claims about effectiveness. But the line of causation runs backwards: I don’t buy products because they are advertised; products are advertised to me because I have already bought them.

    Total ripoff. But nice cat memes.

  3. Facebook obviously has some value for the millions of people who use it. As in anything computer-related, if you have the required skill set to ‘log on’, the computer is ‘smart’ enough to let you be as stupid as you might wish to be. Some software can assist you with that, but it is, and should be, optional. Facebook provides a number of such software tools for users to screen questionable or personally objectionable material – along with a wide range of other ‘stuff’ to which users are regularly ‘invited’. Screening out for everybody is censorship.

    Their business plan is to monetize the membership and turn mouse clicks into revenue. That they are successful is due only to the stupidity of those who buy ad space.

  4. It’s a very old business model common to all media: advertising pays the way. We have direct advertising, infomercials, product placement, product reviews and PSAs; and then there’s so-called journalism harboring a particular bias or cloaking itself in the protection (from honesty) of an editorial. The media food chain feeds on hot news and personalities, which creates opportunities for deception and bias due to expeditious production.

    According to the media, Hillary Clinton has engaged in so many scandals she must never sleep and may have clones. Spin and disinformation are rife: the charitable take is the presumption, with some justification, that journalists are out of their depth in many areas; the truth is more likely that sensationalism drives audiences. As an example, many outlets have called the Rosatom purchase of US uranium capacity ‘strategic’, ignoring details such as the fact that the supply chains for nuclear power and nuclear weapons are two completely different things, and/or buying into political statements without question. In some cases, the incredible becomes news simply for effect: the Canadian press blindly tells us that a certain pipeline company with 1,500 employees will create 15,000 jobs by increasing its capacity by less than 10%.

    However, since the question here seems to be not even truth in advertising but who paid for it, there is no observable evidence that media in general are particularly concerned about where their ad revenue comes from, or that government-funded advertising is devoid of political messaging. Even the selectivity of media coverage (the media-market equivalent of shelf space) raises questions; for example, the possible use of an email server by the ex-husband of one of Clinton’s aides got much more coverage than the claims about Trump’s crotch-grabbing. That certainly gives support to accusations of gender bias – as does the Canadian media’s repetition of the ‘climate Barbie’ catcall.