TECHNOLOGY

How the internet may be turning us all into radicals

Online algorithms that have learned how to push our emotional buttons are polarizing society—and the emerging science on the implications is troubling

In September 2017, Zeynep Tufekci gave a TED Talk in New York City titled “We’re building a dystopia just to make people click on ads.” Tufekci, a techno-sociologist at the University of North Carolina, is a rising star on social media, where she regularly rings alarm bells about the not-so-pretty underbelly of the internet and its effects on society.

In her talk, Tufekci took on one of the internet’s most elusive bogeymen: machine-learning algorithms—the kinds of artificial intelligence programs that use data to develop ways to perform tasks, independent of a human programmer.

Tufekci cited an example she had encountered while searching online for videos of Donald Trump rallies during his 2016 election campaign. After plumbing the vast trove of videos, she said, she noticed that YouTube’s recommendation section began suggesting and auto-playing “white supremacist rants, Holocaust denials and other disturbing content,” even though she had been searching for mainstream material.

Tufekci then tested the opposite end of the political spectrum. She created a new YouTube account and began streaming videos of Hillary Clinton and Bernie Sanders. Before long, YouTube was recommending “videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11.”

As she continued the experiment, Tufekci realized the phenomenon extended beyond the political realm. When she streamed videos about vegetarianism, YouTube recommended videos about veganism. “Videos about jogging led to videos about running ultramarathons,” she wrote in a New York Times column. “It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm,” Tufekci mused.

After conducting more research, Tufekci came to a disturbing conclusion: YouTube’s algorithm “leads viewers down a rabbit hole of extremism.”

During her TED Talk, she cited investigations by ProPublica and BuzzFeed showing that algorithms on Facebook and Google can be used to identify people with anti-Semitic tendencies and target them with ads. Facebook, she added, also “helpfully offered up suggestions on how to broaden that audience.”

The main thrust of Tufekci’s argument was disturbing: while we fret over radicalized ideologues and their potential for violence—whether they are Quran-thumping religious extremists who frequent jihadist websites, closeted neo-Nazis orbiting Stormfront or Marxist-Leninists still pining for a revolution—society as a whole is being steadily guided down a path of ever-increasing extremism.

That prospect leads to some unsettling possibilities. The internet—and not only the extreme, warped corners of it like 4Chan—is increasingly governed by algorithms that have learned how to push our emotional buttons, creating social media echo chambers that have polarized society to the point where political and social debates have been reduced to mud-slinging and trolling.

It is turning us all, to one degree or another, into radicals.

On the mainstream internet, we don’t call these processes radicalization, at least not yet. But according to Amarnath Amarasingam, a senior research fellow at the London, U.K.-based Institute for Strategic Dialogue and a postdoctoral fellow at the University of Waterloo who tracks online radicals, there are points of convergence.

“Processes of polarization have the potential to breed radicalization,” he says, “and radicalization sometimes, for some people, tips over into violence.”

The mystery is why only a small percentage of radicalized people turn to violence. No one has been able to answer that question in any holistic way, in part because there appear to be so many different variables at play.

A person can be drawn in by an ideology—jihadist or extreme right-wing—but even those processes are poorly understood. Some experts argue that the ideology comes first and a person is radicalized as they learn more about it. Others contend that ideology may in fact come after a person is already on the path to radicalization. Or it may be a combination of the two.

There’s also the difficult subject of mental illness. For years, it was accepted wisdom that there was no connection between violent radicalization and mental illness. But recent studies are beginning to show some connection, particularly for lone actor terrorists.

At the forefront of this research are two experts at University College London in the U.K., Paul Gill and Emily Corner. They have used empirical approaches to demonstrate that lone actor terrorists are 13 times more likely to suffer from mental illness than their group actor counterparts.

Corner has noted, however, that in her research she has never found a direct causal link between a person being diagnosed with a mental illness or psychotic break and then directly committing a terrorist act. “There are always mitigating factors,” including life events and traumas, she said in a November 2017 interview on the Talking Terror podcast produced by the Terrorism and Extremism Research Centre at the University of East London.

Nonetheless, Corner and Gill have found in their research that lone actor terrorists show a higher prevalence of schizophrenia, delusional disorder and autism spectrum disorders than the general population. They warn, however, that this does not mean every lone actor terrorist suffers from mental illness—in fact, the empirical evidence shows fewer than 50 per cent do—but that these people appear more likely to suffer from mental illness.

Does the internet then adversely affect these people? That’s a big question mark, say experts.

Different studies have come to different conclusions in different contexts. In the U.K., Gill and Corner found in a 2015 study that the growth of the internet between 1990 and 2011 “did not correlate with a rise in lone-actor terrorist activity” but did change the “means of radicalization and attack learning.”

In the U.S., according to the Department of Justice-funded Lone Wolf Terrorism database, the number of lone actor mass murder attacks per decade has skyrocketed from two in the 1950s to 35 since 2010, with the period of internet growth since 1990 showing a nearly four-fold increase over the previous four decades.

But even those numbers don’t show the full picture. The database’s definition of a lone actor attacker is limited to those with a political motive. Radicalized killers like Micah Johnson, who gunned down five police officers in Dallas in July 2016 because of his anger over police shootings of black men, would not be listed.

Part of the problem is how quickly the internet is evolving, and how young the scholarly research teasing apart the relationship between the online environment and its real-world consequences still is.

Machine-learning algorithms in particular have only really taken off in the last few years, and their effects are poorly understood. Social media use has exploded among young people, but how it changes their cognitive development is still a relative mystery.

What’s worrying, however, is that the same 2015 study conducted by Gill and Corner also noted that young people were “significantly more likely” to develop extremist ideas online.

So, if, as Tufekci and others claim, society as a whole is being radicalized to some degree by online algorithms, what does that mean for young people? And what are the consequences if mental illness is a contributing factor in radicalization?

There are too few long-term studies to come to any hard conclusions, but one published by Australian researchers in 2016, looking at adolescent internet use over a four-year period, suggests there is a causal link between compulsive internet use, or CIU, and mental illnesses like depression and social anxiety, particularly among children under 15.

The study, one of the most exhaustive of its kind, tracking the development of more than 2,000 girls and boys, found that CIU “predicted the development of poor mental health,” though the exact mechanisms remain unclear. Interestingly, the study concluded the reverse was not true: “poor mental health did not predict CIU development.”

This would seem to support Tufekci’s argument that people are becoming more radicalized because of the internet.

One study, however, is not gospel. It’s one thing to become addicted to the internet and lose the kind of direct contact humans need for their mental health. It’s another thing altogether to become radicalized and commit acts of violence.

That causal relationship has not been established, at least not yet. But we are perhaps seeing the first glimpses of it.

“We have created tools that are ripping apart the social fabric of how society works,” Chamath Palihapitiya, the Canadian venture capitalist and former senior executive at Facebook, told an audience of business students at Stanford University in late 2017. “That is truly where we are. The short term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, misinformation, mistruth. And it’s not an American problem; this is not about Russian ads. This is a global problem. It is eroding the core foundations of how people behave.”

Palihapitiya is famous for dramatic language but at the heart of his doomsday predictions is a deep understanding of how social media platforms like Facebook operate. He was in charge of growing Facebook’s user base during its early years and participated directly in strategies to draw in new users and keep them hooked.

The problem, he has said in multiple interviews, is that Facebook’s primary goal is to keep people engaged on its platform as long as possible. To do that, it uses machine-learning algorithms that essentially exploit users’ subconscious desires, which Facebook infers from the masses of usage data it collects.

The YouTube recommendation algorithm Tufekci tested, according to Guillaume Chaslot, a French programmer who helped create it in 2010, is successful at keeping viewers engaged because it knows how to press the right buttons and up the ante on the videos it feeds them.

“You come into that filter bubble, but you have no way out,” Chaslot, who left YouTube in 2013, told Bloomberg last May. “There’s no interest for YouTube to find one.”
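
What that dynamic might look like in miniature can be sketched in a few lines of Python. The example below is a toy, not a reconstruction of YouTube’s or Facebook’s systems: the video catalogue, the “intensity” scores and the scoring rule are all invented for illustration. It simply shows how a recommender that optimizes for expected watch time, with a small bonus for content slightly more intense than the viewer’s last video, will drift steadily toward the extreme end of its catalogue without anyone programming extremism in deliberately.

    # A deliberately simplified, hypothetical recommender. The catalogue, the
    # "intensity" labels and the scoring rule are invented for illustration;
    # this is not YouTube's or Facebook's actual code.
    import math
    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        intensity: float          # 0.0 = mild, 1.0 = most extreme (hypothetical label)
        avg_watch_minutes: float  # how long similar viewers kept watching (made-up data)

    CATALOGUE = [
        Video("Beginner jogging tips", 0.1, 3.0),
        Video("Train for your first 10K", 0.3, 5.0),
        Video("Marathon training mistakes", 0.5, 7.0),
        Video("Running 100 miles through the desert", 0.8, 11.0),
        Video("The most extreme ultramarathon on Earth", 0.95, 14.0),
    ]

    def score(video: Video, last_intensity: float) -> float:
        # Engagement proxy: expected watch time, boosted when the video is a
        # notch more intense than the last one watched (novel but familiar)
        # and discounted when the jump is too large to feel relevant.
        preferred = last_intensity + 0.2
        closeness = math.exp(-5.0 * abs(video.intensity - preferred))
        return video.avg_watch_minutes * closeness

    def recommend_next(history: list) -> Video:
        last_intensity = history[-1].intensity if history else 0.0
        candidates = [v for v in CATALOGUE if v not in history]
        return max(candidates, key=lambda v: score(v, last_intensity))

    if __name__ == "__main__":
        watched = [CATALOGUE[0]]   # the viewer starts with a mild jogging video
        for _ in range(3):
            watched.append(recommend_next(watched))
        for v in watched:
            print(f"{v.intensity:.2f}  {v.title}")
        # Each recommendation is a notch more intense than the last, even though
        # nothing in the code mentions extremism: escalation falls out of
        # optimizing for engagement alone.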

For its part, YouTube says it is working on ways to better screen for misinformation and abusive content, but it hasn’t addressed the ethical issues of how its algorithms exploit human tendencies to keep people engaged on its platform.

Generally, the ethical issues of how algorithms work have so far received little, if any, attention outside academic circles. A basic problem, ethicists say, is that it’s hard to determine who exactly to blame for the negative outcomes these algorithms might generate.

Often, it’s not just one algorithm at work in determining how to manipulate users but an assemblage of algorithms, making the task that much more complicated. Some ethicists have suggested that rather than cracking open the algorithms we should instead look at the outcomes themselves: are they benefitting society as a whole or harming it? It’s a difficult question to answer. But if our engagement with them is indeed radicalizing us, we should at least be paying more attention.
