Opinion

Fake video is a big problem. And it’s only going to get worse.

A wave of new technology will soon mean anyone can produce convincing forgeries

Deepfake video by Jordan Peele. (BuzzFeed/YouTube)

This will be the year that you can no longer believe your eyes.

Video manipulation techniques that previously only special effects studios could pull off will soon be available in off-the-shelf programs that anyone can use, and we are not ready for the way that will change the world.

The first sign of the future of mass uncertainty came in April, when American filmmaker Jordan Peele released a video of Barack Obama calling Donald Trump a “dips–t,” something the former president may believe but never said.

The convincing fake video was produced using a few inexpensive, widely available programs that use artificial intelligence to edit video, but it was only possible because Peele, a talented mimic, could do a good Obama impression. Sometime this year, Adobe is expected to release a sound-editing program that will allow anyone to produce audio of anyone saying anything. And more powerful AI-driven fake-video technology is on the way.

In a paper presented at SIGGRAPH, the computer-graphics conference held in Vancouver in August 2018, university researchers unveiled “deep video portraits,” a technique that can quickly and convincingly transpose an actor’s head and mouth movements onto a video of anyone else. Before long, it will be possible for anyone to produce convincing fake video.

“I think it’s safe to say that what took 400 hours will take 40, and we’re not that far from it taking four,” says Mark Nunnikhoven, a security expert with Trend Micro Canada, a cybersecurity firm. The people who understand where the technology is going are alarmed, because the internet will soon be flooded with fake videos produced by people with bad intentions. “It is going to be extremely difficult to figure out who do you trust, to what extent, and given the speed at which people react to things, we are not set up for success currently,” says Nunnikhoven.

In November, when CNN reporter Jim Acosta refused to relinquish the microphone to an intern during a tense exchange with Trump, the hoax-spreading outlet Infowars released a doctored video of the confrontation that made Acosta’s contact with the intern appear deliberately aggressive. White House press secretary Sarah Huckabee Sanders circulated that video to justify revoking Acosta’s press pass. Because the incident came under so much scrutiny, it was quickly determined that the video had been altered. But not every video will be examined this closely.

By October, when Canadians vote in the next federal election, it will be possible to produce fake videos of candidates saying whatever you like—and it will also be possible for candidates to deny saying things they did say. “It’s pretty scary stuff, because if you can’t tell the difference, who do you believe?” says Nunnikhoven.

This year, things will get a lot worse, because the technology is getting so good that anyone can produce fake video of people they know. Such videos are known as “deepfakes,” a mash-up of “deep learning” and “fake.” “If somebody gets in your Facebook feed and gets 20 minutes of you talking on YouTube or whatever, that could be enough,” says Regina Rini, an assistant professor of philosophy at York University. “That’s super scary.”

Rini thinks the implications are frightening. “What I’m imagining is people being gaslit about their own life experiences,” she says.

Facebook is already used to spread fake news stories, sometimes with fatal consequences: the Rohingya in Myanmar were the victims of genocidal attacks after fabricated stories whipped up public sentiment against them. It is chilling to think of what might happen if fake news takes the form of fake videos of atrocities.

And there are terrible implications for political debates and the justice system, because we use video and audio recordings to settle disputes over events, like the microphone grab in the White House.

For the past century, recordings have served as an “epistemic backstop” when people can’t agree on what happened, says Rini. “That’s going to go away.” It might soon be much more difficult to use recordings in court cases, because defendants will be able to cast doubt on their authenticity. If you can’t rely on recordings to settle such disputes, if everything is up for debate, we may no longer have confidence in mutually accepted reality. “Once that happens, our testimonial practices might come undone, because people will be more free to lie because they know that there’s not a chance of being checked by audio or video recordings,” says Rini. “It’s almost catastrophic.”

Rini suspects that the Trump administration will take advantage of this, and soon. “You can tell they’re just waiting until everyone has heard about deepfakes, and then they can start saying the video has been faked entirely. And I don’t know what happens after that.”

