From an industrial park in Oshawa, Ont., Robert Mowles sells the human future, one little camera at a time. It will be an age—should we not turn back, and time is running out for us to do so—of omnipresent surveillance, unceasing recording and perpetual spying, performed by an array of insidious, wondrous devices that hear and remember everything we say, see everything we see and follow us everywhere we go.
Some academics call this dawning epoch of infallible facial and speech recognition, interactive robots, driverless cars and artificial intelligence the “age of ubiquitous listening.” Some doomsayers call it “the end of privacy.” Whatever is coming, we stand barely at its Stone Age.
“Two or three years ago, I had a Samsung smart TV,” Mowles is saying in his office at Aartech, one of Canada’s largest suppliers of video cameras, infrared sensors and “smart home” automation to the retail and wholesale trade. “That was when there was this great big backlash that the voice-activated remote control was listening to you all the time. People were saying, ‘Oh, it’s phoning home, it’s recording everything you say.’ But you had to push a button to make it work, so I just turned it off.”
“Now, fast-forward to today, and I’ve been testing out Google Home,” he says, referring to Google’s voice-activated, artificially intelligent assistant. “At first, I was not comfortable at all with it. But it only took a few days for that to change. I enjoyed telling it to play music. I enjoyed that when I was washing dishes and my hands were wet, I could tell it to switch on the lights. I gradually forgot that it was listening all the time.
“Google says that it is only recording for about eight seconds after you say the keyword. My concern is that it doesn’t only listen when you say the word; that it’s going all the time. But I don’t have the answer to that. Only Google knows.”
Convenience; the cool factor; new toys that incrementally, addictively sap a user’s independence from technology’s spidery web. In California, a man has been raising money on Indiegogo.com to market a combination salt shaker and voice-activated centrepiece that he calls SMALT. You talk to the device or shake your phone and sodium chloride comes out.
What’s next: SMEPPER? “It’s like everything else,” says Mowles. “It’s a stupid idea until we adopt it, and then we can’t live without it.”
Who buys the little cameras that Aartech sells? Nursing homes, to prevent abuse of patients. Homeowners with what Mowles calls “the crazy neighbour.” Alien-hunters tracking UFOs. “A lot of people who suspect that their spouse is cheating,” says Mowles. “Though if you’ve already lost that level of trust, you’re wasting your time and money.”
Mowles doesn’t have a surveillance camera in his office—or at least that’s what he tells a visitor. He has taped over the lens on his laptop, although, he says, “I don’t know why—it gets my audio anyway.”
Soon enough, there will be so many cameras looking at us that the world will run out of tape.
“What if they hack our smart door locks?” Mowles muses. “We all talk to our cars. What if they hack your car?”
“Is it frightening?” he is asked.
“I think it’s only frightening if you have something to hide,” he says.
“Don’t we all have something to hide?”
“Yes,” he replies, “but it depends. Is it just embarrassing, or is it criminal?”
“Nobody is required to place an Amazon Echo in his home,” says Sameer Patil, an assistant professor at Indiana University. (Echo, whose female-voiced “Alexa” responds to a user’s voice commands, is Amazon’s rival to Google Home.) “Nobody is forced to be on Facebook. But people like the convenience. Telling Google Maps your location to find a coffee shop is convenient. People think, ‘Oh, I only shared my location to find a gas station,’ but our full day is the sum of many parts that lead to a greater understanding of where we go, who we are and what we do,” adds Patil, an expert on the effects of overt and covert surveillance on human society.
“Amazon Echo; Google Home; a smart thermostat—most people have no idea that this [data] is being collected. A lot of people trust Amazon. So do I, to some extent. I don’t mind if they’re hearing what I’m saying and storing it for four months, but what if it’s stored on a server that gets hacked? What if it has stored my locations and it knows I’m on vacation and somebody robs my house? It’s not about being afraid of the parties that gather the information—it’s the second and third parties and what they can do with it, plus the element of government surveillance, which I personally do not want.”
“Is this the end of privacy?” Patil is asked.
“We are getting to the point where we will have to warn our friends that they are being recorded when they come to our homes,” he replies. (As a measure of the Echo’s popularity, the Alexa app, which is needed to run the device, was the most downloaded app in Apple’s App Store on Christmas Day.) “When I go to a friend’s house and he has an Echo, I push the mute button. Is this the end of privacy? That is a question that we will have to answer looking back at history. It could go either way.”
“People have been saying since the 1970s that data collection is going to be a threat,” says Abraham Newman, a professor at Georgetown University in Washington, D.C. “But there has been a switch from the idea that Big Brother would be the government that has all this information to a corporate panopticon. But in a way, it’s not just a question of whether Big Brother is a corporation or Big Brother is the government—either one of them, or both of them, can lead to the sort of surveillance state in which they know that you’re African-American, they know that you have just purchased alcohol, and there are red lights that are going to go off if you enter a certain area.”
Across North America, the age of ubiquitous listening—and watching—has already begun. In Baltimore, a company called Persistent Surveillance Systems has been hired to deploy a Cessna fitted with ultra-sensitive cameras over courthouses, demonstrations and troubled neighbourhoods. In California, 40 per cent of local police departments contacted by the American Civil Liberties Union in 2017 admitted to conducting social-media surveillance on private citizens without informing legislators or the public.
The federal Public Health Agency of Canada is reportedly set to hire an Ottawa analytics company named Advanced Symbolics to comb thousands of status updates, tweets and snaps on social media for keywords that could foretell a spike in suicides. “We’re not violating anybody’s privacy—it’s all public posts,” the company’s CEO explained to the CBC.
In Toronto, Google subsidiary Sidewalk Labs is partnering with the city to transform a derelict section of the waterfront into an ecological, demographical and technological utopia to be called Quayside. Google’s proposal includes “a distributed network of sensors to collect real-time data about the surrounding environment” and a promise, in CEO Dan Doctoroff’s words, “not to use the data for commercial purposes but instead only to use it for the quality of life.”
But in an interview with GeekWire last November, Doctoroff admitted that he could not predict what the City of Toronto or other stakeholders will do with facial-recognition or other information that the sensors obtain. “There may be alternate structures that we don’t control, that public entities control,” he said. “We don’t know.”
When even Google doesn’t know where the data goes, what chance is there for the ordinary citizen?
“In a courtyard which he shares with others, a man should not open a door facing another person’s door, nor a window facing another person’s window,” advises the Jewish Talmud.
“Do not spy,” succinctly commands the Holy Koran.
“Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life,” wrote Louis Brandeis and Samuel Warren in the Harvard Law Review in a seminal article entitled “The Right to Privacy,” “and numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’ ”
That was in 1890. But the mechanical devices of the 21st century make moot the well-meaning admonitions of scholars both biblical and jurisprudential.
“When I talk publicly about the right to be forgotten, all I get are blank stares,” says Donovan Molloy, the information and privacy officer for Newfoundland and Labrador. “When I ask an audience if they have read the instructions on how to set the privacy settings on their internet-connected devices; when I tell them that the toy they just bought for their child is watching and talking to their children, they have no idea what I’m talking about. 1984 is here and it’s voluntary—it’s not being imposed on us. It’s here and we signed up for it!
“If I started a political party and its policy was to ask the public to tell me about all their business transactions, to list all their friends and personal affinities and to tell me everywhere they go, they’d say I was crazy and they’d never vote for me,” Molloy, a former Crown prosecutor, reasons. “But ask the same questions on their devices and they surrender without even thinking.”
Molloy worries about a world of omnipresent surveillance in which, he says, “someone in the government could decide, ‘Well, you have been to a bar six times this month, and you just bought two packs of cigarettes,’ so I will get put on a longer wait-list for a medical procedure because they find me responsible for my own poor health.
“But who is ‘they’? Who gets to decide whether or not you are a productive member of society? If you go to a bar more often than you go to church, if you don’t donate to charity, and if it all goes into some algorithm, then the question becomes, who gets to write the algorithm?”
“He that invents a machine augments the power of a man and the well-being of mankind,” wrote the American social reformer Henry Ward Beecher in 1887. So consider the machines that are being invented today: machines that will know you by your voice; machines that will know you by your face.
An Israeli-born “algorithm scientist” named Yishay Carmiel is working on the voice part; he is the head of the speech-recognition and machine-learning division at Seattle-based Spoken Communications.
“The first task is that you build some kind of human-like talking machine,” Carmiel says. “By that I mean Alexa, Siri, Google Home, a personal assistant. Right now, when people talk to Alexa or to Siri, they are starting to understand that ‘Hey! This thing actually works!’ In terms of a simple voice search for Siri, Alexa and Google, the state of the art is very good—about a five per cent error rate. But the second task is analyzing human conversation, and right now, that is impossible.”
In 2012, a Florida man named George Zimmerman, serving as a volunteer neighbourhood watchman, shot and killed a 17-year-old African-American named Trayvon Martin, whom he suspected of being in the community for “suspicious” purposes. At one point, the two struggled, and one of them shouted, “Help!”
When the murder case against Zimmerman, who claimed he fired in self-defence, came to trial, experts in speech processing attempted to determine whether the voice on the cellphone audio was Martin’s or Zimmerman’s. They could not; the clip was too noisy and too brief. Zimmerman was acquitted.
In 2015, three friends sat around a house in Arkansas, watching football on television and drinking. In the morning, one of the three was found floating in the hot tub, dead.
When the murder case against the homeowner came to trial, it was disclosed that there was an Amazon Echo in the house where the men were partying. Headlines shouted, “Did Alexa hear a murder?”
It took a formal request from the prosecutor before Amazon would consent to play the Echo’s audio file for the judge. Nothing of forensic value was found. (Amazon claims that it retains only a few seconds of input after the keyword “Alexa” is spoken.) In the Arkansas case, all charges were dropped.
In each case, what had changed was not the capabilities of the technologies involved, but society’s growing expectation of ubiquitous surveillance. We move rapidly from amazement to expectation; our machines mature quickly from toy to tool. Remember when you used to phone your friends on your brick-sized cellphone just to say, “Guess what? I’m calling from the car!”
“The holy grail, if you look at it from a science-fiction perspective, is that we want to humanize machines,” Carmiel says. “We want to build the friendly robot that we can control with our voice because the voice is the most natural communication interaction for us and we want to do the same for a machine. It has a psychological effect—this machine is your friend, it is someone you can trust.”
Note the terminology: not “something.”
“To apply speech recognition to everything we say, everywhere we go, is very complicated,” Carmiel says. “I’m not saying it’s impossible, but it is a big challenge. Imagine the amount of information that is contained in a 20- or 30-minute call. There is a lot more data there than just asking Alexa what the weather is in Cincinnati.”
Even with these challenges, speech recognition may be the easier nut to crack on the path to our omni-audio-video future. Erik Learned-Miller of the University of Massachusetts is one of the continent’s leading researchers in the field of facial recognition. His lab is moving far beyond the iPhone X to what he calls “labelled faces in the wild,” the ability to digitize and recognize our faces as we cry, laugh, grimace, slumber, scream.
“The iPhone X got a lot of publicity, but one thing you need to understand about it is that it uses what you could call active lighting—it shines out a pattern of infrared light onto a person’s face and uses triangulation to measure the 3D shape of the face,” Learned-Miller says. The system can’t be fooled by a photograph because photographs are flat, he adds.
Aim your iPhone at a movie screen and it cannot identify the actors, either. But when Learned-Miller walked past one software company’s booth at a recent computer-vision conference in Venice that was attended by 4,000 scientists, a screen instantly displayed his image and his name. “That was very impressive,” he says. “I don’t know what their accuracy rate was, but I know they got me.”
“Was it scary?” Learned-Miller is asked.
“This is a very interesting question,” he replies. “Of course, there’s the typical scientist answer: ‘Any tool can be used for good or bad, and other people will decide how to use these things’—which is kind of a cop-out.
“To have governments keep all the data about everybody leads to the kind of thing going on in China—it can be used to pressure people to stay out of organizations, to keep them from associating with each other. Like anything, it’s almost impossible to put it back in the bag. Even if the technology doesn’t get any better than it is today, the database will get bigger and that will lead to the ability to recognize more people.”
“I’m not paranoid, and I’m not living in a bunker,” says Molloy. But he sees trouble “getting closer every day. If our laws don’t keep pace with these technologies, we are going to find ourselves in a regime that we never anticipated. There are a few voices shouting in the wilderness, and there are a few legislators who are aware of the repercussions of these technologies, but on the other side are companies with hundreds of billions of dollars who are going to say, ‘We don’t need to be controlled.’ As a privacy commissioner, and as a citizen who doesn’t want to lose my privacy, I find this very frightening.”
“One of the problems with mobile phones, with social networks, with Facebook gathering lots of information about you, is that every time you log on you are losing some of your privacy,” says Carmiel. To sell targeted ads, Facebook tracks its users’ online activity, location and other personal details, including birthdays, moods, relationships and financial status. “I am working on speech recognition, but I’m not trying to build a system that will serve as a Big Brother. I think that any system we design should always have an off switch. As humans, we must always have the choice to turn it off.”
Two years ago, a group of academic researchers installed multiple cameras in the homes of 10 volunteers in Finland. The project was designed to study what the scientists called “the long-term effects of ubiquitous surveillance in the home.” Each participant was permitted one “safe room” for intimate moments: secret whispers; storing cash.
“First,” the researchers reported, “the subjects showed no negative effects on stress or mental health attributable to surveillance. The tentative conclusion would be that the mental effects of long-term subjection to invasive ubiquitous surveillance are non-existent. Second, although perhaps not potent enough to cause severe issues in this setting, the surveillance system proved to be a cause of annoyance, concern, anxiety and even rage.”
Patil is working on the follow-up study: what happened when the cameras were removed? “Some of the people, yes, they felt liberated,” he says. “But the amazing thing was that most of them still continued the same behaviours they had adopted when the cameras were in place. They still moved to the safe room when they wanted to do something they didn’t want to have seen. The surveillance had changed them.
“There was one man who accidentally walked naked in the house when the cameras were there. He had this moment of shock, then he lost his inhibitions altogether. He said, ‘Now there is nothing left for me to hide. Once it’s out, it’s out forever, and there is nothing I can do to get it back.’ ”