Technology

Is Canada ready for the radical change artificial intelligence will unleash?

Orwellian scenarios involving AI are closer than most people realize, spawning a tense debate: how far should AI be allowed to go?

Cybernetic robot hand and child's hand point toward each other. (Coneyl Jay/Getty Images)


The digital tech world has always relished a thrilling zero-to-60 story, but the meteoric ascent of Element AI, a tiny Montreal start-up, was rapid even in an industry where breathless tales of explosive growth are the coin of the realm.

Founded just last year by entrepreneur Jean-François Gagné and University of Montreal computer scientist Yoshua Bengio to commercialize cutting-edge artificial intelligence algorithms, Element in early June bagged an astonishing $102-million equity infusion from a syndicate of venture capital outfits eager to tap into what many in the tech world see as the next major revolution in computing.

Element will use the funds to recruit dozens of highly skilled software engineers and invest in other start-ups in the frothy world of “neural networks,” Bengio’s field. (His own academic research is now being deployed in much-discussed applications such as smartphone apps that can screen for skin cancer.) “It’s one more signal telling the world that Canada is becoming an AI leader,” he says.

AI is an umbrella term that encompasses rapidly emerging fields such as “machine learning” and “neural networks.” The latest technologies are capable of finding, learning and interpreting intricate patterns hidden inside massive tranches of data—everything from huge collections of medical images to banking records and insurance claims. In the popular imagination, the term AI is often taken as shorthand for computers that don’t just perform complex calculations but also think, reason, learn and make autonomous choices.

AI algorithms make increasingly sophisticated predictions—what movie you want to see next or the likelihood you’ll default on a loan, for instance. Experts point to a wide range of emerging applications in fields as diverse as personalized medicine, cybersecurity, self-driving vehicles, weapons systems, predictive policing and even law—uses that could radically transform everything from basic tasks to complex professional decisions and geopolitics. Someday, humanoid robots with advanced AI capabilities may wander among us.

While some emerging AI technologies still have plenty of bugs—the internet abounds with memes of botched voice and image recognition results—potential future applications should make everyone a bit uncomfortable: Will military planners dispatch weapons, like drones, programmed to make their own judgments about what to target? Will police departments equip themselves with AI systems that can chew through huge data sets to identify individuals likely to commit crimes? And will vast segments of the workforce, from truck drivers to call centre operators, find their somewhat repetitive jobs rendered obsolete?

These Orwellian scenarios are closer than most people realize, and have spawned a tense debate about how far AI should be allowed to go. In fact, Tesla founder Elon Musk, an outspoken critic, recently accused Facebook founder Mark Zuckerberg of being wilfully blind to the risks of a technology that, he contends, could someday dominate humans. Not to be outdone, Russian President Vladimir Putin, during a press conference in September, mused that “the one who becomes the leader in [AI] will be the ruler of the world.”

Chilling stuff, given the source.

Whatever one’s view, it’s clear that the AI industry won’t be stopped. Many companies now realize they will have to invest in AI technologies—thus the whoosh of investor interest in Element AI and other start-ups, like Toronto-based Layer 6. “Every company will have to have an AI strategy if it’s going to succeed in the next 10 years,” says Layer 6 co-founder Jordan Jacobs.


Canadian policy makers and tech investors say the stakes are considerable: more and more corporate leaders and government officials have come to view AI as a wealth-creating sector to which they want to hitch their economic development wagons. Ottawa recently changed federal visa rules to encourage more graduate students and tech workers to locate here. The Ontario Liberals, meanwhile, have spent much of the year aggressively courting tech giants that are heavily invested in AI’s present and its future.

In fact, both governments, a host of corporate giants and the University of Toronto earlier this year anted up $250 million to fund the Vector Institute for Artificial Intelligence, which has a mandate to back emerging AI entrepreneurs, generate jobs for AI graduate students and put Canada on the map as a global player in this brave new world.

In the time-honoured tradition of mathematics, Sanja Fidler, an assistant professor of computer science at the University of Toronto and one of the Vector Institute’s co-founders, strides over to one of the two smudged whiteboards in her sparse office to illustrate a point she’s making about machine learning, a form of AI that involves algorithms that can be ‘trained’ to learn as they go. Both whiteboards are covered with scrawled schematics showing the architecture of software designed to perform tasks such as recognizing objects captured on a digital video camera.

Fidler’s research focuses on image recognition and interpretation. If a digital camera can ‘see’ a pop can on the sidewalk or someone behaving suspiciously on a subway platform, how do you write code that disaggregates the image into millions of pixels, detects patterns and compares those to similar configurations gleaned from other images? She has a graduate student working on such an algorithm for a street cleaning application, using video sensors and software that can distinguish between a piece of trash and a dropped wallet. “That,” she says, “is a big, big challenge.”

The best way to solve this problem isn’t to give the computer access to all possible images of trash and wallets—an impossibility—but rather to create algorithms that process sensory information the way the brain does, with intricate and ever evolving networks of neurons and synapses that can ‘learn’ to recognize shapes and forms and sounds through repetition.
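To make that idea concrete, here is a minimal sketch, not drawn from the article, of a tiny neural network written in plain Python with NumPy: through thousands of repetitions over the same toy examples, its randomly initialized ‘synapses’ are nudged until it can tell two kinds of points apart. The data, layer sizes and learning rate are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): a tiny neural network that "learns"
# to separate two clusters of points through repetition, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory" data: two clusters of 2-D points standing in for two shapes.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100).reshape(-1, 1)

# One hidden layer of 8 "neurons" with random starting connections (synapses).
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):          # repetition: see the same examples many times
    h = np.tanh(X @ W1 + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)      # predicted probability of the second shape
    # Backpropagate the error and nudge every connection slightly.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"accuracy after repeated exposure: {accuracy:.2f}")
```

Nothing here is told what a “shape” is in advance; the network simply adjusts its connections after every pass until its guesses line up with the examples it has seen.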

This idea has been the life’s work of U of T computer scientist Geoff Hinton, widely considered one of AI’s greatest gurus. “What drives Geoff is how the human brain works,” says Fidler, who has studied with Hinton. But for years, AI experts mainly dismissed his theory. “Geoff would say most people called him a lunatic for most of his career,” adds Jacobs, whose business partner at Layer 6 is another Hinton collaborator. “It turned out he was right.”

In 2012, Hinton published a research paper that turned the AI world on its head. He showed that computer systems designed to mimic the human brain can be programmed to do very human-like thinking tasks, among them iterative learning. Hinton’s algorithms performed almost twice as accurately as their closest competitors. (Hinton is currently a Google fellow and a key player in Vector.)

University of Alberta AI expert Russell Greiner says a simple way of illustrating the difference between conventional AI and the machine learning advances is to think about a program that can figure out how to prepare dishes for a dinner party that will satisfy everyone’s tastes. The conventional approach depends on the computer having access in advance to all the information it needs to perform the task: e.g., that the host can’t include pomegranates because they’re not available, and mustn’t cook with nuts because two guests have allergies. If all the ingredients and preferences are known, the system can come up with an optimal menu.

With machine learning, the algorithm doesn’t ‘know’ anything at the outset. Rather, it has a means of watching individual guests eat many meals in succession (i.e., prior to the theoretical dinner party). Eventually, the algorithm develops a model for what Miriam likes to eat, and another for what Lucah enjoys, and so on.

With each successive meal, the algorithm also improves its ability to make a reasonably accurate guess about what foods Miriam and Lucah prefer, and in what combination. After the system has learned the guests’ respective tastes, it begins calculating how to find a set of ingredients and dishes that all six will enjoy.
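As an illustration only (Greiner’s example is a thought experiment, not code), the sketch below captures the machine-learning half of the contrast in a few lines of Python: tally what each guest enjoyed across many observed meals, turn those tallies into per-guest preferences, then search for the menu with the best worst-case satisfaction. The guests, dishes and observations are invented.

```python
# Minimal sketch of "learn tastes from observation, then plan the menu".
from collections import defaultdict
from itertools import combinations

# Observations gathered over many prior meals: (guest, dish, enjoyed?)
observations = [
    ("Miriam", "salmon", True), ("Miriam", "salmon", True),
    ("Miriam", "lentil curry", False), ("Miriam", "salad", True),
    ("Lucah", "lentil curry", True), ("Lucah", "salmon", False),
    ("Lucah", "salad", True), ("Lucah", "lentil curry", True),
]

# "Training": estimate, per guest, the fraction of times they enjoyed a dish.
counts = defaultdict(lambda: [0, 0])          # (guest, dish) -> [enjoyed, seen]
for guest, dish, enjoyed in observations:
    counts[(guest, dish)][0] += int(enjoyed)
    counts[(guest, dish)][1] += 1

def preference(guest, dish):
    enjoyed, seen = counts[(guest, dish)]
    return enjoyed / seen if seen else 0.5    # unseen dish: no opinion yet

guests = {g for g, _, _ in observations}
dishes = {d for _, d, _ in observations}

# "Planning": pick the two-dish menu with the best worst-case guest score.
def menu_score(menu):
    return min(max(preference(g, d) for d in menu) for g in guests)

best = max(combinations(sorted(dishes), 2), key=menu_score)
print("menu:", best, "worst-case satisfaction:", round(menu_score(best), 2))
```

The conventional approach would instead need every preference and constraint written down in advance; here the model of each guest emerges only from watching them eat.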

The rapid recent advances in machine learning are also attributable to the development of increasingly sophisticated sensors—everything from high-quality digital cameras and recording devices to other technologies that capture and import the data used to train these systems. The other critical ingredient is the growing availability of massive pools of digital information, from online images and videos to smartphone GPS signals, social media content and all the highly granular, up-to-date data now embedded in online maps.

The possible applications run the gamut. The most hotly anticipated, or hyped, are self-driving cars, freight trucks and even transit vehicles. Equipped with highly sensitive sensors and drawing on constantly updated GPS data that may eventually include information on construction sites or even potholes, they use machine learning systems to constantly process what they detect in their surroundings and navigate the vehicle accordingly. Most major car manufacturers, as well as the ride-sharing giants Uber and Lyft, have trials or real world pilot projects underway.


Medicine is the other huge frontier. A Stanford team in February published the results of a study showing how a neural network could be “trained” to spot potentially cancerous skin lesions by using mobile phones to take digital photos of the blotch and comparing them to a database of almost 130,000 images representing over 2,000 conditions. The system “achieves performance on par with all tested experts,” according to the authors, writing up their findings in Nature.
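The general recipe behind such systems, sketched here as a hedged illustration rather than the Stanford team’s actual code, is to take a network already trained on millions of everyday photos, replace its final layer with one sized to the diagnostic categories, and fine-tune it on labelled lesion images. The folder layout, class labels and torchvision version (0.13 or later) below are assumptions.

```python
# Hedged sketch of fine-tuning a pretrained image network on lesion photos.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical dataset: lesion photos sorted into one folder per diagnosis.
train_set = datasets.ImageFolder("lesion_photos/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network trained on millions of everyday images, then swap
# its final layer for one sized to our diagnostic classes.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

net.train()
for epoch in range(3):                      # a few passes over the photos
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(net(images), labels)
        loss.backward()
        optimizer.step()
```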

Other emerging and potential applications focus on productivity, efficiency and cyber-security solutions for large companies, governments and manufacturers looking to invest in next-gen robots that have the adaptability to do more than merely repeat a highly standardized set of tasks. Many of the customized software systems provided by Element AI, Yoshua Bengio’s firm, fall into this category.

Tech investor Mike Serbinis—who is CEO of League, an online health benefits firm, and a Vector board member—adds that AI algorithms are already improving fraud detection and credit risk evaluation for insurance companies and financial institutions. “That’s the meat and potatoes, the low-hanging fruit,” he says.

The machine-learning revolution is also surfacing in unexpected fields, like law. Blue Jay Legal, a start-up established last year by U of T law and computer science profs, offers a service based on an AI system trained to read and interpret legal decisions in tax law. When a client is looking to challenge a Canada Revenue Agency decision about a complicated tax treatment, the firm’s software will provide a report on the likelihood that an appeal to federal tax courts will succeed or fail.

In some U.S. states, Wired reported earlier this spring, AI algorithms have been used to determine bail, sentencing or parole eligibility, drawing on historical data and profile details about prisoners to make predictions about likely recidivism. In one case cited by the magazine, the question of transparency surfaced: the algorithm belonged to a private firm that didn’t want to reveal how the software came to particular conclusions.

Computer scientist Richard Zemel, Vector’s research director, focuses his research on detecting and accounting for bias in collections of data. He points out that large sets of sentencing or parole information likely contain evidence of racial prejudice—e.g., guilty verdicts based on fabricated evidence against a visible minority—that skews an algorithm’s ability to make fair predictions. “You can’t blindly take that data to be true,” says Zemel.
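One simple precaution that follows from Zemel’s point, sketched here as an assumption about practice rather than Vector’s actual methodology, is to audit historical records before training on them, checking whether an outcome label is distributed very differently across groups. All figures below are invented.

```python
# Minimal sketch of a pre-training bias audit on hypothetical records.
import pandas as pd

# Hypothetical historical parole records (all values invented).
records = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "reoffended": [ 0,   0,   1,   0,   1,   1,   0,   1 ],
})

rates = records.groupby("group")["reoffended"].mean()
print(rates)

# A large gap here may reflect biased policing or charging practices as much
# as real behaviour; a model trained on this label would learn the gap too.
if rates.max() - rates.min() > 0.2:
    print("warning: outcome label is heavily skewed across groups")
```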

University of Ottawa law professor Ian Kerr, Canada Research Chair in ethics, law and technology, notes that AI algorithms themselves contain assumptions that could produce skewed results, although the calculations are so complex that it can be extremely difficult to understand how.

With machine learning systems in particular, the algorithms operate not according to some pre-determined set of rules written by humans, but rather organically, based on the patterns they detect as they evolve and eventually display what Kerr calls “emergent behaviour.”

In recent research papers, Kerr has posed another heavy ethical/legal question about the emergence of these technologies: what happens when machines can perform tasks more accurately and predictably than humans? With trucks driven by automated systems that don’t get sleepy and respond predictably to bad weather, the benefits seem fairly uncomplicated.

Or what about physicians whose diagnostic batting averages lag behind those of AI-fuelled computers that take a patient’s imaging data and spit out a verdict? What happens if the specialist and the algorithm come to opposing conclusions? “The doctor finds herself in an existential scenario,” Kerr says. “How do we resolve disagreements between humans and machines when machines generally outperform humans?” He warns that as professionals become more reliant on AI systems to make complex decisions, they’ll inevitably lose the capacity to understand the underlying basis for the diagnosis.

Such futuristic conflicts may not be the only social side effect of widespread AI adoption. A growing number of critics also wonder what happens if large chunks of the labour force are displaced by AI robots and machine learning-driven devices. “There will be fewer and fewer jobs that a robot can’t do better,” Elon Musk predicted at a conference earlier this year.

New technologies always produce changes in work, including the demise of certain types of jobs. No one bemoans the loss of the hand-operated loom, and there aren’t many unemployed weavers around. In fact, it may be that truck drivers are the equivalent of yesterday’s lamplighters or milkmen—people in occupations that will disappear as the economy and technology move on.

But the issue of machine learning-induced job displacement has a more urgent cast because the pace of technological advancement is so fast. In a study published this spring, Yale and Oxford political scientists surveyed hundreds of AI researchers on their estimates of how long it would take before machines could outperform humans doing tasks ranging from writing student term papers to handling retail sales and even performing surgery. The answers, though guesstimates, are sobering: a dozen years for truck drivers, 15 for retail clerks, 40 for surgeons. The authors even asked the respondents to guess how long it would take before AI technology outperformed AI scientists (about 80 years).

“Because machines are increasingly good at doing what humans will do,” observed Ryan Avent, a U.S. economist and author of The Wealth of Humans, in a recent interview, “the only jobs that the average human will really be able to get are those where it’s attractive to hire them because they’re really cheap.”

Bengio has no patience for scare-mongering predictions or Terminator-like scenarios. “The machines we’re building are very far from the intelligence humans have,” he insists. “As far as I’m concerned, [the forecasts] are all crap.”

It’s true that even scholarly predictions that look ahead a generation or more are little more than educated guesses. But network, data storage and computing power expand at a fearsome pace; consider that the World Wide Web, as we know it now, is barely out of its teens, and yet is capable of things no one would have thought possible as recently as the mid-1990s. There’s no reason to conclude that AI and machine learning won’t make comparable quantum leaps that produce dramatic social change, some of it positive but some not so much.


Even small examples point to the potential for disruption: a growing number of banks are now looking at AI-powered online chat interfaces capable of dealing with relatively ordinary queries customers have about their own accounts. Some of these systems are being marketed to financial institutions looking at reducing labour costs. There’s no reason to believe such customer-experience technologies won’t spread to telecom giants, airlines, and any other firm with a call centre.

Ian Kerr, for his part, says he’s got little interest in the sci-fi version of AI and prefers to focus on near- and medium-term social, legal and ethical questions that, he argues, deserve far closer scrutiny than they receive, especially at a time when Canadian governments are betting heavily on AI’s economic development potential. “We haven’t seen nearly enough of that,” he asserts.

Musk agrees. He cites pointed examples such as the use of machine learning in advanced weapons technologies—a development being actively pursued in the U.S. and China, according to a recent report by the New York Times. (Responding to such stories, Pentagon officials have insisted that humans must always control weapon systems.)

Questions hover around other applications, such as image search and recognition for videos, Sanja Fidler’s own field. An upbeat potential use: computer-generated narration of videos for the visually impaired. But she acknowledges that it would be possible to adapt those kinds of applications for surveillance purposes—for example, machine learning systems that could search or even monitor CCTV security videos for individuals who may be acting suspiciously. “The system would have to understand if someone is not behaving according to some norm,” she says.

While these use cases are still highly theoretical, such scenarios raise all sorts of thorny problems about privacy and due process—issues that also arise in other potential applications, such as the use of vast storehouses of patient data.

Richard Zemel, Vector’s research director, says Canada’s new AI institute is “part of the conversation” about social impacts but not actively involved in studying the potential implications—a shortfall Kerr believes should be remedied.

University of Toronto president Meric Gertler, who was actively involved in establishing Vector, says it is “critically important” for scholars and policy-makers to focus on the social context of these technologies, and points out that CIFAR, which administers Vector’s funding, is pursuing a national strategy to explore AI’s impact.

Less clear, of course, is what happens if and when the huge commercial opportunities presented by the next wave of advances in AI collide with tough questions about law, ethics and regulatory oversight. As Kerr says, “You don’t hear any of that in the media coverage of the Vector Institute.”

At Element AI, flush as it is with more than $100 million of expansion capital, Bengio’s team of AI engineers will spend the next several years racing hard to develop commercial applications for paying customers even as they push themselves towards new machine learning frontiers, such as “common sense reasoning” that seeks to mimic the brain’s ability to do things like interpret facial expressions or colloquial speech. The way computers and humans interact, Bengio explains, “will change significantly in the coming years. But it’s not a solved problem.” Yet.
