What’s it really like in the driver’s seat of a driverless car?

Google thinks safe, driverless cars could be on the market in four years. Teaching them to navigate around terrible drivers might just be the easy part.


In this Sept. 3, 2013 photo, a Google self-driving car is displayed during a news conference at the Virginia Tech Transportation Institute’s Smart Road in Blacksburg, Va. The car is equipped with laser range finders, radar and cameras to monitor its surroundings and react accordingly. Matt Gentry/The Roanoke Times/AP

If your grandmother were the sort of driver who put caution before haste—if she had two heads-up display screens on her dash, five computers in the trunk and a big, red kill switch on the console; if she updated you in dulcet tones on her vehicle’s every turn, and if that vehicle happened to be a Lexus hybrid SUV—then travelling in the Google self-driving car would be like riding with your grandmother.

We’re waiting in the left-turn lane at the intersection of San Antonio Road and Nita Avenue in Palo Alto, Calif., and our vehicle is acting anxious. Roboticists will cringe at the anthropomorphism. But how else to describe it? Pressed close to the left-hand median and faced with oncoming traffic speeding around a curve, the car has spiked the brakes and nudged suddenly to the right, as if spooked by the cars on both of its flanks, and uncertain of its next move. Then, over the next few moments, it recovers, processing its surroundings as it waits for a green arrow to appear on the traffic signal above.

Jared Mendiola, a Google software operator riding with his laptop open in the passenger seat, nods with approval. The car, he explains, has successfully monitored the colour of the traffic light, along with the trailing vehicles “blowing past” on our right at 50 km/h. “It’s predicting that they might swing in ahead of us,” he says. “Human drivers have been known to do that.” The abrupt braking, he adds, has provided space ahead, in case some buttinski pushes to get into the turn lane ahead of us.

So it goes in the most conservative vehicle on the road, which Mendiola and his fellow tester, Ryan Espinosa, are demonstrating on the parkways and residential crescents of Silicon Valley. The car slows and steers wide around every trash bin, cyclist and encroaching shrub, as Espinosa sits placidly behind the wheel, hands at his sides, and Mendiola watches these hazards appear on his laptop screen. It’s a Neuromancer-like glimpse into the vehicle’s mind—linear renderings, moving like ghosts in and out of frame, colour-coded for recognition so Mendiola can monitor the car’s reactions in real time. He logs the slightest anomalies or problems for later analysis, as when we try to turn left off a boulevard called West Middlefield Road.

There, we encounter traffic so heavy, it’s overflowing from the turn bay we want to get into, and the Google car refuses to play the buttinski. It’s programmed not to cross double-solid yellow lines, notes Espinosa, so it comes to a halt, blocking cars in the lane behind us. Testers encounter this conundrum several times a day. Should they let the computer sort its way out of this snarl? Or should they take control, get out of the way, and defuse the human anger gathering behind us?

PR considerations win out. Espinosa switches to manual operation—the better to head off someone posting a video of the Google car blocking traffic—while Mendiola muses on the irony of driving a car programmed to obey traffic laws. “We’re here to stress-test the car’s software,” he says, “but there’s a social aspect to our job, too.” Building trust in the idea of cars without drivers, it seems, means building trust in the company at the forefront of the cause. Or, as Mendiola succinctly puts it: “Cutting off a bunch of people isn’t a Googley thing to do.”

Google’s self-driving car, an iconic image for city streets. (Google)

Googley. As used by the company’s employees, the word can describe anything from improved search speeds to reductions in one’s carbon footprint, but, in its grandest sense, it’s about the company’s self-stated mission to achieve things both great and righteous: lavishing brain power and jaw-dropping sums of cash on problems long assumed intractable.

Nowhere in the utopian empire of this $400-billion company does that apply more than the place Espinosa and Mendiola call headquarters, a repurposed shopping centre in northwest Mountain View where the dream factory known as Google X is housed. Here, amid polished concrete and exposed steel beams, some of the company’s most fantastical projects are hatched and nursed, from a contact lens that measures blood sugar to balloon-borne wireless networks. Astro Teller, the man in charge, goes by the self-consciously Googley title “Captain of Moonshots,” and he defines his purpose with pleasing economy. “We’re not just here to be audacious,” he says. “We’re trying to systematize the process of taking these moonshots.”

Systematic or not, it’s the kind of blue-sky rhetoric that makes Google X a punching bag for technology skeptics and critical shareholders. Some complain that few of the unit’s projects have evolved into marketable products. Stephen Arnold, an analyst who follows Google’s patents, compares the company’s senior leadership to a high school math club. “It’s whatever the math club wants to do,” he says. “I focus on what Google has done to make money. It’s really simple. It makes money with online ads.”

Google has resolutely clung to its idealistic vision (“Peter Pans with Ph.D.s,” says Teller with pride). But it addressed the criticism last month by proposing a new holding company, Alphabet, within which Google X would be one subsidiary. The new structure, said CEO Larry Page, will make the larger company “cleaner and more accountable,” allowing for greater oversight and transparency.

What the overseers will make of Google X remains to be seen, but even the toughest skeptics have trouble denying the potential of its car. After 1.5 million self-driven kilometres on U.S. roads, the test cars have yet to cause a collision (they’ve been involved in 17 that were blamed on human drivers, including one ascribed to a Google tester who’d taken over the wheel), while their laser-scanning camera and computer technology have advanced to the point that Chris Urmson, the Canadian engineer leading the project, believes a safe, self-driving car could be ready for sale in four years.

The result would be transformative. If widely embraced, say experts, autonomous cars would all but erase the scourge of crashes that claim 32,000 lives a year on U.S. and Canadian roads; computers are simply safer drivers. Traffic would flow better, cars would use less fuel, the elderly and disabled would enjoy door-to-door mobility. In 2013, the Eno Center for Transportation, a Washington-based think tank, released a study hypothesizing that half the cars on U.S. roads were robotic, pegging the economic benefits at $102 billion. That number encompasses everything from reduced travel times to funeral costs saved.

It all sounds grand—a rolling validation of Google’s cherished ideas about the redemptive power of technology, and its own role in delivering those benefits. But the whys and wherefores of self-driving cars are nowhere near as complicated as the hows, as in: How do you convince drivers to relinquish the wheel? How do you get governments, insurers and police to buy into your mini-revolution?

The barriers are, in large measure, psychological, not least the perverse human desire to believe we’re in control, even when a machine is, in fact, the primary actor. At conferences, Teller points to anti-lock brakes, which many drivers think they operate mechanically by hitting a pedal hard and fast. In truth, it’s the car that detects premature wheel-lock on an icy street and engages ABS. To Teller, it’s a sublime example of technology receding from view, while performing a vital task.

But it also raises questions. At what point of technological advancement will we grasp that we’re no longer the ones doing the work? When that penny drops, will we surrender to the superior judgment of machines? Even as Google’s test models amass driverless miles, after all, other automakers are adding so-called “driver-assistance” features to their cars: lane detectors, collision-mitigation systems and advanced cruise control that adjusts to the pace of traffic. Why not explore these happy mediums? Why not see where it leads?

They’re questions the Google X crew treat carefully. Years ago, the company decided to stick with the moonshot vision, rather than spinning off its car technology for immediate commercial use. It’s not that they reject driver-assistance systems per se, stresses Urmson: “They could have a huge positive impact.” The problem, he argues, lies in the flip side of our need to feel in control, namely, that as we gradually put faith in machines, our instinct for self-preservation starts to fly out the window.

A few years back, Urmson and his team began lending their self-driving cars to Google employees, hoping the feedback would help improve the vehicles. About 100 staffers received two-hour lessons on operating the car, but they were cautioned to use autopilot only while driving on freeways. The automation system could cut out at any time, they were warned. When changing lanes, and at exit ramps, they were to take back the wheel.

The results were illuminating. Cab-mounted cameras revealed employees slipping into a daze, or shifting their attention to other tasks. One man turned around and fished in the rear seats for his laptop, then set up his charge cable and smartphone, all while sailing down the freeway. Some were caught off-guard when it came time to get off the highway.

The informal findings reflected those of academic studies on so-called limited-ability autonomous driving: People seize the opportunity to do something else. And to Urmson, the implications were obvious. “If we’re counting on that person to leap back in and save the day if things go wrong, we’re setting the system up to fail.”

Distrust of the human machine is certainly the theme of my test drive with Mendiola and Espinosa. The pair, both in their 20s, watch with amusement as drivers make moves the Google Lexus never would—blind lane changes, rolling stops, illegal U-turns. And those are just the motorists. Legendary in the self-driving car program is the day that one test vehicle came across an elderly woman in a motorized wheelchair, chasing a duck down the road. The car stopped and let the scene play out and, after a pair of figure eights in the street, the woman herded the duck onto the sidewalk. The car rolled on.

The human capacity to process such randomness is remarkable. But it varies from person to person, from moment to moment. And it is nothing, says Espinosa, next to a robot car’s capability. He cites the example of a crumpled paper bag in a car’s path that looks for all the world like a rock. Neither person nor car can discern whether it’s safe to run over the hazard. Which is more likely to take the best course of action?

No contest, says Espinosa. A human’s first instinct might be to veer around it, “but you’re not able to shoulder-check fast enough to know if that’s the right thing to do. And really, you can only track one object at a time. The car can track multiple objects. It knows the velocity of those objects, and it’s tracking 360 degrees around us. So it’ll know if it’s safe.”

Still, the incremental model, based on the value of human judgment, has taken root. Established automakers have embraced driver assistance with enthusiasm, even as they develop their own self-driving prototypes. Governments, meanwhile, are fixated on intermediate steps, such as intelligent highways that guide traffic, or vehicle-to-vehicle networks so cars can communicate proximity and traffic information—developments that could be decades in the making.

Google does not want to wait. That’s why, says Urmson, it is designing cars able to travel side by side with human-driven vehicles, adjusting and compensating for our foibles. “Let’s get a technology that will work in the world the way it is now,” he says. But to break the psychological barrier, the company must make a case for its technology that will move cautious regulators, while creating a critical mass of users. And for that, it believes it has found an impeccably Googley talking point: the issue of fairness.

In March 2012, Steve Mahan got into the driver’s seat of a car for the first time in seven years. He’d surrendered his licence in 2005 after losing his sight. Then, as head of the Santa Clara Valley Blind Center, a Bay Area non-profit, he’d been approached by Google to try out one of its early self-driving vehicles, a modified Toyota Prius.

With special dispensation from the police, and a Google camera crew tailing him, Mahan took the car first to Taco Bell near his home in Morgan Hill, Calif., and then to his drycleaner. Managers at both establishments looked on in astonishment. “For our generation, an automobile represents the ability to direct your life, to keep appointments, to go places you want to go, when you want to go,” says Mahan, 62. For a blind or disabled person, he adds, this could mean the difference between having a job or sitting at home.

Mahan has since become a kind of unpaid pitchman for self-driving cars—and proxy for a much wider constituency than the blind. Baby Boomers, it’s worth noting, have been described as the first “suburban generation,” yet show little inclination to give up their highly mobile lifestyle (85 per cent say they want to stay put as long as possible; more than half plan to remain active into their 80s). But, as that demographic ages, the number of people with mobility problems is spiking. For tens of millions of North Americans, the self-driving car could prove a godsend.

There’s no overstating the cars’ potential impact. Oft-cited impediments to automated vehicles include the need to rewrite insurance laws, the absence of regulation and public concerns about safety. But what Mahan describes as society’s “doctrine of fairness” would be a powerful impetus to get moving on those fronts—to say nothing of plain, old demand. Saying no to the Peter Pans at Google is one thing. Snubbing one-quarter of the population would be outright political folly.

That political reality is sinking in. Four states have passed laws allowing the testing of self-driving cars, while in Canada, Ontario is setting up a framework to allow testing. (“We recognize the importance of new vehicle technology,” says a ministry of transportation spokesman, “especially if it can expand mobility options for Ontarians.”) The province has distributed $2 million in seed money through its Centres of Excellence program to firms developing components for self-driving cars and vehicle-communication technology.

And while history might credit Google for starting the driverless revolution, it’s no longer assumed the company will finish it. Carmakers now in the running include BMW, GM, Mercedes-Benz and Nissan, as well as electric-car maker Tesla. Rumours that Apple has gotten in on the act took off last week with the leak of documents showing the company’s been scouting for locations to test one. “If you’d done a web search for ‘self-driving car’ in 2004, you’d have gotten maybe 1,000 hits,” says Thomas Frey, head of the DaVinci Institute, a Colorado-based think tank that studies the future of technology. “Now you’re going to get hundreds of millions. At some point, it entered the mainstream consciousness. Then it took on a life of its own.”

As ever, though, Google fancies itself different. “They feel like they have a higher calling,” says Frey. “They’re obviously driven by the things you need to do to run a hard-core company, but there’s more to it for them.”

It’s a culture defined in part by money; $64 billion in cash reserves allows you to dabble in moonshots. But it also arises from qualities thought to define Silicon Valley: belief in collaboration; faith in the sharing economy; willingness to spend on ventures that stand a better-than-even chance of falling flat. Google’s founders, Sergey Brin and Larry Page, are frequently credited with helping to create that culture. But around Google X, it’s personified by the Captain of Moonshots.

‘Technology succeeds best by solving problems so invisibly, you forget it’s even solving the problem.’ – Astro Teller. (Kimberly White/Vanity Fair/Getty)

On the day we meet, Astro (né Eric) Teller is travelling on rollerblades and, with his ponytail, jeans and goatee, creates the effect of a time traveller from the 1997 version of Palo Alto. Teller, who picked up his nickname due to a flat-top haircut that high school classmates said looked like Astroturf, enjoyed a successful early career developing body-worn medical devices before he joined Google X in 2010. At that point, Brin told him to choose a title that didn’t make him “sound like a banker.” Captain of Moonshots is on his business card.

He arrives while Urmson showcases the next leap in the project, a subcompact prototype sometimes referred to as the “bubble car,” purpose-built from the ground up. The interior has two simple leather seats split by a console with lock buttons, cupholders, and virtually nothing else. No steering wheel, no gear shift, no dials. The impression is that of a spacious train compartment.

The idea was to produce a vehicle that reflected the next transportation paradigm: fun and non-threatening, yet suited to cost-efficient mass production. To Teller, though, the minimalism indicates something else. He has long extolled technology that blends with our surroundings, relieving users of intolerable burdens such as pushing buttons. To him, such chores reflect lousy “user interfaces.” Example: the steering wheel. “When you approach a curve,” he says with incredulity, “you’re expected to manually turn a wheel to express going straight. As a society, we can do better.” Technology succeeds best, he adds, “by solving problems so invisibly, you forget it’s even solving the problem.”

That is not just a philosophy, but an aesthetic choice, one that hasn’t always panned out. Google Glass, eyewear that displays smartphone information to its user, produced less than a rush of demand when Google issued a prototype for sale, and was pulled for further tweaking. Apple, meanwhile, has proven people will put technology front and centre in their lives, if it’s pleasingly designed.

But there are miles to go before any firm starts mass-producing self-driving vehicles. Back in California, Mendiola talks about the quirks of human behaviour the Google car had to learn. Some are subtle, such as inching forward at four-way stops to signal readiness to proceed. Others are dramatic, such as chasing waterfowl down a road in a wheelchair.

Both demand what humans call intuition, a reflection of the banality, danger and inherent loopiness that converge on our roads, and it’s comforting to think something like it is being burned into the digital brains of the cars of the future. For a company determined to change the transportation paradigm, as for a computer processing the myriad hazards of automotive travel, a little grandmotherly caution seems in order.