Brian David Johnson is Intel Corporation’s chief futurist, the man the computer-chip-making giant entrusts with looking ahead 10 or 15 years and predicting how people will be interacting with its products. He is also someone who passionately argues the future is what we make it, and being pessimistic and fearful is more liable to make it unpleasant.
Q: You have a very cool title: “futurist and director of future casting.” Your job is to divine the future—how you see it in social terms—and try to make technology that works for it?
A: Correct. Get an understanding of people first and foremost, right? Base it on people, because people don’t change all that much compared to technology. Base it on social science and people, and then take the work that we’re doing inside our labs and ask, “Okay, how can we use that technology to make people’s lives better?”
Q: It sounds harder than doing the opposite, saying, “We can make this,” whatever this is, “and surely someone will find a use for it.”
A: Oh, that is so intellectually dishonest, but it is something that we have been guilty of in the high-tech industry for a very long time. We create technology—really cool, brilliant technology—that is useless to people. To me that’s one of the greatest tragedies: amazing technology that does no good. It’s about the end experience, not the specs: how is this device, this platform, this future going to engage with people, make their lives better, make them more sustainable?
Q: What do you see 10 years ahead?
A: Right now the size of meaningful computational power is moving to zero. In the newest phones, the thickness of the device has nothing to do with the silicon inside, nothing to do with the chip. For decades the size of the computer was dependent upon cooling the chip, because the chip ran so hot. But because of Moore’s Law we’ve gotten it so small and so power-efficient that it’s not about that anymore, and that again makes it about the experience. Essentially we can soon turn anything into a computer, and we will be living in a world where we’re surrounded by computational intelligence. The social question is: how do we educate the future engineers and workers? One of the things I say to people is, “Learn to code.” We’re already living in a world where we’re surrounded by computers, but it’s only going to be more so; we need to make sure that we all have a deep understanding of how we act and interact with technology.
Q: What does that mean for Intel in terms of what it should be working toward?
A: It’s all about the experience people have with devices, like our Ultrabooks—incredibly thin, insanely light [laptops]. If you look at one, it’s simply a screen, because we learned that for the consumer the device isn’t a laptop or a tablet or a smartphone or a TV—it’s just a screen. What I always tell my team is, bet on choice. Choice always wins. That’s the wonderful thing about people—that they are different, right? That goes all the way down to the fact that people are different sizes. Literally, their hands are bigger or smaller, which means that the device they pick, if you just think about it as a screen, can be tailored for them. They can pick whatever device works for them.
Q: Does the emphasis on experience mean the future of computing is more about entertainment than spreadsheets?
A: Yeah, not just productivity. People actually have a very deep cultural connection to entertainment, and we have a very deep desire for story and for being entertained. It’s something that we need. And I recognize that, partly because I’m a science-fiction author, but also because, working with the social scientists, we saw a very deep indicator. And here’s a great example of what I think is embedded in so many different cultures. In end-of-the-world movies—I’m a huge science-fiction fan, too—there is always that point when the protagonist gets up and turns on the TV and all they get is static, and you know for sure that the end of days has come. Rest assured, if there is no TV, then certainly the apocalypse has happened. For me that just showed again the deep cultural connection we have with entertainment, so putting chips into all sorts of different devices—essentially creating a world of screens—made a lot of sense, offered a lot of potential to enrich people’s lives.
Q: In terms of story, you do not like our current dystopian attitude toward the future. You want to change that narrative.
A: I do, I do. There’s been some research recently that human beings seem to be “apocaholics”—always seeing something right around the corner that’s going to kill us all. I understand it. As human beings we’re hard-wired for a world where, if you heard a twig break behind you, you jump and you have a physical fear reaction. That was okay when that snap was a sabre-toothed tiger, but we don’t live in that world any more. Now that reaction blocks us from coming up with the really great ideas, so I’m on a crusade against fear, because being afraid of the future means we’re giving up our power. You can’t let the future happen to you, you can’t sit back and be passive—you need to be an active participant. We all, as human beings, personally build the future, whether it be our own, our family’s, the world’s. We have to own that fact and we need to do something about it.
Q: What possible future trends, frightening or otherwise, do people raise most often with you?
A: I always get questions from people about when we will be putting a chip in our heads. Opinion usually splits about 80 per cent to 20 per cent, with the majority asking it with a chuckle and a grimace, kind of saying, “Ew.” They’re a little worried that I’m actually going to do it. But there is a 20 per cent group who come up to me and say, “Dude, where do I sign up? Is there a list? Because if there’s a list I want to get on it.”
Q: Something else that scares people is this secret life of data idea—that we are throwing off so much information about ourselves that will be used in ways we cannot predict. That doesn’t scare you, though; in fact, you seem excited about it. Well, it scares me. So talk to me about this.
A: I seem to have become a father confessor of technological fears. I actually get calls from people, sometimes high-ranking government officials—who like having a futurist on their speed dial—and they go, “Brian, I have a problem. Let me ask you . . . ” Okay, so this notion that we will have big data and big data will have a secret life, just say that and very quickly people go from Tuesday to Doomsday. From me saying “secret life of data” to “Skynet.” Now, I love the Terminator movies too, but . . . Again I’m going to tell you a story. I want to show you the picture of the robot apocalypse—sentient robots acting on their own. Do you know what that looks like? It looks like a Roomba. That is the robopocalypse rolling through your house.
Q: Hmmm, maybe now, but I’m sure it’ll be worse in the future.
A: There you go! Okay, it’s true: the Roomba is an autonomous robot, and it’s going through your house. But then consider some big-data work by the U.S. military to treat post-traumatic stress disorder. PTSD is an awful, awful disease. What the doctors involved are doing is using smartphones with an application to monitor soldiers—who are they calling, where are they surfing, what music are they listening to, are they moving around—to get an understanding of their health state so that they can intervene, or go to that service member’s family and say, “It would be a good idea if you maybe gave this person a call.” You can’t be with somebody constantly, but that smartphone or that piece of computation can be, and by being able to monitor and farm that data doctors can look for downward indicators. It’s a very small thing, a very small app, but a great use of data’s secret life. You can create the algorithms just to be watching what’s going on, and then alert a family member.
Q: There’s more potential for abuse there than with a vacuum.
A: When you start thinking of the reality of these things, I ask, “What do we want to do with them?” It comes back to tools, right? Technology is simply a tool, just a hammer, and a hammer you can use to do stuff, to build a house. Now, it should also be said, to be pragmatic, you can also use a hammer to bash somebody’s skull in. That’s the nature of a tool. That’s why this is actually more about the people, not about the tool. Let’s just make sure that we’re having an ongoing cultural conversation, not just one, because there are always going to be unintended consequences with our tools. There’s no way that you can design a hammer that can drive a nail but cannot bash a skull, but we can create other safeguards. And we’ve already done that before—we’ve done it culturally for hundreds of thousands of years. That’s what’s kept us around—that’s why we beat the sabre-toothed tiger.