Technophobes, rest easy. When the machines rise, humans will be prepared.
A philosopher, an astrophysicist and a software engineer have joined forces at Cambridge University in hopes of creating a laboratory that will analyze the dangers technology poses to the future of humankind. The Centre for the Study of Existential Risk (CSER) will study the potential dangers posed by rogue bio- and nanotechnology, extreme climate change, nuclear war and artificial intelligence.
“The seriousness of these risks is difficult to assess,” the founders write on the centre’s website. “But that in itself seems a cause for concern, given how much is at stake.”
The three founders, Cambridge philosophy professor Huw Price, cosmology and astrophysics professor Martin Rees and Skype co-founder Jaan Tallinn, say the centre is necessary to avoid opening a “Pandora’s box” in which super-intelligent machines would be beyond human control. Rees, in particular, has long been a technology alarmist and is the author of the ominously titled Our Final Century. The book, published in 2003, is a detailed account of the threats posed by the unbridled rise of technology and rapidly improving artificial intelligence. While critics have dismissed the idea as science fiction, the founders insist that just because a threat is a staple of pop culture does not mean it poses no real danger to the human race.
While CSER does not yet have an official opening date, the founders say it will open sometime next year. Cambridge University, which in its 800-year history has withstood everything from the bubonic plague to the Blitz, clearly does not intend to let the apocalyptic threat go unstudied.