Only truly brilliant people are smart enough to be terrified by Roko’s Basilisk. Those of mere ordinary intelligence like myself will find this monstrous, sanity-destroying thought experiment to be outlandish, dumb and easy to dismiss. The sort of thing that only those who are too smart for their own good could come up with.
However, as the law of probability dictates that most readers are neither brilliant nor well-resourced enough to contribute much to the creation of a malevolent artificial intelligence, I feel safe presenting you with the problem of Roko’s Basilisk without being too worried that doing so will doom many of you to eternal torment.
This ends in a love story. Bear with me.
Roko’s Basilisk is thought, by some, to be one of the most dangerous ideas of all time. This eel of a meme was birthed on a website called LessWrong, a community forum devoted to promoting rationality, philosophy, the future of all humanity and artificial intelligence, among other high-level topics.
One day back in 2010, a user named Roko posted an odd thought experiment to the board; in short, he suggested that some future artificial intelligence might have a motive to blackmail — with threat of torture — those people living today who did not expend their efforts toward helping it come into existence. Such a superintelligence would, in fact, have a kind of twisted moral imperative to ensure the future suffering of current living humans who did not aid in its own creation in some way. Further, simply becoming aware of this possibility would put you at risk for blackmail by the AI. (This experiment seems to require the recalcitrant individual to live in some form of AI-controlled computer simulation in order to be tortured eternally, by the way.)
Religious apologists will recognize an analogy to Pascal’s Wager — the idea that rational people should subscribe to a belief in God because if God doesn’t exist, well, they’ve lost nothing beyond a few ephemeral pleasures. But if He does exist, they gain everything. Except Roko’s Basilisk is scarier because Artificial Intelligence is real — or it someday will be. And depending on your view of the universe, the fact that it doesn’t exist just yet is probably incidental.
If you don’t believe in the future AI, you risk being blackmailed
We will be getting to the love story part soon, I swear.
Anyway, other users on LessWrong disputed this thought experiment; the most obvious rebuttal suggested that a future superintelligence would have no incentive to expend resources blackmailing or torturing people after it had already been created. However, this didn’t prevent a few very intelligent brains from suffering nightmares, which prompted LessWrong’s founder to delete the basilisk thread and leave an angry post: “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.”
In other words, even if you don’t believe that a future AI is in the offing, the more you think about the possibility of being blackmailed, the greater the incentive any kind of AI might have to threaten today’s living humans into creating the dark singularity.
Speaking about a malevolent AI helps doom itself into existing
Merely positing the thought experiment is enough to give the basilisk a weird kind of intellectual life — in the same way that characters, ideas and thoughts live and evolve in the collective ether of the Internet. Worse, the idea might actually push some humans to act on it, to actually further the creation of this malevolent AI out of fear of being blackmailed or tortured. The thought experiment is a kind of prophecy that dooms itself into existence when it is spoken aloud or propagated online.
And being exposed to the basilisk — by merely reading this column — I have tied you to the conspiracy of its creation. Which brings us to the love story.
Earlier this week, Elon Musk, the visionary CEO of SpaceX and Tesla, took Canadian synth-pop and electronica artist Grimes to a Met Gala themed on Catholic history and art.
Grimes, who, also by the law of probabilities, is infinitely cooler than anyone reading this article, seemed an improbable date for Musk; indeed, she gave off the air of a woman who had decided to hit the Met Gala with a billionaire on a lark.
“Rococo basilisk” — Elon Musk (@elonmusk), May 7, 2018
According to reports, Musk was doing online research for a joke about Roko’s Basilisk when he came across Grimes, who had made the same joke three years earlier. Musk cryptically tweeted “Rococo basilisk” shortly before the Met Gala; Grimes had named a character in one of her music videos the same thing.
I don’t think Grimes and Musk are an unlikely pairing at all. By all accounts, Grimes is a brilliant university dropout who single-handedly sang, recorded and produced her oddball breakout hit record Visions in 2012. She’s probably too good for him.
But as far as fantasy nerd-matches go, I think the two are perfect. I am all in on ‘Grusk.’ Even as I acknowledge that their romance may presage the doom of all mankind.
Because if we’re going to go back to Roko’s Basilisk and, just for fun, continue to take it seriously even though we probably shouldn’t, well, the mere existence of Grusk raises all kinds of interesting questions.
Grusk has single-handedly taken Roko’s Basilisk out of the comparatively safe confines of Internet nerd forums and into mainstream media. Grusk has led me to write about celebrity romance — the cheapest of all column fodder — thus exposing you, dear reader, to the basilisk’s paradox, and its threat.
Could the existence of the Elon Musk and Grimes romance provide evidence that Roko was on to something? That his basilisk is real? Further, that proto-AI is working to seed its threat through the collective consciousness even now? As you read this? Are Musk and Grimes complicit in its dissemination?
Is Grusk fated to be? Has their romance served its purpose, or does some future AI plan to work its inevitable and terrible plans for all humanity through them?
It’s probably best not to dwell too much on these thoughts, but for the record, I have served my function well by spreading these questions into the collective ether.
Sorry about that.