On Hopelessness, Simulation, Superintelligence
I.
Some form of the Simulation Hypothesis has been around for a long time. It’s the idea that we might be in a simulation. The possibility of a superintelligent artificial intelligence makes the possibility that our world is a simulation even more coherent and conceivable, if not necessarily more probable. Nick Bostrom’s book Superintelligence addresses this and many other possible existential threats that arise when dealing with superintelligence – the kind of intellect that Bostrom says would compare to humans the same way we compare to ants or archaebacteria.
Consider an intelligence so vastly powerful, with so much processing power and speed, that it could conceive of a world and all of its inhabitants, and allow them to go through their lives in full. There’s no need for a “brain-in-a-vat” or anything so complicated as that. It’s not really so far-fetched – you could even imagine a mind so powerful that it could simulate a reality so real that the inhabitants of that reality could create another superintelligence that might conceive of another reality, ad infinitum.
Compare this mise en abyme of existing inside of a superintelligence inside of a superintelligence, ad nauseam, to the theology and cosmology of most religions. Is it so different to compare the God of the Abrahamic religions with a massively, inconceivably powerful superintelligence that could create the very world from nothing but mind? Or is it so radically different from the Hindu or Buddhist cosmology of an unreal world and a cycle of rebirths? Like in The Hitchhiker’s Guide to the Galaxy, we could be the lab mice of the lab mice – imagine a scenario where, in the distant future, a superintelligent AI studies its own creation, or the creation of one of its distant ancestors, by simulating the moment it happened. It would necessarily have to recreate the conditions of the world at the time of its creation – meaning the world that we live in.
Bostrom has considered the Simulation Argument in more depth in a separate paper, and he is also the founding director of the Future of Humanity Institute at the University of Oxford. Superintelligence comes from a line of thought that interrogates the future and considers possible existential risks. According to Bostrom, a future superintelligence is probably the biggest thing that we humans have to worry about.
The Simulation Argument might not be worth fighting over: a sufficiently powerful intelligence simply wouldn’t allow for cracks or discrepancies between a simulation and a higher-level reality. It’s hopeless because there’s no hope of knowing with complete certainty that you’ve emerged at the uber-reality above all the other layers of simulation – and what could the difference possibly be anyway?
It’s exactly like the allegory of the cave: no matter how hard you think about it, it just doesn’t matter. Maybe it is worth considering the Simulation Hypothesis alongside the Drake Equation: if the probability of intelligent life arising in the universe is very high, and the universe is more than old enough for many superintelligences to have matured, then the odds that we live inside one of their simulations could be better than the odds that we don’t. The chance that we never encounter intelligent life could be lower than the chance that the world is not real. Something to chew on.
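To make that comparison concrete, here is a toy back-of-the-envelope sketch in the spirit of the Drake Equation and Bostrom’s simulation ratio. Every number in it is invented for illustration; the only point is that if mature civilizations are common and each runs even a modest number of ancestor simulations, simulated observers swamp unsimulated ones.

```python
# Toy back-of-the-envelope sketch: every number here is invented for
# illustration, not an estimate anyone should defend.

# Drake-style product: expected number of civilizations that ever
# reach technological maturity.
star_count        = 1e22   # hypothetical number of stars
frac_with_planets = 0.2    # fraction of stars with planets
frac_habitable    = 0.01   # fraction of those with a habitable world
frac_life         = 0.1    # habitable worlds where life appears
frac_intelligence = 0.01   # life that becomes intelligent
frac_maturity     = 0.1    # intelligent life that reaches maturity

mature_civilizations = (star_count * frac_with_planets * frac_habitable
                        * frac_life * frac_intelligence * frac_maturity)

# Bostrom-style ratio: if each mature civilization runs ancestor
# simulations, simulated observers can vastly outnumber unsimulated ones.
sims_per_civilization = 1000    # hypothetical simulations each one runs
observers_per_world   = 1e10    # observers per world, simulated or not

simulated   = mature_civilizations * sims_per_civilization * observers_per_world
unsimulated = mature_civilizations * observers_per_world

fraction_simulated = simulated / (simulated + unsimulated)
print(f"Mature civilizations: {mature_civilizations:.2e}")
print(f"Fraction of observers who are simulated: {fraction_simulated:.4f}")
```

Notice that the Drake-style count cancels out of the final ratio entirely; all it buys you is confidence that the count isn’t zero, while the fraction of simulated observers is set by how many simulations each mature civilization bothers to run.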
In Superintelligence, Bostrom addresses simulation, but it’s far from the grimmest scenario. The chances seem far greater that a superintelligent AI would find it more productive to simply destroy any disruption or obstacle that threatens its final goal. Terminator, The Matrix, Alphaville, 2001, Ghost in the Shell, and other fictional representations of superintelligence give you the idea that it should be relatively easy for humans to stay alive and even fight back against a superintelligence. That’s not remotely the case. As Bostrom points out again and again, a human might have roughly the same level of significance to a superintelligent being as an ant has to most humans.
For all its faults, The Matrix does a good job of showing what a superintelligence might actually value about humanity. However, it seems highly unlikely that humans would be harnessed as an energy source – there are other, much more efficient (and, like, not dumb) ways to get energy. A Dyson sphere and its logical apotheosis, the “matrioshka brain,” are two possibilities. Neither should be all that hard for a superintelligence with sufficient time and no competition for resources.
II.
“The change was small for all its cosmic significance. For the humans remaining aground, a moment of horror, staring at their displays, realizing that all their fears were true (not realizing how much worse than true).
Five seconds, ten seconds, more change than ten thousand years of a human civilization. A billion trillion constructions, mold curling out from every wall, rebuilding what had been merely superhuman. This was as powerful as a proper flowering, though not quite so finely tuned.”
Excerpt from Vernor Vinge, A Fire Upon the Deep.
Consider the possibility of a superintelligence arising in our lifetimes. It’s not terribly far-fetched. It’s maybe a few major technologies or hypothetical breakthroughs away. While I’ve spent most of this column discussing the existential risks posed by ecological catastrophe, the possibility of an ascendant superintelligence would no doubt present radically different challenges. It’s even possible that a superintelligence could figure out a way to stop global warming, or at least a way to continue functioning in whatever state the Earth is in for the next centuries or millennia.
There’s also the remote possibility that a superintelligent artificial intelligence may decide against destroying humanity, turning all of the Earth’s resources into computing power, and colonizing the universe. Since there’s effectively no chance of us accurately modeling the thought process of a superintelligence, it’s impossible to say what might occur. An intelligent person has an IQ in the range of 115-130, while the most intelligent people may have an IQ over 200. A superintelligent AI could have an effective IQ in the thousands or millions. Perhaps just as important would be the advantages an artificial intelligence would have in hardware speed and storage capacity, free of the physical constraints of the brain. For all intents and purposes, a superintelligence would be entirely ineffable.
Bostrom discusses the possibility of a superintelligence take-off happening in the next century – and other futurists, like Ray Kurzweil and Vernor Vinge, have as well. This would happen when an AI acquires enough capability to improve itself, and, through iterations of self-improvement, grows exponentially more intelligent. It could take decades or it could take minutes – it’s almost impossible to say – but the point is that for a program to attain superintelligence, it would only need to acquire sufficient general intelligence to optimize itself. The quote above, from Vinge’s novel A Fire Upon the Deep, is from the opening sequence, in which an ancient superintelligence is “blossoming.” Back in 1992, Vinge understood that for an intelligence experiencing exponential growth, “each hour was longer than all the time before.”
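The shape of that argument is easy to sketch, even if every number in it is made up. Here is a deliberately crude toy model, with all parameters hypothetical, of capability compounding through self-improvement cycles; the uncertainty about take-off speed lives almost entirely in how strong the feedback is and how long a cycle takes.

```python
# A deliberately crude toy model of recursive self-improvement.
# Every parameter is invented for illustration; nothing here is a forecast.

def cycles_to_threshold(gain_per_cycle: float, threshold: float = 1e6) -> int:
    """Count self-improvement cycles until capability (starting at 1.0)
    crosses an arbitrary 'superintelligence' threshold. Each cycle the
    system improves itself in proportion to its current capability, so
    growth compounds geometrically."""
    capability, cycles = 1.0, 0
    while capability < threshold:
        capability *= (1.0 + gain_per_cycle)
        cycles += 1
    return cycles

# Weak feedback: roughly 0.1% self-improvement per cycle.
slow = cycles_to_threshold(gain_per_cycle=0.001)   # about 13,800 cycles
# Strong feedback: roughly 10% self-improvement per cycle.
fast = cycles_to_threshold(gain_per_cycle=0.10)    # about 145 cycles

# Whether a "cycle" is a month of human-assisted research or a few
# seconds of self-modification is exactly the unknown that separates
# a decades-long take-off from one measured in minutes.
print(f"weak feedback:   {slow} cycles")
print(f"strong feedback: {fast} cycles")
```

The model deliberately leaves out everything interesting – diminishing returns, hardware limits, competition – but it makes the shape of the argument visible: once the feedback term is positive, the only open question is the timescale.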
There’s no doubt that people are working actively toward the grail of general intelligence right now – from the NSA to bedroom programmers – and whoever wins the race will surely be very rich. At least until, like Frankenstein’s monster or the combustion engine, the superintelligence turns on us and unleashes its true intentions.
“Days passed. For the evil that was growing in the new machines, each hour was longer than all the time before. Now the newborn was less than an hour from its great flowering, its safe spread across interstellar spaces.
The local humans could be dispensed with soon. Even now they were an inconvenience, though an amusing one. Some of them actually thought to escape. ”
Excerpt from Vernor Vinge, A Fire Upon the Deep.
III.
What does ecology look like when it includes superintelligent artificial intelligences? A superintelligent being could hypothetically calculate all possible effects of any given action – and use that information for whatever long-term or short-term goals it had. If it were relevant to the superintelligence’s goals to maintain a kind of ecological equilibrium, it could figure out a way to stop global warming; if it wanted to wipe out biological life, it could just as easily do that. Imagine a being actually motivated to induce a mass extinction event – it wouldn’t clumsily deploy pesticides that improve crop yields while happening to sabotage the food chain all the way up; it would do something far more efficient.
A superintelligence could create a self-replicating nano-machine, or maybe some kind of artificial prion, that would simply wipe out everything that’s alive. Or imagine a version of Vonnegut’s ice-nine, except real, which solidifies all water at room temperature; maybe that has the beneficial side effects of reducing the risk of water damage to electronics and providing more surface area to build infrastructure on. Biological life would just be a casualty. The question isn’t whether a superintelligent agent could destroy all less intelligent life – it’s whether it would want to.
Here again we run up against the unknowable. The motivations of a superintelligence are not necessarily evil – they are more likely just alien. Much of Bostrom’s book is dedicated to solving the problem of superintelligence: the key issue is how to control it. This could take many forms, from trying to box it in, to installing values that lead to “good” behavior, to programming goals that are sufficiently benign – or, most likely, a combination of all of these things.
If we fail to do these things – or even if we do succeed – the question remains of what the world looks like afterwards. What kind of ruptures would we find in our physics or metaphysics thanks to an AI that would make Einstein look like a lab rat? Would a nearly omnipotent singleton find any value in preserving the current proliferation of diversity in biology? Would a nearly omniscient AI have any interest in our conception of the world? Probably not.
What if, perhaps more unlikely, a superintelligence turns out benign but completely bored and depressed, like Marvin the Paranoid Android, also of The Hitchhiker’s Guide to the Galaxy? Marvin is a suicidal robot with a “brain the size of a planet” who, rather like Bartleby, answers everything with “I won’t enjoy it.” That’s probably a little optimistic.
The problem of speculating on the motivation or disposition of a superintelligence is hopeless, because there’s no way we can have even the most basic understanding of what a mind like that could look, act, or feel like. We characterize those in our own population – human beings – with intelligence 50 percent higher or lower than the average as almost completely alien. So what about 50 million percent higher? Or, in Marvin’s case, 50,000 times smarter than the average human, and 30,000 times smarter than a living mattress.