TIA Tuesday: The game of life
January 13, 2009 — Deacon Duncan
It’s time for TIA Tuesday again, though it’s rather an abrupt shifting of the gears to go from William Lane Craig to Vox Day, especially when they’re both trying to address the problem of evil. Vox’s approach is a good deal less philosophical than Craig’s, having begun with the premise that maybe bad things happen because God “possesses” knowledge and power only in the sense that He possesses a capacity for both, which He chooses not to exercise, because—well, perhaps an illustration would help.
Vox describes a demo of a combat video game that his game-programming partner, “Big Chilly,” was running.
During the demo, Big Chilly and the three AI-controlled members of his fireteam had successfully taken out both the wide patrol and the guards, and they were just beginning to lay the explosives to blow the door that held the prisoners captive when there was a sudden burst of bright laser fire that caused him to jump in his seat and emit a startled shriek loud enough to make everyone else in the room jump, too. While his AI squadmates shot down the intruder before anyone’s battlesuits took too much damage, what shocked Big Chilly was that for the first time in hundreds of playings, an enemy AI had taken it upon itself to circle around behind the rescue force and attack it from an unexpected direction.
But how could this happen? How could a lowly artificial intelligence surprise a lead programmer who was demonstrably omniscient and omnipotent in the AI’s world? How can the created do what the creator did not will? The answer, when viewed in this context, should be obvious.
Indeed it is: Big Chilly didn’t know his own game as well as he thought he did. But can the same be said of God?
Vox apparently thinks it can.
If it is not difficult to accept that an omniscient and omnipotent programmer can reject omniderigence, why should it be hard to imagine that an all-powerful God might choose to do the same? Even human lovers know that the lover cannot control the beloved, so it should not be difficult to believe that a loving God would permit His creatures to choose freely how they will live.
Let’s follow that train of thought for a moment. The programmer is not rejecting “omniderigence” out of concern for the free will of the AIs. The AIs don’t really even exist. It’s just a computer simulation that applies a complicated set of programmed instructions to an organized set of data stores. The programmer lets the computer figure out what the “characters” are going to do because it would take too long to do all the mental calculations to figure out which random numbers are going to be generated, which decision paths are going to be followed, and which code sections are going to execute. The programmer’s brain, in other words, is inferior to the silicon running the game world when it comes to doing the “knowing.”
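The point is easy to make concrete with a toy sketch (this is not Vox’s or Big Chilly’s actual code; every name here is invented for illustration). Even in a fully deterministic, seeded simulation, where the outcome is completely fixed by the rules the programmer wrote, the only practical way for the programmer to “know” what happens is to run it:

```python
import random

def run_skirmish(seed, steps=1000):
    """Toy deterministic 'AI' skirmish: the outcome is entirely fixed by
    the seed and the rules, yet the only practical way to learn what it
    is — even for the person who wrote every rule — is to execute it."""
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        # each AI 'decision' is just a rule applied to the next random number
        position += 1 if rng.random() < 0.5 else -1
    return position

# The 'omniscient' programmer wrote every line above, but cannot say in
# advance where seed 42 ends up without running the loop.
print(run_skirmish(42))
```

Nothing here is unknowable in principle; it is only unknowable in practice without delegating the computation, which is exactly the sense in which the programmer’s “omniscience” is a mere unexercised capacity.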
One interesting consequence of this is that it becomes possible for the “omnipotent” programmer to lose his own game. He may have built “back doors” into the system, and he may be able to turn “cheats” on and force a fake “win” by simply violating his own rules. But if he sticks to his own script, and plays in real time, according to the rules, there’s a chance he can lose (if the game is non-trivial and well-designed, anyway).
Amazingly, Vox takes it even further than a mere analogy. He describes how he and his partner successfully applied evolutionary theory to automatically generate, select, and improve their artificial-intelligence algorithms, and then muses:
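Vox gives no implementation details, but “generate, select and improve” describes a standard genetic-algorithm loop: breed a population of variants, score them, keep the fittest, mutate, and repeat. A minimal sketch of that loop, with an invented single-parameter “AI” and a made-up fitness function (none of this is from Vox’s book):

```python
import random

def evolve(fitness, pop_size=20, generations=50, mutation=0.1):
    """Toy genetic algorithm: evolve one 'aggression' parameter in [0, 1].
    The designer specifies only the selection pressure (the fitness
    function), not the winning behaviour itself; that emerges from the loop."""
    rng = random.Random(0)
    population = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # select: keep the better-scoring half ('wheat from the chaff')
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # reproduce: each survivor spawns one mutated child, clamped to [0, 1]
        population = survivors + [
            min(1.0, max(0.0, p + rng.gauss(0, mutation))) for p in survivors
        ]
    return max(population, key=fitness)

# Hypothetical fitness: aggression pays off, but only up to a point.
# The analytic optimum of a * (1.5 - a) is at a = 0.75.
best = evolve(lambda a: a * (1.5 - a))
print(round(best, 2))
```

The “harvest” metaphor maps onto the selection step: individuals are culled by a criterion they have no way of inspecting from the inside, which is precisely what makes the analogy attractive to Vox and unsettling as theology.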
This AI development process is remarkably similar to the Biblical description of the harvest of souls, of the separating of the wheat from the chaff. This metaphor is central to the New Testament; Jesus Christ mentions it on several occasions and in several different forms, such as the distinction between sheep and goats. While the “God as game designer” hypothesis might reasonably be described as literally making God in one’s own image, especially when it comes from a game designer, it does offer the potential of explaining the importance of obedience to God’s will as well as the seemingly arbitrary nature of what is in line with that will and what is not. If we are AIs in God’s laboratory, then we cannot expect to have any more understanding of His ultimate purpose for us than those AIs in Big Chilly’s war lab did.
In other words, if it seems like God is playing games with us, that’s only to be expected, because it’s just possible that real life is a video game for some celestial game designer with a Cosmos 2000 mega-core googol-flop processor, quantum RAM (probably more memory than you will ever need), and optional multiverse waveform generator. Vox even supports this conclusion by citing the number of computer-generated characters in video games (roughly 10,000 for every individual live player), and concludes that, statistically speaking, the odds are overwhelmingly in favor of any given live human being a mere simulation. (I shudder to think how much beer I would need before that one started to make sense.)
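For what it’s worth, the “statistics” amount to a one-line indifference calculation: if you cannot tell from the inside whether you are a player or an NPC, and NPCs outnumber players 10,000 to 1, you simply spread the probability evenly over all characters:

```python
# The arithmetic behind the '10,000 NPCs per live player' claim:
# indifference over all characters gives the probability of being 'real'.
npcs_per_player = 10_000
p_real = 1 / (npcs_per_player + 1)
print(f"P(real) = {p_real:.6f}")  # about 0.0001: odds of 10,000 to 1 against
```

Of course, the whole argument turns on the unstated premise that simulated characters are conscious observers who can be counted alongside live players at all, which is exactly the part the beer is needed for.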
This may be little more than over-caffeinated techno-speculation, but it is, I think, an exciting way to view the universe as well as providing a reasonable solution for those pesky problems of evil and ultimate purpose. If it also happens to be a near mathematical certainty, then so much the better. It is a fundamentally optimistic perspective, because if this is only the 3D war lab, imagine what the real game in all its multi-dimensional glory must be like! Even if we are immaterial simulations, we are immaterial simulations with a genuine purpose and a future more radical than we can possibly imagine in front of us. Accepting the idea that we are not only the gods of the machine, but also the machines of God, gives us the wherewithal to face the prospect of death with enthusiastic anticipation instead of courage, resignation, or even terror.
In a footnote, Vox allows that “both drugs and PS/2 were [probably] involved” in developing the “hypothesis” that we are all just a computer simulation in some larger universe of “posthuman” gamers. And yet, perhaps because it resonates so well with his own particular profession, he seems a bit reluctant to reject it. Or perhaps it’s because he too is vexed by the problem of evil, and by the fact that God, Whom he upholds as the source of morality, seems to fall so far short of His own standards. Perhaps he is keeping the “life is a game” hypothesis around just because he can’t think of any better answer.
There’s an interesting and subtle category error in this scenario. By proposing a metaworld in which the relative “goodness” or “badness” of our actions is decided arbitrarily, Vox is projecting our earthly perception of morality onto an otherworldly system, while still attempting to resolve the moral issue of the existence of evil within the earthly system. If God’s just a game designer, and our system is just God’s game, then He’s morally ok, because we’re His intellectual property, and He’s entitled to write whatever rules He likes for us. But that’s judging God by earthly standards of morality, which according to Vox’s scenario are not valid standards with respect to God, and which therefore cannot justify His conduct. The only moral judgment we can make, using the moral system that is valid within this context, is that God’s actions are negligent, evil, and immoral at least a great deal of the time. Even if it’s “God’s game, God’s rules,” the game score, as calculated by those rules, shows God with a non-zero Sin Total. What happens outside the game console is irrelevant.
Meanwhile, Vox is not so much solving the problem of evil as simply denying that evil is actually real. You don’t arrest someone for murder if all they’re doing is pushing a button on a joystick and turning a few pixels blood red. It’s not real evil, it’s just simulated. And, by the same token, the good deeds are fake as well. Not that it matters to any of us, since we’re not real either. It’s the ultimate in extreme denialism.
Vox has, in fact, wound up in one of a small number of cul-de-sacs that await those who try to reconcile the Christian idea of God with the problem of the reality of evil. Yes, you can deny the reality of evil by denying the reality of everything we experience. It’s an odd form of solipsism, in that not even the solipsist is actually real. Those computer characters are not real, even if they seem to surprise their programmer by behaving unexpectedly. In fact, they don’t even exist—the computer program creates an illusion of independent characters by mimicking the sights and sounds you would perceive if you observed a real (or presumably real) person in action.
There doesn’t seem to be much more that needs to be said. When your defense of God leads you to deny the existence of good, evil, and even your own self, chances are you parted ways with real-world truth a long time ago.