XFiles Weekend: It’s more like “guidelines”
July 17, 2010 — Deacon Duncan
(Book: Mere Christianity by C. S. Lewis, chapter 1, “The Law of Human Nature”)
We’re ready to start the main body of Mere Christianity, but before we delve into what Lewis calls the “law of human nature,” let’s take a moment to do some forward thinking. Let’s start with a species that is intelligent enough to have some understanding of cause and effect, so that they can anticipate the probable consequences of their actions, and choose the ones which will have the most favorable outcomes. Let’s further suppose that these beings possess enough empathy to communicate with each other, to recognize each other’s feelings, and to anticipate what others are likely to feel in any particular set of circumstances.
Given this as a premise, plus the assumption that each individual wants to achieve the most favorable possible outcomes, what consequences would we expect as the members of this species interact with each other and with an environment that contains both dangers and opportunities? If we look at a few specific scenarios, I think a clear general trend will emerge.
Let’s start with Ogg, Glog, and Berk, three members of a clan of these beings. In the first scenario, each one is fending for himself, looking for food. Ogg manages to catch a squirrel—not really a satisfying meal, but better than nothing. Glog, however, decides it would be easier to steal Ogg’s squirrel (or demand a part of it) than to catch one of his own, and the two begin to fight, allowing Berk to sneak in and steal the whole thing while the first two are distracted. Berk gets a meal, but now Ogg and Glog are both mad at him.
Second scenario: a moose wanders into the clan’s hunting grounds. It’s too big for any two or three hunters, so the clan gathers all of its hunters into a full-scale hunting party. Ogg and Glog join in the hunt, but they deliberately don’t let Berk in on it because they’re still mad about the squirrel. The hunt is successful, and all the hunters, including Ogg and Glog, get a good, satisfying meal. Berk gets some too, as part of the clan, but by the time the hunters have finished, all the choice bits are gone and he has to make do with leftovers.
We could spend quite a lot of time exploring this particular set of scenarios, but these two give us a good starting point. Notice first of all the consequences of competition versus cooperation. The competing hunters had to settle for smaller prey since each was operating on his own, and the results were poor. Also, as Berk found out, certain behaviors had social consequences: by putting his own selfish interests ahead of those of the rest of the clan, Berk lost social esteem, and found that he received less benefit from intra-clan cooperation than the other hunters did.
The cooperative consequences were much better: the group could work together to bring down much bigger prey, thus providing much more food for each individual in the tribe. It wasn’t a matter of “I’ll give you some of my squirrel and then we’ll both have an inadequate meal,” it was “wow, that was some moose, I couldn’t eat another bite.” Competition is inevitable, and not necessarily a bad thing in and of itself, but the potential rewards of cooperation are frequently far better.
What we have here, then, is the evolution of a rudimentary moral system, i.e. a set of guidelines that help us categorize behaviors into those which promote conflict and competition versus those which promote cooperation and mutual benefit. We can call these guidelines “evil” versus “good,” but that’s just a label. The main significance is that we recognize and encourage the behaviors that we anticipate will bring the most desirable outcomes.
And speaking of labels, notice we’re not necessarily talking double-entry bookkeeping here. Ogg and Glog didn’t write down “Debit: one stolen squirrel; Credit: one missed moose hunt.” They got mad at Berk, and regarded him as a “Them” in the age-old categories of Us versus Them. It’s much simpler and more commonplace to categorize people according to how you feel about them. Can you imagine if we had to make all our decisions about how to treat people on the basis of adding up every interaction we’ve ever had with them, assigning a positive or negative score to each, and then adding up the total to see if it ended up on the plus side or the minus side?
It’s much easier, and more instinctive, to simply put labels on people, and then base your judgments on how you feel about the label: liberal vs. conservative, believer vs. unbeliever, freshman vs. senior, dude vs. babe, black vs. white, homo vs. hetero, etc. And here’s the trick: if we’re talking about people who understand feelings like this, and who can anticipate that certain behaviors will put them in certain social categories, then that anticipation itself becomes a “moral” guideline. We want to do things that will benefit us; we don’t want to do things that will cause us to end up in an unfavorable category (as Berk did).
We can make several predictions based on the above evolutionary scenario. First of all, we can predict that different groups will evolve different moral standards, though with a lot of common ground based on our common experience (i.e. we tend to have fairly predictable feelings about being robbed, assaulted, threatened, and so on). This is a perfectly natural outcome resulting from the immediate material consequences of certain types of competitive actions, regardless of the culture in which they occur.
Next, we can also predict that there will be certain individuals who will find competition more personally advantageous than cooperation: the schoolyard bullies, or the bloody tyrants. Their moral system won’t restrain them from harming others, because they’re big enough and bad enough to get away with it. By the same token, however, very few people will adopt such narrowly selfish moral codes, because they benefit the bully/tyrant at the expense of others, leaving others with little reason to admire them. The others will stick to seeing that sort of conduct as wrong.
We can also predict that evolved moral codes will tend to have different guidelines for those outside our own social group than they do for those inside our group. For example, the code may say that it is wrong to tell a lie, meaning that it’s wrong to tell a lie to another member of the same group. At the same time, it can be perfectly ok to tell a lie to someone outside the “Us” group (“Do you know where the Jews are hiding?” demanded the Gestapo leader…), and sometimes it might even be wrong not to tell a lie.
Finally, we can predict that moral codes will continue to evolve, as we continue to acquire experience and (hopefully) wisdom regarding which behaviors do or do not contribute to the most desirable outcomes. There may be a period when the bully/tyrant can build a society by imposing his own strength and will on a troubled and chaotic world, and his servants might very well see his tyranny in terms of “the divine right of kings,” assuming they’re better off with a strong bully on the throne than they are with dog-eat-dog anarchy and disorder. But such periods can end, as stability opens up new experiences in the benefits of cooperation, equality, and liberty. Despotism’s Golden Age can fade and tarnish, morally speaking. And likewise with slavery, sexism, and homophobia.
Thus, what we have in the real world are a number of moral codes, with common core principles that evolve naturally out of our common, human reactions to behaviors that are materially harmful to us or beneficial to us. These natural, real-world codes are further augmented by the anticipatory social awareness that helps us recognize which behaviors are going to promote cooperation (and consequent benefits) within our society, versus those which are going to put us into undesirable social categories and provoke undesirable conflicts with those around us. And these codes evolve and adapt to the particular social and environmental circumstances of the groups that hold them, leading to variations from one region, and one era, to another.
This is an extremely important concept for us to grasp, because not only does it spare us the superstitious mistake of ascribing “right” and “wrong” to some invisible legislator in the sky, but it also explains what “right” and “wrong” really mean, and how they are grounded in objective reality itself. When we say that murder is wrong, this is not an arbitrary and whimsical designation. It’s not that some celestial tablet-scratcher flipped a coin that came up tails. Murder is wrong because it produces undesirable outcomes in the real world: undesirable for the victim’s friends and family because they are grieved and hurt by their loss, and undesirable for the murderer because he has just put himself in the category of Dangerous Threats, and society will, if it can, work to eliminate him somehow.
C. S. Lewis, I daresay, isn’t going to see this. That’s going to seriously handicap his argument, because the alternative is actually a pretty sad little system. The alternative is to say that there is no real-world basis for right and wrong, that it’s just an arbitrary system made up by some celestial bully/tyrant, and the only reason we need to care about it is that He is strong enough and brutal enough to hurt us if we fail to play along. That’s not an ethical system, it’s autocratic mind games. It’s like saying blue is good and green is evil—neither color has any intrinsic moral qualities, good or bad, they’ve just arbitrarily been designated as one or the other. Is murder really no more intrinsically immoral than some randomly chosen color?
No. Real-world morality is not arbitrary. It arises naturally and inevitably from the consequences (including the social consequences) of our behavior. And God Himself cannot change that.