Human beings are notoriously adept at self-deception. Self-deception provides an evolutionary advantage. Robert Wright addressed this phenomenon at length:
Page 263, Chapter 13: Deception and Self-Deception
What wretched doings come from the ardor of fame; the love of truth alone would never make one man attack another bitterly.
– Charles Darwin, Letter to J. D. Hooker (1848)
Natural selection’s disdain for the principle of truth in advertising is widely evident. Some female fireflies in the genus Photuris mimic the mating flash of females in the genus Photinus and then, having attracted a Photinus male, eat him. Some orchids look quite like female wasps, the better to lure the male wasps that then unwittingly spread pollen. Some harmless snakes have evolved the coloration of poisonous snakes, gaining undeserved respect. Some butterfly pupae bear an uncanny resemblance to a snake’s head – fake scales, fake eyes – and, if bothered, start rattling around menacingly. In short: organisms may present themselves as whatever it is in their genetic interest to seem like.
People appear to be no exception. In the late 1950s and early sixties, the (non-Darwinian) social scientist Erving Goffman made a stir with a book called The Presentation of Self in Everyday Life, which stressed how much time we all spend on stage, playing to one audience or another, striving for effect. But there is a difference between us and many other performers in the animal kingdom. Whereas the female Photuris is, presumably, under no illusion as to its true identity, human beings have a way of getting taken in by their acts. Sometimes, Goffman marveled, a person is “sincerely convinced that the impression of reality which he stages is the real reality.”
What modern Darwinism brings to Goffman’s obsession is, among other things, a theory about the function of the confusion: we deceive ourselves in order to deceive others better. This hypothesis was tossed out during the mid-1970s by both Richard Alexander and Robert Trivers. In his foreword to Richard Dawkins’s The Selfish Gene, Trivers noted Dawkins’s emphasis on the role of deception in animal life and added, in a much-cited passage, that if indeed “deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray – by the subtle signs of self-knowledge – the deception being practiced.” Thus, Trivers ventured, “the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution.”
It should come as no surprise that the study of self-deception makes for murky science. “Awareness” is a region with ill-defined and porous borders. The truth, or certain aspects of it, may float in and out of awareness, or hover on the periphery, present yet not distinct. And even assuming we could confirm that someone is wholly unaware of information relevant to some situation, whether this constitutes self-deception is another question altogether. Is the information somewhere in the mind, blocked from consciousness by a censor designed for that function? Or did the person just fail to take note of the information in the first place? If so, is that selective perception itself a result of specific evolutionary design for self-deception? Or a more general reflection of the fact that the mind can hold only so much information (and the conscious mind even less)? Such difficulties of analysis are one reason the science Trivers envisioned two decades ago – a rigorous study of self-deception, which might finally yield a clear picture of the unconscious mind – has not arrived.
Still, the intervening years have tended to validate the drift of Dawkins’s and Trivers’s and Alexander’s worldview: our accurate depiction of reality – to others, and, sometimes, to ourselves – is not high on natural selection’s list of priorities. The new paradigm helps us map the terrain of human deception and self-deception, if at a low level of resolution.
We’ve already explored one realm of deception: sex. Men and women may mislead each other – and even, in the process, themselves – about the likely endurance of their commitment or about their likely fidelity. There are two other large realms in which the presentation of self, and the perception of others, has great Darwinian consequence: reciprocal altruism and social hierarchy. Here, as with sex, honesty can be a major blunder. In fact, reciprocal altruism and social hierarchy may together be responsible for most of the dishonesty in our species – which, in turn, accounts for a good part of the dishonesty in the animal kingdom. We are far from the only dishonest species, but we are surely the most dishonest, if only because we do the most talking.
Can anyone who has participated in internet “debates” doubt the truth of this? People fool themselves right before our eyes. The unsettling part, of course, is that we are engaged in the very same self-deception and simply don’t see it in ourselves. But we can see it in others, and that alone should be evidence enough for the assertions above.
Page 269
The keen sensitivity with which people detect the flaws of their rivals is one of nature’s wonders. It takes a Herculean effort to control this tendency consciously, and the effort must be repeated on a regular basis. Some people can summon enough restraint not to talk about their rival’s worthlessness; they may even utter some Victorian boilerplate about a “worthy opponent”. But to rein in the perception itself – the unending, unconscious, all-embracing search for signs of unworthiness – is truly a job for a Buddhist monk. Honesty of evaluation is simply beyond the reach of most mortals.
This next portion precedes the split-brain experiment passage I already shared:
Page 273
Reciprocal altruism brings its own agenda to the presentation of self, and thus to the deception of self. Whereas status hierarchies place a premium on our seeming competent, attractive, strong, smart, etcetera, reciprocal altruism puts its accent on niceness, integrity, fairness. These are the things that make us seem like worthy reciprocal altruists. They make people want to strike up relationships with us. Puffing up our reputations as decent and generous folks can’t hurt, and it often helps.
Richard Alexander, in particular, has stressed the evolutionary importance of moral self-advertisements. In The Biology of Moral Systems he writes that “modern society is filled with myths” about our goodness: “that scientists are humble and devoted truth-seekers; that doctors dedicate their lives to alleviation of suffering; that teachers dedicate their lives to their students; that we are all basically law-abiding, kind, altruistic souls who place everyone’s interests before our own.”
There’s no reason moral self-inflation has to involve self-deception. But there’s little doubt that it can. The unconscious convolutions by which we convince ourselves of our goodness were seen in the laboratory before the theory of reciprocal altruism was around to explain them. In various experiments, subjects have been told to behave cruelly toward someone, to say mean things to him or even deliver what they thought were electric shocks. Afterwards, the subjects tended to derogate their victim, as if to convince themselves that he deserved his mistreatment – although they knew he wasn’t being punished for any wrongdoing and, aside from that, knew only what you can learn about a person by briefly mistreating him in a laboratory setting. But when subjects delivered “shocks” to someone after being told he would get to retaliate by shocking them later, they tended not to derogate him. It is as if the mind were programmed with a simple rule: so long as accounts are settled, no special rationalization is in order; the symmetry of exchange is sufficient defense of your behavior. But if you cheat or abuse another person who doesn’t cheat or abuse you, you should concoct reasons why he deserved it. Either way, you’ll be prepared to defend your behavior if challenged; either way, you’ll be prepared to fight with indignation any allegations that you’re a bad person, or a person unworthy of trust.
Our repertoire of moral excuses is large. Psychologists have found that people justify their failure to help others by minimizing, variously, the person’s plight (“That’s not an assault, it’s a lover’s quarrel”), their own responsibility for the plight, and their own competence to help.
This next passage gets to the heart of the matter: why it is in our evolutionary best interest to be so adept at self-deception:
Page 278-9
(regarding the “deep sense of justice slightly slanted toward the self”)
Why would it be so important that the bias be unconscious? A clue may lie in a book called The Strategy of Conflict by the economist and game theorist Thomas Schelling. In a chapter called “An Essay on Bargaining” – which isn’t about evolution, but could apply to it – Schelling noted an irony: in a non-zero-sum game, “the power to constrain an adversary may depend on the power to bind oneself”. The classic example is the non-zero-sum game of “chicken”. Two cars head toward each other. The first driver to swerve loses the game, along with some stature among his adolescent peers. On the other hand, if neither driver bails out, both lose in a bigger way. What to do? Schelling suggests tossing your steering wheel out the window in full view of the other driver. Once convinced that you’re irrevocably committed to your course, he will, if rational, do the swerving himself.
The same logic holds in more common situations, like buying a car. There is a range of prices within which a deal makes sense for both buyer and seller. Within that range, though, interests diverge – the buyer prefers the low end, the seller the high end. The path to success, says Schelling, is essentially the same as in the game of chicken: be the first to convince the other party of your rigidness. If the dealer believes you’re walking away for good, he’ll cave in. But if the dealer stages a preemptive strike, and says “I absolutely cannot accept less than x,” and appears to be someone whose pride wouldn’t let him swallow those words, then he wins. The key, said Schelling, is to make a “voluntary but irreversible sacrifice of freedom of choice” – and to be the first to do it.
For our purposes, take out the word voluntary. The underlying logic may be excluded from consciousness to make the sacrifice seem truly “irreversible.” Not when we’re on a used-car lot, maybe. Car salesmen, like game theorists, actually think about the dynamics of bargaining, and the savvier car buyers do too. Still, everyday haggling – over fender benders, salaries, disputed territory – often begins with an actual belief, on each side, in its own rightness. And such a belief, a quickly reached and hotly articulated sense of what we deserve, is a quick route to the preemptive strikes Schelling recommends. Visceral rigidity is the most convincing kind.
Still, puzzles remain. Utter rigidity could be self-defeating. As “shady accounting” genes spread through the population, shady accountants would more and more often run into each other. With each insisting on the better half of the deal, both would fail to strike any deal. Besides, in real life, the rigidity wouldn’t know where to set in, because it’s often hard to say what deals the other party will accept. A car buyer doesn’t know how much the car actually cost the dealer or how much other buyers are offering. And in less structured situations – swapping favors with someone, say – these calculations are even dimmer, because things are less quantifiable. Thus has it been throughout evolution: it is hard to fathom precisely the range of deals that are in the interest of the other party. If you begin the bargaining by insisting irreversibly on a deal outside of that range, you’re left without a deal.
The ideal strategy, perhaps, is a pseudorigidity, a flexible firmness. You begin the discourse with an emphatic statement of what you deserve. Yet you should retreat – up to a point, at least – in the face of evidence as to the other person’s firmness. And what sort of evidence might that be? Well, evidence. If people can explain the reasons behind their conviction, and the reasons seem credible (and sound heartfelt), then some retreat is in order. If they talk about how much they’ve done for you in the past, and it’s true, you have to concede the point. Of course, to the extent you can muster countervailing evidence, with countervailing conviction, you should. And so it goes.
What we’ve just described are the dynamics of human discourse. People do argue in precisely this fashion. (In fact, that’s what the word argue means.) Yet they’re often oblivious to what they’re doing and to why they’re doing it. They simply find themselves constantly in touch with all the evidence supporting their position, and often having to be reminded of all the evidence against it. Darwin wrote in his autobiography of a habit he called a “golden rule”: to immediately write down any observation that seemed inconsistent with his theories – “for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones.”
The reason the generic human arguing style feels so effortless is that, by the time the arguing starts, the work has already been done. Robert Trivers has written about the periodic disputes – contract renegotiations, you might call them – that are often part of a close relationship, whether a friendship or a marriage. The argument, he notes, “may appear to burst forth spontaneously, with little or no preview, yet as it rolls along, two whole landscapes of information appear to lie already organized, waiting only for the lightning of anger to show themselves.”
The proposition here is that the human brain is, in large part, a machine for winning arguments, a machine for convincing others that its owner is in the right – and thus a machine for convincing its owner of the same thing. The brain is like a good lawyer: given any set of interests to defend, it sets about convincing the world of their moral and logical worth, regardless of whether they in fact have any of either. Like a lawyer, the human brain wants victory, not truth; and, like a lawyer, it is sometimes more admirable for skill than for virtue.
Long before Trivers wrote about the selfish uses of self-deception, social scientists had gathered supporting data. In one experiment, people with strongly held positions on a social issue were exposed to four arguments, two pro and two con. On each side of the issue, the arguments were of two sorts: (a) quite plausible, and (b) implausible to the point of absurdity. People tended to remember the plausible arguments that supported their views and the implausible arguments that didn’t, the net effect being to drive home the correctness of their position and the silliness of the alternative.
One might think that, being rational creatures, we would eventually grow suspicious of our uncannily long string of rectitude, our unerring knack for being on the right side of any dispute over credit, or money, or manners, or anything else. Nope. Time and again – whether arguing over a place in line, a promotion we never got, or which car hit which – we are shocked at the blindness of people who dare suggest our outrage isn’t warranted.
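Schelling’s bargaining logic is easy to put rough numbers on. Below is a small Python sketch of my own – the dollar figures and the bargain() function are invented purely for illustration and come from neither Wright nor Schelling. Suppose a car costs the dealer $7,000 and is worth $9,000 to the buyer, leaving a $2,000 surplus to haggle over. A bargainer who commits first and never budges captures most of that surplus against a flexible opponent; two rigid bargainers strike no deal at all; and two pseudorigid bargainers, each retreating a little in the face of the other’s firmness, end up splitting it.

# A rough sketch (my own invented numbers, not Wright's or Schelling's) of the
# bargaining logic quoted above: a zone of possible agreement, and what happens
# when the parties are rigid, flexible, or somewhere in between.

DEALER_COST = 7_000   # dealer's walk-away point (hypothetical)
BUYER_VALUE = 9_000   # buyer's walk-away point (hypothetical)


def bargain(buyer_demand, dealer_demand, buyer_flexible, dealer_flexible):
    """Return (buyer_share, dealer_share) of the surplus after one haggle.

    Each side opens by demanding a share of the $2,000 surplus. A rigid
    bargainer never budges; a flexible ("pseudorigid") one concedes just
    enough to close the gap when the opening demands are incompatible.
    """
    surplus = BUYER_VALUE - DEALER_COST
    gap = buyer_demand + dealer_demand - surplus
    if gap > 0:                        # the opening demands are incompatible
        concessions = buyer_flexible + dealer_flexible
        if not concessions:
            return 0, 0                # two rigid bargainers: no deal at all
        # The flexible side(s) give ground; a lone flexible side absorbs the whole gap.
        buyer_demand -= gap * buyer_flexible / concessions
        dealer_demand -= gap * dealer_flexible / concessions
    return buyer_demand, dealer_demand


# Rigid buyer vs. flexible dealer: the committed side keeps 1,500 of the 2,000.
print(bargain(1_500, 1_500, buyer_flexible=False, dealer_flexible=True))

# Rigid vs. rigid: the "shady accountants" problem -- no deal, both get nothing.
print(bargain(1_500, 1_500, buyer_flexible=False, dealer_flexible=False))

# Pseudorigid vs. pseudorigid: each retreats a little and they split the surplus.
print(bargain(1_500, 1_500, buyer_flexible=True, dealer_flexible=True))

In miniature, that is why visceral rigidity pays against the flexible, why it backfires when everyone has it, and why the flexible firmness Wright describes looks like the sensible compromise.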
So how does the fact that human beings excel at self-deception factor into the free will argument?
One of the strongest pieces of evidence for free will is that we believe we have free will. We feel as if we are making volitional choices. But given the human capacity for self-deception, this actually cannot be considered evidence at all.
I just remembered the source for the Libet experiment: if I recall correctly, it is a book that dealt with free will at length – Daniel Dennett's book about consciousness. I lent it to my son and will have to get it back from him to see if it also contains pertinent information.