
Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans de Waal

            Whenever literary folk talk about voice, that supposedly ineffable but transcendently important quality of narration, they display an exasperating penchant for vagueness, as if so lofty a dimension to so lofty an endeavor couldn’t withstand being spoken of directly—or as if they took delight in instilling panic and self-doubt into the quivering hearts of aspiring authors. What the folk who actually know what they mean by voice actually mean by it is all the idiosyncratic elements of prose that give readers a stark and persuasive impression of the narrator as a character. Discussions of what makes for stark and persuasive characters, on the other hand, are vague by necessity. It must be noted that many characters even outside of fiction are neither. As a first step toward developing a feel for how character can be conveyed through writing, we may consider the nonfiction work of real people with real character, ones who also happen to be practiced authors.

The Dutch-American primatologist Frans de Waal is one such real-life character, and his prose stands as testament to the power of written language, lonely ink on colorless pages, not only to impart information, but to communicate personality and to make a contagion of states and traits like enthusiasm, vanity, fellow-feeling, bluster, big-heartedness, impatience, and an abiding wonder. De Waal is a writer with voice. Many other scientists and science writers explore this dimension to prose in their attempts to engage readers, but few avoid the traps of being goofy or obnoxious instead of funny—a trap David Pogue, for instance, falls into routinely as he hosts NOVA on PBS—and of expending far too much effort in their attempts at being distinctive, thus failing to achieve anything resembling grace. 

The most striking quality of de Waal’s writing, however, isn’t that its good-humored quirkiness never seems strained or contrived, but that it never strays far from the man’s own obsession with getting at the stories behind the behaviors he so minutely observes—whether the characters are his fellow humans or his fellow primates, or even such seemingly unstoried creatures as rats or turtles. But to say that de Waal is an animal lover doesn’t quite capture the essence of what can only be described as a compulsive fascination marked by conviction—the conviction that when he peers into the eyes of a creature others might dismiss as an automaton, a bundle of twitching flesh powered by preprogrammed instinct, he sees something quite different, something much closer to the workings of his own mind and those of his fellow humans.

De Waal’s latest book, The Bonobo and the Atheist: In Search of Humanism among the Primates, reprises the main themes of his previous books, most centrally the continuity between humans and other primates, with an eye toward answering the questions of where morality does come from and where it should come from. Whereas in his books from the years leading up to the turn of the century he again and again had to challenge what he calls “veneer theory,” the notion that without a process of socialization that imposes rules on individuals from some outside source they’d all be greedy and selfish monsters, de Waal has noticed over the past six or so years a marked shift in the zeitgeist toward an awareness of our more cooperative and even altruistic animal urgings. Noting a sharp difference over the decades in how audiences at his lectures respond to recitations of the infamous quote by biologist Michael Ghiselin, “Scratch an altruist and watch a hypocrite bleed,” de Waal writes,

Although I have featured this cynical line for decades in my lectures, it is only since about 2005 that audiences greet it with audible gasps and guffaws as something so outrageous, so out of touch with how they see themselves, that they can’t believe it was ever taken seriously. Had the author never had a friend? A loving wife? Or a dog, for that matter? (43)

The assumption underlying veneer theory was that without civilizing influences humans’ deeper animal impulses would express themselves unchecked. The further assumption was that animals, the end products of the ruthless, eons-long battle for survival and reproduction, would reflect the ruthlessness of that battle in their behavior. De Waal’s first book, Chimpanzee Politics, which told the story of a period of intensified competition among the captive male chimps at the Arnhem Zoo for alpha status, with all the associated perks like first dibs on choice cuisine and sexually receptive females, was actually seen by many as lending credence to these assumptions. But de Waal himself was far from convinced that the primates he studied were invariably, or even predominantly, violent and selfish.

Illustration of Veneer Theory
            What he observed at the zoo in Arnhem was far from the chaotic and bloody free-for-all it would have been if the chimps delighted in violence for its own sake, as many people imagine they do. As he pointed out in his second book, Peacemaking among Primates, the violence is almost invariably attended by obvious signs of anxiety on the part of those participating in it, and the tension surrounding any major conflict quickly spreads throughout the entire community. The hierarchy itself is in fact an adaptation that serves as a check on the incessant conflict that would ensue if the relative status of each individual had to be worked out anew every time one chimp encountered another. “Tightly embedded in society,” he writes in The Bonobo and the Atheist, “they respect the limits it puts on their behavior and are ready to rock the boat only if they can get away with it or if so much is at stake that it’s worth the risk” (154). But the most remarkable thing de Waal observed came in the wake of the fights that couldn’t successfully be avoided. Chimps, along with primates of several other species, reliably make reconciliatory overtures toward one another after they’ve come to blows—and bites and scratches. In light of such reconciliations, primate violence begins to look like a momentary, albeit potentially dangerous, readjustment to a regularly peaceful social order rather than any ongoing melee, as individuals with increasing or waning strength negotiate a stable new arrangement.

            Part of the enchantment of de Waal’s writing is his judicious and deft balancing of anecdotes about the primates he works with on the one hand and descriptions of controlled studies he and his fellow researchers conduct on the other. In The Bonobo and the Atheist, he strikes a more personal note than he has in any of his previous books, at points stretching the bounds of the popular science genre and crossing into the realm of memoir. This attempt at peeling back the surface of that other veneer, the white-coated scientist’s posture of mechanistic objectivity and impassive empiricism, works best when de Waal is merging tales of his animal experiences with reports on the research that ultimately provides evidence for what was originally no more than an intuition. Discussing a recent, and to most people somewhat startling, experiment pitting the social against the alimentary preferences of a distant mammalian cousin, he recounts,

Despite the bad reputation of these animals, I have no trouble relating to its findings, having kept rats as pets during my college years. Not that they helped me become popular with the girls, but they taught me that rats are clean, smart, and affectionate. In an experiment at the University of Chicago, a rat was placed in an enclosure where it encountered a transparent container with another rat. This rat was locked up, wriggling in distress. Not only did the first rat learn how to open a little door to liberate the second, but its motivation to do so was astonishing. Faced with a choice between two containers, one with chocolate chips and another with a trapped companion, it often rescued its companion first. (142-3)

This experiment, conducted by Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, actually got a lot of media coverage; Mason was even interviewed for an episode of NOVA Science NOW where you can watch a video of the rats performing the jailbreak and sharing the chocolate (and you can also see David Pogue being obnoxious). This type of coverage has probably played a role in the shift in public opinion regarding the altruistic propensities of humans and animals. But if there’s one species whose behavior can be said to have undermined the cynicism underlying veneer theory—aside from our best friend the dog of course—it would have to be de Waal’s leading character, the bonobo.

            De Waal’s 1997 book Bonobo: The Forgotten Ape, on which he collaborated with photographer Frans Lanting, introduced this charismatic, peace-loving, sex-loving primate to the masses, and in the process provided behavioral scientists with a new model for what our own ancestors’ social lives might have looked like. Bonobo females dominate the males to the point where zoos have learned never to import a strange male into a new community without the protection of his mother. But for the most part any tensions, even those over food, even those between members of neighboring groups, are resolved through genito-genital rubbing—a behavior that looks an awful lot like sex and often culminates in vocalizations and facial expressions that resemble, to a remarkable degree, those of humans experiencing orgasm. The implications of bonobos’ hippy-like habits have even reached into politics. After an uncharacteristically ill-researched and ill-reasoned article in the New Yorker by Ian Parker suggested that the apes weren’t as peaceful and erotic as we’d been led to believe, conservatives couldn’t help celebrating. De Waal writes in The Bonobo and the Atheist,

Given that this ape’s reputation has been a thorn in the side of homophobes as well as Hobbesians, the right-wing media jumped with delight. The bonobo “myth” could finally be put to rest, and nature remain red in tooth and claw. The conservative commentator Dinesh D’Souza accused “liberals” of having fashioned the bonobo into their mascot, and he urged them to stick with the donkey. (63)

But most primate researchers think the behavioral differences between chimps and bonobos are pretty obvious. De Waal points out that while violence does occur among the apes on rare occasions “there are no confirmed reports of lethal aggression among bonobos” (63). Chimps, on the other hand, have been observed doing all kinds of killing. Bonobos also outperform chimps in experiments designed to test their capacity for cooperation, as in the setup that requires two individuals to pull on a rope at the same time in order for either of them to get ahold of food placed atop a plank of wood. (Incidentally, the New Yorker’s track record when it comes to anthropology is suspiciously checkered—disgraced author Patrick Tierney’s discredited book on Napoleon Chagnon, for instance, was originally excerpted in the magazine.)

Chim (left), a bonobo, and Panzee (right), a chimp, with Yerkes.
            Bonobos came late to the scientific discussion of what ape behavior can tell us about our evolutionary history. The famous chimp researcher Robert Yerkes, whose name graces the facility de Waal currently directs at Emory University in Atlanta, actually wrote an entire book called Almost Human about what he believed was a rather remarkable chimp. A photograph from that period reveals that it wasn’t a chimp at all. It was a bonobo. Now, as this species is becoming better researched, and with the discovery of fossils like the 4.4 million-year-old Ardipithecus ramidus known as Ardi, a bipedal ape with fangs that are quite small when compared to the lethal daggers sported by chimps, the role of violence in our ancestry is ever more uncertain. De Waal writes,

What if we descend not from a blustering chimp-like ancestor but from a gentle, empathic bonobo-like ape? The bonobo’s body proportions—its long legs and narrow shoulders—seem to perfectly fit the descriptions of Ardi, as do its relatively small canines. Why was the bonobo overlooked? What if the chimpanzee, instead of being an ancestral prototype, is in fact a violent outlier in an otherwise relatively peaceful lineage? Ardi is telling us something, and there may exist little agreement about what she is saying, but I hear a refreshing halt to the drums of war that have accompanied all previous scenarios. (61)

Ardi
De Waal is well aware of all the behaviors humans engage in that are more emblematic of chimps than of bonobos—in his 2005 book Our Inner Ape, he refers to humans as “the bipolar ape”—but the fact that our genetic relatedness to both species is exactly the same, along with the fact that chimps also have a surprising capacity for peacemaking and empathy, suggest to him that evolution has had plenty of time and plenty of raw material to instill in us the emotional underpinnings of a morality that emerges naturally—without having to be imposed by religion or philosophy. “Rather than having developed morality from scratch through rational reflection,” he writes in The Bonobo and the Atheist, “we received a huge push in the rear from our background as social animals” (17).

            In the eighth and final chapter of The Bonobo and the Atheist, titled “Bottom-Up Morality,” de Waal describes what he believes is an alternative to top-down theories that attempt to derive morals from religion on the one hand and from reason on the other. Invisible beings threatening eternal punishment can frighten us into doing the right thing, and principles of fairness might offer slight nudges in the direction of proper comportment, but we must already have some intuitive sense of right and wrong for either of these belief systems to operate on if they’re to be at all compelling. Many people assume moral intuitions are inculcated in childhood, but experiments like the one that showed rats will come to the aid of distressed companions suggest something deeper, something more ingrained, is involved. De Waal has found that a video of capuchin monkeys demonstrating "inequity aversion"—a natural, intuitive sense of fairness—does a much better job than any charts or graphs at getting past the prejudices of philosophers and economists who want to insist that fairness is too complex a principle for mere monkeys to comprehend. He writes,

This became an immensely popular experiment in which one monkey received cucumber slices while another received grapes for the same task. The monkeys had no trouble performing if both received identical rewards of whatever quality, but rejected unequal outcomes with such vehemence that there could be little doubt about their feelings. I often show their reactions to audiences, who almost fall out of their chairs laughing—which I interpret as a sign of surprised recognition. (232)

What the capuchins do when they see someone else getting a better reward is throw the measly cucumber back at the experimenter and proceed to rattle the cage in agitation. De Waal compares it to the Occupy Wall Street protests. The poor monkeys clearly recognize the insanity of the human they’re working for.

            There’s still a long way to travel, however, from helpful rats and protesting capuchins to human morality. But that gap continues to shrink as researchers find new ways to explore the social behaviors of the primates that are even more closely related to us. Chimps, for instance, have been seen taking inequity aversion an important step beyond what monkeys display. Not only will certain individuals refuse to work for lesser rewards; they’ll refuse to work even for the superior rewards if they see their companions aren’t being paid equally. De Waal does acknowledge, though, that there still remains an important step between these behaviors and human morality. “I am reluctant to call a chimpanzee a ‘moral being,’” he writes.

This is because sentiments do not suffice. We strive for a logically coherent system and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be morally wrong. These debates are uniquely human. There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves. (17-8)

Moral intuitions often inspire behaviors that, to people in modern liberal societies, seem appallingly immoral. De Waal quotes anthropologist Christopher Boehm on the “special, pejorative moral ‘discount’ applied to cultural strangers—who often are not even considered fully human,” and he goes on to explain that “The more we expand morality’s reach, the more we need to rely on our intellect.” But the intellectual principles must be grounded in the instincts and emotions we evolved as social primates; this is what he means by bottom-up morality or “naturalized ethics” (235).

*****

Capuchins
            In locating the foundations of morality in our evolved emotions—propensities we share with primates and even rats—de Waal seems to be taking a firm stand against any need for religion. But he insists throughout the book that this isn’t the case. And, while the idea that people are quite capable of playing fair and treating each other with compassion without any supernatural policing may seem to land him squarely in the same camp as prominent atheists like Richard Dawkins and Christopher Hitchens, whom he calls “neo-atheists,” he contends that they’re just as misguided as, if not more so than, the people of faith who believe the rules must be handed down from heaven. “Even though Dawkins cautioned against his own anthropomorphism of the gene,” de Waal wrote all the way back in his 1996 book Good Natured: The Origins of Right and Wrong in Humans and Other Animals, “with the passage of time, carriers of selfish genes became selfish by association” (14). Thus de Waal tries to find some middle ground between religious dogmatists on one side and those who are equally dogmatic in their opposition to religion and equally mistaken in their espousal of veneer theory on the other. “I consider dogmatism a far greater threat than religion per se,” he writes in The Bonobo and the Atheist.

I am particularly curious why anyone would drop religion while retaining the blinkers sometimes associated with it. Why are the “neo-atheists” of today so obsessed with God’s nonexistence that they go on media rampages, wear T-shirts proclaiming their absence of belief, or call for a militant atheism? What does atheism have to offer that’s worth fighting for? (84)

For de Waal, neo-atheism is an empty placeholder of a philosophy, defined not by any positive belief but merely by an obstinately negative attitude toward religion. It’s hard to tell early on in his book if this view is based on any actual familiarity with the books whose titles—The God Delusion, god is not Great—he takes issue with. What is obvious, though, is that he’s trying to appeal to some spirit of moderation so that he might reach an audience who may have already been turned off by the stridency of the debates over religion’s role in society. At any rate, we can be pretty sure that Hitchens, for one, would have had something to say about de Waal’s characterization.

De Waal’s expertise as a primatologist gave him what was in many ways an ideal perspective on the selfish gene debates, as well as on sociobiology more generally, much the way Sarah Blaffer Hrdy’s expertise has done for her. The monkeys and apes de Waal works with are a far cry from the ants and wasps that originally inspired the gene-centered approach to explaining behavior. “There are the bees dying for their hive,” he writes in The Bonobo and the Atheist,

and the millions of slime mold cells that build a single, sluglike organism that permits a few among them to reproduce. This kind of sacrifice was put on the same level as the man jumping into an icy river to rescue a stranger or the chimpanzee sharing food with a whining orphan. From an evolutionary perspective, both kinds of helping are comparable, but psychologically speaking they are radically different. (33)

At the same time, though, de Waal gets to see up close almost every day how similar we are to our evolutionary cousins, and the continuities leave no question as to the wrongheadedness of blank slate ideas about socialization. “The road between genes and behavior is far from straight,” he writes, sounding a note similar to that of the late Stephen Jay Gould, “and the psychology that produces altruism deserves as much attention as the genes themselves.” He goes on to explain,

Mammals have what I call an “altruistic impulse” in that they respond to signs of distress in others and feel an urge to improve their situation. To recognize the need of others, and react appropriately, is really not the same as a preprogrammed tendency to sacrifice oneself for the genetic good. (33)

We can’t discount the role of biology, in other words, but we must keep in mind that genes are at the distant end of a long chain of cause and effect that has countless other inputs before it links to emotion and behavior. De Waal angered both the social constructivists and quite a few of the gene-centered evolutionists, but by now the balanced view his work as primatologist helped him to arrive at has, for the most part, won the day. Now, in his other role as a scientist who studies the evolution of morality, he wants to strike a similar balance between extremists on both sides of the religious divide. Unfortunately, in this new arena, his perspective isn’t anywhere near as well informed.

             The type of religion de Waal points to as evidence that the neo-atheists’ concerns are misguided and excessive is definitely moderate. It’s not even based on any actual beliefs, just some nice ideas and stories adherents enjoy hearing and thinking about in a spirit of play. We have to wonder, though, just how prevalent this New Age, Life-of-Pi type of religion really is. I suspect the passages in The Bonobo and the Atheist discussing it will be equally offensive to atheists and to people of actual faith. Here’s one example of the bizarre way he writes about religion:

Neo-atheists are like people standing outside a movie theater telling us that Leonardo DiCaprio didn’t really go down with the Titanic. How shocking! Most of us are perfectly comfortable with the duality. Humor relies on it, too, lulling us into one way of looking at a situation only to hit us over the head with another. To enrich reality is one of the most delightful capacities we have, from pretend play in childhood to visions of an afterlife when we grow older. (294)

He seems to be suggesting that the religious know, on some level, their beliefs aren’t true. “Some realities exist,” he writes, “some we just like to believe in” (294). The problem is that while many readers may enjoy the innuendo about humorless and inveterately over-literal atheists, most believers aren’t joking around—even the non-extremists are more serious than de Waal seems to think.

            As someone who’s been reading de Waal’s books for the past seventeen years, someone who wanted to strangle Ian Parker after reading his cheap smear piece in The New Yorker, someone who has admired the great primatologist since my days as an undergrad anthropology student, I experienced the sections of The Bonobo and the Atheist devoted to criticisms of neo-atheism, which make up roughly a quarter of this short book, as soul-crushingly disappointing. And I’ve agonized over how to write this part of the review. The middle path de Waal carves out is between a watered-down religion believers don’t really believe on one side and an egregious postmodern caricature of Sam Harris’s and Christopher Hitchens’s positions on the other. He focuses on Harris because of his book, The Moral Landscape, which explores how we might use science to determine our morals and values instead of religion, but he gives every indication of never having actually read the book and of instead basing his criticisms solely on the book’s reputation among Harris’s most hysterical detractors. And he targets Hitchens because he thinks he has the psychological key to understanding what he refers to as his “serial dogmatism.” But de Waal’s case is so flimsy a freshman journalism student could demolish it with no more than about ten minutes of internet fact-checking.

De Waal does acknowledge that we should be skeptical of “religious institutions and their ‘primates’,” but he wonders “what good could possibly come from insulting the many people who find value in religion?” (19). This is the tightrope he tries to walk throughout his book. His focus on the purely negative aspect of atheism juxtaposed with his strange conception of the role of belief seems designed to give readers the impression that if the atheists succeed society might actually suffer severe damage. He writes,
 
Religion is much more than belief. The question is not so much whether religion is true or false, but how it shapes our lives, and what might possibly take its place if we were to get rid of it the way an Aztec priest rips the beating heart out of a virgin. What could fill the gaping hole and take over the removed organ’s functions? (216)

The first problem is that many people who call themselves humanists, as de Waal does, might suggest that there are in fact many things that could fill the gap—science, literature, philosophy, music, cinema, human rights activism, just to name a few. But the second problem is that the militancy of the militant atheists is purely and avowedly rhetorical. In a debate with Hitchens, former British Prime Minister Tony Blair once held up the same straw man that de Waal drags through the pages of his book, the claim that neo-atheists are trying to extirpate religion from society entirely, to which Hitchens replied, “In fairness, no one was arguing that religion should or will die out of the world. All I’m arguing is that it would be better if there was a great deal more by way of an outbreak of secularism” (20:20). What Hitchens is after is an end to the deference automatically afforded religious ideas by dint of their supposed sacredness; religious ideas need to be critically weighed just like any other ideas—and when they are thus weighed they often don’t fare so well, in either logical or moral terms. It’s hard to understand why de Waal would have a problem with this view.
Sam Harris
 
*****
            De Waal’s position is even more incoherent with regard to Harris’s arguments about the potential for a science of morality, since those arguments represent an attempt to answer, at least in part, the very question de Waal poses again and again throughout The Bonobo and the Atheist: what might take the place of religion in providing guidance in our lives? De Waal takes issue first with the book’s title, The Moral Landscape: How Science Can Determine Human Values. The notion that science might determine any aspect of morality suggests to him a top-down approach as opposed to his favored bottom-up strategy that takes “naturalized ethics” as its touchstone. This, however, is a mischaracterization of Harris’s thesis. Rather than engage Harris’s arguments in any direct or meaningful way, de Waal contents himself with following in the footsteps of critics who apply the postmodern strategy of holding the book to account for every analogy, however tenuous or tendentious, that can be drawn between it and historical evils. De Waal writes, for instance,

While I do welcome a science of morality—my own work is part of it—I can’t fathom calls for science to determine human values (as per the subtitle of Sam Harris’s The Moral Landscape). Is pseudoscience something of the past? Are modern scientists free from moral biases? Think of the Tuskegee syphilis study just a few decades ago, or the ongoing involvement of medical doctors in prisoner torture at Guantanamo Bay. I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden. (22)

(A great phrase, that “morality’s handmaiden.”) But Harris never argues that scientists are any more morally pure than anyone else. His argument is for applying that “science of morality,” to which de Waal proudly contributes, to the big moral issues our society faces.

            The guilt-by-association and guilt-by-historical-analogy tactics on display in The Bonobo and the Atheist extend all the way to that lodestar of postmodernism’s hysterical obsessions. We might hope that de Waal, after witnessing the frenzied insanity of the sociobiology controversy from the front row, would know better. But he doesn’t seem to grasp how toxic this type of rhetoric is to reasoned discourse and honest inquiry. After expressing his bafflement at how science and a naturalistic worldview could inspire good the way religion does (even though his main argument is that such external inspiration to do good is unnecessary), he writes,

It took Adolf Hitler and his henchmen to expose the moral bankruptcy of these ideas. The inevitable result was a precipitous drop of faith in science, especially biology. In the 1970s, biologists were still commonly equated with fascists, such as during the heated protest against “sociobiology.” As a biologist myself, I am glad those acrimonious days are over, but at the same time I wonder how anyone could forget this past and hail science as our moral savior. How did we move from deep distrust to naïve optimism? (22)

Was Nazism born of an attempt to apply science to moral questions? It’s true some people use science in evil ways, but not nearly as commonly as people are directly urged by religion to perpetrate evils like inquisitions or holy wars. When science has directly inspired evil, as in the case of eugenics, the lifespan of the mistake was measurable in years or decades rather than centuries or millennia. Not to minimize the real human costs, but science wins hands down by being self-correcting and, certain individual scientists notwithstanding, undogmatic.

Harris intended for his book to begin a debate he was prepared to actively participate in. But he quickly ran into the problem that postmodern criticisms can’t really be dealt with in any meaningful way. The following long quote from Harris’s response to his battier critics in the Huffington Post will show both that de Waal’s characterization of his argument is way off the mark, and that it is suspiciously unoriginal:

How, for instance, should I respond to the novelist Marilynne Robinson’s paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think—beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.

And we have to ask further: what alternative source of ethical principles do the self-righteous grandstanders like Robinson and Horgan—and now de Waal—have to offer? In their eagerness to compare everyone to the Nazis, they seem to be deriving their own morality from Fox News.

De Waal makes three objections to Harris’s arguments that are of actual substance, but none of them are anywhere near as devastating to his overall case as de Waal makes out. First, Harris begins with the assumption that moral behaviors lead to “human flourishing,” but this is a presupposed value as opposed to an empirical finding of science—or so de Waal claims. But here’s de Waal himself on a level of morality sometimes seen in apes that transcends one-on-one interactions between individuals:

female chimpanzees have been seen to drag reluctant males toward each other to make up after a fight, while removing weapons from their hands. Moreover, high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and that we don’t need God to explain how we got to where we are today. (20)

The similarity between the concepts of human flourishing and community concern highlights one of the main areas of confusion de Waal could have avoided by actually reading Harris’s book. The word “determine” in the title has two possible meanings. Science can determine values in the sense that it can guide us toward behaviors that will bring about flourishing. But it can also determine our values in the sense of discovering what we already naturally value and hence what conditions need to be met for us to flourish.

De Waal performs a sleight of hand late in The Bonobo and the Atheist, substituting another “utilitarian” for Harris, justifying the trick by pointing out that utilitarians also seek to maximize human flourishing—though Harris never claims to be one. This leads de Waal to object that strict utilitarianism isn’t viable because he’s more likely to direct his resources to his own ailing mother than to any stranger in need, even if those resources would benefit the stranger more. Thus de Waal faults Harris’s ethics for overlooking the role of loyalty in human lives. His third criticism is similar; he worries that utilitarians might infringe on the rights of a minority to maximize flourishing for a majority. But how, given what we know about human nature, could we expect humans to flourish—to feel as though they were flourishing—in a society that didn’t properly honor friendship and the bonds of family? How could humans be happy in a society where they had to constantly fear being sacrificed to the whim of the majority? It is in precisely this effort to discover—or determine—under which circumstances humans flourish that Harris believes science can be of the most help. And as de Waal moves up from his mammalian foundations of morality to more abstract ethical principles the separation between his approach and Harris’s starts to look suspiciously like a distinction without a difference.

            Harris in fact points out that honoring family bonds probably leads to greater well-being on pages seventy-three and seventy-four of The Moral Landscape, and de Waal quotes from page seventy-four himself to chastise Harris for concentrating too much on "the especially low-hanging fruit of conservative Islam" (74). The incoherence of de Waal's argument (and the carelessness of his research) is on full display here as he first responds to a point about the genital mutilation of young girls by asking, "Isn't genital mutilation common in the United States, too, where newborn males are circumcised without their consent?" (90). So cutting off the foreskin of a male's penis is morally equivalent to cutting off a girl's clitoris? Supposedly, the equivalence implies that there can't be any reliable way to determine the relative moral status of religious practices. "Could it be that religion and culture interact to the point that there is no universal morality?" Perhaps, but, personally, as a circumcised male, I think this argument is a real howler.
*****
The slick scholarly laziness on display in The Bonobo and the Atheist is just as bad when it comes to the positions, and the personality, of Christopher Hitchens, whom de Waal sees fit to psychoanalyze instead of engaging his arguments in any substantive way—but whose memoir, Hitch-22, he’s clearly never bothered to read. The straw man about the neo-atheists being bent on obliterating religion entirely is, disappointingly but not surprisingly by this point, just one of several errors and misrepresentations. De Waal’s main argument against Hitchens, that his atheism is just another dogma, just as much a religion as any other, is taken right from the list of standard talking points the most incurious of religious apologists like to recite against him. Theorizing that “activist atheism reflects trauma” (87)—by which he means that people raised under severe religions will grow up to espouse severe ideologies of one form or another—de Waal goes on to suggest that neo-atheism is an outgrowth of “serial dogmatism”:

Hitchens was outraged by the dogmatism of religion, yet he himself had moved from Marxism (he was a Trotskyist) to Greek Orthodox Christianity, then to American Neo-Conservatism, followed by an “antitheist” stance that blamed all of the world’s troubles on religion. Hitchens thus swung from the left to the right, from anti-Vietnam War to cheerleader of the Iraq War, and from pro to contra God. He ended up favoring Dick Cheney over Mother Teresa. (89)

Jon Laing
This is truly awful rubbish, and it’s really too bad Hitchens isn’t around anymore to take de Waal to task for it himself. First, this passage allows us to catch out de Waal’s abuse of the term dogma; dogmatism is rigid adherence to beliefs that aren’t open to questioning. The test of dogmatism is whether you’re willing to adjust your views in light of new evidence or changing circumstances—it has nothing to do with how willing or eager you are to debate. What de Waal is labeling dogmatism is what we normally call outspokenness. Second, his facts are simply wrong. For one, though Hitchens was labeled a neocon by some of his fellows on the left simply because he supported the invasion of Iraq, he never considered himself one. When he was asked in an interview for the New Statesman if he was a neoconservative, he responded unequivocally, “I’m not a conservative of any kind.” Finally, can’t someone be for one war and against another, or agree with certain aspects of a religious or political leader’s policies and not others, without being shiftily dogmatic?

            De Waal never really goes into much detail about what the “naturalized ethics” he advocates might look like beyond insisting that we should take a bottom-up approach to arriving at them. This evasiveness gives him space to criticize other nonbelievers regardless of how closely their ideas might resemble his own. “Convictions never follow straight from evidence or logic,” he writes. “Convictions reach us through the prism of human interpretation” (109). He takes this somewhat banal observation (but do they really never follow straight from evidence?) as a license to dismiss the arguments of others based on silly psychologizing. “In the same way that firefighters are sometimes stealth arsonists,” he writes, “and homophobes closet homosexuals, do some atheists secretly long for the certitude of religion?” (88). We could of course just as easily turn this Freudian rhetorical trap back against de Waal and his own convictions. Is he a closet dogmatist himself? Does he secretly hold the unconscious conviction that primates are really nothing like humans and that his research is all a big sham?

            Christopher Hitchens was another real-life character whose personality shone through his writing, and like Yossarian in Joseph Heller’s Catch-22 he often found himself in a position where he knew being sane would put him at odds with the masses, thus convincing everyone of his insanity. Hitchens particularly identified with the exchange near the end of Heller’s novel in which an officer, Major Danby, says, “But, Yossarian, suppose everyone felt that way,” to which Yossarian replies, “Then I’d certainly be a damned fool to feel any other way, wouldn’t I?” (446). (The title for his memoir came from a word game he and several of his literary friends played with book titles.) It greatly saddens me to see de Waal pitting himself against such a ham-fisted caricature of a man in whom, had he taken the time to actually explore his writings, he would likely have found much to admire. Why did Hitch become such a strong advocate for atheism? He made no secret of his motivations. And de Waal, who faults Harris (wrongly) for leaving loyalty out of his moral equations, just might identify with them. It began when the Ayatollah Khomeini, the theocratic dictator of Iran, put a hit out on Hitchens’s friend, the author Salman Rushdie, for writing a novel the regime deemed blasphemous. Hitchens writes in Hitch-22,

When the Washington Post telephoned me at home on Valentine’s Day 1989 to ask my opinion about the Ayatollah Khomeini’s fatwah, I felt at once that here was something that completely committed me. It was, if I can phrase it like this, a matter of everything I hated versus everything I loved. In the hate column: dictatorship, religion, stupidity, demagogy, censorship, bullying, and intimidation. In the love column: literature, irony, humor, the individual, and the defense of free expression. Plus, of course, friendship—though I like to think that my reaction would have been the same if I hadn’t known Salman at all. (268)

Hitchens and Rushdie
Suddenly, neo-atheism doesn’t seem like an empty placeholder anymore. To criticize atheists so harshly for having convictions that are too strong, de Waal has to ignore all the societal and global issues religion is on the wrong side of. But when we consider the arguments on each side of the abortion or gay marriage or capital punishment or science education debates it’s easy to see that neo-atheists are only against religion because they feel it runs counter to the positive values of skeptical inquiry, egalitarian discourse, free society, and the ascendancy of reason and evidence.

            De Waal ends The Bonobo and the Atheist with a really corny section in which he imagines how a bonobo would lecture atheists about morality and the proper stance toward religion. “Tolerance of religion,” the bonobo says, “even if religion is not always tolerant in return, allows humanism to focus on what is most important, which is to build a better society based on natural human abilities” (237). Hitchens is of course no longer around to respond to the bonobo, but many of the same issues came up in his debate with Tony Blair (I hope no one reads this as an insult to the former PM), who at one point also argued that religion might be useful in building better societies—look at all the charity work they do for instance. Hitch, already showing signs of physical deterioration from the treatment for the esophageal cancer that would eventually kill him, responds,

The cure for poverty has a name in fact. It’s called the empowerment of women. If you give women some control over the rate at which they reproduce, if you give them some say, take them off the animal cycle of reproduction to which nature and some doctrine, religious doctrine, condemns them, and then if you’ll throw in a handful of seeds perhaps and some credit, the floor, the floor of everything in that village, not just poverty, but education, health, and optimism, will increase. It doesn’t matter—try it in Bangladesh, try it in Bolivia. It works. It works all the time. Name me one religion that stands for that—or ever has. Wherever you look in the world and you try to remove the shackles of ignorance and disease and stupidity from women, it is invariably the clerisy that stands in the way. (23:05)

            Later in the debate, Hitch goes on to argue in a way that sounds suspiciously like an echo of de Waal’s challenges to veneer theory and his advocacy for bottom-up morality. He says,

The injunction not to do unto others what would be repulsive if done to yourself is found in the Analects of Confucius if you want to date it—but actually it’s found in the heart of every person in this room. Everybody knows that much. We don’t require divine permission to know right from wrong. We don’t need tablets administered to us ten at a time in tablet form, on pain of death, to be able to have a moral argument. No, we have the reasoning and the moral suasion of Socrates and of our own abilities. We don’t need dictatorship to give us right from wrong. (25:43)

And as a last word in his case and mine I’ll quote this very de Waalian line from Hitch: “There’s actually a sense of pleasure to be had in helping your fellow creature. I think that should be enough” (35:42).







Let's Play Kill Your Brother: Fiction as a Moral Dilemma Game

The cousins Salamanca by Martin Woutisseth
            Season 3 of Breaking Bad opens with two expressionless Mexican men in expensive suits stepping out of a Mercedes, taking a look around the peasant village they’ve just arrived in, and then dropping to the ground to crawl on their knees and elbows to a candlelit shrine where they leave an offering to Santa Muerte, along with a crude drawing of the meth cook known as Heisenberg, marking him for execution. We later learn that the two men, Leonel and Marco, who look almost identical, are in fact twins (played by Daniel and Luis Moncada), and that they are the cousins of Tuco Salamanca, a meth dealer and cartel affiliate they believe Heisenberg betrayed and killed. We also learn that they kill people themselves as a matter of course, without registering the slightest emotion and without uttering a word to each other to mark the occasion. An episode later in the season, after we’ve been made amply aware of how coldblooded these men are, begins with a flashback to a time when they were just boys fighting over an action figure as their uncle talks cartel business on the phone nearby. After Marco gets tired of playing keep-away, he tries to provoke Leonel further by pulling off the doll’s head, at which point Leonel runs to his Uncle Hector, crying, “He broke my toy!”
“He’s just having fun,” Hector says, trying to calm him. “You’ll get over it.”

“No! I hate him!” Leonel replies. “I wish he was dead!”

Hector’s expression turns grave. After a moment, he calls Marco over and tells him to reach into the tub of melting ice beside his chair to get him a beer. When the boy leans over the tub, Hector shoves his head into the water and holds it there. “This is what you wanted,” he says to Leonel. “Your brother dead, right?” As the boy frantically pulls on his uncle’s arm trying to free his brother, Hector taunts him: “How much longer do you think he has down there? One minute? Maybe more? Maybe less? You’re going to have to try harder than that if you want to save him.” Leonel starts punching his uncle’s arm but to no avail. Finally, he rears back and punches Hector in the face, prompting him to release Marco and rise from his chair to stand over the two boys, who are now kneeling beside each other. Looking down at them, he says, “Family is all.”

The scene serves several dramatic functions. By showing the ruthless and violent nature of the boys’ upbringing, it intensifies our fear on behalf of Heisenberg, who we know is actually Walter White, a former chemistry teacher and family man from a New Mexico suburb who only turned to crime to make some money for his family before his lung cancer kills him. It also goes some distance toward humanizing the brothers by giving us insight into how they became the mute, mechanical murderers they are when we’re first introduced to them. The bond between the two men and their uncle will be important in upcoming episodes as well. But the most interesting thing about the scene is that it represents in microcosm the single most important moral dilemma of the whole series.

Marco and Leonel are taught to do violence if need be to protect their family. Walter, the show’s central character, gets involved in the meth business for the sake of his own family, and as he continues getting more deeply enmeshed in the world of crime he justifies his decisions at each juncture by saying he’s providing for his wife and kids. But how much violence can really be justified, we’re forced to wonder, with the claim that you’re simply protecting or providing for your family? The entire show we know as Breaking Bad can actually be conceived of as a type of moral exercise like the one Hector puts his nephews through, designed to impart or reinforce a lesson, though the lesson of the show is much more complicated. It may even be the case that our fondness for fictional narratives more generally, like the ones we encounter in novels and movies and TV shows, originated in our need as a species to develop and hone complex social skills involving powerful emotions and difficult cognitive calculations.

Jean Briggs
            Most of us watching Breaking Bad probably feel Hector went way too far with his little lesson, and indeed I’d like to think not too many parents or aunts and uncles would be willing to risk drowning a kid to reinforce the bond between him and his brother. But presenting children with frightening and stressful moral dilemmas to guide them through major lifecycle transitions that tend to arouse severe ambivalence—weaning, the birth of siblings, adoption—can be an effective way to encourage moral development and instill traditional values. The ethnographer Jean Briggs has found that among the Inuit peoples whose cultures she studies adults frequently engage children in what she calls “playful dramas” (173), which entail hypothetical moral dilemmas that put the children on the hot seat as they struggle to come up with a solution. She writes about these lessons, which strike many outsiders as a cruel form of teasing by the adults, in “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” a chapter she contributed to a 1994 anthology of anthropological essays on peace and conflict. In one example Briggs recounts,
A mother put a strange baby to her breast and said to her own nursling: “Shall I nurse him instead of you?” The mother of the other baby offered her breast to the rejected child and said: “Do you want to nurse from me? Shall I be your mother?” The child shrieked a protest shriek. Both mothers laughed. (176)
This may seem like sadism on the part of the mothers, but it probably functioned to soothe the bitterness arising from the child’s jealousy of a younger nursling. It would also help to settle some of the ambivalence toward the child’s mother, which comes about inevitably as a response to disciplining and other unavoidable frustrations. 
Another example Briggs describes seems even more pointlessly sadistic at first glance. A little girl’s aunt takes her hand and puts it on a little boy’s head, saying, “Pull his hair.” The girl doesn’t respond, so her aunt yanks on the boy’s hair herself, making him think the girl had done it. They quickly become embroiled in a “battle royal,” urged on by several adults who find it uproarious. These adults do, however, end up stopping the fight before any serious harm can be done. As horrible as this trick may seem, Briggs believes it serves to instill in the children a strong distaste for fighting because the experience is so unpleasant for them. They also learn “that it is better not to be noticed than to be playfully made the center of attention and laughed at” (177). What became clear to Briggs over time was that the teasing she kept witnessing wasn’t just designed to teach specific lessons but that it was also tailored to the child’s specific stage of development. She writes,
Indeed, since the games were consciously conceived of partly as tests of a child’s ability to cope with his or her situation, the tendency was to focus on a child’s known or expected difficulties. If a child had just acquired a sibling, the game might revolve around the question: “Do you love your new baby sibling? Why don’t you kill him or her?” If it was a new piece of clothing that the child had acquired, the question might be: “Why don’t you die so I can have it?” And if the child had been recently adopted, the question might be: “Who’s your daddy?” (172)
As unpleasant as these tests can be for the children, they never entail any actual danger—Inuit adults would probably agree Hector Salamanca went a bit too far—and they always take place in circumstances and settings where the only threats and anxieties come from the hypothetical, playful dilemmas and conflicts. Briggs explains,
A central idea of Inuit socialization is to “cause thought”: isumaqsayuq. According to [Arlene] Stairs, isumaqsayuq, in North Baffin, characterizes Inuit-style education as opposed to the Western variety. Warm and tender interactions with children help create an atmosphere in which thought can be safely caused, and the questions and dramas are well designed to elicit it. More than that, and as an integral part of thought, the dramas stimulate emotion. (173)
Part of the exercise then seems to be to introduce the children to their own feelings. Prior to having their sibling’s life threatened, the children may not have any idea how they’d feel in the event of that sibling’s death. After the test, however, it becomes much more difficult for them to entertain thoughts of harming their brother or sister—the thought alone will probably be unpleasant.
            Briggs also points out that the games send the implicit message to the children that they can be trusted to arrive at the moral solution. Hector knows Leonel won’t let his brother drown—and Leonel learns that his uncle knows this about him. The Inuit adults who tease and tempt children are letting them know they have faith in the children’s ability to resist their selfish or aggressive impulses. Discussing Briggs’s work in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame, anthropologist Christopher Boehm suggests that evolution has endowed children with the social and moral emotions we refer to collectively as consciences, but these inborn moral sentiments need to be activated and shaped through socialization. He writes,
On the one side there will always be our usefully egoistic selfish tendencies, and on the other there will be our altruistic or generous impulses, which also can advance our fitness because altruism and sympathy are valued by our peers. The conscience helps us to resolve such dilemmas in ways that are socially acceptable, and these Inuit parents seem to be deliberately “exercising” the consciences of their children to make morally socialized adults out of them. (226)
The Inuit-style moral dilemma games seem strange, even shocking, to people from industrialized societies, and so it’s clear they’re not a normal part of children’s upbringing in every culture. They don’t even seem to be all that common among hunter-gatherers outside the region of the Arctic. Boehm writes, however,
Deliberately and stressfully subjecting children to nasty hypothetical dilemmas is not universal among foraging nomads, but as we’ll see with Nisa, everyday life also creates real moral dilemmas that can involve Kalahari children similarly. (226)
Boehm goes on to recount an episode from anthropologist Marjorie Shostak’s famous biography Nisa: The Life and Words of a !Kung Woman to show that parents all the way on the opposite side of the world from where Briggs did her fieldwork sometimes light on similar methods for stimulating their children’s moral development.
Nisa seems to have been a greedy and impulsive child. When her pregnant mother tried to wean her, she would have none of it. At one point, she even went so far as to sneak into the hut while her mother was asleep and try to suckle without waking her up. Throughout the pregnancy, Nisa continually expressed ambivalence toward the upcoming birth of her sibling, so much so that her parents anticipated there might be some problems. The !Kung resort to infanticide in certain dire circumstances, and Nisa’s parents probably reasoned she was at least somewhat familiar with the coping mechanism many other parents used when killing a newborn was necessary. What they’d do is treat the baby as an object, not naming it or in any other way recognizing its identity as a family member. Nisa explained to Shostak how her parents used this knowledge to impart a lesson about her baby brother.
After he was born, he lay there, crying. I greeted him, “Ho, ho, my baby brother! Ho, ho, I have a little brother! Some day we’ll play together.” But my mother said, “What do you think this thing is? Why are you talking to it like that? Now, get up and go back to the village and bring me my digging stick.” I said, “What are you going to dig?” She said, “A hole. I’m going to dig a hole so I can bury the baby. Then you, Nisa, will be able to nurse again.” I refused. “My baby brother? My little brother? Mommy, he’s my brother! Pick him up and carry him back to the village. I don’t want to nurse!” Then I said, “I’ll tell Daddy when he comes home!” She said, “You won’t tell him. Now, run back and bring me my digging stick. I’ll bury him so you can nurse again. You’re much too thin.” I didn’t want to go and started to cry. I sat there, my tears falling, crying and crying. But she told me to go, saying she wanted my bones to be strong. So, I left and went back to the village, crying as I walked. (The weaning episode occurs on pgs. 46-57)
Again, this may strike us as cruel, but by threatening her brother’s life, Nisa’s mother succeeded in triggering her natural affection for him, thus tipping the scales of her ambivalence to ensure the protective and loving feelings won out over the bitter and jealous ones. This example was extreme enough that Nisa remembered it well into adulthood, but Boehm sees it as evidence that real life reliably offers up dilemmas parents all over the world can use to instill morals in their children. He writes, 
I believe that all hunter-gatherer societies offer such learning experiences, not only in the real-life situations children are involved with, but also in those they merely observe. What the Inuit whom Briggs studied in Cumberland Sound have done is to not leave this up to chance. And the practice would appear to be widespread in the Arctic. Children are systematically exposed to life’s typical stressful moral dilemmas, and often hypothetically, as a training ground that helps to turn them into adults who have internalized the values of their groups. (234)
One of the reasons such dilemmas, whether real or hypothetical or merely observed, are effective as teaching tools is that they bypass the threat to personal autonomy that tends to accompany direct instruction. Imagine Tío Salamanca simply scolding Leonel for wishing his brother dead—it would have only aggravated his resentment and sparked defiance. Leonel would probably also harbor some bitterness toward his uncle for unjustly defending Marco. In any case, he would have been stubbornly resistant to the lesson. Winston Churchill nailed the sentiment when he said, “Personally, I am always ready to learn, although I don’t always like being taught.” The Inuit-style moral dilemmas force the children to come up with the right answer on their own, a task that requires the integration and balancing of short- and long-term desires, individual and group interests, and powerful albeit contradictory emotions. The skills that go into solving such dilemmas are indistinguishable from the qualities we recognize as maturity, self-knowledge, generosity, poise, and wisdom.
For the children Briggs witnessed being subjected to these moral tests, the understanding that the dilemmas were in fact only hypothetical developed gradually as they matured. For the youngest ones, the stakes were real and the solutions were never clear at the outset. Briggs explains that
while the interaction between small children and adults was consistently good-humored, benign, and playful on the part of the adults, it taxed the children to—or beyond—the limits of their ability to understand, pushing them to expand their horizons, and testing them to see how much they had grown since the last encounter. (173)
What this suggests is that there isn’t always a simple declarative lesson—a moral to the story, as it were—imparted in these games. Instead, the solutions to the dilemmas can often be open-ended, and the skills the children practice can thus be more general and abstract than some basic law or principle. Briggs goes on,
Adult players did not make it easy for children to thread their way through the labyrinth of tricky proposals, questions, and actions, and they did not give answers to the children or directly confirm the conclusions the children came to. On the contrary, questioning a child’s first facile answers, they turned situations round and round, presenting first one aspect then another, to view. They made children realize their emotional investment in all possible outcomes, and then allowed them to find their own way out of the dilemmas that had been created—or perhaps, to find ways of living with unresolved dilemmas. Since children were unaware that the adults were “only playing,” they could believe that their own decisions would determine their fate. And since the emotions aroused in them might be highly conflicted and contradictory—love as well as jealousy, attraction as well as fear—they did not always know what they wanted to decide. (174-5)
As the children mature, they become more adept at distinguishing between real and hypothetical problems. Indeed, Briggs suggests one of the ways adults recognize children’s budding maturity is that they begin to treat the dilemmas as a game, ceasing to take them seriously, and ceasing to take themselves as seriously as they did when they were younger.
Brian Boyd
            In his book On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd theorizes that the fictional narratives that humans engage one another with in every culture all over the world, be they in the form of religious myths, folklore, or plays and novels, can be thought of as a type of cognitive play—similar to the hypothetical moral dilemmas of the Inuit. He sees storytelling as an adaptation that encourages us to train the mental faculties we need to function in complex societies. The idea is that evolution ensures that adaptive behaviors tend to be pleasurable, and thus many animals playfully and joyously engage in activities in low-stakes, relatively safe circumstances that will prepare them to engage in similar activities that have much higher stakes and are much more dangerous. Boyd explains,
The more pleasure that creatures have in play in safe contexts, the more they will happily expend energy in mastering skills needed in urgent or volatile situations, in attack, defense, and social competition and cooperation. This explains why in the human case we particularly enjoy play that develops skills needed in flight (chase, tag, running) and fight (rough-and-tumble, throwing as a form of attack at a distance), in recovery of balance (skiing, surfing, skateboarding), and individual and team games. (92)
The skills most necessary to survive and thrive in human societies are the same ones Inuit adults help children develop with the hypothetical dilemmas Briggs describes. We should expect fiction then to feature similar types of moral dilemmas. Some stories may be designed to convey simple messages—“Don’t hurt your brother,” “Don’t stray from the path”—but others might be much more complicated; they may not even have any viable solutions at all. “Art prepares minds for open-ended learning and creativity,” Boyd writes; “fiction specifically improves our social cognition and our thinking beyond the here and now” (209).
One of the ways the cognitive play we call novels or TV shows differs from Inuit dilemma games is that the fictional characters take over center stage from the individual audience members. Instead of being forced to decide on a course of action ourselves, we watch characters we’ve become emotionally invested in try to come up with solutions to the dilemmas. When these characters are first introduced to us, our feelings toward them will be based on the same criteria we’d apply to real people who could potentially become a part of our social circles. Boyd explains,
Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch. (130)
We favor characters who are good team players—who communicate honestly, who show concern for others, and who direct aggression toward enemies and cheats—for obvious reasons, but we also assess them in terms of what they might contribute to the group. Characters with exceptional strength, beauty, intelligence, or artistic ability are always especially attention-worthy. Of course, characters with qualities that make them sometimes an asset and sometimes a liability represent a moral dilemma all on their own—it’s no wonder such characters tend to be so compelling.
            The most common fictional dilemma pits a character we like against one or more characters we hate—the good team player versus the power- or money-hungry egoist. We can think of the most straightforward plot as an encroachment of chaos on the providential moral order we might otherwise take for granted. When the bad guy is finally defeated, it’s as if a toy that was snatched away from us has just been returned. We embrace the moral order all the more vigorously. But of course our stories aren’t limited to this one basic formula. Around the turn of the last century, the French writer Georges Polti, following up on the work of Italian playwright Carlo Gozzi, tried to compile a comprehensive list of all the basic plots in plays and novels, and flipping through his book The Thirty-Six Dramatic Situations, you find that with few exceptions (“Daring Enterprise,” “The Enigma,” “Recovery of a Lost One”) the situations aren’t simply encounters between characters with conflicting goals, or characters who run into obstacles in chasing after their desires. The conflicts are nearly all moral, either between a virtuous character and a less virtuous one or between selfish or greedy impulses and more altruistic ones. Polti’s book could be called The Thirty-Odd Moral Dilemmas in Fiction. Hector Salamanca would be happy (not really) to see the thirteenth situation: “Enmity of Kinsmen,” the first example of which is “Hatred of Brothers” (49).
Sherlock Holmes - from "The Sign of Four"
One type of fictional dilemma that seems to be particularly salient in American society today pits our impulse to punish wrongdoers against our admiration for people with exceptional abilities. Characters like Walter White in Breaking Bad win us over with qualities like altruism, resourcefulness, and ingenuity—but then they go on to behave in strikingly, though somehow not obviously, immoral ways. Variations on Conan Doyle’s Sherlock Holmes abound; he’s the supergenius who’s also a dick (get the double entendre?): the BBC’s Sherlock (by far the best), the movies starring Robert Downey Jr., the upcoming series featuring an Asian female Watson (Lucy Liu)—plus all the minor variations like The Mentalist and House.
Though the idea that fiction is a type of low-stakes training simulation to prepare people cognitively and emotionally to take on difficult social problems in real life may not seem all that earth-shattering, conceiving of stories as analogous to Inuit moral dilemmas designed to exercise children’s moral reasoning faculties can nonetheless help us understand why worries about the examples set by fictional characters are so often misguided. Many parents and teachers noisily complain about sex or violence or drug use in media. Academic literary critics condemn the way this or that author portrays women or minorities. Underlying these concerns is the crude assumption that stories simply encourage audiences to imitate the characters, that those audiences are passive receptacles for the messages—implicit or explicit—conveyed through the narrative. To be fair, these worries may be well placed when it comes to children so young they lack the cognitive sophistication necessary for separating their thoughts and feelings about protagonists from those they have about themselves, and are thus prone to take the hero for a simple model of emulation-worthy behavior. But, while Inuit adults communicate to children that they can be trusted to arrive at a right or moral solution, the moralizers in our culture betray their utter lack of faith in the intelligence and conscience of the people they try to protect from the corrupting influence of stories with imperfect or unsavory characters. 

           This type of self-righteous and overbearing attitude toward readers and viewers strikes me as more likely by orders of magnitude to provoke defiant resistance to moral lessons than the North Baffin’s isumaqsayuq approach. In other words, a good story is worth a thousand sermons. But if the moral dilemma at the core of the plot has an easy solution—if you can say precisely what the moral of the story is—it’s probably not a very good story.

The Imp of the Underground and the Literature of Low Status

Image courtesy of Block Magazine
The one overarching theme in literature, and I mean all literature since there’s been any to speak of, is injustice. Does the girl get the guy she deserves? If so, the work is probably commercial, as opposed to literary, fiction. If not, then the reason bears pondering. Maybe she isn’t pretty enough, despite her wit and aesthetic sophistication, so we’re left lamenting the shallowness of our society’s males. Maybe she’s of a lower caste, despite her unassailable virtue, in which case we’re forced to question our complacency before morally arbitrary class distinctions. Or maybe the timing was just off—cursed fate in all her fickleness. Another literary work might be about the woman who ends up without the fulfilling career she longed for and worked hard to get, in which case we may blame society’s narrow conception of femininity, as evidenced by all those damn does-the-girl-get-the-guy stories.

            The prevailing theory of what arouses our interest in narratives focuses on the characters’ goals, which magically, by some as yet undiscovered cognitive mechanism, become our own. But plots often catch us up before any clear goals are presented to us, and our partisanship on behalf of a character easily endures shifting purposes. We as readers and viewers are not swept into stories through the transubstantiation of someone else’s striving into our own, with the protagonist serving as our avatar as we traverse the virtual setting and experience the pre-orchestrated plot. Rather, we reflexively monitor the character for signs of virtue and for a capacity to contribute something of value to his or her community, the same way we, in our nonvirtual existence, would monitor and assess a new coworker, classmate, or potential date. While suspense in commercial fiction hinges on high-stakes struggles between characters easily recognizable as good and those easily recognizable as bad, and comfortably condemnable as such, forward momentum in literary fiction—such as it is—depends on scenes in which the protagonist is faced with temptations, tests of virtue, moral dilemmas.

The strain and complexity of coming to some sort of resolution to these dilemmas often serves as a theme in itself, a comment on the mad world we live in, where it’s all but impossible to discern right from wrong. Indeed, the most common emotional struggle depicted in literature is that between the informal, even intimate handling of moral evaluation—which comes naturally to us owing to our evolutionary heritage as a group-living species—and the official, systematized, legal or institutional channels for determining merit and culpability that became unavoidable as societies scaled up exponentially after the advent of agriculture. These burgeoning impersonal bureaucracies are all too often ill-equipped to properly weigh messy mitigating factors, and they’re all too vulnerable to subversion by unscrupulous individuals who know how to game them. Psychopaths who ought to be in prison instead become CEOs of multinational investment firms, while sensitive and compassionate artists and humanitarians wind up taking lowly day jobs at schools or used book stores. But the feature of institutions and bureaucracies—and of complex societies more generally—that takes the biggest toll on our Pleistocene psyches, the one that strikes us as the most glaring injustice, is their stratification, their arrangement into steeply graded hierarchies.

Unlike our hierarchical ape cousins, all present-day societies still living in small groups as nomadic foragers, like those our ancestors lived in throughout the epoch that gave rise to the suite of traits we recognize as uniquely human, collectively enforce an ethos of egalitarianism. As anthropologist Christopher Boehm explains in his book Hierarchy in the Forest: The Evolution of Egalitarianism,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

Since humans evolved from a species that was ancestral to both chimpanzees and gorillas, we carry in us many of the emotional and behavioral capacities that support hierarchies. But, during all those millennia of egalitarianism, we also developed an instinctive distaste for behaviors that undermine an individual’s personal sovereignty. “On their list of serious moral transgressions,” Boehm explains,

hunter-gatherers regularly proscribe the enactment of behavior that is politically overbearing. They are aiming at upstarts who threaten the autonomy of other group members, and upstartism takes various forms. An upstart may act the bully simply because he is disposed to dominate others, or he may become selfishly greedy when it is time to share meat, or he may want to make off with another man’s wife by threat or by force. He (or sometimes she) may also be a respected leader who suddenly begins to issue direct orders… An upstart may simply take on airs of superiority, or may aggressively put others down and thereby violate the group’s idea of how its main political actors should be treating one another. (43)

In a band of thirty people, it’s possible to keep a vigilant eye on everyone and head off potential problems. But, as populations grow, encounters with strangers in settings where no one knows one another open the way for threats to individual autonomy and casual insults to personal dignity. And, as professional specialization and institutional complexity increase in pace with technological advancement, power structures become necessary for efficient decision-making. Economic inequality then takes hold as a corollary of professional inequality.

None of this is to suggest that the advance of civilization inevitably leads to increasing injustice. In fact, per capita murder rates are much higher in hunter-gatherer societies. Nevertheless, the impersonal nature of our dealings with others in the modern world often strikes us as overly conducive to perverse incentives and unfair outcomes. And even the most mundane signals of superior status or the most subtle expressions of power, though officially sanctioned, can be maddening. Compare this famous moment in literary history to Boehm’s account of hunter-gatherer political philosophy:

I was standing beside the billiard table, blocking the way unwittingly, and he wanted to pass; he took me by the shoulders and silently—with no warning or explanation—moved me from where I stood to another place, and then passed by as if without noticing. I could have forgiven a beating, but I simply could not forgive his moving me and in the end just not noticing me. (49)

The billiard player's failure to acknowledge his autonomy outrages the narrator, who then considers attacking the man who has treated him with such disrespect. But he can’t bring himself to do it. He explains,

I turned coward not from cowardice, but from the most boundless vanity. I was afraid, not of six-foot-tallness, nor of being badly beaten and chucked out the window; I really would have had physical courage enough; what I lacked was sufficient moral courage. I was afraid that none of those present—from the insolent marker to the last putrid and blackhead-covered clerk with a collar of lard who was hanging about there—would understand, and that they would all deride me if I started protesting and talking to them in literary language. Because among us to this day it is impossible to speak of a point of honor—that is, not honor, but a point of honor (point d’honneur) otherwise than in literary language. (50)

The languages of law and practicality are the only ones whose legitimacy is recognized in modern societies. The language of morality used to describe sentiments like honor has been consigned to literature. This man wants to exact his revenge for the slight he suffered, but that would require his revenge to be understood by witnesses as such. The derision he can count on from all the bystanders would just compound the slight. In place of a close-knit moral community, there is only a loose assortment of strangers. And so he has no recourse.

            The character in this scene could be anyone. Males may be more keyed into the physical dimension of domination and more prone to react with physical violence, but females likewise suffer from slights and belittlements, and react aggressively, often by attacking their tormenter's reputation through gossip. Treating a person of either gender as an insensate obstacle is easier when that person is a stranger you’re unlikely ever to encounter again. But another dynamic is at play in the scene which makes it still easier—almost inevitable. After being unceremoniously moved aside, the narrator becomes obsessed with the man who treated him so dismissively. Desperate to even the score, he ends up stalking the man, stewing resentfully, trying to come up with a plan. He writes,

And suddenly… suddenly I got my revenge in the simplest, the most brilliant way! The brightest idea suddenly dawned on me. Sometimes on holidays I would go to Nevsky Prospect between three and four, and stroll along the sunny side. That is, I by no means went strolling there, but experienced countless torments, humiliations and risings of bile: that must have been just what I needed. I darted like an eel among the passers-by, in a most uncomely fashion, ceaselessly giving way now to generals, now to cavalry officers and hussars, now to ladies; in those moments I felt convulsive pains in my heart and a hotness in my spine at the mere thought of the measliness of my attire and the measliness and triteness of my darting little figure. This was a torment of torments, a ceaseless, unbearable humiliation from the thought, which would turn into a ceaseless and immediate sensation, of my being a fly before that whole world, a foul, obscene fly—more intelligent, more developed, more noble than everyone else—that went without saying—but a fly, ceaselessly giving way to everyone, humiliated by everyone, insulted by everyone. (52)

So the indignity, it seems, was born not of being moved aside like a piece of furniture so much as of being afforded absolutely no status. That’s why being beaten would have been preferable; a beating implies a modicum of worthiness in that it demands recognition, effort, even risk, no matter how slight.

            The idea that occurs to the narrator for the perfect revenge requires that he first remedy the outward signals of his lower social status, “the measliness of my attire and the measliness… of my darting little figure,” as he calls them. The catch is that to don the proper attire for leveling a challenge, he has to borrow money from a man he works with—which only adds to his daily feelings of humiliation. Psychologists Derek Rucker and Adam Galinsky have conducted experiments demonstrating that people display a disturbing readiness to compensate for feelings of powerlessness and low status by making pricy purchases, even though in the long run such expenditures only serve to perpetuate their lowly economic and social straits. The irony is heightened in the story when the actual revenge itself, the trappings for which were so dearly purchased, turns out to be so bathetic.

Suddenly, within three steps of my enemy, I unexpectedly decided, closed my eyes, and—we bumped solidly shoulder against shoulder! I did not yield an inch and passed by on perfectly equal footing! He did not even look back and pretended not to notice: but he only pretended, I’m sure of that. To this day I’m sure of it! Of course, I got the worst of it; he was stronger, but that was not the point. The point was that I had achieved my purpose, preserved my dignity, yielded not a step, and placed myself publicly on an equal social footing with him. I returned home perfectly avenged for everything. (55)

But this perfect vengeance has cost him not only the price of a new coat and hat; it has cost him a full two years of obsession, anguish, and insomnia as well. The implication is that being of lowly status is a constant psychological burden, one that makes people so crazy they become incapable of making rational decisions.

Dostoevsky
            Literature buffs will have recognized these scenes from Dostoevsky’s Notes from Underground (as translated by Richard Pevear and Larissa Volokhonsky), which satirizes the idea of a society based on the principle of “rational egotism” as symbolized by N.G. Chernyshevsky’s image of a “crystal palace” (25), a well-ordered utopia in which every citizen pursues his or her own rational self-interests. Dostoevsky’s underground man hates the idea because, regardless of how effectively such a society may satisfy people’s individual needs, the rigid conformity it would demand would be intolerable. The supposed utopia, then, could never satisfy people’s true interests. He argues,

That’s just the thing, gentlemen, that there may well exist something that is dearer for almost every man than his very best profit, or (so as not to violate logic) that there is this one most profitable profit (precisely the omitted one, the one we were just talking about), which is chiefer and more profitable than all other profits, and for which a man is ready, if need be, to go against all laws, that is, against reason, honor, peace, prosperity—in short, against all these beautiful and useful things—only so as to attain this primary, most profitable profit which is dearer to him than anything else. (22)

The underground man cites examples of people behaving against their own best interests in this section, which serves as a preface to the story of his revenge against the billiard player who so blithely moves him aside. The way he explains this “very best profit” which makes people like himself behave in counterproductive, even self-destructive ways is to suggest that nothing else matters unless everyone’s freedom to choose how to behave is held inviolate. He writes,

One’s own free and voluntary wanting, one’s own caprice, however wild, one’s own fancy, though chafed sometimes to the point of madness—all this is that same most profitable profit, the omitted one, which does not fit into any classification, and because of which all systems and theories are constantly blown to the devil… Man needs only independent wanting, whatever this independence may cost and wherever it may lead. (25-6)
Arthur Rackham's Imp of Perverse Illustration

Notes from Underground was originally published in 1864. But the underground man echoes, wittingly or not, the narrator of Edgar Allan Poe’s story from almost twenty years earlier, "The Imp of the Perverse," who posits an innate drive to perversity, explaining,

Through its promptings we act without comprehensible object. Or if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say that through its promptings we act for the reason that we should not. In theory, no reason can be more unreasonable, but in reality there is none so strong. With certain minds, under certain circumstances, it becomes absolutely irresistible. I am not more sure that I breathe, than that the conviction of the wrong or impolicy of an action is often the one unconquerable force which impels us, and alone impels us, to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution to ulterior elements. (403)

This narrator’s suggestion of the irreducibility of the impulse notwithstanding, it’s noteworthy how often the circumstances that induce its expression include the presence of an individual of higher status.
Dov Cohen

            The famous shoulder bump in Notes from Underground has an uncanny parallel in experimental psychology. In 1996, Dov Cohen, Richard Nisbett, and their colleagues published the research article “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” in which they report the results of a comparison of the cognitive and physiological responses of southern males to being bumped in a hallway and casually called an asshole with those of northern males. The study showed that whereas men from northern regions were usually amused by the run-in, southern males were much more likely to see it as an insult and a threat to their manhood, and they were much more likely to respond violently. The cortisol and testosterone levels of southern males spiked—the clever experimental setup allowed measures before and after—and these men reported believing physical confrontation was the appropriate way to redress the insult. The way Cohen and Nisbett explain the difference is that the “culture of honor” found in southern regions originally developed as a safeguard for men who lived as herders. Cultures that arise in farming regions place less emphasis on manly honor because farmland is difficult to steal. But if word gets out that a herder is soft, then his livelihood is at risk. Cohen and Nisbett write,
Richard Nisbett

Such concerns might appear outdated for southern participants now that the South is no longer a lawless frontier based on a herding economy. However, we believe these experiments may also hint at how the culture of honor has sustained itself in the South. It is possible that the culture-of-honor stance has become “functionally autonomous” from the material circumstances that created it. Culture of honor norms are now socially enforced and perpetuated because they have become embedded in social roles, expectations, and shared definitions of manhood. (958)

            More recently, in a 2009 article titled “Low-Status Compensation: A Theory for Understanding the Role of Status in Cultures of Honor,” psychologist P.J. Henry takes another look at Cohen and Nisbett’s findings and offers another interpretation based on his own further experimentation. Henry’s key insight is that herding peoples are often considered to be of lower status than people with other professions and lifestyles. After establishing that the southern communities with a culture of honor are often stigmatized with negative stereotypes—drawling accents signaling low intelligence, high incidence of incest and drug use, etc.—both in the minds of outsiders and those of the people themselves, Henry suggests that a readiness to resort to violence probably isn’t now and may not ever have been adaptive in terms of material benefits.
P.J. Henry

An important perspective of low-status compensation theory is that low status is a stigma that brings with it lower psychological worth and value. While it is true that stigma also often accompanies lower economic worth and, as in the studies presented here, is sometimes defined by it (i.e., those who have lower incomes in a society have more of a social stigma compared with those who have higher incomes), low-status compensation theory assumes that it is psychological worth that is being protected, not economic or financial worth. In other words, the compensation strategies used by members of low-status groups are used in the service of psychological self-protection, not as a means of gaining higher status, higher income, more resources, etc. (453)

And this conception of honor brings us closer to the observations of the underground man and Poe’s boastful murderer. If psychological worth is what’s being defended, then economic considerations fall by the wayside. Unfortunately, since our financial standing tends to be so closely tied to our social standing, our efforts to protect our sense of psychological worth have a nasty tendency to backfire in the long run.

            Henry found evidence for the importance of psychological reactance, as opposed to cultural norms, in causing violence when he divided participants of his study into either high or low status categories and then had them respond to questions about how likely they would be to respond to insults with physical aggression. But before being asked about the propriety of violent reprisals, half of the members of each group were asked to recall as vividly as they could a time in their lives when they felt valued by their community. Henry describes the findings thus:

When lower status participants were given the opportunity to validate their worth, they were less likely to endorse lashing out aggressively when insulted or disrespected. Higher status participants were unaffected by the manipulation. (463)

The implication is that people who feel less valuable than others, a condition that tends to be associated with low socioeconomic status, are quicker to retaliate because they are almost constantly on edge, preoccupied at almost every moment with assessments of their standing in relation to others. Aside from a readiness to engage in violence, this type of obsessive vigilance for possible slights, and the feeling of powerlessness that attends it, can be counted on to keep people in a constant state of stress. The massive longitudinal study of British Civil Service employees called the Whitehall Study, which tracks the health outcomes of people at the various levels of the bureaucratic hierarchy, has found that the stress associated with low status also has profound effects on our physical well-being.
When Americans are asked to imagine an ideal distribution of wealth, the results show far less stratification than actually exists.

            Though it may seem that violence-prone poor people occupying lowly positions on societal and professional totem poles are responsible for aggravating and prolonging their own misery because they tend to spend extravagantly and lash out at their perceived overlords with nary a concern for the consequences, the regularity with which low status leads to self-defeating behavior suggests the impulses are much more deeply rooted than some lazily executed weighing of pros and cons. If the type of wealth or status inequality the underground man finds himself on the short end of had begun to take root in societies like the ones Christopher Boehm describes, a high-risk attempt at leveling the playing field would not only have been understandable—it would have been morally imperative. In a band of nomadic foragers, a man endeavoring to knock a would-be alpha down a few pegs could count on the endorsement of most of the other group members. And the success rate for re-establishing and maintaining egalitarianism would have been heartening. Today, we are forced to live with inequality, even though beyond a certain point most people (regardless of political affiliation) see it as an injustice. 

            Some of the functions of literature, then, are to help us imagine just how intolerable life on the bottom can be, sympathize with those who get trapped in downward spirals of self-defeat, and begin to imagine what a more just and equitable society might look like. The catch is that we will be put off by characters who mistreat others or simply show a dearth of redeeming qualities.



Also read The Adaptive Appeal of Bad Boys

and Can't Win for Losing: Why there are So Many Losers in Literature and Why it has to Change

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins – Part 3 of A Crash Course in Multilevel Selection Theory

Start with Part 1.
In a 1969 account of her time in Labrador studying the culture of the Montagnais-Naskapi people, anthropologist Eleanor Leacock describes how a man named Thomas, who was serving as her guide and informant, responded to two men they encountered while far from home on a hunting trip. The men, whom Thomas recognized but didn’t know very well, were on the brink of starvation. Even though it meant ending the hunting trip early and hence bringing back fewer furs to trade, Thomas gave the hungry men all the flour and lard he was carrying. Leacock figured that Thomas must have felt at least somewhat resentful for having to cut short his trip and that he was perhaps anticipating some return favor from the men in the future. But Thomas didn’t seem the least bit reluctant to help or frustrated by the setback. Leacock kept pressing him for an explanation until he got annoyed with her probing. She writes,
Eleanor Leacock
This was one of the very rare times Thomas lost patience with me, and he said with deep, if suppressed anger, “suppose now, not to give them flour, lard—just dead inside.” More revealing than the incident itself were the finality of his tone and the inference of my utter inhumanity in raising questions about his action. (Quoted in Boehm 219)
The phrase “just dead inside” expresses how deeply internalized the ethic of sympathetic giving is for people like Thomas who live in cultures more similar to those our earliest human ancestors created at the time, around 45,000 years ago, when they began leaving evidence of engaging in all the unique behaviors that are the hallmarks of our species. The Montagnais-Naskapi don’t qualify as an example of what anthropologist Christopher Boehm labels Late Pleistocene Appropriate, or LPA, cultures because they had been involved in fur trading with people from industrialized communities going back long before their culture was first studied by ethnographers. But Boehm includes Leacock’s description in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame because he believes Thomas’s behavior is in fact typical of nomadic foragers and because, infelicitously for his research, standard ethnographies seldom cover encounters like the one Thomas had with those hungry acquaintances of his.
            In our modern industrialized civilization, people donate blood, volunteer to fight in wars, sign over percentages of their income to churches, and pay to keep organizations like Doctors without Borders and Human Rights Watch in operation even though the people they help live in far-off countries most of us will never visit. One approach to explaining how this type of extra-familial generosity could have evolved is to suggest that people who live in advanced societies like ours are, in an important sense, not in their natural habitat. Among evolutionary psychologists, it has long been assumed that in humans’ ancestral environments, most of the people individuals encountered would either be close kin who carried many genes in common, or at the very least members of a moderately stable group they could count on running into again, at which time they would be disposed to repay any favors. Once you take kin selection and reciprocal altruism into account, the consensus held, there was not much left to explain. Whatever small acts of kindness weren’t directed toward kin or done with an expectation of repayment were, in such small groups, probably performed for the sake of impressing all the witnesses and thus improving the social status of the performer. As the biologist Michael Ghiselin once famously put it, “Scratch an altruist and watch a hypocrite bleed.” But this conception of what evolutionary psychologists call the Environment of Evolutionary Adaptedness, or EEA, never sat right with Boehm.
            One problem with the standard selfish gene scenario that has just recently come to light is that modern hunter-gatherers, no matter where in the world they live, tend to form bands made up of high percentages of non-related or distantly related individuals. In an article published in Science in March of 2011, anthropologist Kim Hill and his colleagues report the findings of their analysis of thirty-two hunter-gatherer societies. The main conclusion of the study is that the members of most bands are not closely enough related for kin selection to sufficiently account for the high levels of cooperation ethnographers routinely observe. Assuming present-day forager societies are representative of the types of groups our Late Pleistocene ancestors lived in, we can rule out kin selection as a likely explanation for altruism of the sort displayed by Thomas or by modern philanthropists in complex civilizations. Boehm offers us a different scenario, one that relies on hypotheses derived from ethological studies of apes and archeological records of our human prehistory as much as on any abstract mathematical accounting of the supposed genetic payoffs of behaviors.
            In three cave paintings discovered in Spain that probably date to the dawn of the Holocene epoch around 12,000 years ago, groups of men are depicted with what appear to be bows lifted above their heads in celebration while another man lies dead nearby with one arrow from each of them sticking out of his body. We can only speculate about what these images might have meant to the people who created them, but Boehm points out that all extant nomadic foraging peoples, no matter what part of the world they live in, are periodically forced to reenact dramas that resonate uncannily well with these scenes portrayed in ancient cave art. “Given enough time,” he writes, “any band society is likely to experience a problem with a homicide-prone unbalanced individual. And predictably band members will have to solve the problem by means of execution” (253). One of the more gruesome accounts of such an incident he cites comes from Richard Lee’s ethnography of !Kung Bushmen. After a man named /Twi had killed two men, Lee writes, “A number of people decided that he must be killed.” According to Lee’s informant, a man named =Toma (the symbols before the names represent clicks), the first attempt to kill /Twi was botched, allowing him to return to his hut, where a few people tried to help him. But he ended up becoming so enraged that he grabbed a spear and stabbed a woman in the face with it. When the woman’s husband came to her aid, /Twi shot him with a poisoned arrow, killing him and bringing his total body count to four. =Toma continues the story,
Now everyone took cover, and others shot at /Twi, and no one came to his aid because all those people had decided he had to die. But he still chased after some, firing arrows, but he didn’t hit any more…Then they all fired on him with poisoned arrows till he looked like a porcupine. Then he lay flat. All approached him, men and women, and stabbed his body with spears even after he was dead. (261-2)
The two most important elements of this episode for Boehm are the fact that the death sentence was arrived at through a partial group consensus which ended up being unanimous, and that it was carried out with weapons that had originally been developed for hunting. But this particular case of collectively enacted capital punishment was odd not just in how clumsy it was. Boehm writes,
In this one uniquely detailed description of what seems to begin as a delegated execution and eventually becomes a fully communal killing, things are so chaotic that it’s easy to understand why with hunter-gatherers the usual mode of execution is to efficiently delegate a kinsman to quickly kill the deviant by ambush. (261)
The prevailing wisdom among evolutionary psychologists has long been that any appearance of group-level adaptation, like the collective killing of a dangerous group member, must be an illusory outcome caused by selection at the level of individuals or families. As Steven Pinker explains, “If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate.” To demonstrate that some trait or behavior humans reliably engage in really is for the sake of the group as opposed to the individual engaging in it, there would have to be some conflict between the two motives—serving the group would have to entail incurring some kind of cost for the individual. Pinker explains,
It’s only when humans display traits that are disadvantageous to themselves while benefiting their group that group selection might have something to add. And this brings us to the familiar problem which led most evolutionary biologists to reject the idea of group selection in the 1960s. Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.
The ever-present potential for cooperative or altruistic group norms to be subverted by selfish individuals keen on exploitation is known in game theory as the free rider problem. To see how strong selfish individuals can lord over groups of their conspecifics we can look to the hierarchically organized bands great apes naturally form.
            In groups of chimpanzees, for instance, an alpha male gets to eat his fill of the most nutritious food, even going so far at times as to seize meat from the subordinates who hunted it down. The alpha chimp also works to secure, as best he can, sole access to reproductively receptive females. For a hierarchical species like this, status is a winner-take-all competition, and so genes for dominance and cutthroat aggression proliferate. Subordinates tolerate being bullied because they know the more powerful alpha will probably kill them if they try to stand up for themselves. If instead of mounting some ill-fated resistance, however, they simply bide their time, they may eventually grow strong enough to more effectively challenge for the top position. Meanwhile, they can also try to sneak off with females to couple behind the alpha’s back. Boehm suggests that two competing motives keep hierarchies like this in place: one is a strong desire for dominance and the other is a penchant for fear-based submission. What this means is that subordinates only ever submit ambivalently. They even have a recognizable vocalization, which Boehm transcribes as the “waa,” that they use to signal their discontent. In his 1999 book Hierarchy in the Forest: The Evolution of Egalitarian Behavior, Boehm explains,
When an alpha male begins to display and a subordinate goes screaming up a tree, we may interpret this as a submissive act of fear; but when that same subordinate begins to waa as the display continues, it is an open, hostile expression of insubordination. (167)
Since the distant ancestor humans shared in common with chimpanzees likely felt this same ambivalence toward alphas, Boehm theorizes that it served as a preadaptation for the type of treatment modern human bullies can count on in every society of nomadic foragers anthropologists have studied. “I believe,” he writes, “that a similar emotional and behavioral orientation underlies the human moral community’s labeling of domination behaviors as deviant” (167).
            Boehm has found accounts of subordinate chimpanzees, bonobos, and even gorillas banding together with one or more partners to take on an excessively domineering alpha—though there was only one such case with gorillas, and the animals in question lived in captivity. But humans are much better at this type of coalition building. Two of the most crucial developments in our own lineage that led to the differences in social organization between ourselves and the other apes were likely an increased capacity for coordinated hunting and the invention of weapons designed to kill big game. As Boehm explains,
Weapons made possible not only killing at a distance, but far more effective threat behavior; brandishing a projectile could turn into an instant lethal attack with relatively little immediate risk to the attacker. (175)
Deadly weapons fundamentally altered the dynamic between lone would-be bullies and those they might try to dominate. As Boehm points out, “after weapons arrived, the camp bully became far more vulnerable” (177). With the advent of greater coalition-building skills and the invention of tools for efficient killing, the opportunities for an individual to achieve alpha status quickly vanished.

            It’s dangerous to assume that any one group of modern people provides the key to understanding our Pleistocene ancestors, but when every group living with technology and subsistence methods similar to those ancestors’ follows the same pattern, that pattern becomes much more suggestive. “A distinctively egalitarian political style,” Boehm writes, “is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-6). This egalitarianism must be vigilantly guarded because “A potential bully always seems to be waiting in the wings” (68). Boehm explains what he believes is the underlying motivation,
Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)
The methods used to prevent powerful or influential individuals from acquiring too much control include such collective behaviors as gossiping, ostracism, banishment, and even, in extreme cases, execution. “In egalitarian hierarchies the pyramid of power is turned upside down,” Boehm explains, “with a politically united rank and file dominating the alpha-male types” (66).
            The implications for theories about our ancestors are profound. The groups humans were living in as they evolved the traits that made them what we recognize today as human were highly motivated and well-equipped to both prevent and when necessary punish the type of free-riding that evolutionary psychologists and other selfish gene theorists insist would undermine group cohesion. Boehm makes this point explicit, writing,
The overall hypothesis is straightforward: basically, the advent of egalitarianism shifted the balance of forces within natural selection so that within-group selection was substantially debilitated and between-group selection was amplified. At the same time, egalitarian moral communities found themselves uniquely positioned to suppress free-riding… at the level of phenotype. With respect to the natural selection of behavior genes, this mechanical formula clearly favors the retention of altruistic traits. (199)
This is the point where he picks up the argument again in Moral Origins. The story of the homicidal man named /Twi is an extreme example of the predictable results of overly aggressive behaviors. Any nomadic forager who intransigently tries to throw his weight around the way alpha male chimpanzees do will probably end up getting “porcupined” (158) like /Twi and the three men depicted in the Magdalenian cave art in Spain.
Bone from 200,000 years ago shows marks made by multiple butchers. Soon after this period, butchering began to be delegated to individuals.
Murder is an extreme example of the types of free-riding behavior that nomadic foragers reliably sanction. Any politically overbearing treatment of group mates, particularly the issuing of direct commands, is considered a serious moral transgression. But alongside this disapproval of bossy or bullying behavior there exists an ethic of sharing and generosity, so people who are thought to be stingy are equally disliked. As Boehm writes in Hierarchy in the Forest, “Politically egalitarian foragers are also, to a significant degree, materially egalitarian” (70). The image many of us grew up with of the lone prehistoric male hunter going out to stalk his prey, bringing it back as a symbol of his prowess in hopes of impressing beautiful and fertile females, turns out to be completely off-base. In most hunter-gatherer groups, the males hunt in teams, and whatever they kill gets turned over to someone else who distributes the meat evenly among all the men so each of their families gets an equal portion. In some cultures, “the hunter who made the kill gets a somewhat larger share,” Boehm writes in Moral Origins, “perhaps as an incentive to keep him at his arduous task” (185). But every hunter knows that most of the meat he procures will go to other group members—and the sharing is done without any tracking of who owes whom a favor. Boehm writes,
The models tell us that the altruists who are helping nonkin more than they are receiving help must be “compensated” in some way, or else they—meaning their genes—will go out of business. What we can be sure of is that somehow natural selection has managed to work its way around these problems, for surely humans have been sharing meat and otherwise helping others in an unbalanced fashion for at least 45,000 years. (184)
Following biologist Richard Alexander, Boehm sees this type of group beneficial generosity as an example of “indirect reciprocity.” And he believes it functions as a type of insurance policy, or, as anthropologists call it, “variance reduction.” It’s often beneficial for an individual’s family to pay in, as it were, but much of the time people contribute knowing full well the returns will go to others.
            Less extreme cases than the psychopaths who end up porcupined involve what Boehm calls “meat-cheaters.” A prominent character in Moral Origins is an Mbuti Pygmy man named Cephu, whose story was recounted in rich detail by the anthropologist Colin Turnbull. One of the cooperative hunting strategies the Pygmies use has them stretching a long net through the forest while other group members create a ruckus to scare animals into it. Each net holder is entitled to whatever runs into his section of the net, which he promptly spears to death. What Cephu did was sneak farther ahead of the other men to improve his chances of having an animal run into his section of the net before the others. Unfortunately for him, everyone quickly realized what was happening. Returning to the camp after depositing his ill-gotten gains in his hut, Cephu heard someone call out that he was an animal. Beyond that, everyone was silent. Turnbull writes,
Cephu walked into the group, and still nobody spoke. He went to where a youth was sitting in a chair. Usually he would have been offered a seat without his having to ask, and now he did not dare to ask, and the youth continued to sit there in as nonchalant a manner as he could muster. Cephu went to another chair where Amabosu was sitting. He shook it violently when Amabosu ignored him, at which point he was told, “Animals lie on the ground.” (Quoted 39)
Thus began the accusations. Cephu burst into tears and tried to claim that his repositioning himself in the line was an accident. No one bought it. Next, he made the even bigger mistake of trying to suggest he was entitled to his preferential position. “After all,” Turnbull writes, “was he not an important man, a chief, in fact, of his own band?” At this point, Manyalibo, who was taking the lead in bringing Cephu to task, decided that the matter was settled. He said that
there was obviously no use prolonging the discussion. Cephu was a big chief, and Mbuti never have chiefs. And Cephu had his own band, of which he was chief, so let him go with it and hunt elsewhere and be a chief elsewhere. Manyalibo ended a very eloquent speech with “Pisa me taba” (“Pass me the tobacco”). Cephu knew he was defeated and humiliated. (40)
The guilty verdict Cephu had to accept to avoid being banished from the band came with the sentence that he had to relinquish all the meat he brought home that day. His attempt at free-riding therefore resulted not only in a loss of food but also in a much longer-lasting blow to his reputation.
            Boehm has built a large database from ethnographic studies like Lee’s and Turnbull’s, and it shows that in their handling of meat-cheaters and self-aggrandizers nomadic foragers all over the world use strategies similar to those of the Pygmies. First comes the gossip about your big ego, your dishonesty, or your cheating. Soon you’ll recognize a growing reluctance of others to hunt with you, or you’ll have a tough time wooing a mate. Next, you may be directly confronted by someone delegated by a quorum of group members. If you persist in your free-riding behavior, especially if it entails murder or serious attempts at domination, you’ll probably be ambushed and turned into a porcupine. Alexander put forth the idea of “reputational selection,” whereby individuals benefit in terms of survival and reproduction from being held in high esteem by their group mates. Boehm prefers the term “social selection,” however, because it encompasses the idea that people are capable of figuring out what’s best for their groups and codifying it in their culture. How well an individual internalizes a group’s norms has profound effects on his or her chances for survival and reproduction. Boehm’s theory is that our consciences are the mechanisms we’ve evolved for such internalization.
Though there remain quite a few chicken-or-egg conundrums to work out, Boehm has cobbled together archeological evidence from butchering sites, primatological evidence from observations of apes in the wild and in captivity, and quantitative analyses of ethnographic records to put forth a plausible history of how our consciences evolved and how we became so concerned for the well-being of people we may barely know. As humans began hunting larger game, demanding greater coordination and more effective long-distance killing tools, an already extant resentment of alphas expressed itself in collective suppression of bullying behavior. And as our developing capacity for language made it possible to keep track of each other’s behavior long-term, it started to become important for everyone to maintain a reputation for generosity, cooperativeness, and even-temperedness. Boehm writes,
Ultimately, the social preferences of groups were able to affect gene pools profoundly, and once we began to blush with shame, this surely meant that the evolution of conscientious self-control was well under way. The final result was a full-blown, sophisticated modern conscience, which helps us to make subtle decisions that involve balancing selfish interests in food, power, sex, or whatever against the need to maintain a decent personal moral reputation in society and to feel socially valuable as a person. The cognitive beauty of having such a conscience is that it directly facilitates making useful social decisions and avoiding negative social consequences. Its emotional beauty comes from the fact that we in effect bond with the values and rules of our groups, which means we can internalize our group’s mores, judge ourselves as well as others, and, hopefully, end up with self-respect. (173)
Social selection is actually a force that acts on individuals, selecting for those who can most strategically suppress their own selfish impulses. But in establishing a mechanism that guards the group norm of cooperation against free riders, it increased the potential of competition between groups and quite likely paved the way for altruism of the sort Leacock’s informant Thomas displayed. Boehm writes,
Thomas surely knew that if he turned down the pair of hungry men, they might “bad-mouth” him to people he knew and thereby damage his reputation as a properly generous man. At the same time, his costly generosity might very well be mentioned when they arrived back in their camp, and through the exchange of favorable gossip he might gain in his public esteem in his own camp. But neither of these socially expedient personal considerations would account for the “dead” feeling he mentioned with such gravity. He obviously had absorbed his culture’s values about sharing and in fact had internalized them so deeply that being selfish was unthinkable. (221)
In response to Ghiselin’s cynical credo, “Scratch an altruist and watch a hypocrite bleed,” Boehm points out that the best way to garner the benefits of kindness and sympathy is to actually be kind and sympathetic. He points out further that if altruism is being selected for at the level of phenotypes (the end-products of genetic processes) we should expect it to have an impact at the level of genes. In a sense, we’ve bred altruism into ourselves. Boehm writes,
If such generosity could be readily faked, then selection by altruistic reputation simply wouldn’t work. However, in an intimate band of thirty that is constantly gossiping, it’s difficult to fake anything. Some people may try, but few are likely to succeed. (189)
The result of the social selection dynamic that began all those millennia ago is that today generosity is in our bones. There are of course circumstances that can keep our generous impulses from manifesting themselves, and those impulses have a sad tendency to be directed toward members of our own cultural groups and no one else. But Boehm offers a slightly more optimistic formula than Ghiselin’s:
I do acknowledge that our human genetic nature is primarily egoistic, secondarily nepotistic, and only rather modestly likely to support acts of altruism, but the credo I favor would be “Scratch an altruist, and watch a vigilant and successful suppressor of free riders bleed. But watch out, for if you scratch him too hard, he and his group may retaliate and even kill you.” (205)


Read Part 1: The Groundwork Laid by Dawkins and Gould.
And Part 2: Steven Pinker Falls Prey to the Averaging Fallacy.
Also of interest: The Adaptive Appeal of Bad Boys

The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind

A Review of Jonathan Haidt's new book, The Righteous Mind: Why Good People are Divided by Politics and Religion
            Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robbers Cave in southern Oklahoma where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and they each came up with a name for themselves, the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-of-war. The goal was to find out if animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.
           
            So do conservatives.
           
           This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, “Anyone who values truth should stop worshipping reason” (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in “moral matrices” that incorporate all six, while liberal morality rests disproportionately on just three. At times, Haidt intimates that more dimensions are better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.

            One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.

            Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out-of-touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?

The Elephant in the Room
            Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question:

when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. (84)

Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.

They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)

Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.

           Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you’ll never recognize them yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image for an earlier book on happiness, so the use of the GOP mascot was accidental. But because of the more intuitive nature of conservative beliefs it’s appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,

the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)

The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper “The Weirdest People in the World?” that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can’t argue that what’s popular is necessarily better. But he manages to convey that attitude implicitly, even though he can’t really share it himself.
  
            Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression is involved—and to insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:

Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)

This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,

we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)

As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)

What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.

A Taste for Self-Righteousness
            The divide over morality can largely be reduced to the differences between the urban educated and the poor not-so-educated. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:

But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say, You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)

The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)

Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions, he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:

There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)

But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.

            In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India, where he went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” An earlier account of this sojourn, which Haidt wrote for the online salon The Edge, was what first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”

On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)

The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to feel and experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what's known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order, not to the point of advocating hierarchy or rigid sex roles but seeing value in the complex network of interdependence.

            The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That’s where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the different dimensions of taste that make up our flavor palette. The two that everyone shares, but that liberals give priority to whenever any two or more suggest different responses, are Care/Harm—hurting people is wrong and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity: loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe’s symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth one, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,

many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)

Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.

Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)

But if he really were restricting himself to description he would have no beef with the utilitarian ethicists like Mill, the deontological ones like Kant, or for that matter with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, father of utilitarianism, being autistic (the trendy psychological diagnosis du jour) (120). But, like a lawyer who throws out a damning but inadmissible comment only to say “withdrawn” when the defense objects, he assures us that he doesn’t mean the autism thing as an ad hominem.

            I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.

In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; they simply believe they should question whether acting on them is appropriate in the given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism that Haidt extolls—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.

           Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.

            Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.

Resistance to the Hive Switch is Futile

            “We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that

anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)

The most interesting idea in this section is that humans possess what Haidt calls a “hive switch” that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an “altered state of consciousness” from marching in formation with fellow soldiers in his army days. He describes it as a “sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life” (221). Sociologist Emile Durkheim referred to this same experience as “collective effervescence.” People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It’s a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.

            Surprisingly, the altruism inspired by this sense of the sacred triggered by coordinated activity, though primarily directed at fellow group members—parochial altruism—can also flow out in ways that aren’t entirely tribal.  Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,

These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)

The Sacred foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that “just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.” Sosis went on to identify “one master variable” which accounted for the difference between success and failure for religious groups: “the number of costly sacrifices that each commune demanded from its members” (257). But sacrifices demanded by secular groups made no difference whatsoever. Haidt concludes,

In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)

This section captures the best and the worst of Haidt’s work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that’s a brilliant contribution to theories going back through D. S. Wilson and Emile Durkheim all the way back to Darwin. Contemplating it sparks a sense of wonder that must emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.

            The costs critics of religion point to aren’t the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, “arbitrary” beliefs and practices of religious communities to observing the movements of a football in an attempt to understand why people love watching games. What matters, he suggests, is the coming together as a group, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren’t what’s important, then there’s no reason they have to be arbitrary—and no reason they should have to entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for “worshipping reason” and celebrating the collective endeavor known as science? Why doesn’t he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by assuming that anyone who sees nothing wrong with flushing an American flag down the toilet has no sense of the sacred. He shakes his finger at them, effectively saying: rallying around a cause is what being human is all about, but what you flag-flushers think is important just isn’t worthy—even though it’s exactly what I think is important too, what I’ve devoted my career, and this book you’re holding, to anyway.

            As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norms before it came about—and still are the norms in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those who are the most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. And surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, and even sacred ideal.  

But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the burgeoning focus-groupocracy. Interestingly, the third stage of the Robbers Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. And all it took was some shared goals they had to cooperate to achieve, as when their bus got stuck on the side of the road and all the boys in both groups had to work together to free it. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’ll be there in no time.


The Adaptive Appeal of Bad Boys



Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys

            In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and each has a few treats placed before it. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.

            Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s NOVA series The Human Spark, which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and NOVA viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.

            The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments.  This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).
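The structure of the exchange can be sketched in a few lines of Python. The stake sizes and the 3-to-1 punishment ratio here are hypothetical stand-ins, not the actual parameters from Fehr and Fischbacher’s experiments; the point is only the shape of the payoffs.

```python
# Illustrative sketch of the three-player Dictator Game described above.
# All numbers are hypothetical, not Fehr and Fischbacher's parameters.

def dictator_game(endowment, offer, punishment_spent, punishment_ratio=3):
    """Return final payoffs as (dictator, recipient, third party).

    The dictator keeps endowment - offer; the recipient must accept.
    The third party pays punishment_spent out of her own smaller stake,
    and the dictator loses punishment_ratio times that amount.
    """
    third_party_stake = endowment // 2  # observers often get a smaller stake
    dictator = endowment - offer - punishment_ratio * punishment_spent
    recipient = offer
    third_party = third_party_stake - punishment_spent
    return dictator, recipient, third_party

# A selfish dictator keeps everything; the third party pays 10 to punish.
print(dictator_game(endowment=100, offer=0, punishment_spent=10))
# -> (70, 0, 40): punishing is costly for the punisher, yet people do it
```

The last line is the crux: a purely selfish third party would keep her full 50 by doing nothing, which is why Fehr and Fischbacher note that “a selfish third party will never punish.”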

It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.

The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to distinguish between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth-order reciprocation, so individuals will be considered culpable not only for offenses they commit but also for offenses they passively witness. Flesch explains,

Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)

Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional. 

        That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,

Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters. 
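The arithmetic behind that claim can be made concrete. In this sketch every number is hypothetical; it shows only that enough watchful punishers turn cheating from a gain into a loss, which is what closes the loophole in naive group selection.

```python
# Hypothetical payoff arithmetic for a free rider in a group that
# practices strong reciprocity. None of these numbers come from the
# studies discussed above.

def free_rider_payoff(benefit_taken, punishers, fine_per_punisher):
    """Net payoff to a cheat after each watching punisher exacts a fine."""
    return benefit_taken - punishers * fine_per_punisher

# With no one willing to punish, cheating pays...
print(free_rider_payoff(10, punishers=0, fine_per_punisher=4))   # 10
# ...but with a few strong reciprocators watching, it becomes a loss.
print(free_rider_payoff(10, punishers=3, fine_per_punisher=4))   # -2
```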

Game theory experiments conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher is incurring a cost to him or herself in order to ensure that selfish actors don’t have a chance to get a foothold in the larger, cooperative group. 
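The payoff logic of the Ultimatum Game can be sketched in a few lines of Python. The 30-percent rejection threshold below is an illustrative assumption standing in for the empirical finding that sufficiently low offers get vetoed; it is not a figure from the experiments Flesch cites.

```python
def ultimatum_round(pot, offer, rejection_threshold=0.3):
    """Return (proposer_payoff, responder_payoff) for one round.

    A purely self-interested responder would accept any offer above
    zero, but experimental subjects reliably veto offers below some
    fraction of the pot, paying to punish the proposer.
    """
    if offer < rejection_threshold * pot:
        return (0, 0)            # responder vetoes: both get nothing
    return (pot - offer, offer)  # responder accepts the proposed split

# A "rational" responder would take $1 of $100; real subjects tend not to.
print(ultimatum_round(100, 1))   # low offer vetoed -> (0, 0)
print(ultimatum_round(100, 40))  # fair offer accepted -> (60, 40)
```

The veto branch is the whole puzzle: the responder gives up a sure gain, which is what makes the punishment altruistic rather than self-interested.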

The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:

Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)
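The three-player logic Flesch describes can be sketched the same way: an uninvolved observer pays out of pocket to strip resources from the dictator. The cost-to-punishment multiplier and all the dollar amounts below are invented for illustration; they are not parameters from Fehr and Fischbacher's actual design.

```python
def third_party_punish(dictator, recipient, observer, spend, multiplier=3):
    """Observer spends `spend` of their own payoff to punish the dictator.

    Each unit the observer pays strips `multiplier` units from the
    dictator: punishment is costly for the punisher, costlier still
    for the punished.
    """
    observer -= spend
    dictator = max(0, dictator - multiplier * spend)
    return dictator, recipient, observer

# Dictator kept 90 of 100; the observer pays 10 to strip 30 from him.
print(third_party_punish(90, 10, 50, 10))  # -> (60, 10, 40)
```

Note that the observer ends the round strictly poorer and the recipient no richer; the only thing the observer has bought is the enforcement of a fairness norm, which is exactly why the behavior is "highly irrational" by the standards of self-interest.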

Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to the time and energy spent convincing skeptical peers a crime has indeed been committed.

Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.
            If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice-guy bona fides. On the other hand, Rochester was eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.

This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.

In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences which deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display. 

Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:

If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)

Taken together, the evidence Flesch presents suggests the audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.

The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:

When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)

These characters’ bad behavior will also likely serve an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish to harm innocent characters—or that they’re irredeemable would severely undermine the theory. As the first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters their creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.

Campaigning Deities: Justifying the ways of Satan


Milton believed Christianity more than worthy of a poetic canon in the tradition of the classical poets, and Paradise Lost represents his effort at establishing one. What his Christian epic has offered for many readers over the centuries, however, is an invitation to weigh the actions and motivations of immortals in mortal terms. In the story, God becomes a human king, albeit one with superhuman powers, while Satan becomes an upstart subject. As Milton attempts to “justify the ways of God to Man,” he is taking it upon himself simultaneously, and inadvertently, to justify the absolute dominion of a human dictator. One of the consequences of this shift in perspective is the transformation of a philosophical tradition devoted to parsing the logic of biblical teachings into something akin to a political campaign between two rival leaders, each laying out his respective platform alongside a case against his rival. What was hitherto recondite and academic becomes in Milton’s work immediate and visceral.

Keats famously penned the wonderfully self-proving postulate, “Axioms in philosophy are not axioms until they are proved upon our pulses,” which leaves open the question of how an axiom might be so proved. Milton’s God responds to Satan’s approach to Earth, and his foreknowledge of Satan’s success in tempting the original pair, with a preemptive defense of his preordained punishment of Man:

…Whose fault?
Whose but his own? Ingrate! He had of Me
All he could have. I made him just and right,
Sufficient to have stood though free to fall.
Such I created all th’ ethereal pow’rs
And spirits, both them who stood and who failed:
Freely they stood who stood and fell who fell.
Not free, what proof could they have giv’n sincere
Of true allegiance, constant faith or love
Where only what they needs must do appeared,
Not what they would? What praise could they receive?
What pleasure I from such obedience paid
When will and reason… had served necessity,
Not me? (3.96-111)

God is defending himself against the charge that his foreknowledge of the fall implies that Man’s decision to disobey was born of something other than his free will. What choice could there have been if the outcome of Satan’s temptation was predetermined? If it wasn’t predetermined, how could God know what the outcome would be in advance? God’s answer—of course I granted humans free will because otherwise their obedience would mean nothing—only introduces further doubt. Now we must wonder why God cherishes Man’s obedience so fervently. Is God hungry for political power? If we conclude he is—and that conclusion seems eminently warranted—then we find ourselves on the side of Satan. It’s not so much God’s foreknowledge of Man’s fall that undermines human freedom; it’s God’s insistence on our obedience, under threat of God’s terrible punishment.

            Milton faces a still greater challenge in his attempt to justify God’s ways “upon our pulses” when it comes to the fallout of Man’s original act of disobedience. The Son argues on behalf of Man, pointing out that the original sin was brought about through temptation. If God responds by turning against Man, then Satan wins. The Son thus argues that God must do something to thwart Satan: “Or shall the Adversary thus obtain/ His end and frustrate Thine?” (3.156-7). Before laying out his plan for Man’s redemption, God explains why punishment is necessary:

            …Man disobeying
            Disloyal breaks his fealty and sins
            Against the high supremacy of Heav’n,
            Affecting godhead, and so, losing all,
            To expiate his treason hath naught left
            But to destruction sacred and devote
            He with his whole posterity must die. (3.203-9)

The potential contradiction between foreknowledge and free choice may be abstruse enough for Milton’s character to convincingly discount: “If I foreknew/ Foreknowledge had no influence on their fault/ Which had no less proved certain unforeknown” (3.116-9). There is another contradiction, however, that Milton neglects to take on. If Man is “Sufficient to have stood though free to fall,” then God must justify his decision to punish the “whole posterity” as opposed to the individuals who choose to disobey. The Son agrees to redeem all of humanity for the offense committed by the original pair. His knowledge that every last human will disobey may not be logically incompatible with their freedom to choose; if every last human does disobey, however, the case for that freedom is severely undermined. The axiom of collective guilt precludes the axiom of freedom of choice both logically and upon our pulses.

            In characterizing disobedience as a sin worthy of severe punishment—banishment from paradise, shame, toil, death—an offense he can generously expiate for Man by sacrificing the (his) Son, God seems to be justifying his dominion by pronouncing disobedience to him evil, allowing him to claim that Man’s evil made it necessary for him to suffer a profound loss, the death of his offspring. In place of a justification for his rule, then, God resorts to a simple guilt trip.

            Man shall not quite be lost but saved who will,
            Yet not of will in him but grace in me
            Freely vouchsafed. Once more I will renew
            His lapsed pow’rs though forfeit and enthralled
            By sin to foul exorbitant desires.
            Upheld by me, yet once more he shall stand
            On even ground against his mortal foe,
            By me upheld that he may know how frail
            His fall’n condition is and to me owe
            All his deliv’rance, and to none but me. (3.173-83)

Having decided to take on the burden of repairing the damage wrought by Man’s disobedience to him, God explains his plan:

            Die he or justice must, unless for him
            Some other as able and as willing pay
            The rigid satisfaction, death for death. (3.210-3)

He then asks for a volunteer. In an echo of an earlier episode in the poem which has Satan asking for a volunteer to leave hell on a mission of exploration, there is a moment of hesitation before the Son offers himself up to die on Man’s behalf.

            …On Me let thine anger fall.
            Account Me Man. I for his sake will leave
            Thy bosom and this glory next to Thee
            Freely put off and for him lastly die
            Well pleased. On Me let Death wreck all his rage! (3.237-42)

This great sacrifice, which is supposed to be the basis of the Son’s privileged status over the angels, is immediately undermined because he knows he won’t stay dead for long: “Yet that debt paid/ Thou wilt not leave me in the loathsome grave” (246-7). The Son will only die momentarily. This sacrifice doesn’t stack up well against the real risks and sacrifices made by Satan.

            All the poetry about obedience and freedom and debt never takes on the central question Satan’s rebellion forces readers to ponder: Does God deserve our obedience? Or are the labels of good and evil applied arbitrarily? The original pair was forbidden from eating from the Tree of Knowledge—could they possibly have been right to contravene the interdiction? Since it is God being discussed, however, the assumption that his dominion requires no justification, that it is instead simply in the nature of things, might prevail among some readers, as it does for the angels who refuse to join Satan’s rebellion. The angels, after all, owe their very existence to God, as Abdiel insists to Satan. Who, then, are any of them to question his authority? This argument sets the stage for Satan’s remarkable rebuttal:

                        …Strange point and new!
Doctrine which we would know whence learnt: who saw
When this creation was? Remember’st thou
Thy making while the Maker gave thee being?
We know no time when we were not as now,
Know none before us, self-begot, self-raised
By our own quick’ning power…
Our puissance is our own. Our own right hand
Shall teach us highest deeds by proof to try
Who is our equal. (5.855-66)

Just as a pharaoh could claim credit for all the monuments and infrastructure he had commissioned the construction of, any king or dictator might try to convince his subjects that his deeds far exceed what he is truly capable of. If there’s no record and no witness—or if the records have been doctored and the witnesses silenced—the subjects have to take the king’s word for it.

            That God’s dominion depends on some natural order, which he himself presumably put in place, makes his tendency to withhold knowledge deeply suspicious. Even the angels ultimately have to take God’s claims to have created the universe and them along with it solely on faith. Because that same unquestioning faith is precisely what Satan and the readers of Paradise Lost are seeking a justification for, they could be forgiven for finding the answer tautological and unsatisfying. It is the Tree of Knowledge of Good and Evil that Adam and Eve are forbidden to eat fruit from. When Adam, after hearing Raphael’s recounting of the war in heaven, asks the angel how the earth was created, he does receive an answer, but only after a suspicious preamble:

                        …such commission from above
            I have received to answer thy desire
            Of knowledge with bounds. Beyond abstain
            To ask nor let thine own inventions hope
            Things not revealed which the invisible King
            Only omniscient hath suppressed in night,
            To none communicable in Earth or Heaven:
            Enough is left besides to search and know. (7.118-125)

Raphael goes on to compare knowledge to food, suggesting that excessively indulging curiosity is unhealthy. This proscription of knowledge reminded Shelley of the Prometheus myth. It might remind modern readers of The Wizard of Oz—“Pay no attention to that man behind the curtain”—or of the space monkeys in Fight Club, who repeatedly remind us that “The first rule of Project Mayhem is, you do not ask questions.” It may also resonate with news about dictators in Asia or the Middle East trying desperately to keep social media outlets from spreading word of their atrocities.

            Like the protesters of the Arab Spring, Satan is putting himself at great risk by challenging God’s authority. If God’s dominion over Man and the angels is evidence not of his benevolence but of his supreme selfishness, then Satan’s rebellion becomes an attempt at altruistic punishment. The extrapolation from economic experiments like the ultimatum and dictator games to efforts to topple dictators may seem like a stretch, especially if humans are predisposed to forming and accepting positions in hierarchies, as a casual survey of virtually any modern organization suggests is the case.

Organized institutions, however, are a recent development in terms of human evolution. The English missionary Lucas Bridges wrote about his experiences with the Ona foragers of Tierra del Fuego in his 1948 book Uttermost Part of the Earth, in which he expresses his amusement at his fellow outsiders’ befuddlement when they learn about the Ona’s political dynamics:

A certain scientist visited our part of the world and, in answer to his inquiries on this matter, I told him that the Ona had no chieftains, as we understand the word. Seeing that he did not believe me, I summoned Kankoat, who by that time spoke some Spanish. When the visitor repeated his question, Kankoat, too polite to answer in the negative, said: “Yes, señor, we, the Ona, have many chiefs. The men are all captains and all the women are sailors” (quoted in Boehm 62).

At least among Ona men, it seems there was no clear hierarchy. The anthropologist Richard Lee discovered a similar dynamic operating among the !Kung foragers of the Kalahari. In order to ensure that no one in the group can attain an elevated status which would allow him to dominate the others, several leveling mechanisms are in place. Lee quotes one of his informants:

When a young man kills much meat, he comes to think of himself as a chief or a big man, and he thinks of the rest of us as his servants or inferiors. We can’t accept this. We refuse one who boasts, for someday his pride will make him kill somebody. So we always speak of his meat as worthless. In this way we cool his heart and make him gentle. (quoted in Boehm 45)

These examples of egalitarianism among nomadic foragers are part of anthropologist Christopher Boehm’s survey of every known group of hunter-gatherers. His central finding is that “A distinctively egalitarian political style is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-36). This finding bears on any discussion of human evolution and human nature because small groups like these constituted the whole of humanity for all but what amounts to the final instants of geological time.

           

           

Hierarchies in Hell and Leaderless Fight Clubs: a More Modest Thesis Prospectus

Question:

Do the sciences of human behavior as practiced and understood in the Twenty-First Century have anything of value to contribute to the study of literature? Will the application of theories arising from the fields of evolutionary psychology and evolutionary anthropology to literary works yield anything beyond one more perspective in the seemingly endless succession of momentarily fashionable approaches to literary scholarship? Or is the scientific exploration of human behavior itself hopelessly incapable of transcending the culture in which it is undertaken? And, assuming any ultimate verdict on the value of evolutionary theories of literature is at present impossible to render, might they nonetheless shed some light on issues posing difficulties for other theoretical approaches? For instance, what accounts for centuries of readers’ sympathy toward characters who are on the surface meant to serve as villains? Milton’s Satan is a classic example of this phenomenon, while Palahniuk’s Tyler Durden is a more contemporary one. Are readers’ strong feelings on behalf of these antagonists understandable in terms of evolutionary theories of human behavior? And, if so, what does that suggest about the nature of human interest in fictional narratives like Paradise Lost and Fight Club?

Implications:

William Flesch, in his book Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction, theorizes that humans’ passion for fictional narratives emerges from a predilection for monitoring one another for signals of their capacity for cooperative relationships. Humans naturally favor conspecifics who prove themselves capable of setting aside their own rational self-interests to act on behalf of others or on behalf of the larger group to which they belong. And they demonstrate their own altruistic tendencies by favoring other altruists and punishing those who would take advantage of them. Does the character Satan in Milton’s epic poem somehow signal to readers that he is altruistic? And is there some type of underlying message about cooperation in the seemingly senseless violence in Palahniuk’s novel?

Flesch, however, leaves another dimension of evolutionary psychology unexplored, one which could provide much insight into the appeal of both Milton’s and Palahniuk’s stories. Anthropologist Christopher Boehm explores the human propensity toward forming hierarchies in his book Hierarchy in the Forest: The Evolution of Egalitarian Behavior. It turns out that, contrary to conventional wisdom, humans in foraging bands similar to those they have lived in for the vast majority of their time on earth are strictly egalitarian. Indeed, most contemporary hunter-gatherers would, with little prompting, express support for Satan’s famous line about it being better to reign in hell than to serve in heaven. And they would likely recognize many of the group dynamics Tyler Durden manipulates to gain ascendancy among the members of the fight clubs—as well as the ultimate necessity of having someone end his reign.

The theoretical foundation established by Flesch can likely support considerations of male competition for status, since one of the conditions thought necessary for the evolution of cooperation among humans is a relative absence of hierarchical behavior. One common form of selfishness humans are vigilant for in their neighbors is a strong motivation to dominate others. When a person, or a fictional representation of one, acquires influence incommensurate with others in the group, those other group members can be counted on to pay close attention to the way that person wields his (or less often her) power. If it turns out to be for the benefit of the group, the higher status individual will continue to have the support of the group. If it is to further selfish gains, the lower-ranking group members will usually act collectively to bring an end to his dominance. And this dynamic plays out in stories told by hunter-gatherers and writers in more complex societies alike.

Methods:

This project will explore the central characters of Paradise Lost and Fight Club in an attempt to illuminate readers’ feelings toward them. In particular, it will focus on Milton’s Satan and Palahniuk’s Tyler Durden, and will examine the way in which they are portrayed in search of recognizable signals of either selfishness or altruism. Such an exploration might also yield insights into how Boehm’s theories of human hierarchical or egalitarian proclivities can be integrated into the approach to literature set out by Flesch.

1st and Overly Ambitious Prospectus for a Master's Thesis


I'll be paring this down a bit. My advisors felt that the project spelled out here is more appropriate for a doctoral dissertation or some such longer work.

Grandeur in This View of Literature?

Question:

What, if anything, can evolutionary theory contribute to the study of literature? Is it possible to study literature scientifically, and if so what are the advantages and disadvantages of doing so? The trend among literary theorists is to regard science in general, and evolutionary theory in particular, as deeply suspect since they have historically functioned as ideological justifications for various types of violence and oppression. Yet, by unmooring literary scholarship from sound epistemology, critics almost inevitably fall victim to what Frederick Crews calls “the fast-talking superstars who have prostituted it to crank theory, political conformism, and cliquishness” (xv). Will E.O. Wilson’s idea of consilience between science and the humanities be just another trendy fashion among literary scholars—if it ever takes hold at all? Will science ever serve any role in the humanities other than that of ideological bastion of European male hegemony? Does an evolutionary approach to literature hold promise in the quest for insights based on sound reasoning that go beyond mere justification for the political status quo?

Implications:
The primary function of a literary theory is to offer insight into works of literature, what they mean, why they appeal or fail to appeal to readers, how they are influenced by and how they in turn influence the cultures in which they emerge and in which they are appreciated. But the insights born of the application of a theory to a text cannot be taken as evidence of that theory’s validity. Many literary works have been interpreted psychoanalytically, for instance, and the application of Freud’s theory has yielded insights into those works. But, as evidence against psychoanalytic theories mounts, those insights must be called into question. Theories must be validated independently of their application to texts. And the validity of insights produced through the application of theories is contingent on the validity of those theories.

Interpreting a literary work from the perspective of one or another ideology is usually an easy task, regardless of whether that ideology is scientifically grounded. The question then becomes: are there empirically validated theories that might be of interest to literary scholars? If so, do they yield insights into literary works beyond simple distillations of the prevailing culture? Once the difficulty of arriving at scientifically sound theories and the threat that such theories somehow encourage the oppression of women and minorities are dealt with, a third potential stumbling block remains. If a scientific theory of narrative is possible, might it reduce literature to a set of mechanistic principles, and thus rob it of some of its mysterious capacity to enchant audiences? Or might such a theory somehow enrich the experience of literature?

Methods:
This project will begin with an exploration of some current approaches to bringing literature into the realm of human biological and cultural evolution. The most promising of these approaches to date sees storytelling as emerging from evolved dispositions toward monitoring other people for signals of their propensity for either selfishness or altruism, and toward signaling one’s own altruism by emotionally favoring altruistic characters. This approach is described by William Flesch in his book, Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. Is Flesch’s theory valid? Does it offer any insight into actual literary works?

The second part of the project will explore possible methods whereby theories of narrative may be tested to establish their validity. Of course, these tests must go beyond seeing whether or not applying the theory generates insights into a literary work, because it’s possible for invalid theories to generate seemingly valid insights. The tests must involve predictions emerging from the theories that can either fail or succeed. One possible way to test Flesch’s social monitoring and volunteered affect theory, for instance, would be to sample a large body of works to see if a strong trend exists for stories to focus on conflicts between selfish characters and altruistic ones. If such conflicts show up in only a minority of literary works, or if they take place only at the periphery of most stories, then the prediction, and the theory along with it, fail.

Since gathering such a large sample would be a daunting endeavor, bringing with it a large risk of confirmation bias, previous attempts by scholars to come up with exhaustive catalogues of plot and character types may be of use. Ronald Tobias’s 20 Master Plots and Georges Polti’s The Thirty-six Dramatic Situations suggest themselves as good sources for data.

The third and final part of this project will consist of an application of evolutionary theories of literature to diverse works so that an (unavoidably subjective) assessment of the value of the insights can be made. Works from different historical eras and a wide geographical range may serve to highlight the complementary roles of universal cognitive mechanisms and cultural traditions. What counts as altruism, for instance, might vary across cultures. Likewise, each culture tends to sanction certain selfish acts more than others. So the basic framework of selfless protagonist and selfish antagonist can take on countless forms and carry with it important information about a culture and what’s expected of individuals living in it. Possible candidates for this type of analysis are Milton’s Paradise Lost, an interesting case because many readers sympathize strongly with Satan, the antagonist, and Palahniuk’s Fight Club, a modern cult classic in which one character teaches the other the importance of self-destruction.

The Inverted Pyramid: Our Millennia-Long Project of Keeping Alpha Males in their Place

Imagine this familiar hypothetical scenario: you’re a prehistoric hunter, relying on cleverness, athleticism, and well-honed skills to track and kill a gazelle on the savannah. After you cart the meat home, your wife is grateful, or rather your wives are. As the top hunter in the small tribe with which your family lives and travels, you are accorded great power over all the other men, just as you enjoy great power over your family. You are the leader, the decision-maker, the final arbiter of disputes, and the one everyone looks to for direction in times of distress. The payoff for all this responsibility is that you and your family enjoy larger shares of whatever meat is brought in by your subordinates. And you have sexual access to almost any woman you choose. Someday, though, you know your prowess will be on the wane, and you’ll be subjected to more and more challenges from younger men, until eventually you are divested of all your authority. This is the harsh reality of man the hunter.

It’s easy to read about “chimpanzee politics” (I’ll never forget reading Frans de Waal’s book by that title) or watch nature shows in which the stentorian, accented narrator assigns names and ranks to chimps or gorillas, then look around at the rigid hierarchies of the institutions where we learn or work, as well as those in the realm of actual human politics, and conclude that there must have been a fairly linear development from the ancestors we share with the apes, through the eras of pharaohs and kings, through the Don Draperish ’60s, to today, at least in terms of our natural tendency to form ranks and follow leaders.

What to make, then, of these words spoken to anthropologist Richard Lee by a hunter-gatherer teaching him of the ways of the !Kung San?

“Say that a man has been hunting. He must not come home and announce like a braggart, ‘I have killed a big one in the bush!’ He must first sit down in silence until I or someone else comes up to his fire and asks, ‘What did you see today?’ He replies quietly, ‘Ah, I’m no good for hunting. I saw nothing at all…maybe just a tiny one.’ Then I smile to myself because I now know he has killed something big.” (Quoted in Boehm 45)

Even more puzzling from a selfish gene perspective is that the successful hunter gets no more meat for himself or his family than any of the other hunters. They divide it equally. Lee asked his informants why they criticized the hunters who made big kills.

“When a young man kills much meat, he comes to think of himself as a chief or big man, and he thinks of the rest of us as his servants or inferiors. We can’t accept this.”

So what determines who gets to be the Alpha, if not hunting prowess? According to Christopher Boehm, the answer is simple. No one gets to be the Alpha. “A distinctly egalitarian political style is highly predictable wherever people live in small, locally autonomous social and economic groups” (36). These are exactly the types of groups humans have lived in for the vast majority of their existence on Earth. This means that, uniquely among the great apes, humans evolved mechanisms to ensure egalitarianism alongside those for seeking and submitting to power.

Boehm’s Hierarchy in the Forest: The Evolution of Egalitarian Behavior is a miniature course in anthropology, as dominance and submission, as well as coalition building and defiance, are examined not merely in the ethnographic record, but in the ethological descriptions of our closest ape relatives. Building on Bruce Knauft’s observations of the difference between apes and hunter-gatherers, Boehm argues that “with respect to political hierarchy human evolution followed a U-shaped trajectory” (65). But human egalitarianism is not based on a simple absence of hierarchy; rather, Boehm theorizes that the primary political actors (who with a few notable exceptions tend to be men) decide on an individual basis that, while power may be desirable, the chances of any individual achieving it are small, and the time span during which he could sustain it would be limited. Therefore, they all submit to the collective will that no man should have authority over any other, and thus each maintains his own personal autonomy. Boehm explains:

“In despotic social dominance hierarchies the pyramid of power is pointed upward, with one or a few individuals (usually male) at the top exerting authority over a submissive rank and file. In egalitarian hierarchies the pyramid of power is turned upside down, with a politically united rank and file decisively dominating the alpha-male types” (66).

This isn’t to say that there aren’t individuals who by dint of their prowess and intelligence enjoy more influence over the band than others, but such individuals are thought of as “primus inter pares” (33), a first among equals. “Foragers,” Boehm writes, “are not intent on true and absolute equality, but on a kind of mutual respect that leaves individual autonomy intact” (68). It’s as though the life of the nomadic hunter and forager is especially conducive to thinking in terms of John Rawls’s “veil of ignorance.”

The mechanisms whereby egalitarianism is enforced will be familiar to anyone who’s gone to grade school or who works with a group of adult peers. Arrogant and bullying individuals are the butt of jokes, gossip, and ostracism. For a hunter-gatherer these can be deadly; reputations are of paramount importance. If all else fails and a despot manages to secure some level of authority, instigating a “dominance episode,” his reign will be short-lived. Even the biggest and strongest men are vulnerable to sizable coalitions of upstarts, especially in a species that excels at making weapons for felling big game.

Boehm addresses several further questions, like what conditions bring about the reinstitution of pyramidal hierarchies, and how consensus decision-making and social pressure against domineering have affected human evolution. But what I find most interesting are his thoughts about the role of narrative in the promulgation and maintenance of the egalitarian ethos:

“As practical political philosophers, foragers perceive quite correctly that self-aggrandizement and individual authority are threats to personal autonomy. When upstarts try to make inroads against an egalitarian social order, they will be quickly recognized and, in many cases, quickly curbed on a preemptive basis. One reason for this sensitivity is that the oral tradition of a band (which includes knowledge from adjacent bands) will preserve stories about serious domination episodes. There is little doubt that many of the ethnographic reports of executions in my survey were based on such traditions, as opposed to direct ethnographic observation” (87).