READING SUBTLY

This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you're looking for.
 

Dennis Junk

Just Another Piece of Sleaze: The Real Lesson of Robert Borofsky's "Fierce Controversy"

Robert Borofsky and his cadre of postmodernist activists try desperately to resuscitate the case against scientific anthropologist Napoleon Chagnon after disgraced pseudo-journalist Patrick Tierney’s book “Darkness in El Dorado” is exposed as a work of fraud. The product is something only an ideologue can appreciate.

Robert Borofsky’s Yanomami: The Fierce Controversy and What We Can Learn from It is the source book participants on a particular side of the debate over Patrick Tierney’s Darkness in El Dorado would like everyone to read, even more than Tierney’s book itself. To anyone on the opposing side, however—and, one should hope, to those who have yet to take a side—there’s an unmissable element of farce running throughout Borofsky’s book, which ultimately amounts to little more than a transparent attempt at salvaging the campaign against anthropologist Napoleon Chagnon. That campaign had initially received quite a boost from the publication of Darkness in El Dorado, but then support began to crumble as various researchers went about exposing Tierney as a fraud. With The Fierce Controversy, Borofsky and some of the key members of the anti-Chagnon campaign are doing their best to dissociate themselves and their agenda from Tierney, while at the same time taking advantage of the publicity he brought to their favorite talking points.

The book is billed as an evenhanded back-and-forth between anthropologists on both sides of the debate. But, despite Borofsky’s pretensions to impartiality, The Fierce Controversy is about as fair and balanced as Fox News’s political coverage—there’s even a chapter titled “You Decide.” By giving the second half of the book over to an exchange of essays and responses by what he refers to as “partisans” for both sides, Borofsky makes himself out to be a disinterested mediator, and he wants us to see the book as an authoritative representation of some quasi-democratic collection of voices—think Occupy Wall Street’s human microphones, with all the repetition, incoherence, and implicit signaling of a lack of seriousness. “Objectivity does not lie in the assertions of authorities,” Borofsky insists in italics. “It lies in the open, public analysis of divergent perspectives” (18). In the first half of the book, however, Borofsky gives himself the opportunity to convey his own impressions of the controversy under the guise of providing necessary background. Unfortunately, he’s not nearly as subtle in pushing his ideology as he’d like to be.

Borofsky claims early on that his “book seeks, in empowering readers, to develop a new political constituency for transforming the discipline.” But is Borofsky empowering readers, or is he trying to foment a revolution? The only way the two goals could be aligned would be if readers already felt the need for the type of change Borofsky hopes to instigate. What does that change entail? He writes,

It is understandable that many anthropologists have had trouble addressing the controversy’s central issues because they are invested in the present system. These anthropologists worked their way through the discipline’s existing structures as they progressed from being graduate students to employed professionals. While they may acknowledge the limitations of the discipline, these structures represent the world they know, the world they feel comfortable with. One would not expect most of them to lead the charge for change. But introductory and advanced students are less invested in this system. If anything, they have a stake in changing it so as to create new spaces for themselves. (21)

In other words, Borofsky simultaneously wants his book to be open-ended—the outcome of the debate in the second half reflecting the merits of each side’s case, with the ultimate position taken by readers left to their own powers of critical thought—while at the same time inspiring those same readers to work for the goals he himself believes are important. He utterly neglects the possibility that anthropology students won’t share his markedly Marxist views. From this goal statement, you may expect the book to focus on the distribution of power and the channels for promotion in anthropology departments, but that’s not at all what Borofsky and his coauthors end up discussing. Even more problematically, though, Borofsky is taking for granted here the seriousness of “the controversy’s central issues,” the same issues whose validity is the very thing that’s supposed to be under debate in the second half of the book.  

The most serious charges in Tierney’s book were shown to be false almost as soon as it was published, and Tierney himself was thoroughly discredited when it was discovered that many of his copious citations bore little or no relation to the claims they were supposed to support. A taskforce commissioned by the American Society of Human Genetics, for instance, found that Tierney spliced together parts of different recorded conversations to mislead his readers about the actions and intentions of James V. Neel, a geneticist he accuses of unethical conduct. Reasonably enough, many supporters of Chagnon, whom Tierney likewise accuses of grave ethical breaches, found such deliberately misleading tactics sufficient cause to dismiss any other claims by the author. But Borofsky treats this argument as an effort on the part of anthropologists to dodge inconvenient questions:

Instead of confronting the breadth of issues raised by Tierney and the media, many anthropologists focused on Tierney’s accusations regarding Neel… As previously noted, focusing on Neel had a particular advantage for those who wanted to continue sidestepping the role of anthropologists in all this. Neel was a geneticist, and soon after the book’s publication most experts realized that the accusation that Neel helped facilitate the spread of measles was false. Focusing on Neel allowed anthropologists to downplay the role of the discipline in the whole affair. (46)

When Borofsky accuses some commenters of “sidestepping the role of anthropologists in all this,” we’re left wondering, all what? The Fierce Controversy is supposed to be about assessing the charges Tierney made in his book, but again the book’s editor and main contributor is assuming that where there’s smoke there’s fire. It’s also important to note that the nature of the charges against Chagnon makes them much more difficult to prove or disprove. A call to a couple of epidemiologists and vaccination experts established that what Tierney accused Neel of was simply impossible. It’s hardly sidestepping the issue to ask why anyone would trust Tierney’s reporting on more complicated matters.

Anyone familiar with the debates over postmodernism taking place among anthropologists over the past three decades will see at a glance that The Fierce Controversy is disingenuous in its very conception. Borofsky and the other postmodernist contributors desperately want to have a conversation about how Napoleon Chagnon’s approach to fieldwork, and even his conception of anthropology as a discipline, are no longer aligned with how most anthropologists conceive of and go about their work. Borofsky is explicit about this, writing in one of the chapters that’s supposed to merely provide background for readers new to the debate,

Chagnon writes against the grain of accepted ethical practice in the discipline. What he describes in detail to millions of readers are just the sorts of practices anthropologists claim they do not practice. (39)   

This comes in a section titled “A Painful Contradiction,” which consists of Borofsky straightforwardly arguing that Chagnon, whose first book on the Yanomamö is perhaps the most widely read ethnography in history, disregarded the principles of the American Anthropological Association by actively harming the people he studied and by violating their privacy (though most of Chagnon’s time in the field predated the AAA’s statements of the principles in question). In Borofsky’s opinion, these ethical breaches are attested to in Chagnon’s own works and hence beyond dispute. In reality, though, whether Chagnon’s techniques amount to ethical violations (by the standards of any era) is very much in dispute, as we see clearly in the second half of the book.

(Yanomamö was Chagnon’s original spelling, but his detractors can’t bring themselves to spell it the same way—hence Yanomami.)

Borofsky is of course free to write about his issues with Chagnon’s methods, but inserting his own argument into a book he’s promoting as an open and fair exchange between experts on both sides of the debate, especially when he’s responding to the others’ contributions after the fact, is a dubious sort of bait and switch. The second half of the book is already lopsided, with Bruce Albert, Leda Martins, and Terence Turner attacking Neel’s and Chagnon’s reputations, while Raymond Hames and Kim Hill argue for the defense. (The sixth contributor, John Peters, doesn’t come down clearly on either side.) When you factor in Borofsky’s own arguments, you’ve got four against two—and if you go by page count the imbalance is quite a bit worse; indeed, the inclusion of the two Chagnon defenders in the forum starts to look more like a ploy to gain a modicum of credibility for what’s best characterized as just another anti-Chagnon screed by a few of his most outspoken detractors.

Notably absent from the list of contributors is Chagnon himself, who probably reasoned that lending his name to the title page would give the book an undeserved air of legitimacy. Given the unmasked contempt that Albert, Martins, and Turner evince toward him in their essays, Chagnon was wise not to go anywhere near the project. It’s also far from irrelevant—though it goes unmentioned by Borofsky—that Martins and Tierney were friends at the time he was writing his book; on his acknowledgements page, Tierney writes,

I am especially indebted to Leda Martins, who is finishing her Ph.D. at Cornell University, for her support throughout this long project and for her and her family’s hospitality in Boa Vista, Brazil. Leda’s dossier on Napoleon Chagnon was an important resource for my research. (XVII)

(Martins later denied, in an interview given to ethicist and science historian Alice Dreger, that she was the source of the dossier Tierney mentions.) Equally relevant is that one of the professors at Cornell where Martins was finishing her Ph.D. was none other than Terence Turner, whom Tierney also thanks in his acknowledgements. To be fair, Hames is a former student of Chagnon’s, and Hill also knows Chagnon well. But the earlier collaboration with Tierney of at least two contributors to Borofsky’s book is suspicious to say the least.   

Confronted with the book’s inquisitorial layout and tone, undecided readers, I believe, are going to wonder whether it’s fair to focus a whole book on the charges laid out in another book that’s been so thoroughly discredited. Borofsky does provide an answer of sorts to this objection: The Fierce Controversy is not about Tierney’s book; it’s about anthropology as a discipline. He writes that

beyond the accusations surrounding Neel, Chagnon, and Tierney, there are critical—indeed, from my perspective, far more critical—issues that need to be addressed in the controversy: those involving relations with informants as well as professional integrity and competence. Given how central these issues are to anthropology, readers can understand, perhaps, why many in the discipline have sought to sidestep the controversy. (17)

With that rhetorical flourish, Borofsky makes any concern about Tierney’s credibility, along with any concern for treating the accused fairly, seem like an unwillingness to answer difficult questions. But, in reality, the stated goals of the book raise yet another important ethical question: is it right for a group of scholars to savage their colleagues’ reputations in furtherance of their reform agenda for the discipline? How do they justify their complete disregard for the principle of presumed innocence?   

What’s going on here is that Borofsky and his fellow postmodernists really needed The Fierce Controversy to be about the dramatis personae featured in Tierney’s book, because Tierney’s book is what got the whole discipline’s attention, along with the attention of countless people outside of anthropology. The postmodernists, in other words, are riding the scandal’s coattails. For years, Turner had been making many of the allegations that later ended up in Tierney’s book, but he couldn’t get anyone to take him seriously. Now that headlines about anthropologists colluding in eugenic experiments were showing up in newspapers around the world, Turner and the other members of the anti-Chagnon campaign finally got their chance to be heard. Naturally enough, even after Tierney’s book was exposed as mostly a work of fiction, they still really wanted to discuss how terribly Chagnon and other anthropologists of his ilk behaved in the field so they could take control of the larger debate over what anthropology is and what anthropological fieldwork should consist of. This is why even as Borofsky insists the debate isn’t about the people at the center of the controversy, he has no qualms about arranging his book as a trial:

We can address this problem within the discipline by applying the model of a jury trial. In such a trial, jury members—like many readers—do not know all the ins and outs of a case. But by listening to people who do know these details argue back and forth, they are able to form a reasonable judgment regarding the case. (73)

But, if the book isn’t about Neel, Chagnon, and Tierney, then who exactly is being tried? Borofsky is essentially saying: we’re going to try these men in absentia (Neel died before Darkness in El Dorado was published) with no regard whatsoever for the effect repeating the likely bogus charges against them ad nauseam will have on their reputations, because it’s politically convenient for us to do so, since we hope it will help us achieve our agenda of discipline-wide reform, for which there’s currently either too little interest or too much resistance.

As misbegotten, duplicitous, and morally dubious as its goals and premises are, there’s a still more fatal shortcoming to The Fierce Controversy, and that’s the stance its editor, moderator, and chief contributor takes toward the role of evidence. Here again, it’s important to bear in mind the context out of which the scandal surrounding Darkness in El Dorado erupted. The reason so many of Chagnon’s colleagues responded somewhat gleefully to the lurid and appalling charges leveled against him by Tierney is that Chagnon stands as a prominent figure in the debate over whether anthropology should rightly be conceived of and conducted as a science. The rival view is that science is an arbitrary label used to give the appearance of authority. As Borofsky argues,

the issue is not whether a particular anthropologist’s work is scientific. It is whether that anthropologist’s work is credible. Calling particular research scientific in anthropology is often an attempt to establish credibility by name-dropping. (96)

What he’s referring to here as name-dropping the scientific anthropologists would probably describe as attempts at tying their observations to existing theories, as when Chagnon interprets aspects of Yanomamö culture in light of inclusive fitness theory, with reference to works by evolutionary biologists like W.D. Hamilton and G.C. Williams. But Borofsky’s characterization of how an anthropologist might collect and present data is even more cynical than his attitude toward citations of other scientists’ work. He writes of Chagnon’s descriptions of his field methods,  

To make sure readers understand that he was seriously at work during this time—because he could conceivably have spent much of his time lounging around taking in the sights—he reinforces his expertise with personal anecdotes, statistics, and photos. In Studying the Yanomamö, Chagnon presents interviews, detailed genealogies, computer printouts, photographs, and tables. All these data convey an important message: Chagnon knows what he’s talking about. (57-8)

Borofsky is either confused about or skeptical of the role evidence plays in science—or, more likely, a little of both. Anthropologists in the field could relay any number of vague impressions in their writings, as most of them do. Or those same anthropologists could measure and record details uncovered through systematic investigation. Analyzing the data collected in all those tables and graphs of demographic information could lead to the discovery of facts, trends, and correlations no amount of casual observation would reveal. Borofsky himself drops the names of some postmodern theorists in support of his cynical stance toward science—but it’s hard not to wonder whether his dismissal of even the possibility of data leading to new discoveries has less to do with principle than with his simply not liking the discoveries Chagnon actually made.

            One of the central tenets of postmodernism is that any cultural artifact, including any scientific text, is less a reflection of facts about the real world than a product of, and an attempt to perpetuate, power disparities in the political environment which produces it. From the postmodern perspective, in other words, science is nothing but disguised political rhetoric—and its message is always reactionary. This is why Borofsky is so eager to open the debate to more voices; he believes scientific credentials are really just markers of hegemonic authority, and he further believes that creating a more just society would demand a commitment that no one be excluded from the debate for a lack of expertise.

As immediately apparent as the problems with this perspective are, the really scary thing is that The Fierce Controversy applies this conception of evidence not only to Chagnon’s anthropological field work, but to his and Neel’s culpability as well. And this is where it’s easiest to see how disastrous postmodern ideas would be if they were used as legal or governing principles. Borofsky writes,

in the jury trial model followed in part 2, it is not necessary to recognize (or remember) each and every citation, each and every detail, but rather to note how participants reply to one another’s criticisms [sic]. The six participants, as noted, must respond to critiques of their positions. Readers may not be able to assess—simply by reading certain statements—which assertions are closer to what we might term “the truth.” But readers can evaluate how well a particular participant responds to another’s criticisms as a way of assessing the credibility of that person’s argument. (110)

These instructions betray a frightening obliviousness of the dangers of moral panics and witch hunts. It’s all well and good to put the truth in scare quotes—until you stand falsely accused of some horrible offense and the exculpatory evidence is deemed inadmissible. Imagine if our legal system were set up this way; if you wanted to have someone convicted of a crime, all you’d have to do is stage a successful campaign against this person. Imagine if other prominent social issues were handled this way: climate change, early childhood vaccination, genetically modified foods.

            By essentially coaching readers to attend only to the contributors’ rhetoric and not to worry about the evidence they cite, Borofsky could reasonably be understood as conceding that the evidence simply doesn’t support the case he’s trying to make with the book. But the members of the anti-Chagnon camp seem to believe that the “issues” they want to discuss are completely separable from the question of whether the accusations against Chagnon are true. Kim Hill does a good job of highlighting just how insane this position is, writing,

Turner further observes that some people seem to feel that “if the critical allegations against Neel and Chagnon can be refuted on scientific grounds, then the ethical questions raised…about the effects of their actions on the Yanomami can be made to go away.” In fact, those of us who have criticized Tierney have refuted his allegations on factual and scientific grounds, and those allegations refuted are specifically about the actions of the two accused and their effects. There are no ethical issues to “dismiss” when the actions presented never took place and the effects on the Yanomamö were never experienced as described. Thus, the facts of the book are indeed central to some ethical discussions, and factual findings can indeed “obviate ethical issues” by rendering the discussions moot. But the discussion of facts reported by Tierney have been placed outside this forum of debate (we are to consider only ethical issues raised by the book, not evaluate each factual claim in the book). (180)

One wonders whether Hill knew that evaluations of factual claims would be out of bounds when he agreed to participate in the exchange. Turner, it should be noted, violates this proscription in the final round of the exchange when he takes advantage of his essay’s privileged place as the last contribution by listing the accusations in Tierney’s book he feels are independently supported. Reading this final essay, it’s hard not to think the debate is ending just where it ought to have begun.

Hill’s and Hames’s contributions in each round are sandwiched between those of the three anti-Chagnon campaigners, but whatever value the book has as anything other than an illustration of how paranoid and bizarre postmodern rhetoric can be is to be found in their essays. These sections are like little pockets of sanity in a maelstrom of deranged moralizing. In scoring the back-and-forth, most readers will inevitably favor the side most closely aligned with their own convictions, but two moments really stand out as particularly embarrassing for the prosecution. One of them has Hames catching Martins doing some pretty egregious cherry-picking to give a misleading impression. He explains,

Martins in her second-round contribution cites a specific example of a highly visible and allegedly unflattering image of the Yanomamö created by Chagnon. In the much-discussed Veja interview (entitled “Indians Are Also People”), she notes that “When asked in Veja to define the ‘real Indians,’ Chagnon said, ‘The real Indians get dirty, smell bad, use drugs, belch after they eat, covet and sometimes steal each other’s women, fornicate and make war.’” This quote is accurate. However, in the next sentence after that quote she cites, Chagnon states: “They are normal human beings. And that is sufficient reason for them to merit care and attention.” This tactic of partial quotation mirrors a technique used by Tierney. The context of the statement and most of the interview was Chagnon’s observation that some NGOs and missionaries characterized the Yanomamö as “angelic beings without faults.” His goal was to simply state that the Yanomamö and other native peoples are human beings and deserve our support and sympathy. He was concerned that false portrayals could harm native peoples when later they were discovered to be just like us. (236)

Such deliberate misrepresentations raise the question of whether postmodern thinking justifies, and even encourages, playing fast and loose with the truth—since all writing is just political rhetoric without any basis in reality anyway. What’s clear either way is that an ideology that scants the importance of evidence simply can’t support a moral framework that recognizes individual human rights, because it makes every individual vulnerable to being falsely maligned for the sake of some political cause.   

            The other supremely embarrassing moment for the anti-Chagnon crowd comes in an exchange between Hill and Turner. Hill insists in his first essay that Tierney’s book and the ensuing controversy were borne of ideological opposition to sociobiology, the theoretical framework Chagnon uses to interpret his data on the Yanomamö. On first encountering phrases like “ideological terrorism” (127) and “holy war of ideology” (135), you can’t help thinking that Hill has succumbed to hyperbole, but Turner’s response lends a great deal of credence to Hill’s characterization. Turner’s defense is the logical equivalent of a dangerously underweight teenager saying, “I’m not anorexic—I just need to lose about fifteen pounds.” He first claims his campaign against Chagnon has nothing to do with sociobiology, but then he tries to explain sociobiology as an outgrowth of eugenics, even going so far as to suggest that the theoretical framework somehow inspires adherents to undermine indigenous activists. Even Chagnon’s characterization of the Yanomamö as warlike, which the activists trying to paint a less unsavory picture of them take such issue with, is, according to Turner, more a requirement of sociobiological thinking than an observed reality. He writes,

“Fierceness” and the high level of violent conflict with which it is putatively associated are for Chagnon and like-minded sociobiologists the primary indexes of the evolutionary priority of the Yanomami as an earlier, and supposedly therefore more violent, phase of the development of human society. Most of the critics of Chagnon’s fixation on “fierceness” have had little idea of this integral connection of “fierceness” as a Yanomami trait and the deep structure of sociobiological-selectionist theory. (202)

Turner isn’t by any stretch making a good faith effort to explain the theory and its origins according to how it’s explicitly discussed in the relevant literature. He’s reading between the lines in precisely the way prescribed by his postmodernism, treating the theory as a covert effort at justifying the lower status of indigenous peoples. But his analysis is so far off-base that it not only casts doubt on his credibility on the topic of sociobiology; it calls into question his credibility as a scholarly researcher in general. As Hames points out,

Anyone who has basic knowledge of the origins of sociobiology in anthropology will quickly realize that Turner’s attempt to show a connection between Neel’s allegedly eugenic ideas and Chagnon’s analysis of the Yanomamö to be far-fetched. (238)

            Turner’s method of uncovering secret threads supposedly connecting scientific theories to abhorrent political philosophies is closer to the practices of internet conspiracy theorists than to those of academic researchers. He constructs a scary story with some prominent villains, and then he retrofits the facts to support it. The only problem is that anyone familiar with the theories and the people in the story he tells will recognize it as pure fantasy. As Hames attests,

I don’t know of any “sociobiologists” who regard the Yanomamö as any more or less representative of an “earlier, and supposedly therefore more violent, phase of the development of human society” than any other relatively isolated indigenous society. Some sociobiologists are interested in indigenous populations because they live under social and technological conditions that more closely resemble humanity for most of its history as a species than conditions found in urban population centers. (238)

And Hill, after pointing out how Turner rejects the claim that his campaign against Chagnon is motivated by his paranoid opposition to sociobiology only to turn around and try to explain why attacking the reputations of sociobiologists is justified, takes on the charge that sociobiology somehow prohibits working with indigenous activists, writing,

Indeed he concludes by suggesting that sociobiological theory leads its adherents to reject legitimate modern indigenous leaders. This suggestion is malicious slander that has no basis in reality (where most sociobiologists not only accept modern indigenous leaders but work together with them to help solve modern indigenous problems). (250)

These are people Hill happens to work with and know personally. Unfortunately, Turner himself has yet to be put on trial for these arrant misrepresentations the way he and Borofsky put Chagnon on trial for the charges they’ve so clearly played a role in trumping up.

In explaining why a book like The Fierce Controversy is necessary, Borofsky repeatedly accuses the American Anthropological Association of using a few examples of sloppy reporting on Tierney’s part as an excuse to “sidestep” the ethical issues raised by Darkness in El Dorado. As we’ve seen, however, Tierney’s misrepresentations are far too extensive, and far too conveniently selective, to have resulted from anything but an intentional effort to deceive readers. In Borofsky’s telling, the issues Tierney raises were so important that pressure from several AAA members, along with hundreds of students who commented on the organization’s website, forced the leadership to commission the El Dorado Task Force to investigate. It turns out, though, that on this critical element of the story too Borofsky is completely mistaken. The Task Force wasn’t responding to pressure from inside its own ranks; its members were instead concerned about the reputation of American anthropologists, whose ability to do future work in Latin America was threatened by the scandal. In a 2002 email uncovered by Alice Dreger, the Chair of the Task Force, former AAA President Jane Hill, wrote of Darkness in El Dorado,

Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice. Whether we’re doing the right thing will have to be judged by posterity.

Far from the overdue examination of anthropological ethics he wants his book to be seen as, all Borofsky has offered us with The Fierce Controversy is another piece of sleaze, a sequel of sorts meant to rescue the original from its fatal, and highly unethical, distortions and wholesale fabrications. What Borofsky’s book is more than anything else, though, is a portrait of postmodernism’s powers of moral perversion. As such, and only as such, it is of some historical value.

            In a debate over teaching intelligent design in public schools, Richard Dawkins once called attention to what should have been an obvious truth. “When two opposite points of view are expressed with equal intensity,” he said, “the truth does not necessarily lie exactly halfway between them. It is possible for one side to be simply wrong.” This line came to mind again and again as I read The Fierce Controversy. If we take presumption of innocence at all seriously, we can’t avoid concluding that the case brought by the anti-Chagnon crowd is simply wrong. The entire scandal began with a campaign of character assassination, which then blew up into a media frenzy, which subsequently induced a moral panic. It seems even some of Chagnon’s old enemies were taken aback by the mushrooming scale of the allegations. And yet many of the participants whose unscrupulous or outright dishonest scholarship and reporting originally caused the hysteria saw fit years later to continue stoking the controversy. Since they don’t appear to feel any shame, all we can do is agree that they’ve forfeited any right to be heard on the topic of Napoleon Chagnon and the Yanomamö. 

            Still, the inquisitorial zealotry of the anti-Chagnon contributors notwithstanding, the most repugnant thing about Borofsky’s book is how the proclamations of concern first and foremost for the Yanomamö begin to seem pro forma through repetition, as each side tries to paint itself as more focused on the well-being of indigenous peoples than the other. You know a book that’s supposed to address ethical issues has gone terribly awry when references to an endangered people start to seem like mere rhetorical maneuvers. 

Other popular posts like this:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

SCIENCE’S DIFFERENCE PROBLEM: NICHOLAS WADE’S TROUBLESOME INHERITANCE AND THE MISSING MORAL FRAMEWORK FOR DISCUSSING THE BIOLOGY OF BEHAVIOR

You can also watch "Secrets of the Tribe," José Padilha's documentary about the controversy, online.

Dennis Junk

The Soul of the Skeptic: What Precisely Is Sam Harris Waking Up from?

In my first foray into Sam Harris’s work, I struggle with some of the concepts he holds up as keys to a more contented, more spiritual life. Along the way, though, I find myself captivated by the details of Harris’s own spiritual journey, and I’m left wondering if there just may be more to this meditation stuff than I’m able to initially wrap my mind around.

Sam Harris believes that we can derive many of the benefits people cite as reasons for subscribing to one religion or another from non-religious practices and modes of thinking, ones that don’t invoke bizarre scriptures replete with supernatural absurdities. In The Moral Landscape, for instance, he attempted to show that we don’t need a divine arbiter to settle our ethical and political disputes because reason alone should suffice. Now, with Waking Up, Harris is taking on an issue that many defenders of Christianity, or religion more generally, have long insisted he is completely oblivious to. By focusing on the truth or verifiability of religious propositions, Harris’s critics charge, he misses the more important point: religion isn’t fundamentally about the beliefs themselves so much as the effects those beliefs have on a community, including the psychological impact on individuals of collective enactments of the associated rituals—feelings of connectedness, higher purpose, and loving concern for all one’s neighbors.

Harris likes to point out that his scholarly critics simply have a hard time appreciating just how fundamentalist most religious believers really are, and so they turn a blind eye toward the myriad atrocities religion sanctions, or even calls for explicitly. There’s a view currently fashionable among the more politically correct scientists and academics that makes criticizing religious beliefs seem peevish, even misanthropic, because religion is merely something people do, like reading stories or playing games, to imbue their lives with texture and meaning, or to heighten their sense of belonging to a community. According to this view, the particular religion in question—Islam, Buddhism, Hinduism, Jainism, Christianity—isn’t as important as the people who subscribe to it, nor do any specific tenets of a given faith have any consequence. That’s why Harris so frequently comes under fire—and is even accused of bigotry—for suggesting, for instance, that the passages in the Koran calling for violence actually matter and that Islam is much more likely to inspire violence because of them.

We can forgive Harris his impatience with this line of reasoning, which leads his critics to insist that violence is in every case politically and never religiously motivated. This argument can only be stated with varying levels of rancor, never empirically supported, and is hence dismissible as a mere article of faith in its own right, one that can’t survive any encounter with the reality of religious violence. Harris knows how important a role politics plays and that it’s often only the fundamentalist subset of the population of believers who are dangerous. But, as he points out, “Fundamentalism is only a problem when the fundamentals are a problem” (2:30:09). It’s only by the lights of postmodern identity politics that an observation this banal could strike so many as so outrageous.

            But what will undoubtedly come as a disappointment to Harris’s more ardently anti-religious readers, and as a surprise to fault-seeking religious apologists, is that from the premise that not all religions are equally destructive and equally absurd follows the conclusion that some religious ideas or practices may actually be beneficial or point the way toward valid truths. Harris has discussed his experiences with spiritual retreats and various forms of  meditation in past works, but now with Waking Up he goes so far as to advocate certain of the ancient contemplative practices he’s experimented with. Has he abandoned his scientific skepticism? Not by any means; near the end of the book, he writes, “As a general matter, I believe we should be very slow to draw conclusions about the nature of the cosmos on the basis of inner experiences—no matter how profound they seem” (192). What he’s doing here, and with the book as a whole, is underscoring the distinction between religious belief on the one hand and religious experience on the other.

Acknowledging that some practices which are nominally religious can be of real value, Harris goes on to argue that we need not accept absurd religious doctrines to fully appreciate them. And this is where the subtitle of his book, A Guide to Spirituality without Religion, comes from. As paradoxical as this concept may seem to people of faith, Harris cites a survey finding that 20% of Americans describe themselves as “spiritual but not religious” (6). And he argues that separating the two terms isn’t just acceptable; it’s logically necessary.

Spirituality must be distinguished from religion—because people of every faith, and of none, have the same sorts of spiritual experiences. While these states of mind are usually interpreted through the lens of one or another religious doctrine, we know this is a mistake. Nothing that a Christian, a Muslim, and a Hindu can experience—self-transcending love, ecstasy, bliss, inner light—constitutes evidence in support of their traditional beliefs, because their beliefs are logically incompatible with one another. A deeper principle must be at work. (9)

People of faith frequently respond to the criticism that their beliefs fly in the face of logic and evidence by claiming they simply know God is real because they have experiences that can only be attributed to a divine presence. Any failure on the part of skeptics to acknowledge the lived reality of such experiences makes their arguments all the more easily dismissible as overly literal or pedantic, and it makes the skeptics themselves come across as closed-minded and out-of-touch.

On the other hand, Harris’s suggestion of a “deeper principle” underlying religious experiences smacks of New Age thinking at its most wooly. For one thing, church authorities have often condemned, excommunicated, or even executed congregants with mystical leanings for their heresy. (Harris cites a few examples.) But the deeper principle Harris is referring to isn’t an otherworldly one. And he’s perfectly aware of the unfortunate connotations the words he uses often carry:

I share the concern, expressed by many atheists, that the terms spiritual and mystical are often used to make claims not merely about the quality of certain experiences but about reality at large. Far too often, these words are invoked in support of religious beliefs that are morally and intellectually grotesque. Consequently, many of my fellow atheists consider all talk of spirituality to be a sign of mental illness, conscious imposture, or self-deception. This is a problem, because millions of people have had experiences for which spiritual and mystical seem the only terms available. (11)

You can’t expect people to be convinced their religious beliefs are invalid when your case rests on a denial of something as perfectly real to them as their own experiences. And it’s difficult to make the case that these experiences must be separated from the religious claims they’re usually tied to while refusing to apply the most familiar labels to them, because that comes awfully close to denying their legitimacy.

*****

            Throughout Waking Up, Harris focuses on one spiritual practice in particular, a variety of meditation that seeks to separate consciousness from any sense of self, and he argues that the insights one can glean from experiencing this rift are both personally beneficial and neuroscientifically sound. Certain Hindu and Buddhist traditions hold that the self is an illusion, a trick of the mind, and our modern scientific understanding of the mind, Harris argues, corroborates this view. By default, most of us think of the connection between our minds and our bodies dualistically; we believe we have a spirit, a soul, or some other immaterial essence that occupies and commands our physical bodies. Even those of us who profess not to believe in any such thing as a soul have a hard time avoiding a conception of the self as a unified center of consciousness, a homunculus sitting at the controls. Accordingly, we attach ourselves to our own thoughts and perceptions—we identify with them. Since it seems we’re programmed to agonize over past mistakes and worry about impending catastrophes, we can’t help feeling the full brunt of a constant barrage of negative thoughts. Most of us recognize the sentiment Harris expresses in writing that “It seems to me that I spend much of my life in a neurotic trance” (11). And this is precisely the trance we need to wake up from.

To end the spiraling chain reaction of negative thoughts and foul feelings, we must detach ourselves from our thinking, and to do this, Harris suggests, we must recognize that there is no us doing the thinking. The “I” in the conventional phrasing “I think” or “I feel” is nowhere to be found. Is it in our brains? Which part? Harris describes the work of the Nobel laureate neuroscientist Roger Sperry, who in the 1950s and ’60s did a series of fascinating experiments with split-brain patients, so called because the corpus callosum, the bundle of fibers connecting the two hemispheres of their brains, had been surgically severed to reduce the severity of epileptic seizures. Sperry found that he could present instructions to the patients’ left visual fields—which would only be perceived by the right hemisphere—and induce responses that the patients themselves couldn’t explain, because language resides predominantly in the left hemisphere. When asked to justify their behavior, the split-brain patients gave no indication of being at a loss, even though they had no idea why they were doing what they’d been instructed to do. Instead, they confabulated answers. For instance, if the right hemisphere is instructed to pick up an egg from among an assortment of objects on a table, the left hemisphere may explain the choice by saying something like, “Oh, I picked it because I had eggs for breakfast yesterday.”

As weird as this type of confabulation may seem, it has still weirder implications. At any given moment, it’s easy enough for us to form intentions and execute plans for behavior. But where do those intentions really come from? And how can we be sure our behaviors reflect the intentions we believe they reflect? We are only ever aware of a tiny fraction of our minds’ operations, so it would be all too easy for us to conclude we are the ones in charge of everything we do even though it’s really someone or something else behind the scenes pulling the strings. The reason split-brain patients so naturally confabulate about their motives is that the language centers of our brains probably do it all the time, even when our corpora callosa are intact. We are only ever dimly aware of our true motivations, and likely completely in the dark about them as often as not. Whenever we attempt to explain ourselves, we’re really just trying to make up a plausible story that incorporates all the given details, one that makes sense both to us and to anyone listening.

            If you’re still not convinced that the self is an illusion, try to come up with a valid justification for locating the self in either the left or the right hemisphere of split-brain patients. You may be tempted to attribute consciousness, and hence selfhood, to the hemisphere with the capacity for language. But you can see for yourself how easy it is to direct your attention away from words and fill your consciousness solely with images or wordless sounds. Some people actually rely on their right hemispheres for much of their linguistic processing, and after split-brain surgery these people can speak for the right hemisphere with things like cards that have written words on them. We’re forced to conclude that both sides of the split brain are conscious. And, since the corpus callosum channels a limited amount of information back and forth in the brain, we probably all have at least two independent centers of consciousness in our minds, even those of us whose hemispheres communicate.

What this means is that just because your actions and intentions seem to align, you still can’t be sure there isn’t another conscious mind housed in your brain who is also assured its own actions and intentions are aligned. There have even been cases where the two sides of a split-brain patient’s mind have expressed conflicting beliefs and desires. For some, phenomena like these sound the death knell for any dualistic religious belief. Harris writes,

Consider what this says about the dogma—widely held under Christianity and Islam—that a person’s salvation depends upon her believing the right doctrine about God. If a split-brain patient’s left hemisphere accepts the divinity of Jesus, but the right doesn’t, are we to imagine that she now harbors two immortal souls, one destined for the company of angels and the other for an eternity of hellfire? (67-8)

Indeed, the soul, the immaterial inhabitant of the body, can be divided more than once. Harris makes this point using a thought experiment originally devised by philosopher Derek Parfit. Imagine you are teleported Star Trek-style to Mars. The teleporter creates a replica of your body, including your brain and its contents, faithful all the way down to the orientation of the atoms. So everything goes black here on Earth, and then you wake up on Mars exactly as you left. But now imagine something went wrong on Earth and the original you wasn’t destroyed before the replica was created. In that case, there would be two of you left whole and alive. Which one is the real you? There’s no good basis for settling the question one way or the other.

            Harris uses the split-brain experiments and Parfit’s thought experiment to establish the main insight that lies at the core of the spiritual practices he goes on to describe: that the self, as we are programmed to think of and experience it, doesn’t really exist. Of course, this is only true in a limited sense. In many contexts, it’s still perfectly legitimate to speak of the self. As Harris explains,

The self that does not survive scrutiny is the subject of experience in each present moment—the feeling of being a thinker of thoughts inside one’s head, the sense of being an owner or inhabitant of a physical body, which this false self seems to appropriate as a kind of vehicle. Even if you don’t believe such a homunculus exists—perhaps because you believe, on the basis of science, that you are identical to your body and brain rather than a ghostly resident therein—you almost certainly feel like an internal self in almost every waking moment. And yet, however one looks for it, this self is nowhere to be found. It cannot be seen amid the particulars of experience, and it cannot be seen when experience itself is viewed as a totality. However, its absence can be found—and when it is, the feeling of being a self disappears. (92)

The implication is that even if you come to believe as a matter of fact that the self is an illusion you nevertheless continue to experience that illusion. It’s only under certain circumstances, or as a result of engaging in certain practices, that you’ll be able to experience consciousness in the absence of self.

****

Harris briefly discusses avenues apart from meditation that move us toward what he calls “self-transcendence”: we often lose ourselves in our work, or in a good book or movie; we may feel a diminishing of self before the immensities of nature and the universe, or as part of a drug-induced hallucination; we may be just one tiny part of a vast pulsing crowd of exuberant fans at a musical performance; it could happen during intense sex. Or we may, of course, experience some fading away of our individuality through participation in religious ceremonies. But Harris’s sights are set on one specific method for achieving self-transcendence. As he writes in his introduction,

This book is by turns a seeker’s memoir, an introduction to the brain, a manual of contemplative instruction, and a philosophical unraveling of what most people consider to be the center of their inner lives: the feeling of self we call “I.” I have not set out to describe all the traditional approaches to spirituality and to weigh their strengths and weaknesses. Rather, my goal is to pluck the diamond from the dunghill of esoteric religion. There is a diamond there, and I have devoted a fair amount of my life to contemplating it, but getting it in hand requires that we remain true to the deepest principles of scientific skepticism and make no obeisance to tradition. (10)

This is music to the ears of many skeptics who have long suspected that there may actually be something to meditative techniques but are overcome with fits of eye-rolling every time they try to investigate the topic. If someone with skeptical bona fides as impressive as Harris’s has taken the time to wade through all the nonsense to see if there are any worthwhile takeaways, then I imagine I’m far from alone in being eager to find out what he’s discovered.

            So how does one achieve a state of consciousness divorced from any sense of self? And how does this experience help us escape the neurotic trance most of us are locked in? Harris describes some of the basic principles of Advaita, a Hindu practice, and Dzogchen, a Tibetan Buddhist one. According to Advaita, one can achieve “cessation”—an end to thinking, and hence to the self—at any stage of practice. But Dzogchen practitioners insist it comes only after much intense practice. In one of several inset passages with direct instructions to readers, Harris invites us to experiment with the Dzogchen technique of imagining a moment in our lives when we felt positive emotions, like the last time we accomplished something we’re proud of. After concentrating on the thoughts and feelings for some time, we are then encouraged to think of a time when we felt something negative, like embarrassment or fear. The goal here is to be aware of the ideas and feelings as they come into being. “In the teachings of Dzogchen,” Harris writes, “it is often said that thoughts and emotions arise in consciousness the way that images appear on the surface of the mirror.” Most of the time, though, we are tricked into mistaking the mirror for what’s reflected in it.

In subjective terms, you are consciousness itself—you are not the next, evanescent image or string of words that appears in your mind. Not seeing it arise, however, the next thought will seem to become what you are. (139)

This is what Harris means when he speaks of separating your consciousness from your thoughts. And he believes it’s a state of mind you can achieve with sufficient practice calling forth and observing different thoughts and emotions, until eventually you experience—for moments at a time—a feeling of transcending the self, which entails a ceasing of thought, a type of formless and empty awareness that has us sensing a pleasant unburdening of the weight of our identities.

Harris also describes a more expeditious route to selflessness, one discovered by a British architect named Douglas Harding, who went on to be renowned among New Agers for his insight. His technique, which was first inspired by a drawing made by physicist Ernst Mach that was a literal rendition of his first-person viewpoint, including the side of his nose and the ridge of his eyebrow, consists simply of trying to imagine you have no head. Harris quotes at length from Harding’s description of what happened when he originally succeeded:

What actually happened was something absurdly simple and unspectacular: I stopped thinking. A peculiar quiet, an odd kind of alert limpness or numbness, came over me. Reason and imagination and all mental chatter died down. For once, words really failed me. Past and future dropped away. I forgot who and what I was, my name, manhood, animal-hood, all that could be called mine. It was as if I had been born that instant, brand new, mindless, innocent of all memories. There existed only the Now, the present moment and what was clearly given it. (143)

Harris recommends a slight twist to this approach—one that involves looking out at the world and simply trying to reverse your perspective to look for your head. One way to do this is to imagine you’re talking to another person and then “let your attention travel in the direction of the other person’s gaze” (145). It’s not about trying to picture what you look like to another person; it’s about recognizing that your face is absent from the encounter—because obviously you can’t see it. “But looking for yourself in this way can precipitate a sudden change in perspective, of the sort Harding describes” (146). It’s a sort of out-of-body experience.

If you pull off the feat of seeing through the illusion of self, either through disciplined practice at observing the contents of your own consciousness or through shortcuts like imagining you have no head, you will experience a pronounced transformation. Even if for only a few moments, you will have reached enlightenment. As a reward for your efforts, you will enjoy a temporary cessation of the omnipresent hum of anxiety-inducing thoughts that you hardly even notice drowning out so many of the other elements of your consciousness. “There arose no questions,” Harding writes of his experiments in headlessness, “no reference beyond the experience itself, but only peace and a quiet joy, and the sensation of having dropped an intolerable burden” (143). Skeptics reading these descriptions will have to overcome the temptation to joke about practitioners without a thought in their head.

            Christianity, Judaism, and Islam are all based on dualistic conceptions of the self, and the devout are enjoined to engage in ritual practices in service to God, an entirely separate being. The more non-dualistic philosophies of the East are much more amenable to attempts to reconcile them with science. Practices like meditation aren’t directed at any supernatural entity but are engaged in for their own sake, because they are somehow inherently rewarding. Unfortunately, this leads to a catch-22. Harris explains,

As we have seen, there are good reasons to believe that adopting a practice like meditation can lead to positive changes in one’s life. But the deepest goal of spirituality is freedom from the illusion of the self—and to seek such freedom, as though it were a future state to be attained through effort, is to reinforce the chains of one’s apparent bondage in each moment. (123)

This paradox seems at first like a good recommendation for the quicker routes to self-transcendence like Harding’s. But, according to Harris, “Harding confessed that many of his students recognized the state of ‘headlessness’ only to say, ‘So what?’” To Harris, the problem here is that the transformation was so easily achieved that its true value couldn’t be appreciated:

Unless a person has spent some time seeking self-transcendence dualistically, she is unlikely to recognize that the brief glimpse of selflessness is actually the answer to her search. Having then said, ‘So what?’ in the face of the highest teachings, there is nothing for her to do but persist in her confusion. (148)

We have to wonder, though, whether Harding’s underwhelmed students are really the ones who are confused. It’s entirely possible that Harris, who has devoted so much time and effort to his quest for enlightenment, is overvaluing the experience to assuage his own cognitive dissonance.

****

             The penultimate chapter of Waking Up gives Harris’s more skeptical fans plenty to sink their teeth into, including a thorough takedown of neurosurgeon Eben Alexander’s so-called Proof of Heaven and several cases of supposedly enlightened gurus taking advantage of their followers by, among other exploits, sleeping with their wives. But Harris claims his own experiences with gurus have been almost entirely positive, and he goes as far as recommending that anyone hoping to achieve self-transcendence seek out the services of one. 

            This is where I began to have issues with the larger project behind Harris’s book. If meditation were a set of skills like those required to play tennis, it would seem more reasonable to claim that the guidance of an expert coach is necessary to develop them. But what is a meditation guru supposed to do if he (I presume they’re mostly male) has no way to measure, or even see, your performance? Harris suggests they can answer questions that arise during practice, but apart from basic instructions like the ones Harris himself provides, it seems unlikely an expert could be of much help. If a guru has a useful technique, he shouldn’t need to be present in the room to share it. Harding passed his technique on to Harris through writing, for instance. And if self-transcendence is as dramatic a transformation as it’s made out to be, you shouldn’t have any trouble recognizing it when you experience it.

            Harris’s valuation of the teachings he’s received from his own gurus really can’t be sifted from his impression of how rewarding his overall efforts at exploring spirituality have been, nor can it be separated from his personal feelings toward those gurus. This is a problem that plagues much of the research on the effectiveness of various forms of psychotherapy; essentially, a patient’s report that the therapeutic treatment was successful means little else but that the patient had a positive relationship with the therapist administering it. Similarly, Harris’s sense of how worthwhile those moments of self-transcendence are may have more to do than he himself realizes with his personal retrospective assessment of how fulfilling his own journey to reach them has been. The view from Everest must be far more sublime to those who’ve made the climb than to those who were airlifted to the top.

            More troublingly, there’s an unmistakable resemblance between, on the one hand, Harris’s efforts to locate convergences between science and contemplative religious practices and, on the other, the tendency of New Age philosophers to draw specious comparisons between ancient Eastern doctrines and modern theories in physics. Zen koans are paradoxical and counterintuitive, this line of reasoning goes, and so are the results of the double-slit experiment in quantum mechanics—the Buddhists must have intuited something about the quantum world centuries ago. Dzogchen Buddhists have believed the self is an illusion and have been seeking a cessation of thinking for centuries, and modern neuroscience demonstrates that the self is something quite different from what most of us think it is. Therefore, the Buddhists must have long ago discovered something essential about the mind. In both of these examples, it seems like you have to do a lot of fudging to make the ancient doctrines line up with the modern scientific findings.

            It’s not nearly as evident as Harris makes out that what the Buddhists mean by the doctrine that the self is an illusion is the same thing neuroscientists mean when they point out that consciousness is divisible, or that we’re often unaware of our own motivations. (Douglas Hofstadter refers to the self as an epiphenomenon, which he does characterize as a type of illusion, but only because the overall experience bears so little resemblance to any of the individual processes that go into producing it.) I’ve never heard a cognitive scientist discuss the fallacy of identifying with your own thoughts or recommend that we try to stop thinking. Indeed, I don’t think most people really do identify with their thoughts. I for one don’t believe I am my thoughts; I definitely feel like I have my thoughts, or that I do my thinking. To point out that thoughts sometimes arise in my mind independent of my volition does nothing to undermine this belief. And Harris never explains exactly why seeing through the illusion of the self should bring about relief from all the anxiety produced by negative thinking. Cessation sounds a little like simply rendering yourself insensate.

The problem that brings about the neurotic trance so many of us find ourselves trapped in doesn’t seem to be that people fall for the trick of selfhood; it’s that they mistake their most neurotic thinking at any given moment for unquestionable and unchangeable reality. Clinical techniques like cognitive behavioral therapy involve challenging your own thinking, and there’s relief to be had in that—but it has nothing to do with disowning your thoughts or seeing your self as illusory. From this modern cognitive perspective, Dzogchen practices that have us focusing our attention on the effects of different lines of thinking are probably still hugely beneficial. But what’s that got to do with self-transcendence?

            For that matter, is the self really an illusion? Insofar as we think of it as a single object or as something that can be frozen in time and examined, it is indeed illusory. But calling the self an illusion is a bit like calling music an illusion. It’s impossible to point to music as existing in any specific location. You can separate a song into constituent elements that all on their own still constitute music. And of course you can create exact replicas of songs and play them on other planets. But it’s pretty silly to conclude from all these observations that music isn’t real. Rather, music, like the self, is a confluence of many diverse processes that can only be experienced in real time. In claiming that neuroscience corroborates the doctrine that the self is an illusion, Harris may be failing at the central task he set for himself by making too much obeisance to tradition. 

            What about all those reports from people like Harding who have had life-changing experiences while meditating or imagining they have no head? I can attest that I immediately recognized what Harding was describing in the sections Harris quotes. For me, it happened about twenty minutes into a walk I’d gone on through my neighborhood to help me come up with an idea for a short story. I tried to imagine myself as an unformed character at the outset of an as-yet-undeveloped plot. After only a few moments of this, I had a profound sense of stepping away from my own identity, and the attendant feeling of liberation from the disappointments and heartbreaks of my past, from the stresses of the present, and from my habitual forebodings about the future was both revelatory and exhilarating. Since reading Waking Up, I’ve tried both Harding’s and Harris’s approaches to reaching this state again quite a few times. But, though the results have been more impactful than the “So what?” response of Harding’s least impressed students, I haven’t experienced anything as seemingly life-altering as I did on that walk, forcing me to suspect it had as much to do with my state of mind prior to the experiment as it did with the technique itself.

            For me, the experience was more one of stepping away from my identity—or of seeing the details of that identity from a much broader perspective—than of seeing through some illusion of self. I became something like a stem cell version of myself, drastically more pluripotent, more free. It felt much more like disconnecting from my own biography than like disconnecting from the center of my consciousness. This may seem like a finicky distinction. But it goes to the core of Harris’s project—the notion that there’s a convergence between ancient meditative practices and our modern scientific understanding of consciousness. And it bears on just how much of that ancient philosophy we really need to get into if we want to have these kinds of spiritual experiences.

            Personally, I’m not at all convinced by Harris’s case on behalf of pared down Buddhist philosophy and the efficacy of guru guidance—though I probably will continue to experiment with the meditation techniques he lays out. Waking Up, it must be noted, is really less of a guide to spirituality without religion than it is a guide to one particular, particularly esoteric, spiritual practice. But, despite these quibbles, I give the book my highest recommendation, and that’s because its greatest failure is also its greatest success. Harris didn’t even come close to helping me stop thinking—or even persuading me that I should try—because I haven’t been able to stop thinking about his book ever since I started reading it. Perhaps what I appreciate most about Waking Up, though, is that it puts the lie to so many idiotic ideas people tend to have about skeptics and atheists. Just as recognizing that to do what’s right we must sometimes resist the urgings of our hearts in no way makes us heartless, neither does understanding that to be steadfast in pursuit of truth we must admit there’s no such thing as an immortal soul in any way make us soulless. And, while many associate skepticism with closed-mindedness, most of the skeptics I know of are true seekers, just like Harris. The crucial difference, which Harris calls “the sine qua non of the scientific attitude,” is “between demanding good reasons for what one believes and being satisfied with bad ones” (199).  

Also read: 

Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans De Waal

And: 

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education

And:

The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini

Dennis Junk

Science’s Difference Problem: Nicholas Wade’s Troublesome Inheritance and the Missing Moral Framework for Discussing the Biology of Behavior

Nicholas Wade went there. In his book “A Troublesome Inheritance,” he argues that not only is race a real, biological phenomenon, but one that has potentially important implications for our understanding of the fates of different peoples. Is it possible to even discuss such things without being justifiably labeled a racist? More importantly, if biological differences do show up in the research, how can we discuss them without being grossly immoral?

            No sooner had Nicholas Wade’s new book become available for free two-day shipping than a contest began to see who could pen the most devastating critical review of it, the one that best satisfies our desperate urge to dismiss Wade’s arguments and reinforce our faith in the futility of studying biological differences between human races, a faith backed up by a cherished official consensus ever so conveniently in line with our moral convictions. That Charles Murray, one of the authors of the evil tome The Bell Curve, wrote an early highly favorable review for the Wall Street Journal only upped the stakes for all would-be champions of liberal science. Even as the victor awaits crowning, many scholars are posting links to their favorite contender’s critiques all over social media to advertise their principled rejection of this book they either haven’t read yet or have no intention of ever reading.

You don’t have to go beyond the title, A Troublesome Inheritance: Genes, Race and Human History, to understand what all these conscientious intellectuals are so eager to distance themselves from—and so eager to condemn. History has undeniably treated some races much more poorly than others, so if their fates are in any small way influenced by genes the implication of inferiority is unavoidable. Regardless of what he actually says in the book, Wade’s very program strikes many as racist from its inception.

            The going theories for the dawn of the European Enlightenment and the rise of Western culture—and Western peoples—to global ascendancy attribute the phenomenon to a combination of geographic advantages and historical happenstance. Wade, along with many other scholars, finds such explanations unsatisfying. Geography can explain why some societies never reached sufficient population densities to make the transition into states. “Much harder to understand,” Wade writes, “is how Europe and Asia, lying on much the same lines of latitude, were driven in the different directions that led to the West’s dominance” (223). Wade’s theory incorporates elements of geography—like the relatively uniform expanse of undivided territory between the Yangtze and Yellow rivers that facilitated the establishment of autocratic rule, and the diversity of fragmented regions in Europe preventing such consolidation—but he goes on to suggest that these different environments would have led to the development of different types of institutions. Individuals more disposed toward behaviors favored by these institutions, Wade speculates, would be rewarded with greater wealth, which would in turn allow them to have more children with behavioral dispositions similar to their own.

            After hundreds of years and multiple generations, Wade argues, the populations of diverse regions would respond to these diverse institutions by evolving subtly different temperaments. In China, for instance, favorable—and hence selected-for—traits may have included intelligence, conformity, and obedience. These behavioral propensities would subsequently play a role in determining the future direction of the institutions that fostered their evolution. Average differences in personality would, according to Wade, also make it more or less likely that certain new types of institution would arise within a given society, or that they could be successfully transplanted into it. And it’s a society’s institutions that ultimately determine its fate relative to other societies. To the objection that geography can, at least in principle, explain the vastly different historical outcomes among peoples of specific regions, Wade responds, “Geographic determinism, however, is as absurd a position as genetic determinism, given that evolution is about the interaction between the two” (222).

            East Asians score higher on average on IQ tests than people with European ancestry, but there’s no evidence that any advantage they enjoy in intelligence, or any proclivity they may display toward obedience and conformity—traits supposedly manifest in their long history of autocratic governance—is attributable to genetic differences as opposed to traditional attitudes toward schoolwork, authority, and group membership inculcated through common socialization practices. So we can rest assured that Wade’s just-so story about evolved differences between the races in social behavior is eminently dismissible. Wade himself at several points throughout A Troublesome Inheritance admits that his case is wholly speculative. So why, given the abominable history of racist abuses of evolutionary science, would Wade publish such a book?

It’s not because he’s unaware of the past abuses. Indeed, in his second chapter, titled “Perversions of Science,” which none of the critical reviewers deigns to mention, Wade chronicles the rise of eugenics and its culmination in the Holocaust. He concludes,

After the Second World War, scientists resolved for the best of reasons that genetics research would never again be allowed to fuel the racial fantasies of murderous despots. Now that new information about human races has been developed, the lessons of the past should not be forgotten and indeed are all the more relevant. (38)

The convention among Wade’s critics is to divide his book into two parts, acknowledge that the first is accurate and compelling enough, and then unload the full academic arsenal of both scientific and moral objections to the second. This approach necessarily scants a few important links in his chain of reasoning in an effort to reduce his overall point to its most objectionable elements. And for all their moralizing, the critics, almost to a one, fail to consider Wade’s expressed motivation for taking on such a fraught issue.

            Even acknowledging that Wade’s case for the role of biological evolution in historical developments like the Industrial Revolution is weak, we may still examine his reasoning up to that point in the book, which may strike many as more firmly grounded. You can also start to get a sense of what was motivating Wade when you realize that the first half of A Troublesome Inheritance recapitulates his two previous books on human evolution. The first, Before the Dawn, chronicled the evolution and history of our ancestors from a species that resembled a chimpanzee through millennia as tribal hunter-gatherers to the first permanent settlements and the emergence of agriculture. Thus, we see that all along his scholarly interest has been focused on major transitions in human prehistory.

While critics of Wade’s latest book focus almost exclusively on his attempts at connecting genomics to geopolitical history, he begins his exploration of differences between human populations by emphasizing the critical differences between humans and chimpanzees, which we can all agree came about through biological evolution. Citing a number of studies comparing human infants to chimps, Wade writes in A Troublesome Inheritance,

Besides shared intentions, another striking social behavior is that of following norms, or rules generally agreed on within the “we” group. Allied with the rule following are two other basic principles of human social behavior. One is a tendency to criticize, and if necessary punish, those who do not follow the agreed-upon norms. Another is to bolster one’s own reputation, presenting oneself as an unselfish and valuable follower of the group’s norms, an exercise that may involve finding fault with others. (49)

What separates us from chimpanzees and other apes—including our ancestors—is our much greater sociality and our much greater capacity for cooperation. (Though primatologist Frans de Waal would object to leaving the much more altruistic bonobos out of the story.) The basis for these changes was the evolution of a suite of social emotions—emotions that predispose us toward certain types of social behaviors, like punishing those who fail to adhere to group norms (keeping mum about genes and race, for instance). Any doubt that the human readiness to punish wrongdoers and rule violators is instinctual grows harder to sustain as ongoing studies demonstrate the trait in children too young to speak, making the claim that the behavior must be taught ever more untenable. The conclusion most psychologists derive from such studies is that, for all their myriad manifestations in various contexts and diverse cultures, the social emotions of humans emerge from a biological substrate common to us all.

            After Before the Dawn, Wade came out with The Faith Instinct, which explores theories developed by biologist David Sloan Wilson and evolutionary psychologist Jesse Bering about the adaptive role of religion in human societies. In light of cooperation’s status as one of the most essential behavioral differences between humans and chimps, other behaviors that facilitate or regulate coordinated activity suggest themselves as candidates for having pushed our ancestors along the path toward several key transitions. Language, for instance, must have been an important development. Religion may have been another. As Wade argues in A Troublesome Inheritance,

The fact that every known society has a religion suggests that each inherited a propensity for religion from the ancestral human population. The alternative explanation, that each society independently invented and maintained this distinctive human behavior, seems less likely. The propensity for religion seems instinctual, rather than purely cultural, because it is so deeply set in the human mind, touching emotional centers and appearing with such spontaneity. There is a strong evolutionary reason, moreover, that explains why religion may have become wired in the neural circuitry. A major function of religion is to provide social cohesion, a matter of particular importance among early societies. If the more cohesive societies regularly prevailed over the less cohesive, as would be likely in any military dispute, an instinct for religious behavior would have been strongly favored by natural selection. This would explain why the religious instinct is universal. But the particular form that religion takes in each society depends on culture, just as with language. (125-6)

As is evident in this passage, Wade never suggests any one-to-one correspondence between genes and behaviors. Genes function in the context of other genes in the context of individual bodies in the context of several other individual bodies. But natural selection is only about outcomes with regard to survival and reproduction. The evolution of social behavior must thus be understood as taking place through the competition, not just of individuals, but also of institutions we normally think of as purely cultural.

            The evolutionary sequence Wade envisions begins with increasing sociability enforced by a tendency to punish individuals who fail to cooperate, and moves on to tribal religions which involve synchronized behaviors, unifying beliefs, and omnipresent but invisible witnesses who discourage would-be rule violators. Once humans began living in more cohesive groups, behaviors that influenced the overall functioning of those groups became the targets of selection. Religion may have been among the first institutions that emerged to foster cohesion, but others relying on the same substrate of instincts and emotions would follow. Tracing the trajectory of our prehistory from the origin of our species in Africa, to the peopling of the world’s continents, to the first permanent settlements and the adoption of agriculture, Wade writes,

The common theme of all these developments is that when circumstances change, when a new resource can be exploited or a new enemy appears on the border, a society will change its institutions in response. Thus it’s easy to see the dynamics of how human social change takes place and why such a variety of human social structures exists. As soon as the mode of subsistence changes, a society will develop new institutions to exploit its environment more effectively. The individuals whose social behavior is better attuned to such institutions will prosper and leave more children, and the genetic variations that underlie such a behavior will become more common. (63-4)

First a society responds to shifting pressures culturally, but a new culture amounts to a new environment for individuals to adapt to. Wade understands that much of this adaptation occurs through learning. Some of the challenges posed by an evolving culture will, however, be easier for some individuals to address than others. Evolutionary anthropologists tend to think of culture as a buffer between environments and genes. Many consider it more of a wall. To Wade, though, culture is merely another aspect of the environment individuals and their genes compete to thrive in.

If you’re a cultural anthropologist and you want to study how cultures change over time, the most convenient assumption you can make is that any behavioral differences you observe between societies or over periods of time are owing solely to the forces you’re hoping to isolate. Biological changes would complicate your analysis. If, on the other hand, you’re interested in studying the biological evolution of social behaviors, you will likely be inclined to assume that differences between cultures, if not based completely on genetic variance, at least rest on a substrate of inherited traits. Wade has quite obviously been interested in social evolution since his first book on anthropology, so it’s understandable that he would be excited about genome studies suggesting that human evolution has been operating recently enough to affect humans in distantly separated regions of the globe. And it’s understandable that he’d be frustrated by sanctions against investigating possible behavioral differences tied to these regional genetic differences. But this doesn’t stop his critics from insinuating that his true agenda is something other than solely scientific.

            On the technology and pop culture website io9, blogger and former policy analyst Annalee Newitz calls Wade’s book an “argument for white supremacy,” which goes a half-step farther than the critical review by Eric Johnson the post links to, titled "On the Origin of White Power." Johnson sarcastically states that Wade isn’t a racist and acknowledges that the author is correct in pointing out that considering race as a possible explanatory factor isn’t necessarily racist. But, according to Johnson’s characterization,

He then explains why white people are better because of their genes. In fairness, Wade does not say Caucasians are better per se, merely better adapted (because of their genes) to the modern economic institutions that Western society has created, and which now dominate the world’s economy and culture.

The clear implication here is that Wade’s mission is to prove that the white race is superior, but that he wants to cloak this agenda in the garb of honest scientific inquiry. Why else would Wade publish his problematic musings? Johnson believes that scientists and journalists should self-censor speculations or as-yet unproven theories that could exacerbate societal injustices. He writes, “False scientific conclusions, often those that justify certain well-entrenched beliefs, can impact peoples’ lives for decades to come, especially when policy decisions are based on their findings.” The question this position raises is this: how certain can we be that any scientific “conclusion”—Wade would likely characterize it as an exploration—is indeed false before it’s been made public and become the topic of further discussion and research?

Johnson’s is the leading contender for the title of most devastating critique of A Troublesome Inheritance, and he makes several excellent points that severely undermine parts of Wade’s case for natural selection playing a critical role in recent historical developments. But, like H. Allen Orr’s critique in The New York Review, the first runner-up in the contest, Johnson’s essay is oozing with condescension and startlingly unselfconscious sanctimony. These reviewers profess to be standing up for science even as they ply their readers with egregious ad hominem rhetoric (Wade is just a science writer, not a scientist) and arguments from adverse consequences (racist groups are citing Wade’s book in support of their agendas), thereby underscoring another of Wade’s arguments—that the case against racial differences in social behavior is at least as ideological as it is scientific. Might the principle that researchers should go public with politically sensitive ideas or findings only after they’ve reached some threshold of wider acceptance end up stifling free inquiry? And, if Wade’s theories really are as unlikely to bear empirical or conceptual fruit as his critics insist, shouldn’t the scientific case against them be enough? Isn’t all the innuendo and moral condemnation superfluous—maybe even a little suspicious?

            White supremacists may get some comfort from parts of Wade’s book, but if they read from cover to cover they’ll come across plenty of passages to get upset about. In addition to the suggestion that Asians are more intelligent than Caucasians, there’s the matter of the entire eighth chapter, which describes a scenario for how Ashkenazi Jews became even more intelligent than Asians and even more creative and better suited to urban institutions than Caucasians of Northern European ancestry. Wade also points out more than once that the genetic differences between the races are based, not on the presence or absence of single genes, but on clusters of alleles occurring with varying frequencies. He insists that

the significant differences are those between societies, not their individual members. But minor variations in social behavior, though barely perceptible, if at all, in an individual, combine to create societies of very different character. (244)

In other words, none of Wade’s speculations, nor any of the findings he reports, justifies discriminating against any individual because of his or her race. At best, there would only ever be a slightly larger probability that an individual will manifest any trait associated with people of the same ancestry. You’re still much better off reading the details of the résumé. Critics may dismiss as mere lip service Wade’s disclaimers about how “Racism and discrimination are wrong as a matter of principle, not of science” (7), and how the possibility of genetic advantages in certain traits “does not of course mean that Europeans are superior to others—a meaningless term in any case from an evolutionary perspective” (238).  But if Wade is secretly taking delight in the success of one race over another, it’s odd how casually he observes that “the forces of differentiation seem now to have reversed course due to increased migration, travel and intermarriage” (71).
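
To make the point about probabilities concrete, here is a toy calculation of my own—the numbers are hypothetical and appear nowhere in Wade’s book—written as a short Python sketch. Suppose a trait is normally distributed in two groups with the same spread and the group averages differ by a tenth of a standard deviation, the sort of gap Wade’s “barely perceptible” language implies.

```python
# A toy illustration (my own, with made-up numbers): how little a small
# difference in group averages tells you about any two individuals.
from statistics import NormalDist

mean_gap_sd = 0.1  # hypothetical difference between group means, in SD units

# If X ~ N(d, 1) and Y ~ N(0, 1) are independent, then X - Y ~ N(d, 2),
# so P(X > Y) = Phi(d / sqrt(2)).
p = NormalDist().cdf(mean_gap_sd / 2 ** 0.5)
print(f"Chance a random member of the higher-average group outscores "
      f"a random member of the other: {p:.1%}")  # roughly 52.8%
```

Under those assumptions the odds are barely better than a coin flip, which is just another way of saying that the résumé tells you far more than the group label does.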

            Wade does of course have to cite some evidence, indirect though it may be, in support of his speculations. First, he covers several genomic studies showing that, contrary to much earlier scholarship, populations of various regions of the globe are genetically distinguishable. Race, in other words, is not merely a social construct, as many have insisted. He then moves on to research suggesting that a significant portion of the human genome reveals evidence of positive selection recently enough to have affected regional populations differently. Joshua Akey’s 2009 review of multiple studies on markers of recent evolution is central to his argument. Wade interprets Akey’s report as suggesting that as much as 14 percent of the human genome shows signs of recent selection. In his own review, Orr insists this is a mistake, putting the number at 8 percent.

Steven Pinker, who discusses Akey’s paper in his 2011 book The Better Angels of Our Nature, likewise takes the number to be 8 percent rather than 14. But even that lower proportion is significant. Pinker, an evolutionary psychologist, stresses just how revolutionary this finding might be.

Some journalists have uncomprehendingly lauded these results as a refutation of evolutionary psychology and what they see as its politically dangerous implication of a human nature shaped by adaptation to a hunter-gatherer lifestyle. In fact the evidence for recent selection, if it applies to genes with effects on cognition and emotion, would license a far more radical form of evolutionary psychology, one in which minds have been biologically shaped by recent environments in addition to ancient ones. And it could have the incendiary implication that aboriginal and immigrant populations are less biologically adapted to the demands of modern life than populations that have lived in literate societies for millennia. (614)

Contra critics who paint him as a crypto-supremacist, it’s quite clearly that “far more radical form of evolutionary psychology” Wade is excited about. That’s why he’s exasperated by what he sees as Pinker’s refusal—out of fear of its political ramifications—to admit that the case for that form is strong enough to warrant pursuing it further. Pinker does consider much of the same evidence as Wade, but where Wade sees only clear support Pinker sees several intractable complications. Indeed, the section of Better Angels where Pinker discusses recent evolution is an important addendum to Wade’s book, and it must be noted Pinker doesn’t rule out the possibility of regional selection for social behaviors. He simply says that “for the time being, we have no need for that hypothesis” (622).

            Wade is also able to point to one gene that has already been identified whose alleles correspond to varying frequencies of violent behavior. The MAO-A gene comes in high- and low-activity varieties, and the low-activity version is more common among certain ethnic groups, like sub-Saharan Africans and Maoris. But, as Pinker points out, a majority of Chinese men also have the low-activity version of the gene, and they aren’t known for being particularly prone to violence. So the picture isn’t straightforward. Aside from the Ashkenazim, Wade cites another well-documented case in which selection for behavioral traits could have played an important role. In his book A Farewell to Alms, Gregory Clark presents an impressive collection of historical data suggesting that in the lead-up to the Industrial Revolution in England, people with personality traits that would likely have contributed to the rapid change were rewarded with more money, and people with more money had more children. The children of the wealthy would quickly overpopulate the ranks of the upper classes, and thus large numbers of them would inevitably descend into lower ranks. The effect of this “ratchet of wealth” (180), as Wade calls it, after multiple generations would be genes for behaviors like impulse control, patience, and thrift cascading throughout the population, priming it for the emergence of historically unprecedented institutions.

            Wade acknowledges that Clark’s theory awaits direct confirmation through the discovery of actual alleles associated with the behavioral traits he describes. But he points to experiments with artificial selection that suggest the time-scale Clark considers, about 24 generations, would have been sufficient to effect measurable changes. In his critical review, though, Johnson counters that natural selection is much slower than artificial selection, and he shows that Clark’s own numbers demonstrate a rapid attenuation of the effects of selection. Pinker points to other shortcomings in the argument, like the number of cases in which institutions changed and populations exploded in periods too short to have seen any significant change in allele frequencies. Wade isn’t swayed by any of these objections, which he takes on one-by-one, contrary to Orr’s characterization of the disagreement. As of now, the debate is ongoing. It may not be settled conclusively until scientists have a much better understanding of how genes work to influence behavior, which Wade estimates could take decades.
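
Whether twenty-odd generations is enough time for selection to matter is, at bottom, a question of arithmetic, so a toy model helps show what the disagreement hinges on. The sketch below is my own illustration, not Clark’s or Wade’s actual calculation: it applies the standard breeder’s equation (response = heritability × selection differential) with made-up parameter values. The point is only that the conclusion swings entirely on the assumed inputs, which is exactly where Johnson and Pinker push back.

```python
# Toy model (hypothetical numbers): cumulative response to selection on a
# heritable trait over the ~24 generations Clark considers, via the
# breeder's equation R = h^2 * S applied once per generation.

heritability = 0.3        # assumed narrow-sense heritability
selection_diff_sd = 0.1   # assumed selection differential per generation, in SD units
generations = 24

shift_sd = 0.0
for _ in range(generations):
    shift_sd += heritability * selection_diff_sd  # per-generation response

print(f"Cumulative shift after {generations} generations: {shift_sd:.2f} SD")
# ~0.72 SD under these assumptions; halve the selection differential, as a
# critic might insist, and the cumulative shift halves with it.
```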

            Pinker is not known for being politically correct, but Wade may have a point when he accuses him of not following the evidence to the most likely conclusions. “The fact that a hypothesis is politically uncomfortable,” Pinker writes, “does not mean that it is false, but it does mean that we should consider the evidence very carefully before concluding that it is true” (614). This sentiment echoes the position taken by Johnson: Hold off going public with sensitive ideas until you’re sure they’re right. But how can we ever be sure whether an idea has any validity if we’re not willing to investigate it? Wade’s case for natural selection operating through changing institutions during recorded history isn’t entirely convincing, but neither is it completely implausible. The evidence that would settle the issue simply hasn’t been discovered yet. But neither is there any evidence in Wade’s book to support the conclusion that his interest in the topic is political as opposed to purely scientific. “Each gene under selection,” he writes, “will eventually tell a fascinating story about some historical stress to which the population was exposed and then adapted” (105). Fascinating indeed, however troubling they may be.

            Is the best way to handle troublesome issues like the possible role of genes in behavioral variations between races to declare them off-limits to scientists until the evidence is incontrovertible? Might this policy come with the risk that avoiding the topic now will make it all too easy to deny any evidence that does emerge later? If genes really do play a role in violence and impulse-control, then we may need to take that into account when we’re devising solutions to societal inequities.

Genes are not gods whose desires must be bowed to. But neither are they imaginary forces that will go away if we just ignore them. The challenge of dealing with possible biological differences also arises in the context of gender. Because women continue to earn smaller incomes on average than men and are underrepresented in science and technology fields, and because the discrepancy is thought to be the product of discrimination and sexism, many scholars argue that any research into biological factors that may explain these outcomes is merely an effort at rationalizing injustice. The problem is that the evidence for biological differences in behavior between the genders is much stronger than it is for differences between populations from various regions. We can ignore these findings—and perhaps even condemn the scientists who conduct the studies—because they don’t jibe with our preferred explanations. But solutions based on willful ignorance have little chance of being effective.

            The sad fact is that scientists and academics have nothing even resembling a viable moral framework for discussing biological behavioral differences. Their only recourse is to deny and inveigh. The quite reasonable fear is that warnings like Wade’s about how the variations are subtle and may not exist at all in any given individual will go unheeded as the news of the findings is disseminated, and dumbed-down versions of the theories will be coopted in the service of reactionary agendas. A study reveals that women respond more readily to a baby’s vocalizations and the headlines read “Genes Make Women Better Parents.” An allele associated with violent behavior is found to be more common in African Americans and some politician cites it as evidence that the astronomical incarceration rate for black males is justifiable. But is censorship the answer? Average differences between genders in career preferences are directly relevant to any discussion of uneven representation in various fields. And it’s possible that people with a certain allele will respond differently to different types of behavioral intervention. As Carl Sagan explained, in a much different context, in his book Demon-Haunted World, “we cannot have science in bits and pieces, applying it where we feel safe and ignoring it where we feel threatened—again, because we are not wise enough to do so” (297).

            Part of the reason the public has trouble understanding what differences between varying types of people may mean is that scientists are at odds with each other about how to talk about them. And with all the righteous declamations they can start to sound a lot like the talking heads on cable news shows. Conscientious and well-intentioned scholars have so thoroughly poisoned the well when it comes to biological behavioral differences that their possible existence is treated as a moral catastrophe. How should we discuss the topic? Working to convey the importance of the distinction between average and absolute differences may be a good start. Efforts to encourage people to celebrate diversity and to challenge the equating of genes with destiny are already popularly embraced. In the realm of policy, we might shift our focus from equality of outcome to equality of opportunity. It’s all too easy to find clear examples of racial disadvantages—in housing, in schooling, in the job market—that go well beyond simple head counting at top schools and in executive boardrooms. Slight differences in behavioral propensities can’t justify such blatant instances of unfairness. Granted, that type of unfairness is much more difficult to find when it comes to gender disparities, but the lesson there is that policies and agendas based on old assumptions might need to give way to a new understanding, not that we should pretend the evidence doesn’t exist or has no meaning.

            Wade believes it was safe for him to write about race because “opposition to racism is now well entrenched” in the Western world (7). In one sense, he’s right about that. Very few people openly profess a belief in racial hierarchies. In another sense, though, it’s just as accurate to say that racism is itself well entrenched in our society. Will A Troublesome Inheritance put the brakes on efforts to bring about greater social justice? This seems unlikely if only because the publication of every Bell Curve occasions the writing of another Mismeasure of Man.

  The unfortunate result is that where you stand on the issue will become yet another badge of political identity as we form ranks on either side. Most academics will continue to consider speculation irresponsible, apply a far higher degree of scrutiny to the research, and direct the purest moral outrage they can muster, while still appearing rational and sane, at anyone who dares violate the taboo. This represents the triumph of politics over science. And it ensures the further entrenchment of views on either side of the divide.

Despite the few superficial similarities between Wade’s arguments and those of racists and eugenicists of centuries past, we have to realize that our moral condemnation of what we suppose are his invidious extra-scientific intentions is itself born of extra-scientific ideology. Whether race plays a role in behavior is a scientific question. Our attitude toward that question and the parts of the answer that trickle in despite our best efforts at maintaining its status as taboo just may emerge out of assumptions that no longer apply. So we must recognize that succumbing to the temptation to moralize when faced with scientific disagreement automatically makes hypocrites of us all. And we should bear in mind as well that insofar as racial and gender differences really do exist it will only be through coming to a better understanding of them that we can hope to usher in a more just society for children of any and all genders and races.

Also read: 

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

Dennis Junk

The Better-than-Biblical History of Humanity Hidden in Tiny Cells and a Great Story of Science Hidden in Plain Sight

With “Neanderthal Man,” paleogeneticist Svante Pääbo has penned a deeply personal and sparely stylish paean to the field of paleogenetics and all the colleagues and supporters who helped him create it. The book offers an invaluable look behind the scenes of some of the most fascinating research in recent decades.

            Anthropology enthusiasts became acquainted with the name Svante Pääbo in books or articles published throughout the latter half of this century’s first decade about how our anatomically modern ancestors might have responded to the presence of other species of humans as they spread over new continents tens of thousands of years ago. The first bit of news associated with this unplaceable name was that humans probably never interbred with Neanderthals, a finding that ran counter to the multiregionalist theory of human evolution and lent support to the theory of a single origin in Africa. The significance of the Pääbo team’s findings in the context of this longstanding debate was a natural enough angle for science writers to focus on. But what’s shocking in hindsight is that so little of what was written during those few years conveyed any sense of wonder at the discovery that DNA from Neanderthals, a species that went extinct 30,000 years ago, was still retrievable—that snatches of it had in fact already been sequenced.

Then, in 2010, the verdict suddenly changed; humans really had bred with Neanderthals, and all people alive today who trace their ancestry to regions outside of Africa carry vestiges of those couplings in their genomes. The discrepancy between the two findings, we learned, was owing to the first being based on mitochondrial DNA and the second on nuclear DNA. Even those anthropology students whose knowledge of human evolution derived mostly from what can be gleaned from the shapes and ages of fossil bones probably understood that since several copies of mitochondrial DNA reside in every cell of a creature’s body, while each cell houses but a single copy of nuclear DNA, this latest feat of gene sequencing must have been an even greater challenge. Yet, at least among anthropologists, the accomplishment got swallowed up in the competition between rival scenarios for how our species came to supplant all the other types of humans. Though, to be fair, there was a bit of marveling among paleoanthropologists at the implications of being some percentage Neanderthal.

            Fortunately for us enthusiasts, in his new book Neanderthal Man: In Search of Lost Genomes, Pääbo, a Swedish molecular biologist now working at the Max Planck Institute in Leipzig, goes some distance toward making it possible for everyone to appreciate the wonder and magnificence of his team’s monumental achievements. It would have been a great service to historians for him to simply recount the series of seemingly insurmountable obstacles the researchers faced at various stages, along with the technological advances and bursts of inspiration that saw them through. But what he’s done instead is pen a deeply personal and sparely stylish paean to the field of paleogenetics and all the colleagues and supporters who helped him create it.

It’s been over sixty years since Watson and Crick, with some help from Rosalind Franklin, revealed the double-helix structure of DNA. But the Human Genome Project, the massive effort to sequence all three billion base pairs that form the blueprint for a human, was completed just over ten years ago. As inexorable as the march of technological progress often seems, the jump from methods for sequencing the genes of living creatures to those of long-extinct species only strikes us as foregone in hindsight. At the time when Pääbo was originally dreaming of ancient DNA, which he first hoped to retrieve from Egyptian mummies, there were plenty of reasons to doubt it was possible. He writes,

When we die, we stop breathing; the cells in our body then run out of oxygen, and as a consequence their energy runs out. This stops the repair of DNA, and various sorts of damage rapidly accumulate. In addition to the spontaneous chemical damage that continually occurs in living cells, there are forms of damage that occur after death, once the cells start to decompose. One of the crucial functions of living cells is to maintain compartments where enzymes and other substances are kept separate from one another. Some of these compartments contain enzymes that break down DNA from various microorganisms that the cell may encounter and engulf. Once an organism dies and runs out of energy, the compartment membranes deteriorate, and these enzymes leak out and begin degrading DNA in an uncontrolled way. Within hours and sometimes days after death, the DNA strands in our body are cut into smaller and smaller pieces, while various other forms of damage accumulate. At the same time, bacteria that live in our intestines and lungs start growing uncontrollably when our body fails to maintain the barriers that normally contain them. Together these processes will eventually dissolve the genetic information stored in our DNA—the information that once allowed our body to form, be maintained, and function. When that process is complete, the last trace of our biological uniqueness is gone. In a sense, our physical death is then complete. (6)

The hope was that amid this nucleic carnage enough pieces would survive to restore a single strand of the entire genome. That meant Pääbo needed lots of organic remains and some really powerful extraction tools. It also meant that he’d need some well-tested and highly reliable methods for fitting the pieces of the puzzle together.

            Along with the sense of inevitability that follows fast on the heels of any scientific advance, the impact of the Neanderthal Genome Project’s success in the wider culture was also dampened by a troubling inability on the part of the masses to appreciate that not all ideas are created equal—that any particular theory is only as good as the path researchers followed to arrive at it and the methods they used to validate it. Sadly, it’s in all probability the very people who would have been the most thoroughly gobsmacked by the findings coming out of the Max Planck Institute whose amazement switches are most susceptible to hijacking at the hands of the charlatans and ratings whores behind shows like Ancient Aliens. More serious than the cheap fictions masquerading as science that abound in pop culture, though, is a school of thought in academia that not only fails to grasp, but outright denies, the value of methodological rigor, charging that the methods themselves are mere vessels for the dissemination of encrypted social and political prejudices.

Such thinking can’t survive even the most casual encounter with the realities of how science is conducted. Pääbo, for instance, describes his team’s frustration whenever rival researchers published findings based on protocols that failed to meet the standards they’d developed to rule out contamination from other sources of genetic material. He explains the common “dilemma in science” whereby

doing all the analyses and experiments necessary to tell the complete story leaves you vulnerable to being beaten to the press by those willing to publish a less complete story that nevertheless makes the major point you wanted to make. Even when you publish a better paper, you are seen as mopping up the details after someone who made the real breakthrough. (115)

The more serious challenge for Pääbo, however, was dialing back extravagant expectations on the part of prospective funders against the backdrop of popular notions propagated by the Jurassic Park movie franchise and extraordinary claims from scientists who should’ve known better. He writes,

As we were painstakingly developing methods to detect and eliminate contamination, we were frustrated by flashy publications in Nature and Science whose authors, on the surface of things, were much more successful than we were and whose accomplishments dwarfed the scant products of our cumbersome efforts to retrieve DNA sequences “only” a few tens of thousands of years old. The trend had begun in 1990, when I was still at Berkeley. Scientists at UC Irvine published a DNA sequence from leaves of Magnolia latahensis that had been found in a Miocene deposit in Clarkia, Idaho, and were 17 million years old. This was a breathtaking achievement, seeming to suggest that one could study DNA evolution on a time scale of millions of years, perhaps even going back to the dinosaurs! (56)

            In the tradition of the best scientists, Pääbo didn’t simply retreat to his own projects to await the inevitable retractions and failed replications but instead set out to apply his own more meticulous extraction methods to the fossilized plant material. He writes,

I collected many of these leaves and brought them back with me to Munich. In my new lab, I tried extracting DNA from the leaves and found they contained many long DNA fragments. But I could amplify no plant DNA by PCR. Suspecting that the long DNA was from bacteria, I tried primers for bacterial DNA instead, and was immediately successful. Obviously, bacteria had been growing in the clay. The only reasonable explanation was that the Irvine group, who worked on plant genes and did not use a separate “clean lab” for their ancient work, had amplified some contaminating DNA and thought it came from the fossil leaves. (57)

With the right equipment, it turns out, you can extract and sequence genetic material from pretty much any kind of organic remains, no matter how old. The problem is that sources of contamination are myriad, and whatever DNA you manage to read is almost sure to be from something other than the ancient creature you’re interested in.

            At the time when Pääbo was busy honing his techniques, many scientists thought genetic material from ancient plants and insects might be preserved in the fossilized tree resin known as amber. Sure enough, in the late 80s and early 90s, George Poinar and Raul Cano published a series of articles in which they claimed to have successfully extracted DNA through tiny holes drilled into chunks of amber to reach embedded bugs and leaves. These articles were in fact the inspiration behind Michael Crichton’s description of how the dinosaurs in Jurassic Park were cloned. But Pääbo had doubts about whether these researchers were taking proper precautions to rule out contamination, and no sooner had he heard about their findings than he started trying to find a way to get his hands on some amber specimens. He writes,

The opportunity to find out came in 1994, when Hendrik Poinar joined our lab. Hendrik was a jovial Californian and the son of George Poinar, then a professor at Berkeley and a well-respected expert on amber and the creatures found in it. Hendrik had published some of the amber DNA sequences with Raul Cano, and his father had access to the best amber in the world. Hendrik came to Munich and went to work in our new clean room. But he could not repeat what had been done in San Luis Obispo. In fact, as long as his blank extracts were clean, he got no DNA sequences at all out of the amber—regardless of whether he tried insects or plants. I grew more and more skeptical, and I was in good company. (58)

Those blank extracts were important not just to test for bacteria in the samples but to check for human cells as well. Indeed, one of the special challenges of isolating Neanderthal DNA is that it looks so much like the DNA of the anatomically modern humans handling the samples and the sequencing machines.

A high percentage of the dust that accumulates in houses is made up of our sloughed-off skin cells. And Polymerase Chain Reaction (PCR), the technique Pääbo’s team was using to increase the amount of target DNA, relies on a powerful amplification process which uses rapid heating and cooling to split double helix strands up the middle before fitting synthetic nucleotides along each exposed side like a zipper, resulting in exponential replication. The result is that each fragment of a genome gets blown up, and it becomes impossible to tell what percentage of the specimen’s DNA it originally represented. Researchers then try to fit the fragments end-to-end based on repeating overlaps until they have an entire strand. If there’s a great deal of similarity between the individual you’re trying to sequence and the individual whose cells have contaminated the sample, you simply have no way to know whether you’re splicing together fragments of each individual’s genome. Much of the early work Pääbo did was with extinct mammals like giant ground sloths, whose DNA is much easier to disentangle from that of humans. These early studies were what led to the development of practices like running blank extracts, which would later help his team ensure that their supposed Neanderthal DNA wasn’t really from modern human dust.
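
The overlap-and-merge idea at the end of that description can be made concrete with a small sketch. The Python below is my own toy illustration, not the Leipzig team’s actual software: it greedily joins short fragments wherever the end of one repeats the beginning of another. The reads and the minimum overlap length are invented, and real assemblers also have to cope with sequencing errors and the contamination problem just described.

```python
# Toy greedy assembly: merge fragments on their longest suffix/prefix overlaps.
# Purely illustrative -- reads and parameters are hypothetical.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that matches a prefix of b."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(fragments: list[str]) -> str:
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_len, bi, bj = 0, 0, 1
        for i in range(len(frags)):
            for j in range(len(frags)):
                if i != j:
                    olen = overlap(frags[i], frags[j])
                    if olen > best_len:
                        best_len, bi, bj = olen, i, j
        if best_len == 0:
            break  # no remaining fragments overlap; stop merging
        merged = frags[bi] + frags[bj][best_len:]
        frags = [f for k, f in enumerate(frags) if k not in (bi, bj)] + [merged]
    return max(frags, key=len)

# Hypothetical fragments of a longer sequence:
reads = ["GATTACAGGT", "CAGGTTCGAA", "TCGAATCCTG"]
print(greedy_assemble(reads))  # -> GATTACAGGTTCGAATCCTG
```

Now imagine that some of those fragments come not from the fossil but from the dust in the room, and you have the contamination problem in miniature.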

Despite all the claims of million-year-old DNA being publicized, Pääbo and his team eventually had to rein in their frustration and stop “playing the PCR police” (61) if they ever wanted to show their techniques could be applied to an ancient species of human. One of the major events in Pääbo’s life that would make this huge accomplishment a reality was the founding of the Max Planck Institute for Evolutionary Anthropology in 1997. As celebrated as the Max Planck Society is today, though, the idea of an institute devoted to scientific anthropology in Germany at the time had to overcome some resistance arising out of fears that history might repeat itself. Pääbo explains,

As do many contemporary German institutions, the MPS had a predecessor before the war. Its name was the Kaiser Wilhelm Society, and it was founded in 1911. The Kaiser Wilhelm Society had built up and supported institutes around eminent scientists such as Otto Hahn, Albert Einstein, Max Planck, and Werner Heisenberg, scientific giants active at a time when Germany was a scientifically dominant nation. That era came to an abrupt end when Hitler rose to power and the Nazis ousted many of the best scientists because they were Jewish. Although formally independent of the government, the Kaiser Wilhelm Society became part of the German war machine—doing, for example, weapons research. This was not surprising. Even worse was that through its Institute for Anthropology, Human Heredity, and Eugenics the Kaiser Wilhelm Society was actively involved in racial science and the crimes that grew out of that. In that institute, based in Berlin, people like Josef Mengele were scientific assistants while performing experiments on inmates at the Auschwitz death camp, many of them children. (81-2)

Even without such direct historical connections, many scholars still leap automatically from any mention of anthropology or genetics to dubious efforts to lend the imprimatur of science to racial hierarchies and to clear the way for atrocities like eugenic culling or forced sterilization, even though hardly any scientist in any field would have any truck with such ideas and policies after the lessons of the past century.

            Pääbo not only believed that anthropological science could be conducted without repeating the atrocities of the past; he insisted that allowing history to rule real science out of bounds would effectively defeat the purpose of the entire endeavor of establishing an organization for the study of human origins. Called on as a consultant to help steer a course for the institute he was simultaneously being recruited to work for, Pääbo recalls responding to the administrators’ historical concerns,

Perhaps it was easier for me as a non-German born well after the war to have a relaxed attitude toward this. I felt that more than fifty years after the war, Germany could not allow itself to be inhibited in its scientific endeavors by its past crimes. We should neither forget history nor fail to learn from it, but we should also not be afraid to go forward. I think I even said that fifty years after his death, Hitler should not be allowed to dictate what we could or could not do. I stressed that in my opinion any new institute devoted to anthropology should not be a place where one philosophized about human history. It should do empirical science. Scientists who were to work there should collect real hard facts about human history and test their ideas against them. (82-3)

As it turned out, Pääbo wasn’t alone in his convictions, and his vision of what the institute should be and how it should operate came to fruition with the construction of the research facility in Leipzig.

            Faced with Pääbo’s passionate enthusiasm, some readers may worry that he’s one of those mad scientists we know from movies and books, willing to push ahead with his obsessions regardless of the moral implications or the societal impacts. But in fact Pääbo goes a long way toward showing that the popular conception of the socially oblivious scientist, who calculates but can’t think, who solves puzzles but is baffled by human emotions, is not just a caricature but a malicious fiction. For instance, even amid the excitement of his team’s discovery that humans interbred with Neanderthals, Pääbo was keenly aware that his results revealed stark genetic differences between Africans, who have no Neanderthal DNA, and non-Africans, most of whose genomes are between one and four percent Neanderthal. He writes,

When we had come this far in our analyses, I began to worry about what the social implications of our findings might be. Of course, scientists need to communicate the truth to the public, but I feel that they should do so in ways that minimize the chance for it to be misused. This is especially the case when it comes to human history and human genetic variation, when we need to ask ourselves: Do our findings feed into prejudices that exist in society? Can our findings be misrepresented to serve racists’ purposes? Can they be deliberately or unintentionally misused in some other way? (199-200)

In light of Neanderthals’ own caricature—hunched, brutish, dimwitted—their contribution to non-Africans’ genetic makeup may actually seem like more of a drawback than a basis for any claims of superiority. The trouble would come, however, if some of these genes turned out to confer adaptive advantages that made their persistence in our lineage more likely. There are already some indications, for instance, that Neanderthal-human hybrids had more robust immune responses to certain diseases, and more discoveries along these lines are almost certainly on the way. Neanderthal Man explores the personal and political dimensions of a major scientific undertaking, but it’s Pääbo’s remembrances of what it was like to work with the other members of his team that bring us closest to the essence of what science is—or at least what it can be. At several points along the way, the team faced setbacks and technical challenges that threatened to sink the entire endeavor. Pääbo describes how, at one especially dire juncture, everyone put their heads together in weekly meetings to come up with solutions and assign tasks:

To me, these meetings were absorbing social and intellectual experiences: graduate students and postdocs know that their careers depend on the results they achieve and the papers they publish, so there is always a certain amount of jockeying for opportunity to do the key experiments and to avoid doing those that may serve the group’s aim but will probably not result in prominent authorship on an important publication. I had become used to the idea that budding scientists were largely driven by self-interest, and I recognized that my function was to strike a balance between what was good for someone’s career and what was necessary for a project, weighing individual abilities in this regard. As the Neanderthal crisis loomed over the group, however, I was amazed to see how readily the self-centered dynamic gave way to a more group-centered one. The group was functioning as a unit, with everyone eagerly volunteering for thankless and laborious chores that would advance the project regardless of whether such chores would bring any personal glory. There was a strong sense of common purpose in what all felt was a historic endeavor. I felt we had the perfect team. In my more sentimental moments, I felt love for each and every person around the table. This made the feeling that we’d achieved no progress all the more bitter. (146-7)

Those “more sentimental moments” of Pääbo’s occur quite frequently, and he just as frequently describes his colleagues, and even his rivals, in a way that reveals his fondness and admiration for them. Unlike James Watson, who in The Double Helix, his memoir of how he and Francis Crick discovered the underlying structure of DNA, often comes across as nasty and condescending, Pääbo reveals himself to be bighearted, almost to a fault.

            Alongside the passion and the drive, we see Pääbo again and again pausing to reflect with childlike wonder at the dizzying advancement of technology and the incredible privilege of being able to carry on such a transformative tradition of discovery and human progress. He shows at once the humility of recognizing his own limitations and the restless curiosity that propels him onward in spite of them. He writes,

My twenty-five years in molecular biology had essentially been a continuous technical revolution. I had seen DNA sequencing machines come on the market that rendered into an overnight task the toils that took me days and weeks as a graduate student. I had seen cumbersome cloning of DNA in bacteria be replaced by the PCR, which in hours achieved what had earlier taken weeks or months to do. Perhaps that was what had led me to think that within a year or two we would be able to sequence three thousand times more DNA than what we had presented in the proof-of-principle paper in Nature. Then again, why wouldn’t the technological revolution continue? I had learned over the years that unless a person was very, very smart, breakthroughs were best sought when coupled to big improvements in technologies. But that didn’t mean we were simply prisoners awaiting rescue by the next technical revolution. (143)

Like the other members of his team, and like so many other giants in the history of science, Pääbo demonstrates an important and rare mix of seemingly contradictory traits: a capacity for dogged, often mind-numbing meticulousness and a proclivity toward boundless flights of imagination.

What has been the impact of Pääbo and his team’s accomplishments so far? Their methods have already been applied to the remains of a 400,000-year-old human ancestor, led to the discovery of a completely new species of hominin, the Denisovans (based on a tiny finger bone), and are helping to settle a longstanding debate about the peopling of the Americas. The out-of-Africa hypothesis is, for now, the clear victor over the multiregionalist hypothesis, though the single-origin theory has of course become more complicated; many paleoanthropologists are now talking about what Pääbo calls the “leaky replacement” model (248). Aside from filling in some of the many gaps in the chronology of humankind’s origins and migrations—or rather fitting together more pieces in the vast mosaic of our species’ history—every new genome helps us to triangulate possible functions for specific genes. As Pääbo explains, “The dirty little secret of genomics is that we still know next to nothing about how a genome translates into the particularities of a living and breathing individual” (208). But knowing the particulars of how human genomes differ from chimp genomes, and how both differ from the genomes of Neanderthals, or Denisovans, or any number of living or extinct species of primates, gives us clues about how those differences contribute to making each of us who and what we are. The Neanderthal genome is not an endpoint but a link in a chain of discoveries. Nonetheless, we owe Svante Pääbo a debt of gratitude for helping us appreciate all that went into the forging of this particular, particularly extraordinary link.

Also read: 

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And: 

THE FEMINIST SOCIOBIOLOGIST: AN APPRECIATION OF SARAH BLAFFER HRDY DISGUISED AS A REVIEW OF “MOTHERS AND OTHERS: THE EVOLUTIONARY ORIGINS OF MUTUAL UNDERSTANDING”

And: 

OLIVER SACKS’S GRAPHOPHILIA AND OTHER COMPENSATIONS FOR A LIFE LIVED “ON THE MOVE”


Lab Flies: Joshua Greene’s Moral Tribes and the Contamination of Walter White

Joshua Greene’s book “Moral Tribes” posits a dual-process theory of morality, in which a quick, intuitive system 1 makes judgments based on deontological considerations—“it’s just wrong”—whereas the slower, more deliberative system 2 takes time to calculate the consequences of any given choice. Viewers can see these two systems on display in the series “Breaking Bad,” as well as in critics’ and audiences’ responses to it.

Walter White’s Moral Math

In an episode near the end of Breaking Bad’s fourth season, the drug kingpin Gus Fring gives his meth cook Walter White an ultimatum. Walt’s brother-in-law Hank is a DEA agent who has been getting close to discovering the high-tech lab Gus has created for Walt and his partner Jesse, and Walt, despite his best efforts, hasn’t managed to put him off the trail. Gus decides that Walt himself has likewise become too big a liability, and he has found that Jesse can cook almost as well as his mentor. The only problem for Gus is that Jesse, even though he too is fed up with Walt, will refuse to cook if anything happens to his old partner. So Gus has Walt taken at gunpoint to the desert where he tells him to stay away from both the lab and Jesse. Walt, infuriated, goads Gus with the fact that he’s failed to turn Jesse against him completely, to which Gus responds, “For now,” before going on to say,

In the meantime, there’s the matter of your brother-in-law. He is a problem you promised to resolve. You have failed. Now it’s left to me to deal with him. If you try to interfere, this becomes a much simpler matter. I will kill your wife. I will kill your son. I will kill your infant daughter.

In other words, Gus tells Walt to stand by and let Hank be killed or else he will kill his wife and kids. Once he’s released, Walt immediately has his lawyer Saul Goodman place an anonymous call to the DEA to warn them that Hank is in danger. Afterward, Walt plans to pay a man to help his family escape to a new location with new, untraceable identities—but he soon discovers the money he was going to use to pay the man has already been spent (by his wife Skyler). Now it seems all five of them are doomed. This is when things get really interesting.

      Walt devises an elaborate plan to convince Jesse to help him kill Gus. Jesse knows that Gus would prefer for Walt to be dead, and both Walt and Gus know that Jesse would go berserk if anyone ever tried to hurt his girlfriend’s nine-year-old son Brock. Walt’s plan is to make it look like Gus is trying to frame him for poisoning Brock with ricin. The idea is that Jesse would suspect Walt of trying to kill Brock as punishment for Jesse’s betraying him and going to work for Gus. But Walt will convince Jesse that this is really just Gus’s ploy to trick Jesse into doing the very thing he has until now forbidden Gus to do—kill Walt himself. Once Jesse concludes that it was Gus who poisoned Brock, he will understand that his new boss has to go, and he will accept Walt’s offer to help him do the deed. Walt will then be able to get Jesse to give him the crucial information he needs about Gus to figure out a way to kill him.

It’s a brilliant plan. The one problem is that it involves poisoning a nine-year-old child. Walt comes up with an ingenious trick which allows him to use a less deadly poison while still making it look like Brock has ingested the ricin, but for the plan to work the boy has to be made deathly ill. So Walt is faced with a dilemma: if he goes through with his plan, he can save Hank, his wife, and his two kids, but to do so he has to deceive his old partner Jesse in just about the most underhanded way imaginable—and he has to make a young boy very sick by poisoning him, with the attendant risk that something will go wrong and the boy, or someone else, or everyone else, will die anyway. The math seems easy: either four people die, or one person gets sick. The option recommended by the math is greatly complicated, however, by the fact that it involves an act of violence against an innocent child.

In the end, Walt chooses to go through with his plan, and it works perfectly. In another ingenious move, though, this time on the part of the show’s writers, Walt’s deception isn’t revealed until after his plan has been successfully implemented, which makes for an unforgettable shock at the end of the season. Unfortunately, this revelation after the fact, at a time when Walt and his family are finally safe, makes it all too easy to forget what made the dilemma so difficult in the first place—and thus makes it all too easy to condemn Walt for resorting to such monstrous means to see his way through.

            Fans of Breaking Bad who read about the famous thought-experiment called the footbridge dilemma in Harvard psychologist Joshua Greene’s multidisciplinary and momentously important book Moral Tribes: Emotion, Reason, and the Gap between Us and Them will immediately recognize the conflicting feelings underlying our responses to questions about serving some greater good by committing an act of violence. Here is how Greene describes the dilemma:

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can’t jump yourself because you, without a backpack, are not big enough to stop the trolley, and there’s no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death? (113-4)

As was the case for Walter White when he faced his child-poisoning dilemma, the math is easy: you can save five people—strangers in this case—through a single act of violence. One of the fascinating things about common responses to the footbridge dilemma, though, is that the math is all but irrelevant to most of us; no matter how many people we might save, it’s hard for us to see past the murderous deed of pushing the man off the bridge. The answer for a large majority of people faced with this dilemma, even in the case of variations which put the number of people who would be saved much higher than five, is no, pushing the stranger to his death is not morally acceptable.

Another fascinating aspect of our responses is that they change drastically with the modification of a single detail in the hypothetical scenario. In the switch dilemma, a trolley is heading for five people again, but this time you can hit a switch to shift it onto another track where there happens to be a single person who would be killed. Though the math and the underlying logic are the same—you save five people by killing one—something about pushing a person off a bridge strikes us as far worse than pulling a switch. A large majority of people say killing the one person in the switch dilemma is acceptable. To figure out which specific factors account for the different responses, Greene and his colleagues tweak various minor details of the trolley scenario before posing the dilemma to test participants. By now, so many experiments have relied on these scenarios that Greene calls trolley dilemmas the fruit flies of the emerging field known as moral psychology.

The Automatic and Manual Modes of Moral Thinking

One hypothesis for why the footbridge case strikes us as unacceptable is that it involves using a human being as an instrument, a means to an end. So Greene and his fellow trolleyologists devised a variation called the loop dilemma, which still has participants pulling a hypothetical switch, but this time the lone victim on the alternate track must stop the trolley from looping back around onto the original track. In other words, you’re still hitting the switch to save the five people, but you’re also using a human being as a trolley stop. People nonetheless tend to respond to the loop dilemma in much the same way they do the switch dilemma. So there must be some factor other than the prospect of using a person as an instrument that makes the footbridge version so objectionable to us.

Greene’s own theory for why our intuitive responses to these dilemmas are so different begins with what Daniel Kahneman, one of the founders of behavioral economics, labeled the two-system model of the mind. The first system, a sort of autopilot, is the one we operate in most of the time. We only use the second system when doing things that require conscious effort, like multiplying 26 by 47. While system one is quick and intuitive, system two is slow and demanding. Greene proposes as an analogy the automatic and manual settings on a camera. System one is point-and-click; system two, though more flexible, requires several calibrations and adjustments. We usually only engage our manual systems when faced with circumstances that are either completely new or particularly challenging.

According to Greene’s model, our automatic settings have functions that go beyond the rapid processing of information to encompass our basic set of moral emotions, from indignation to gratitude, from guilt to outrage, emotions that motivate us to behave in ways that over evolutionary history helped our ancestors transcend their selfish impulses and live in cooperative societies. Greene writes,

According to the dual-process theory, there is an automatic setting that sounds the emotional alarm in response to certain kinds of harmful actions, such as the action in the footbridge case. Then there’s manual mode, which by its nature tends to think in terms of costs and benefits. Manual mode looks at all of these cases—switch, footbridge, and loop—and says “Five for one? Sounds like a good deal.” Because manual mode always reaches the same conclusion in these five-for-one cases (“Good deal!”), the trend in judgment for each of these cases is ultimately determined by the automatic setting—that is, by whether the myopic module sounds the alarm. (233)

What makes the dilemmas difficult then is that we experience them in two conflicting ways. Most of us, most of the time, follow the dictates of the automatic setting, which Greene describes as myopic because its speed and efficiency come at the cost of inflexibility and limited scope for extenuating considerations.

The reason our intuitive settings sound an alarm at the thought of pushing a man off a bridge but remain silent about hitting a switch, Greene suggests, is that our ancestors evolved to live in cooperative groups where some means of preventing violence between members had to be in place to avoid dissolution—or outright implosion. One of the dangers of living with a bunch of upright-walking apes who possess the gift of foresight is that any one of them could at any time be plotting revenge for some seemingly minor slight, or conspiring to get you killed so he can move in on your spouse or inherit your belongings. For a group composed of individuals with the capacity to hold grudges and calculate future gains to function cohesively, the members must have in place some mechanism that affords a modicum of assurance that no one will murder them in their sleep. Greene writes,

To keep one’s violent behavior in check, it would help to have some kind of internal monitor, an alarm system that says “Don’t do that!” when one is contemplating an act of violence. Such an action-plan inspector would not necessarily object to all forms of violence. It might shut down, for example, when it’s time to defend oneself or attack one’s enemies. But it would, in general, make individuals very reluctant to physically harm one another, thus protecting individuals from retaliation and, perhaps, supporting cooperation at the group level. My hypothesis is that the myopic module is precisely this action-plan inspecting system, a device for keeping us from being casually violent. (226)

Hitting a switch to transfer a train from one track to another seems acceptable, even though a person ends up being killed, because nothing our ancestors would have recognized as violence is involved.

Many philosophers cite our different responses to the various trolley dilemmas as support for deontological systems of morality—those based on the inherent rightness or wrongness of certain actions—since we intuitively know the choices suggested by a consequentialist approach are immoral. But Greene points out that this argument begs the question of how reliable our intuitions really are. He writes,

I’ve called the footbridge dilemma a moral fruit fly, and that analogy is doubly appropriate because, if I’m right, this dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good. The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good. However, our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible. As a result, it’s all but guaranteed that we can dream up examples that exploit the cognitive inflexibility of our moral intuitions. It’s all but guaranteed that we can dream up a hypothetical action that is actually good but that seems terribly, horribly wrong because it pushes our moral buttons. I know of no better candidate for this honor than the footbridge dilemma. (251)

Conversely, many of the things that seem morally acceptable to us actually do cause harm to people. Greene cites the example of a man who lets a child drown because he doesn’t want to ruin his expensive shoes, which most people agree is monstrous, even though we think nothing of spending money on things we don’t really need when we could be sending that money to save sick or starving children in some distant country. Then there are crimes against the environment, which always seem to rank low on our list of priorities even though their future impact on real human lives could be devastating. We have our justifications for such actions or omissions, to be sure, but how valid are they really? Is distance really a morally relevant factor when we let children die? Does the diffusion of responsibility among so many millions change the fact that we personally could have a measurable impact?

These black marks notwithstanding, cooperation, and even a certain degree of altruism, come naturally to us. To demonstrate this, Greene and his colleagues have devised some clever methods for separating test subjects’ intuitive responses from their more deliberate and effortful decisions. The experiments begin with a multiplayer exchange scenario from economic game theory called the Public Goods Game, in which a number of anonymous players contribute to a common bank whose sum is then doubled and distributed evenly among them. Like the more famous game-theory exchange known as the Prisoner’s Dilemma, the Public Goods Game rewards cooperation, but only when a threshold number of fellow cooperators is reached. The flip side, however, is that any individual who decides to be stingy can get a free ride on everyone else’s contributions and make an even greater profit, as the sketch below illustrates. What tends to happen over multiple rounds is that the number of players opting for stinginess increases until the game is ruined for everyone, a process analogous to a phenomenon in economics known as the Tragedy of the Commons: everyone wants to graze a few more sheep on the commons than can be sustained fairly, so eventually the grounds are left barren.
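
For concreteness, here is a minimal sketch in Python of a single round of the game as described above; the player names, endowments, and contributions are invented for illustration. Contributions go into a common pot, the pot is doubled, and the doubled pot is split evenly among all players, so a lone free rider comes out ahead of the cooperators.

```python
# Minimal sketch of one round of the Public Goods Game as described above:
# the pot of contributions is doubled and then split evenly among everyone,
# regardless of how much each player put in.

def payoffs(endowment: float, contributions: list[float]) -> list[float]:
    pot = sum(contributions)
    share = 2 * pot / len(contributions)  # doubled, then divided evenly
    return [endowment - c + share for c in contributions]

players = ["Ann", "Bo", "Cal", "Dee"]     # hypothetical players
contributions = [10.0, 10.0, 10.0, 0.0]   # Dee free rides

for name, c, p in zip(players, contributions, payoffs(10.0, contributions)):
    print(f"{name}: contributed {c:.0f}, ends the round with {p:.2f}")

# The three cooperators end up with 15.00 each, while the free rider ends
# with 25.00, better than the 20.00 everyone would get under full cooperation.
```

Played over many rounds, with players free to adjust their contributions, this payoff structure is what produces the downward spiral the paragraph above describes.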

The Biological and Cultural Evolution of Morality

            Greene believes that humans evolved emotional mechanisms to prevent the various analogs of the Tragedy of the Commons from occurring so that we can live together harmoniously in tight-knit groups. The outcomes of multiple rounds of the Public Goods Game, for instance, tend to be far less dismal when players are given the opportunity to devote a portion of their own profits to punishing free riders. Most humans, it turns out, will be motivated by the emotion of anger to expend their own resources for the sake of enforcing fairness. Over several rounds, cooperation becomes the norm. Such an outcome has been replicated again and again, but researchers are always interested in factors that influence players’ strategies in the early rounds. Greene describes a series of experiments he conducted with David Rand and Martin Nowak, which were reported in an article in Nature in 2012. He writes,

…we conducted our own Public Goods Games, in which we forced some people to decide quickly (less than ten seconds) and forced others to decide slowly (more than ten seconds). As predicted, forcing people to decide faster made them more cooperative and forcing people to slow down made them less cooperative (more likely to free ride). In other experiments, we asked people, before playing the Public Goods Game, to write about a time in which their intuitions served them well, or about a time in which careful reasoning led them astray. Reflecting on the advantages of intuitive thinking (or the disadvantages of careful reflection) made people more cooperative. Likewise, reflecting on the advantages of careful reasoning (or the disadvantages of intuitive thinking) made people less cooperative. (62)

These results offer strong support for Greene’s dual-process theory of morality, and they even hint at the possibility that the manual mode is fundamentally selfish or amoral—in other words, that the philosophers have been right all along in deferring to human intuitions about right and wrong.

            As good as our intuitive moral sense is for preventing the Tragedy of the Commons, however, when it’s given free rein in a society made up of many large groups of strangers, each group with its own culture and priorities, our natural moral settings bring about an altogether different tragedy. Greene labels it the Tragedy of Commonsense Morality. He explains,

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). (23)

Expanding on the story behind the Tragedy of the Commons, Greene describes what would happen if several groups, each having developed its own unique solution for making sure the commons were protected from overgrazing, were suddenly to come into contact with one another on a transformed landscape called the New Pastures. Each group would likely harbor suspicions against the others, and when it came time to negotiate a new set of rules to govern everybody the groups would all show a significant, though largely unconscious, bias in favor of their own members and their own ways.

The origins of moral psychology as a field can be traced to both developmental and evolutionary psychology. Seminal research conducted at Yale’s Infant Cognition Center, led by Karen Wynn, Kiley Hamlin, and Paul Bloom (and which Bloom describes in a charming and highly accessible book called Just Babies), has demonstrated that children as young as six months possess what we can easily recognize as a rudimentary moral sense. These findings suggest that much of the behavior we might once have ascribed to lessons learned from adults is actually innate. Experiments based on game-theory scenarios and thought-experiments like the trolley dilemmas are likewise thought to tap into evolved patterns of human behavior. Yet when University of British Columbia psychologist Joseph Henrich teamed up with several anthropologists to see how people living in various small-scale societies responded to scenarios like the Prisoner’s Dilemma and the Public Goods Game, they discovered a great deal of variation. On the one hand, then, human moral intuitions seem to be rooted in emotional responses present at, or at least close to, birth; on the other hand, cultures vary widely in their conventional responses to classic dilemmas. These differences between cultural conceptions of right and wrong are in large part responsible for the conflict Greene envisions in his Parable of the New Pastures.

But how can a moral sense be both innate and culturally variable? “As you might expect,” Greene explains, “the way people play these games reflects the way they live.” People in some cultures rely much more heavily on cooperation to procure their sustenance, as is the case with the Lamalera of Indonesia, who live off the meat of whales they hunt in groups. Cultures also vary in how much they rely on market economies as opposed to less abstract and less formal modes of exchange. Just as people adjust the way they play economic games in response to other players’ moves, they acquire habits of cooperation based on the circumstances they face in their particular societies. Regarding the differences between small-scale societies in common game-theory strategies, Greene writes,

Henrich and colleagues found that payoffs to cooperation and market integration explain more than two thirds of the variation across these cultures. A more recent study shows that, across societies, market integration is an excellent predictor of altruism in the Dictator Game. At the same time, many factors that you might expect to be important predictors of cooperative behavior—things like an individual’s sex, age, and relative wealth, or the amount of money at stake—have little predictive power. (72)

In much the same way humans are programmed to learn a language and acquire a set of religious beliefs, they also come into the world with a suite of emotional mechanisms that make up the raw material for what will become a culturally calibrated set of moral intuitions. The specific language and religion we end up with is of course dependent on the social context of our upbringing, just as our specific moral settings will reflect those of other people in the societies we grow up in.

Jonathan Haidt and Tribal Righteousness

In our modern industrial society, we actually have some degree of choice when it comes to our cultural affiliations, and this freedom opens the way for heritable differences between individuals to play a larger role in our moral development. Such differences are nowhere more apparent than in the realm of politics, where nearly all citizens occupy some point on a continuum between conservative and liberal. According to Greene’s fellow moral psychologist Jonathan Haidt, we have precious little control over our moral responses because, in his view, reason only comes into play to justify actions and judgments we’ve already made. In his fascinating 2012 book The Righteous Mind, Haidt insists,

Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth. (50)

To explain the moral divide between right and left, Haidt points to the findings of his own research on what he calls Moral Foundations, six dimensions underlying our intuitions about moral and immoral actions. Conservatives tend to endorse judgments based on all six of the foundations, valuing loyalty, authority, and sanctity much more than liberals, who focus more exclusively on care for the disadvantaged, fairness, and freedom from oppression. Since our politics emerge from our moral intuitions, and reason merely serves as a sort of PR agent to rationalize judgments after the fact, Haidt enjoins us to be more accepting of rival political groups—after all, you can’t reason with them.

            Greene objects both to Haidt’s Moral Foundations theory and to his prescription for a politics of complementarity. The responses to questions representing all the moral dimensions in Haidt’s studies form two clusters on a graph, Greene points out, not six, suggesting that the differences between conservatives and liberals are attributable to some single overarching variable as opposed to several individual tendencies. Furthermore, the specific content of the questions Haidt uses to flesh out the values of his respondents has a critical limitation. Greene writes,

According to Haidt, American social conservatives place greater value on respect for authority, and that’s true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. (339)

The same limitation applies to the loyalty and sanctity foundations. Conservatives feel little loyalty toward the atheists and Muslims among their fellow Americans. Nor do they recognize the sanctity of mosques or Hindu holy texts. Greene goes on,

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists—loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal. In this they’re akin to the world’s other socially conservative tribes, from the Taliban in Afghanistan to European nationalists. According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise may be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic. (340)

Greene believes such persuasion is possible, even with regard to emotionally and morally charged controversies, because he sees our manual-mode thinking as capable of playing a much greater role than Haidt allows.

Metamorality on the New Pastures

            Throughout The Righteous Mind, Haidt argues that the moral philosophers who laid the foundations of modern liberal and academic conceptions of right and wrong gave short shrift to emotions and intuitions—that they gave far too much credit to our capacity for reason. To be fair, Haidt does honor the distinction between descriptive and prescriptive theories of morality, but he nonetheless gives the impression that he considers liberal morality to be somewhat impoverished. Greene sees this attitude as thoroughly wrongheaded. Responding to Haidt’s metaphor comparing his Moral Foundations to taste buds—with the implication that the liberal palate is more limited in the range of flavors it can appreciate—Greene writes,

The great philosophers of the Enlightenment wrote at a time when the world was rapidly shrinking, forcing them to wonder whether their own laws, their own traditions, and their own God(s) were any better than anyone else’s. They wrote at a time when technology (e.g., ships) and consequent economic productivity (e.g., global trade) put wealth and power into the hands of a rising educated class, with incentives to question the traditional authorities of king and church. Finally, at this time, natural science was making the world comprehensible in secular terms, revealing universal natural laws and overturning ancient religious doctrines. Philosophers wondered whether there might also be universal moral laws, ones that, like Newton’s law of gravitation, applied to members of all tribes, whether or not they knew it. Thus, the Enlightenment philosophers were not arbitrarily shedding moral taste buds. They were looking for deeper, universal moral truths, and for good reason. They were looking for moral truths beyond the teachings of any particular religion and beyond the will of any earthly king. They were looking for what I’ve called a metamorality: a pan-tribal, or post-tribal, philosophy to govern life on the new pastures. (338-9)

While Haidt insists we must recognize the centrality of intuitions even in this civilization nominally ruled by reason, Greene points out that it was skepticism of old, seemingly unassailable and intuitive truths that opened up the world and made modern industrial civilization possible in the first place.  

            As Haidt explains, though, conservative morality serves people well in certain regards. Christian churches, for instance, encourage charity and foster a sense of community few secular institutions can match. But these advantages at the level of parochial groups have to be weighed against the problems tribalism inevitably leads to at higher levels. This is, in fact, precisely the point Greene created his Parable of the New Pastures to make. He writes,

The Tragedy of the Commons is averted by a suite of automatic settings—moral emotions that motivate and stabilize cooperation within limited groups. But the Tragedy of Commonsense Morality arises because of automatic settings, because different tribes have different automatic settings, causing them to see the world through different moral lenses. The Tragedy of the Commons is a tragedy of selfishness, but the Tragedy of Commonsense Morality is a tragedy of moral inflexibility. There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives. How should they think? The answer is now obvious: They should shift into manual mode. (172)

Greene argues that whenever we, as a society, are faced with a moral controversy—as with issues like abortion, capital punishment, and tax policy—our intuitions will not suffice because our intuitions are the very basis of our disagreement.

            Watching the conclusion of season four of Breaking Bad, most viewers probably responded to finding out that Walt had poisoned Brock by thinking that he’d become a monster—at least at first. Indeed, the currently dominant academic approach to art criticism involves taking a stance, both moral and political, with regard to a work’s models and messages. Emily Nussbaum of The New Yorker, for instance, disparages viewers of Breaking Bad for failing to condemn Walt. She writes,

When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)

To arrive at such an assessment, Nussbaum must reduce the show to the impact she assumes it will have on less sophisticated fans’ attitudes and judgments. But the really troubling aspect of this type of criticism is that it encourages scholars and critics to indulge their impulse toward self-righteousness when faced with challenging moral dilemmas; in other words, it encourages them to give voice to their automatic modes precisely when they should be shifting to manual mode. Thus, Nussbaum neglects outright the very details that make Walt’s scenario compelling, completely forgetting that by making Brock sick—and, yes, risking his life—he was able to save Hank, Skyler, and his own two children.

But how should we go about resolving moral dilemmas and political controversies if we agree we can’t always trust our intuitions? Greene believes that, while our automatic modes recognize certain acts as wrong and certain others as duties to perform, in keeping with deontological ethics, whenever we switch to manual mode the focus shifts to weighing the relative desirability of each option’s outcomes. In other words, manual-mode thinking is consequentialist. And since we tend to assess outcomes according to their impact on other people, favoring those that improve the quality of their experiences the most, or detract from it the least, Greene argues that whenever we slow down and think through moral dilemmas deliberately we become utilitarians. He writes,

If I’m right, this convergence between what seems like the right moral philosophy (from a certain perspective) and what seems like the right moral psychology (from a certain perspective) is no accident. If I’m right, Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode. They put aside their inflexible automatic settings and instead asked two very abstract questions. First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality. Combining these two ideas, we get utilitarianism: We should maximize the quality of our experience, giving equal weight to the experience of each person. (173)

If you cite an authority recognized only by your own tribe—say, the Bible—in support of a moral argument, then members of other tribes will either simply discount your position or counter it with pronouncements by their own authorities. If, on the other hand, you argue for a law or a policy by citing evidence that implementing it would mean longer, healthier, happier lives for the citizens it affects, then only those seeking to establish the dominion of their own tribe can discount your position (which of course isn’t to say they can’t offer rival interpretations of your evidence).

            If we turn commonsense morality on its head and evaluate the consequences of giving our intuitions priority over utilitarian accounting, we can find countless circumstances in which being overly moral works to everyone’s detriment. Ideas of justice and fairness allow far too much space for selfish and tribal biases, whereas the math behind mutually optimal outcomes based on compromise tends to be harder to fudge. Greene reports, for instance, the findings of a series of experiments conducted by Fieke Harinck and colleagues at the University of Amsterdam in 2000. Negotiators representing either the prosecution or the defense were told either to focus on serving justice or to get the best outcome for their clients. The negotiations in the first condition almost always ended at an impasse. Greene explains,

Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. (88)

Framing an argument as an effort to establish who is right and who is wrong is like drawing a line in the sand: it activates tribal attitudes that pit us against them. Treating a negotiation more like an economic exchange, by contrast, circumvents those tribal biases.

Challenges to Utilitarianism

But do we really want to suppress our automatic moral reactions in favor of deliberative accountings of the greatest good for the greatest number? Deontologists have posed some effective challenges to utilitarianism in the form of thought-experiments that seem to show that efforts to improve the quality of experiences would lead to atrocities. Greene recounts, for instance, how in a high school debate he was confronted with the hypothetical case of a surgeon who could save five sick people by killing one healthy one. Then there’s the so-called Utility Monster, who experiences such happiness when eating humans that it quantitatively outweighs the suffering of those being eaten. More down-to-earth examples feature a scapegoat convicted of a crime to prevent rioting by people who are angry about police ineptitude, and the use of torture to extract information from a prisoner that could prevent a terrorist attack. The most influential challenge to utilitarianism, however, was leveled by the political philosopher John Rawls when he pointed out that it could be used to justify the enslavement of a minority by a majority.

Greene’s responses to these criticisms make up one of the most surprising, important, and fascinating parts of Moral Tribes. First, highly contrived thought-experiments about Utility Monsters and circumstances in which pushing a guy off a bridge is guaranteed to stop a trolley may indeed prove that utilitarianism is not true in any absolute sense. But whether or not such moral absolutes even exist is a contentious issue in its own right. Greene explains,

I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough. (275-6)

One source of confusion regarding the slavery issue is the equation of happiness with money; slave owners probably could make profits in excess of the losses sustained by the slaves. But money is often a poor index of happiness. Greene underscores this point by asking us to consider how much someone would have to pay us to sell ourselves into slavery. “In the real world,” he writes, “oppression offers only modest gains in happiness to the oppressors while heaping misery upon the oppressed” (284-5).

            Another failing of the thought-experiments meant to undermine utilitarianism is the shortsightedness of the supposedly obvious responses. The crimes of the murderous doctor and the scapegoating law officers may indeed produce short-term increases in happiness, but if the secret gets out, healthy and innocent people will live in fear, knowing they can’t trust doctors and law officers. The same logic applies to the objection that utilitarianism would force us to become infinitely charitable, since we can almost always afford to be more generous than we currently are. But how long could we serve as so-called happiness pumps before burning out, becoming miserable, and thus losing the capacity to make anyone else happier? Greene writes,

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits. (258)

Greene seems to be endorsing what philosophers call "rule utilitarianism." We can approach every choice by calculating the likely outcomes, but as a society we would be better served deciding on some rules for everyone to adhere to. It just may be possible for a doctor to increase happiness through murder in a particular set of circumstances—but most people would vociferously object to a rule legitimizing the practice.

The concept of human rights may present another challenge to Greene in his championing of consequentialism over deontology. It is our duty, after all, to recognize the rights of every human, and we ourselves have no right to disregard someone else’s rights no matter what benefit we believe might result from doing so. In his book The Better Angels of our Nature, Steven Pinker, Greene’s colleague at Harvard, attributes much of the steep decline in rates of violent death over the past three centuries to a series of what he calls Rights Revolutions, the first of which began during the Enlightenment. But the problem with arguments that refer to rights, Greene explains, is that controversies arise for the very reason that people don’t agree which rights we should recognize. He writes,

Thus, appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a “right to life.” If you feel that outlawing abortion is wrong, you can talk about a “right to choose.” If you’re Iran, you can talk about your “nuclear rights,” and if you’re Israel you can talk about your “right to self-defense.” “Rights” are nothing short of brilliant. They allow us to rationalize our gut feelings without doing any additional work. (302)

The only way to resolve controversies over which rights we should actually recognize and which rights we should prioritize over others, Greene argues, is to apply utilitarian reasoning.

Ideology and Epistemology

            In his discussion of the proper use of the language of rights, Greene comes closer than in any other section of Moral Tribes to explicitly articulating what strikes me as the most revolutionary idea that he and his fellow moral psychologists are suggesting—albeit as yet only implicitly. In his advocacy for what he calls “deep pragmatism,” Greene isn’t merely applying evolutionary theories to an old philosophical debate; he’s actually posing a subtly different question. The numerous thought-experiments philosophers use to poke holes in utilitarianism may not have much relevance in the real world, but they do undermine any claim utilitarianism might have to absolute moral truth. Greene’s approach is therefore to eschew any effort to arrive at absolute truths, including truths pertaining to rights. Instead, in much the same way scientists accept that our knowledge of the natural world is mediated by theories, which only approach the truth asymptotically, never capturing it with any finality, Greene intimates that the important task in moral philosophy isn’t to arrive at a full accounting of moral truths but rather to establish a process for resolving moral and political dilemmas.

What’s needed, in other words, isn’t a rock-solid moral ideology but a workable moral epistemology. And just as empiricism serves as the foundation of the epistemology of science, Greene makes a convincing case that we could use utilitarianism as the basis of an epistemology of morality. Pursuing the analogy between scientific and moral epistemologies even further, we can compare theories, which stand or fall according to their empirical support, to individual human rights, which we afford and affirm according to their impact on the collective happiness of every individual in the society. Greene writes,

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is, once again, because we have no non-question-begging (and non-utilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it’s not worth arguing—either because the question has been settled or because our opponents can’t be reasoned with—then it’s time to start rallying the troops. It’s time to affirm our moral commitments, not with wonky estimates of probabilities but with words that stir our souls. (308-9)

Rights may be the closest thing we have to moral truths, just as theories serve as our stand-ins for truths about the natural world, but even more important than rights or theories are the processes we rely on to establish and revise them.

A New Theory of Narrative

            As if a philosophical revolution weren’t enough, moral psychology is also putting in place what could be the foundation of a new understanding of the role of narratives in human lives. At the heart of every story is a conflict between competing moral ideals. In commercial fiction, there tends to be a character representing each side of the conflict, and audiences can be counted on to favor one side over the other—the good, altruistic guys over the bad, selfish guys. In more literary fiction, on the other hand, individual characters are faced with dilemmas pitting various modes of moral thinking against one another. In season one of Breaking Bad, for instance, Walter White famously writes out on a notepad the pros and cons of murdering the drug dealer restrained in Jesse’s basement. Everyone, including Walt, feels that killing the man is wrong, but if they let him go, Walt and his family will be at risk of retaliation. This dilemma is in fact quite similar to the ones he faces in each of the following seasons, right up until he has to decide whether or not to poison Brock. Trying to work out what Walt should do, and anxiously anticipating what he will do, are mental exercises few viewers can help engaging in as they watch the show.

            The current reigning conception of narrative in academia explains the appeal of stories by suggesting it derives from their tendency to fulfill conscious and unconscious desires, most troublesomely our desires to have our prejudices reinforced. We like to see men behaving in ways stereotypically male, women stereotypically female, minorities stereotypically black or Hispanic, and so on. Cultural products like works of art, and even scientific findings, are embraced, the reasoning goes, because they cement the hegemony of various dominant categories of people within the society. This tradition in arts scholarship and criticism can in large part be traced back to psychoanalysis, but it has developed over the last century to incorporate both the predominant view of language in the humanities and the cultural determinism espoused by many scholars in the service of various forms of identity politics. 

            The postmodern ideology that emerged from the convergence of these schools is marked by a suspicion that science is often little more than a veiled effort at buttressing the political status quo, and its preeminent thinkers deliberately set themselves apart from the humanist and Enlightenment traditions that held sway in academia until the middle of the last century by writing in byzantine, incoherent prose. Even though there could be no rational way either to support or challenge postmodern ideas, scholars still take them as cause for accusing both scientists and storytellers of using their work to further reactionary agendas.

For anyone who recognizes the unparalleled power of science both to advance our understanding of the natural world and to improve the conditions of human lives, postmodernism stands out as a catastrophic wrong turn, not just in academic history but in the moral evolution of our civilization. The equation of narrative with fantasy is a bizarre fantasy in its own right. Attempting to explain the appeal of a show like Breaking Bad by suggesting that viewers have an unconscious urge to be diagnosed with cancer and to subsequently become drug manufacturers is symptomatic of intractable institutional delusion. And, as Pinker recounts in Better Angels, literature, and novels in particular, were likely instrumental in bringing about the shift in consciousness toward greater compassion for greater numbers of people that resulted in the unprecedented decline in violence beginning in the second half of the nineteenth century.

Yet, when it comes to arts scholarship, postmodernism is just about the only game in town. Granted, the writing in this tradition has progressed a great deal toward clarity, but the focus on identity politics has intensified to the point of hysteria: you’d be hard-pressed to find a major literary figure who hasn’t been accused of misogyny at one point or another, and any scientist who dares study something like gender differences can count on having her motives questioned and her work lampooned by well-intentioned, well-indoctrinated squads of well-poisoning liberal wags.

            When Emily Nussbaum complains about viewers of Breaking Bad being lulled by the “masculine adventure” and the digs against “nagging wives” into becoming enablers of Walt’s bad behavior, she’s doing exactly what so many of us were taught to do in academic courses on literary and film criticism, applying a postmodern feminist ideology to the show—and completely missing the point. As the series opens, Walt is deliberately portrayed as downtrodden and frustrated, and Skyler’s bullying is an important part of that dynamic. But the pleasure of watching the show doesn’t come so much from seeing Walt get out from under Skyler’s thumb—he never really does—as it does from anticipating and fretting over how far Walt will go in his criminality, goaded on by all that pent-up frustration. Walt shows a similar concern for himself, worrying over what effect his exploits will have on who he is and how he will be remembered. We see this in season three when he becomes obsessed by the “contamination” of his lab—which turns out to be no more than a house fly—and at several other points as well. Viewers are not concerned with Walt because he serves as an avatar acting out their fantasies (or else the show would have a lot more nude scenes with the principal of the school he teaches in). They’re concerned because, at least at the beginning of the show, he seems to be a good person and they can sympathize with his tribulations.

The much more solidly grounded view of narrative inspired by moral psychology suggests that common themes in fiction are not reflections or reinforcements of some dominant culture, but rather emerge from aspects of our universal human psychology. Our feelings about characters, according to this view, aren’t determined by how well they coincide with our abstract prejudices; on the contrary, we favor the types of people in fiction we would favor in real life. Indeed, if the story is any good, we will have to remind ourselves that the people whose lives we’re tracking really are fictional. Greene doesn’t explore the intersection between moral psychology and narrative in Moral Tribes, but he does give a nod to what we can only hope will be a burgeoning field when he writes, 

Nowhere is our concern for how others treat others more apparent than in our intense engagement with fiction. Were we purely selfish, we wouldn’t pay good money to hear a made-up story about a ragtag group of orphans who use their street smarts and quirky talents to outfox a criminal gang. We find stories about imaginary heroes and villains engrossing because they engage our social emotions, the ones that guide our reactions to real-life cooperators and rogues. We are not disinterested parties. (59)

Many people ask why we care so much about people who aren’t even real, but we only ever reflect on the fact that what we’re reading or viewing is a simulation when we’re not sufficiently engrossed by it.

            Was Nussbaum completely wrong to insist that Walt went beyond the pale when he poisoned Brock? She certainly showed the type of myopia Greene attributes to the automatic mode by forgetting Walt saved at least four lives by endangering the one. But most viewers probably had a similar reaction. The trouble wasn’t that she was appalled; it was that her postmodernism encouraged her to unquestioningly embrace and give voice to her initial feelings. Greene writes, 

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise. Our moral alarm systems think that the difference between pushing and hitting a switch is of great moral significance. More important, our moral alarm systems see no difference between self-serving murder and saving a million lives at the cost of one. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. (253)

Greene doesn’t reveal whether he’s a Breaking Bad fan or not, but his discussion of the footbridge dilemma gives readers a good indication of how he’d respond to Walt’s actions.

If you don’t feel that it’s wrong to push the man off the footbridge, there’s something wrong with you. I, too, feel that it’s wrong, and I doubt that I could actually bring myself to push, and I’m glad that I’m like this. What’s more, in the real world, not pushing would almost certainly be the right decision. But if someone with the best of intentions were to muster the will to push the man off the footbridge, knowing for sure that it would save five lives, and knowing for sure that there was no better alternative, I would approve of this action, although I might be suspicious of the person who chose to perform it. (251)

The overarching theme of Breaking Bad, which Nussbaum fails utterly to comprehend, is the transformation of a good man into a bad one. When he poisons Brock, we’re glad he succeeded in saving his family, and some of us are even okay with his methods, but we’re worried—suspicious even—about what his ability to go through with it says about him. Over the course of the series, we’ve found ourselves rooting for Walt, and we’ve come to really like him. We don’t want to see him break too bad. And however bad he does break we can’t help hoping for his redemption. Since he’s a valuable member of our tribe, we’re loath even to consider that it might be time for him to go.

Also read:

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY


“The World until Yesterday” and the Great Anthropology Divide: Wade Davis’s and James C. Scott’s Bizarre and Dishonest Reviews of Jared Diamond’s Work

The field of anthropology is divided into two rival factions, the postmodernists and the scientists—though the postmodernists like to insist they’re being scientific as well. The divide can be seen in critiques of Jared Diamond’s “The World until Yesterday.”

Cultural anthropology has for some time been divided into two groups. The first attempts to understand cultural variation empirically by incorporating it into theories of human evolution and ecological adaptation. The second merely celebrates cultural diversity, and its members are quick to attack any findings or arguments by those in the first group that can in any way be construed as unflattering to the cultures being studied. (This dichotomy is intended to serve as a useful, and only slight, oversimplification.)

Jared Diamond’s scholarship in anthropology places him squarely in the first group. Yet he manages to thwart many of the assumptions held by those in the second group because he studiously avoids the sins of racism and biological determinism they insist every last member of the first group is guilty of. Rather than seeing his work as an exemplar or as evidence that the field is amenable to scientific investigation, however, members of the second group invent crimes and victims so they can continue insisting there’s something immoral about scientific anthropology (though the second group, oddly enough, claims that designation as well).

            Diamond is not an anthropologist by training, but his Pulitzer Prize-winning book Guns, Germs, and Steel, in which he sets out to explain why some societies became technologically advanced conquerors over the past 10,000 years while others maintained their hunter-gatherer lifestyles, became a classic in the field almost as soon as it was published in 1997. His interest in cultural variation arose in large part out of his experiences traveling through New Guinea, the most culturally diverse region of the planet, to conduct ornithological research. By the time he published his first book about human evolution, The Third Chimpanzee, at age 54, he’d spent more time among people from a more diverse set of cultures than many anthropologists do over their entire careers.

In his latest book, The World until Yesterday: What Can We Learn from Traditional Societies?, Diamond compares the lifestyles of people living in modern industrialized societies with those of people who rely on hunting and gathering or horticultural subsistence strategies. His first aim is simply to highlight the differences, since the way most of us live today is, evolutionarily speaking, a very recent development; his second is to show that certain traditional practices may actually lead to greater well-being, and may thus be advantageous if adopted by those of us living in advanced civilizations.

            Obviously, Diamond’s approach has certain limitations, chief among them that it affords him little space for in-depth explorations of individual cultures. Instead, he attempts to identify general patterns that apply to traditional societies all over the world. What this means in the context of the great divide in anthropology is that no sooner had Diamond set pen to paper than he’d fallen afoul of the most passionately held convictions of the second group, who bristle at any discussion of universal trends in human societies. The anthropologist Wade Davis’s review of The World until Yesterday in The Guardian is extremely helpful for anyone hoping to appreciate the differences between the two camps because it exemplifies nearly all of the features of this type of historical particularism, with one exception: it’s clearly, even gracefully, written. But this isn’t to say Davis is at all straightforward about his own positions, which you have to read between the lines to glean. Situating in its historical context the commitment to avoid general theories and focus instead on celebrating the details, Davis writes,

This ethnographic orientation, distilled in the concept of cultural relativism, was a radical departure, as unique in its way as was Einstein’s theory of relativity in the field of physics. It became the central revelation of modern anthropology. Cultures do not exist in some absolute sense; each is but a model of reality, the consequence of one particular set of intellectual and spiritual choices made, however successfully, many generations before. The goal of the anthropologist is not just to decipher the exotic other, but also to embrace the wonder of distinct and novel cultural possibilities, that we might enrich our understanding of human nature and just possibly liberate ourselves from cultural myopia, the parochial tyranny that has haunted humanity since the birth of memory.

This stance with regard to other cultures sounds viable enough—it even seems admirable. But Davis is saying something more radical than you may think at first glance. He’s claiming that cultural differences can have no explanations because they arise out of “intellectual and spiritual choices.” It must be pointed out as well that he’s profoundly confused about how relativity in physics relates to—or doesn’t relate to—cultural relativity in anthropology. Einstein discovered that measurements of time are relative to an observer’s velocity, since the speed of light is the same for all observers, so the faster one travels the more slowly one’s clock advances relative to a stationary observer’s. Since this rule applies the same everywhere in the universe, the theory actually works much better as an analogy for the types of generalization Diamond tries to discover than it does for the idea that no such generalizations can be discovered. Cultural relativism is not a “revelation” about whether cultures can be said to exist; it’s a principle that enjoins us to try to understand other cultures on their own terms, not as deviations from our own. Diamond appreciates this principle—he just doesn’t take it to as great an extreme as Davis and the other anthropologists in his camp.

            The idea that cultures don’t exist in any absolute sense implies that comparing one culture to another won’t result in any meaningful or valid insights. But this isn’t a finding or a discovery, as Davis suggests; it’s an a priori conviction. For anthropologists in Davis’s camp, as soon as you start looking outside of a particular culture for an explanation of how it became what it is, you’re no longer looking to understand that culture on its own terms; you’re instead imposing outside ideas and outside values on it. So the simple act of trying to think about variation in a scientific way automatically makes you guilty of a subtle form of colonization. Davis writes,

The very premise of Guns, Germs, and Steel is that a hierarchy of progress exists in the realm of culture, with measures of success that are exclusively material and technological; the fascinating intellectual challenge is to determine just why the west ended up on top. In the posing of this question, Diamond evokes 19th-century thinking that modern anthropology fundamentally rejects. The triumph of secular materialism may be the conceit of modernity, but it does very little to unveil the essence of culture or to account for its diversity and complexity.

For Davis, comparison automatically implies assignment of relative values. But, if we agree that two things can be different without one being superior, we must conclude that Davis is simply being dishonest, because you don’t have to read beyond the Prelude to Guns, Germs, and Steel to find Diamond’s explicit disavowal of this premise that supposedly underlies the entire book:

…don’t words such as “civilization,” and phrases such as “rise of civilization,” convey the false impression that civilization is good, tribal hunter-gatherers are miserable, and history for the past 13,000 years has involved progress toward greater human happiness? In fact, I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in happiness. My own impression, from having divided my life between United States cities and New Guinea villages, is that the so-called blessings of civilization are mixed. For example, compared with hunter-gatherers, citizens of modern industrialized states enjoy better medical care, lower risk of death by homicide, and a longer life span, but receive much less social support from friendships and extended families. My motive for investigating these geographic differences in human societies is not to celebrate one type of society over another but simply to understand what happened in history. (18)

            For Davis and those sharing his postmodern ideology, this type of dishonesty is acceptable because they believe the political ends of protecting indigenous peoples from exploitation justify their deceitful means. In other words, they’re placing their political goals before their scholarly or scientific ones. Davis argues that the only viable course is to let people from various cultures speak for themselves, since facts and theories in the wrong hands will inevitably lubricate the already slippery slope to colonialism and exploitation. Even Diamond’s theories about environmental influences, in this light, can be dangerous. Davis writes,

In accounting for their simple material culture, their failure to develop writing or agriculture, he laudably rejects notions of race, noting that there is no correlation between intelligence and technological prowess. Yet in seeking ecological and climatic explanations for the development of their way of life, he is as certain of their essential primitiveness as were the early European settlers who remained unconvinced that Aborigines were human beings. The thought that the hundreds of distinct tribes of Australia might simply represent different ways of being, embodying the consequences of unique sets of intellectual and spiritual choices, does not seem to have occurred to him.

Davis is rather deviously suggesting a kinship between Diamond and the evil colonialists of yore, but the connection rests on a non sequitur, that positing environmental explanations of cultural differences necessarily implies primitiveness on the part of the “lesser” culture.

Davis doesn’t explicitly say anywhere in his review that all scientific explanations are colonialist, but once you rule out biological, cognitive, environmental, and climatic theories, well, there’s not much left. Davis’s rival explanation, such as it is, posits a series of collective choices made over the course of history, which in a sense must be true. But it merely raises the question of what precisely led the people to make those choices, and this question inevitably brings us back to all those factors Diamond weighs as potential explanations. Davis could have made the point that not every aspect of every culture can be explained by ecological factors—but Diamond never suggests otherwise. Citing the example of Kaulong widow strangling in The World until Yesterday, Diamond writes that there’s no reason to believe the practice is in any way adaptive and admits that it can only be “an independent historical cultural trait that arose for some unknown reason in that particular area of New Britain” (21).

I hope we can all agree that harming or exploiting indigenous peoples in any part of the world is wrong and that we should support the implementation of policies that protect them and their ways of life (as long as those ways don’t involve violations of anyone’s rights as a human—yes, that moral imperative supersedes cultural relativism, fears of colonialism be damned). But the idea that trying to understand cultural variation scientifically always and everywhere undermines the dignity of people living in non-Western cultures is the logical equivalent of insisting that trying to understand variations in people’s personalities through empirical methods is an affront to their agency and freedom to make choices as individuals. If the position of these political-activist anthropologists had any validity, it would undermine the entire field of psychology, and for that matter the social sciences in general. It’s safe to assume that the opacity that typifies these anthropologists’ writing is meant to protect their ideas from obvious objections like this one.

As well as Davis writes, it’s nonetheless difficult to figure out what his specific problems with Diamond’s book are. At one point he complains, “Traditional societies do not exist to help us tweak our lives as we emulate a few of their cultural practices. They remind us that our way is not the only way.” Fair enough—but then he concludes with a passage that seems startlingly close to a summation of Diamond’s own thesis.

The voices of traditional societies ultimately matter because they can still remind us that there are indeed alternatives, other ways of orienting human beings in social, spiritual and ecological space… By their very existence the diverse cultures of the world bear witness to the folly of those who say that we cannot change, as we all know we must, the fundamental manner in which we inhabit this planet. This is a sentiment that Jared Diamond, a deeply humane and committed conservationist, would surely endorse.

On the surface, it seems like Davis isn’t even disagreeing with Diamond. What he’s not saying explicitly, however, but hopes nonetheless that we understand is that sampling or experiencing other cultures is great—but explaining them is evil.

            Davis’s review was published in January of 2013, and its main points have been echoed by several other anti-scientific anthropologists—but perhaps none so eminent as the Yale Professor of Anthropology and Political Science, James C. Scott, whose review, “Crops, Towns, Government,” appeared in the London Review of Books in November. After praising Diamond’s plea for the preservation of vanishing languages, Scott begins complaining about the idea that modern traditional societies offer us any evidence at all about how our ancestors lived. He writes of Diamond,

He imagines he can triangulate his way to the deep past by assuming that contemporary hunter-gatherer societies are ‘our living ancestors’, that they show what we were like before we discovered crops, towns and government. This assumption rests on the indefensible premise that contemporary hunter-gatherer societies are survivals, museum exhibits of the way life was lived for the entirety of human history ‘until yesterday’–preserved in amber for our examination.

Don’t be fooled by those lonely English quotation marks—Diamond never makes this mistake, nor does his argument rest on any such premise. Scott is simply being dishonest. In the first chapter of The World until Yesterday, Diamond explains why he wanted to write about the types of changes that took place in New Guinea between the first contact with Westerners in 1931 and today. “New Guinea is in some respects,” he writes, “a window onto the human world as it was until a mere yesterday, measured against a time scale of the 6,000,000 years of human evolution.” He follows this line with a parenthetical, “(I emphasize ‘in some respects’—of course the New Guinea Highlands of 1931 were not an unchanged world of yesterday)” (5-6). It’s clear he added this line because he was anticipating criticisms like Davis’s and Scott’s.

The confusion arises from Scott’s conflation of the cultures and lifestyles Diamond describes with the individuals representing them. Diamond assumes that factors like population size, social stratification, and level of technological advancement have a profound influence on culture. So, if we want to know about our ancestors, we need to look to societies living in conditions similar to the ones they must’ve lived in with regard to just these types of factors. In another bid to ward off the types of criticism he knows to expect from anthropologists like Scott and Davis, he includes a footnote in his introduction which explains precisely what he’s interested in.

By the terms “traditional” and “small-scale” societies, which I shall use throughout this book, I mean past and present societies living at low population densities in small groups ranging from a few dozen to a few thousand people, subsisting by hunting-gathering or by farming or herding, and transformed to a limited degree by contact with large, Westernized, industrial societies. In reality, all such traditional societies still existing today have been at least partly modified by contact, and could alternatively be described as “transitional” rather than “traditional” societies, but they often still retain many features and social processes of the small societies of the past. I contrast traditional small-scale societies with “Westernized” societies, by which I mean the large modern industrial societies run by state governments, familiar to readers of this book as the societies in which most of my readers now live. They are termed “Westernized” because important features of those societies (such as the Industrial Revolution and public health) arose first in Western Europe in the 1700s and 1800s, and spread from there overseas to many other countries. (6)

Scott goes on to take Diamond to task for suggesting that traditional societies are more violent than modern industrialized societies. This is perhaps the most incendiary point of disagreement between the factions on either side of the anthropology divide. The political activists worry that if anthropologists claim indigenous peoples are more violent outsiders will take it as justification to pacify them, which has historically meant armed invasion and displacement. Since the stakes are so high, Scott has no compunctions about misrepresenting Diamond’s arguments. “There is, contra Diamond,” he writes, “a strong case that might be made for the relative non-violence and physical well-being of contemporary hunters and gatherers when compared with the early agrarian states.” 

Well, no, not contra Diamond, who only compares traditional societies to modern Westernized states, like the ones his readers live in, not early agrarian ones. Scott is referring to Diamond's theories about the initial transition to states, claiming that interstate violence negates the benefits of any pacifying central authority. But it may still be better to live under the threat of infrequent state warfare than of much more frequent ambushes or retaliatory attacks by nearby tribes. Scott also suggests that records of high rates of enslavement in early states somehow undermine the case for more homicide in traditional societies, but again Diamond doesn’t discuss early states. Diamond would probably agree that slavery, in the context of his theories, is an interesting topic, but it's hardly the fatal flaw in his ideas Scott makes it out to be.

The misrepresentations extend beyond Diamond’s arguments to encompass the evidence he builds them on. Scott insists it’s all anecdotal, pseudoscientific, and extremely limited in scope. His biggest mistake here is to pull Steven Pinker into the argument, a psychologist whose name alone may tar Diamond’s book in the eyes of anthropologists who share Scott’s ideology, but for anyone else, especially if they’ve actually read Pinker’s work, that name lends further credence to Diamond’s case. (Pinker has actually done the math on whether your chances of dying a violent death are better or worse in different types of society.) Scott writes,

Having chosen some rather bellicose societies (the Dani, the Yanomamo) as illustrations, and larded his account with anecdotal evidence from informants, he reaches the same conclusion as Steven Pinker in The Better Angels of Our Nature: we know, on the basis of certain contemporary hunter-gatherers, that our ancestors were violent and homicidal and that they have only recently (very recently in Pinker’s account) been pacified and civilised by the state. Life without the state is nasty, brutish and short.

In reality, both Diamond and Pinker rely on evidence from a herculean variety of sources going well beyond contemporary ethnographies. To cite just one example Scott neglects to mention, an article by Samuel Bowles published in the journal Science in 2009 examines the rates of death by violence at several prehistoric sites and shows that they’re startlingly similar to those found among modern hunter-gatherers. Insofar as Scott even mentions archeological evidence, it's merely to insist on its worthlessness. Anyone who reads The World until Yesterday after reading Scott’s review will be astonished by how nuanced Diamond’s section on violence actually is. Taking up almost a hundred pages, it is far more insightful and better supported than the essay that purports to undermine it. The section also shows, contra Scott, that Diamond is well aware of all the difficulties and dangers of trying to arrive at conclusions based on any one line of evidence—which is precisely why he follows as many lines as are available to him.

However, even if we accept that traditional societies really are more violent, it could still be the case that tribal conflicts are caused, or at least intensified, through contact with large-scale societies. In order to make this argument, though, political-activist anthropologists must shift their position from claiming that no evidence of violence exists to claiming that the evidence is meaningless or misleading. Scott writes,

No matter how one defines violence and warfare in existing hunter-gatherer societies, the greater part of it by far can be shown to be an effect of the perils and opportunities represented by a world of states. A great deal of the warfare among the Yanomamo was, in this sense, initiated to monopolise key commodities on the trade routes to commercial outlets (see, for example, R. Brian Ferguson’s Yanomami Warfare: A Political History, a strong antidote to the pseudo-scientific account of Napoleon Chagnon on which Diamond relies heavily).

It’s true that Ferguson puts forth a rival theory for warfare among the Yanomamö—and the political-activist anthropologists hold him up as a hero for doing so. (At least one Yanomamö man insisted, in response to Chagnon’s badgering questions about why they fought so much, that it had nothing to do with commodities—they raided other villages for women.) But Ferguson’s work hardly settles the debate. Why, for instance, do the patterns of violence appear in traditional societies all over the world, regardless of which state societies they’re in supposed contact with? And state governments don’t just influence violence in an upward direction. As Diamond points out, “State governments routinely adopt a conscious policy of ending traditional warfare: for example, the first goal of 20th-Century Australian patrol officers in the Territory of Papua New Guinea, on entering a new area, was to stop warfare and cannibalism” (133-4).

What is the proper moral stance anthropologists should take with regard to people living in traditional societies? Should they make it their priority to report the findings of their inquiries honestly? Or should they prioritize their role as advocates for indigenous people’s rights? These are fair questions—and they take on a great deal of added gravity when you consider the history, not to mention the ongoing examples, of how indigenous peoples have suffered at the hands of peoples from Western societies. The answers hinge on how much influence anthropologists currently have on policies that impact traditional societies and on whether science, or Western culture in general, is by its very nature somehow harmful to indigenous peoples. Scott’s and Davis’s positions on both of these issues are clear. Scott writes,

Contemporary hunter-gatherer life can tell us a great deal about the world of states and empires but it can tell us nothing at all about our prehistory. We have virtually no credible evidence about the world until yesterday and, until we do, the only defensible intellectual position is to shut up.

Scott’s argument raises two further questions: when and from where can we count on the “credible evidence” to start rolling in? His “only defensible intellectual position” isn’t that we should reserve judgment or hold off trying to arrive at explanations; it’s that we shouldn’t bother trying to judge the merits of the evidence and that any attempts at explanation are hopeless. This isn’t an intellectual position at all—it’s an obvious endorsement of anti-intellectualism. What Scott really means is that he believes making questions about our hunter-gatherer ancestors off-limits is the only morally defensible position.

            It’s easy to conjure up mental images of the horrors inflicted on native peoples by western explorers and colonial institutions. But framing the history of encounters between peoples with varying levels of technological advancement as one long Manichean tragedy of evil imperialists having their rapacious and murderous way with perfectly innocent noble savages risks trivializing important elements of both types of culture. Traditional societies aren’t peaceful utopias. Western societies and Western governments aren’t mere engines of oppression. Most importantly, while it may be true that science can be—and sometimes is—coopted to serve oppressive or exploitative ends, there’s nothing inherently harmful or immoral about science, which can just as well be used to counter arguments for the mistreatment of one group of people by another. To anthropologists like Davis and Scott, human behavior is something to stand in spiritual awe of, indigenous societies something to experience religious guilt about, in any case not anything to profane with dirty, mechanistic explanations. But, for all their declamations about the evils of thinking that any particular culture can in any sense be said to be inferior to another, they have a pretty dim view of our own.

            It may be simple pride that makes it hard for Scott to accept that gold miners in Brazil weren’t sitting around waiting for some prominent anthropologist at the University of Michigan, or UCLA, or Yale, to publish an article in Science about Yanomamö violence to give them proper justification to use their superior weapons to displace the people living on prime locations. The sad fact is, if the motivation to exploit indigenous peoples is strong enough, and if the moral and political opposition isn’t sufficient, justifications will be found regardless of which anthropologist decides to publish on which topics. But the crucial point Scott misses is that our moral and political opposition cannot be founded on dishonest representations or willful blindness regarding the behaviors, good or bad, of the people we would protect. To understand why this is so, and because Scott embarrassed himself with his childishness, embarrassed the London Review of Books, which failed to fact-check his article properly, and did a disservice to the discipline of anthropology by attempting to shout down an honest and humane scholar he disagrees with, it’s only fitting that we turn to a passage in The World until Yesterday Scott should have paid more attention to. “I sympathize with scholars outraged by the mistreatment of indigenous peoples,” Diamond writes,

But denying the reality of traditional warfare because of political misuse of its reality is a bad strategy, for the same reason that denying any other reality for any other laudable political goal is a bad strategy. The reason not to mistreat indigenous people is not that they are falsely accused of being warlike, but that it’s unjust to mistreat them. The facts about traditional warfare, just like the facts about any other controversial phenomenon that can be observed and studied, are likely eventually to come out. When they do come out, if scholars have been denying traditional warfare’s reality for laudable political reasons, the discovery of the facts will undermine the laudable political goals. The rights of indigenous people should be asserted on moral grounds, not by making untrue claims susceptible to refutation. (153-4)

Also read:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY


The Self-Righteousness Instinct: Steven Pinker on the Better Angels of Modernity and the Evils of Morality

Is violence really declining? How can that be true? What could be causing it? Why are so many of us convinced the world is going to hell in a handbasket? Steven Pinker attempts to answer these questions in his magnificent and mind-blowing book.


Steven Pinker is one of the few scientists who can write a really long book and still expect a significant number of people to read it. But I have a feeling many who might be vaguely intrigued by the buzz surrounding his 2011 book The Better Angels of Our Nature: Why Violence Has Declined wonder why he had to make it nearly seven hundred outsized pages long. Many curious folk likely also wonder why a linguist who proselytizes for psychological theories derived from evolutionary or Darwinian accounts of human nature would write a doorstop drawing on historical and cultural data to describe the downward trajectories of rates of the worst societal woes. The message that violence of pretty much every variety is at unprecedentedly low rates comes as quite a shock, as it runs counter to our intuitive, news-fueled sense of being on a crash course for Armageddon. So part of the reason behind the book’s heft is that Pinker has to bolster his case with lots of evidence to get us to rethink our views. But flipping through the book you find that somewhere between half and a third of its mass is devoted, not to evidence of the decline, but to answering the questions of why the trend has occurred and why it gives every indication of continuing into the foreseeable future. So is this a book about how evolution has made us violent or about how culture is making us peaceful?

The first thing that needs to be said about Better Angels is that you should read it. Despite its girth, it’s at no point the least bit cumbersome to read, and at many points it’s so fascinating that, weighty as it is, you’ll have a hard time putting it down. Pinker has mastered a prose style that’s simple and direct to the point of feeling casual without ever wanting for sophistication. You can also rest assured that what you’re reading is timely and important because it explores aspects of history and social evolution that impact pretty much everyone in the world but that have gone ignored—if not censoriously denied—by most of the eminences contributing to the zeitgeist since the decades following the last world war.

            Still, I suspect many people who take the plunge into the first hundred or so pages are going to feel a bit disoriented as they try to figure out what the real purpose of the book is, and this may cause them to falter in their resolve to finish reading. The problem is that the resistance Better Angels anticipates and responds to at such prodigious length doesn’t come from the news media or from the blinkered celebrities in those carnivals of sanctimonious imbecility, the political talk shows. It comes from Pinker’s fellow academics. The overall point of Better Angels remains obscure owing to some deliberate caginess on the author’s part when it comes to identifying the true targets of his arguments.

            This evasiveness doesn’t make the book difficult to read, but a quality of diffuseness to the theoretical sections, a multitude of strands left dangling, does at points make you doubt whether Pinker had a clear purpose in writing, which makes you doubt your own purpose in reading. With just a little tying together of those strands, however, you start to see that while on the surface he’s merely righting the misperception that over the course of history our species has been either consistently or increasingly violent, what he’s really after is something different, something bigger. He’s trying to instigate, or at least play a part in instigating, a revolution—or more precisely a renaissance—in the way scholars and intellectuals think not just about human nature but about the most promising ways to improve the lot of human societies.

The longstanding complaint about evolutionary explanations of human behavior is that by focusing on our biology as opposed to our supposedly limitless capacity for learning they imply a certain level of fixity to our nature, and this fixedness is thought to further imply a limit to what political reforms can accomplish. The reasoning goes, if the explanation for the way things are is to be found in our biology, then, unless our biology changes, the way things are is the way they’re going to remain. Since biological change occurs at the glacial pace of natural selection, we’re pretty much stuck with the nature we have. 

            Historically, many scholars have made matters worse for evolutionary scientists today by applying ostensibly Darwinian reasoning to what seemed at the time obvious biological differences between human races in intelligence and capacity for acquiring the more civilized graces, making no secret of their conviction that the differences justified colonial expansion and other forms of oppressive rule. As a result, evolutionary psychologists of the past couple of decades have routinely had to defend themselves against charges that they’re secretly trying to advance some reactionary (or even genocidal) agenda. Considering Pinker’s choice of topic in Better Angels in light of this type of criticism, we can start to get a sense of what he’s up to—and why his efforts are discombobulating.

If you’ve spent any time on a university campus in the past forty years, particularly if it was in a department of the humanities, then you have been inculcated with an ideology that was once labeled postmodernism but that eventually became so entrenched in academia, and in intellectual culture more broadly, that it no longer requires a label. (If you took a class with the word "studies" in the title, then you got a direct shot to the brain.) Many younger scholars actually deny any espousal of it—“I’m not a pomo!”—with reference to a passé version marked by nonsensical tangles of meaningless jargon and the conviction that knowledge of the real world is impossible because “the real world” is merely a collective delusion or social construction put in place to perpetuate societal power structures. The disavowals notwithstanding, the essence of the ideology persists in an inescapable but unremarked obsession with those same power structures—the binaries of men and women, whites and blacks, rich and poor, the West and the rest—and the abiding assumption that texts and other forms of media must be assessed not just according to their truth content, aesthetic virtue, or entertainment value, but also with regard to what we imagine to be their political implications. Indeed, those imagined political implications are often taken as clear indicators of the author’s true purpose in writing, which we must sniff out—through a process called “deconstruction,” or its anemic offspring “rhetorical analysis”—lest we complacently succumb to the subtle persuasion.

In the late nineteenth and early twentieth centuries, faith in what we now call modernism inspired intellectuals to assume that the civilizations of Western Europe and the United States were on a steady march of progress toward improved lives for all their own inhabitants as well as the world beyond their borders. Democracy had brought about a new age of government in which rulers respected the rights and freedom of citizens. Medicine was helping ever more people live ever longer lives. And machines were transforming everything from how people labored to how they communicated with friends and loved ones. Everyone recognized that the driving force behind this progress was the juggernaut of scientific discovery. But jump ahead a hundred years to the early twenty-first century and you see a quite different attitude toward modernity. As Pinker explains in the closing chapter of Better Angels,

A loathing of modernity is one of the great constants of contemporary social criticism. Whether the nostalgia is for small-town intimacy, ecological sustainability, communitarian solidarity, family values, religious faith, primitive communism, or harmony with the rhythms of nature, everyone longs to turn back the clock. What has technology given us, they say, but alienation, despoliation, social pathology, the loss of meaning, and a consumer culture that is destroying the planet to give us McMansions, SUVs, and reality television? (692)

The social pathology here consists of all the inequities and injustices suffered by the people on the losing side of those binaries all us closet pomos go about obsessing over. Then of course there’s industrial-scale war and all the other types of modern violence. With terrorism, the War on Terror, the civil war in Syria, the Israel-Palestine conflict, genocides in the Sudan, Kosovo, and Rwanda, and the marauding bands of drugged-out gang rapists in the Congo, it seems safe to assume that science and democracy and capitalism have contributed to the construction of an unsafe global system with some fatal, even catastrophic design flaws. And that’s before we consider the two world wars and the Holocaust. So where the hell is this decline Pinker refers to in his title?

            One way to think about the strain of postmodernism or anti-modernism with the most currency today (and if you’re reading this essay you can just assume your views have been influenced by it) is that it places morality and politics—identity politics in particular—atop a hierarchy of guiding standards above science and individual rights. So, for instance, concerns over the possibility that a negative image of Amazonian tribespeople might encourage their further exploitation trump objective reporting on their culture by anthropologists, even though there’s no evidence to support those concerns. And evidence that the disproportionate number of men in STEM fields reflects average differences between men and women in lifestyle preferences and career interests is ignored out of deference to a political ideal of perfect parity. The urge to grant moral and political ideals veto power over science is justified in part by all the oppression and injustice that abounds in modern civilizations—sexism, racism, economic exploitation—but most of all it’s rationalized with reference to the violence thought to follow in the wake of any movement toward modernity. Pinker writes,

“The twentieth century was the bloodiest in history” is a cliché that has been used to indict a vast range of demons, including atheism, Darwin, government, science, capitalism, communism, the ideal of progress, and the male gender. But is it true? The claim is rarely backed up by numbers from any century other than the 20th, or by any mention of the hemoclysms of centuries past. (193)

He gives the question even more gravity when he reports that all those other areas in which modernity is alleged to be such a colossal failure tend to improve in the absence of violence. “Across time and space,” he writes in the preface, “the more peaceable societies also tend to be richer, healthier, better educated, better governed, more respectful of their women, and more likely to engage in trade” (xxiii). So the question isn’t just about what the story with violence is; it’s about whether science, liberal democracy, and capitalism are the disastrous blunders we’ve learned to think of them as or whether they still just might hold some promise for a better world.

*******

            It’s in about the third chapter of Better Angels that you start to get the sense that Pinker’s style of thinking is, well, way out of style. He seems to be marching to the beat not of his own drummer but of some drummer from the nineteenth century. In the previous chapter, he drew a line connecting the violence of chimpanzees to that in what he calls non-state societies, and the images he’s left you with are savage indeed. Now he’s bringing in the philosopher Thomas Hobbes’s idea of a government Leviathan that, once established, immediately works to curb the violence that characterizes us humans in states of nature and anarchy. According to sociologist Norbert Elias’s book The Civilizing Process (first published in 1939 and translated into English in 1969), a work whose thesis plays a starring role throughout Better Angels, the consolidation of a Leviathan in England set in motion a trend toward pacification, beginning with the aristocracy no less, before spreading down to the lower ranks and radiating out to the countries of continental Europe and onward thence to other parts of the world. You can measure your feelings of unease in response to Pinker’s civilizing scenario as a proxy for how thoroughly steeped you are in postmodernism.

            The two factors missing from his account of the civilizing pacification of Europe that distinguish it from the self-congratulatory and self-exculpatory sagas of centuries past are the innate superiority of the paler stock and the special mission of conquest and conversion commissioned by a Christian god. In a later chapter, Pinker violates the contemporary taboo against discussing—or even thinking about—the potential role of average group (racial) differences in a propensity toward violence, but he concludes the case for any such differences is unconvincing: “while recent biological evolution may, in theory, have tweaked our inclinations toward violence and nonviolence, we have no good evidence that it actually has” (621). The conclusion that the Civilizing Process can’t be contingent on congenital characteristics follows from the observation of how readily individuals from far-flung regions acquire local habits of self-restraint and fellow-feeling when they’re raised in modernized societies. As for religion, Pinker includes it in a category of factors that are “Important but Inconsistent” with regard to the trend toward peace, dismissing the idea that atheism leads to genocide by pointing out that “Fascism happily coexisted with Catholicism in Spain, Italy, Portugal, and Croatia, and though Hitler had little use for Christianity, he was by no means an atheist, and professed that he was carrying out a divine plan.” Though he cites several examples of atrocities incited by religious fervor, he does credit “particular religious movements at particular times in history” with successfully working against violence (677).

            Despite his penchant for blithely trampling on the taboos of the liberal intelligentsia, Pinker refuses to cooperate with our reflex to pigeonhole him with imperialists or far-right traditionalists past or present. He continually holds up to ridicule the idea that violence has any redeeming effects. In a section on the connection between increasing peacefulness and rising intelligence, he suggests that our violence-tolerant “recent ancestors” can rightly be considered “morally retarded” (658).

  He singles out George W. Bush as an unfortunate and contemptible counterexample in a trend toward more complex political rhetoric among our leaders. And if it’s either gender that comes out not looking as virtuous in Better Angels it ain’t the distaff one. Pinker is difficult to categorize politically because he’s a scientist through and through. What he’s after are reasoned arguments supported by properly weighed evidence.

But there is something going on in Better Angels beyond a mere accounting for the ongoing decline in violence that most of us are completely oblivious of being the beneficiaries of. For one, there’s a challenge to the taboo status of topics like genetic differences between groups, or differences between individuals in IQ, or differences between genders. And there’s an implicit challenge as well to the complementary premises he took on more directly in his earlier book The Blank Slate: that biological theories of human nature always lead to oppressive politics and that theories of the infinite malleability of human behavior always lead to progress (communism relies on a blank slate theory, and it inspired guys like Stalin, Mao, and Pol Pot to murder untold millions). But the most interesting and important task Pinker has set for himself with Better Angels is a restoration of the Enlightenment, with its twin pillars of science and individual rights, to its rightful place atop the hierarchy of our most cherished guiding principles, the position we as a society misguidedly allowed to be usurped by postmodernism, with its own dual pillars of relativism and identity politics.

  But, while the book succeeds handily in undermining the moral case against modernism, it does so largely by stealth, with only a few explicit references to the ideologies whose advocates have dogged Pinker and his fellow evolutionary psychologists for decades. Instead, he explores how our moral intuitions and political ideals often inspire us to make profoundly irrational arguments for positions that rational scrutiny reveals to be quite immoral, even murderous. As one illustration of how good causes can be taken to silly, but as yet harmless, extremes, he gives the example of how “violence against children has been defined down to dodgeball” (415) in gym classes all over the US, writing that

The prohibition against dodgeball represents the overshooting of yet another successful campaign against violence, the century-long movement to prevent the abuse and neglect of children. It reminds us of how a civilizing offensive can leave a culture with a legacy of puzzling customs, peccadilloes, and taboos. The code of etiquette bequeathed to us by this and other Rights Revolutions is pervasive enough to have acquired a name. We call it political correctness. (381)

Such “civilizing offensives” are deliberately undertaken counterparts to the fortuitously occurring Civilizing Process Elias proposed to explain the jagged downward slope in graphs of relative rates of violence beginning in the Middle Ages in Europe. The original change Elias describes came about as a result of rulers consolidating their territories and acquiring greater authority. As Pinker explains,

Once Leviathan was in charge, the rules of the game changed. A man’s ticket to fortune was no longer being the baddest knight in the area but making a pilgrimage to the king’s court and currying favor with him and his entourage. The court, basically a government bureaucracy, had no use for hotheads and loose cannons, but sought responsible custodians to run its provinces. The nobles had to change their marketing. They had to cultivate their manners, so as not to offend the king’s minions, and their empathy, to understand what they wanted. The manners appropriate for the court came to be called “courtly” manners or “courtesy.” (75)

And this higher premium on manners and self-presentation among the nobles would lead to a cascade of societal changes.

Elias first lighted on his theory of the Civilizing Process as he was reading some of the etiquette guides which survived from that era. It’s striking to us moderns to see that knights of yore had to be told not to dispose of their snot by shooting it into their host’s tablecloth, but that simply shows how thoroughly people today internalize these rules. As Elias explains, they’ve become second nature to us. Of course, we still have to learn them as children. Pinker prefaces his discussion of Elias’s theory with a recollection of his bafflement at why it was so important for him as a child to abstain from using his knife as a backstop to help him scoop food off his plate with a fork. Table manners, he concludes, reside on the far end of a continuum of self-restraint at the opposite end of which are once-common practices like cutting off the nose of a dining partner who insults you. Likewise, protecting children from the perils of flying rubber balls is the product of a campaign against the once-common custom of brutalizing them. The centrality of self-control is the common underlying theme: we control our urge to misuse utensils, including their use in attacking our fellow diners, and we control our urge to throw things at our classmates, even if it’s just in sport. The effect of the Civilizing Process in the Middle Ages, Pinker explains, was that “A culture of honor—the readiness to take revenge—gave way to a culture of dignity—the readiness to control one’s emotions” (72). In other words, diplomacy became more important than deterrence.

            What we’re learning here is that even an evolved mind can adjust to changing incentive schemes. Chimpanzees have to control their impulses toward aggression, sexual indulgence, and food consumption in order to survive in hierarchical bands with other chimps, many of whom are bigger, stronger, and better-connected. Much of the violence in chimp populations takes the form of adult males vying for positions in the hierarchy so they can enjoy the perquisites males of lower status must forgo to avoid being brutalized. Lower ranking males meanwhile bide their time, hopefully forestalling their gratification until such time as they grow stronger or the alpha grows weaker. In humans, the capacity for impulse-control and the habit of delaying gratification are even more important because we live in even more complex societies. Those capacities can either lie dormant or they can be developed to their full potential depending on exactly how complex the society is in which we come of age. Elias noticed a connection between the move toward more structured bureaucracies, less violence, and an increasing focus on etiquette, and he concluded that self-restraint in the form of adhering to strict codes of comportment was both an advertisement of, and a type of training for, the impulse-control that would make someone a successful bureaucrat.

            Aside from children who can’t fathom why we’d futz with our forks trying to capture recalcitrant peas, we normally take our society’s rules of etiquette for granted, no matter how inconvenient or illogical they are, seldom thinking twice before drawing unflattering conclusions about people who don’t bother adhering to them, the ones for whom they aren’t second nature. And the importance we place on etiquette goes beyond table manners. We judge people according to the discretion with which they dispose of any and all varieties of bodily effluent, as well as the delicacy with which they discuss topics sexual or otherwise basely instinctual. 

            Elias and Pinker’s theory is that, while the particular rules are largely arbitrary, the underlying principle of transcending our animal nature through the application of will, motivated by an appreciation of social convention and the sensibilities of fellow community members, is what marked the transition of certain constituencies of our species from a violent non-state existence to a relatively peaceful, civilized lifestyle. To Pinker, the uptick in violence that ensued once the counterculture of the 1960s came into full blossom was no coincidence. The squares may not have been as exciting as the rock stars who sang their anthems to hedonism and the liberating thrill of sticking it to the man. But a society of squares has certain advantages—a lower probability for each of its citizens of getting beaten or killed foremost among them.

            The Civilizing Process, as Elias and Pinker (along with Immanuel Kant) understand it, picks up momentum as levels of peace conducive to increasingly complex forms of trade are achieved. To understand why the move toward markets or “gentle commerce” would lead to decreasing violence, we pomos have to swallow—at least momentarily—our animus for Wall Street and all the corporate fat cats in the top one percent of the wealth distribution. The basic dynamic underlying trade is that one person has access to more of something than they need, but less of something else, while another person has the opposite balance, so a trade benefits them both. It’s a win-win, or a positive-sum game. The hard part for educated liberals is to appreciate that economies work to increase the total wealth; there isn’t a set quantity everyone has to divvy up in a zero-sum game, an exchange in which every gain for one is a loss for another. And Pinker points to another benefit:

Positive-sum games also change the incentives for violence. If you’re trading favors or surpluses with someone, your trading partner suddenly becomes more valuable to you alive than dead. You have an incentive, moreover, to anticipate what he wants, the better to supply it to him in exchange for what you want. Though many intellectuals, following in the footsteps of Saints Augustine and Jerome, hold businesspeople in contempt for their selfishness and greed, in fact a free market puts a premium on empathy. (77)

The Occupy Wall Street crowd will want to jump in here with a lengthy list of examples of businesspeople being unempathetic in the extreme. But Pinker isn’t saying commerce always forces people to be altruistic; it merely encourages them to exercise their capacity for perspective-taking. Discussing the emergence of markets, he writes,

The advances encouraged the division of labor, increased surpluses, and lubricated the machinery of exchange. Life presented people with more positive-sum games and reduced the attractiveness of zero-sum plunder. To take advantage of the opportunities, people had to plan for the future, control their impulses, take other people’s perspectives, and exercise the other social and cognitive skills needed to prosper in social networks. (77)

And these changes, the theory suggests, will tend to make merchants less likely on average to harm anyone. As bad as bankers can be, they’re not out sacking villages.

            Once you have commerce, you also have a need to start keeping records. And once you start dealing with distant partners, it helps to have a mode of communication that travels. As writing moved out of the monasteries, and as technological advances in transportation brought more of the world within reach, ideas and innovations collided to inspire sequential breakthroughs and discoveries. Every advance could be preserved, dispersed, and ratcheted up. Pinker focuses on two relatively brief historical periods that witnessed revolutions in the way we think about violence, and both came in the wake of major advances in the technologies involved in transportation and communication. The first is the Humanitarian Revolution that occurred in the second half of the eighteenth century, and the second covers the Rights Revolutions in the second half of the twentieth. The Civilizing Process and gentle commerce weren’t sufficient to end age-old institutions like slavery and the torture of heretics. But then came the rise of the novel as a form of mass entertainment, and with all the training in perspective-taking readers were undergoing, the hitherto unimagined suffering of slaves, criminals, and swarthy foreigners became intolerably imaginable. People began to agitate, and change ensued.

            The Humanitarian Revolution occurred at the tail end of the Age of Reason and is recognized today as part of the period known as the Enlightenment. According to some scholarly scenarios, the Enlightenment, for all its successes like the American Constitution and the abolition of slavery, paved the way for all those allegedly unprecedented horrors in the first half of the twentieth century. Notwithstanding all this ivory tower traducing, the Enlightenment emerged from dormancy after the Second World War and gradually gained momentum, delivering us into a period Pinker calls the New Peace. Just as the original Enlightenment was preceded by increasing cosmopolitanism, improving transportation, and an explosion of literacy, the transformations that brought about the New Peace followed a burst of technological innovation. For Pinker, this is no coincidence. He writes,

If I were to put my money on the single most important exogenous cause of the Rights Revolutions, it would be the technologies that made ideas and people increasingly mobile. The decades of the Rights Revolutions were the decades of the electronics revolutions: television, transistor radios, cable, satellite, long-distance telephones, photocopiers, fax machines, the Internet, cell phones, text messaging, Web video. They were the decades of the interstate highway, high-speed rail, and the jet airplane. They were the decades of the unprecedented growth in higher education and in the endless frontier of scientific research. Less well known is that they were also the decades of an explosion in book publishing. From 1960 to 2000, the annual number of books published in the United States increased almost fivefold. (477)

Violence got slightly worse in the 60s. But the Civil Rights Movement was underway, Women’s Rights were being extended into new territories, and people even began to acknowledge that animals could suffer, prompting arguments that we shouldn’t make them suffer needlessly. Today the push for Gay Rights continues. By 1990, the uptick in violence was over, and so far the move toward peace is looking like an ever greater success. Ironically, though, all the new types of media bringing images from all over the globe into our living rooms and pockets contribute to the sense that violence is worse than ever.

*******

            Three factors, then, brought about a reduction in violence over the course of history: strong government, trade, and communications technology. These factors had the impact they did because they interacted with two of our innate propensities, impulse-control and perspective-taking, giving individuals both the motivation and the wherewithal to develop them to ever greater degrees. It’s difficult to draw a clear line between developments driven by chance or coincidence and those driven by deliberate efforts to transform societies. But Pinker does credit political movements based on moral principles with having played key roles:

Insofar as violence is immoral, the Rights Revolutions show that a moral way of life often requires a decisive rejection of instinct, culture, religion, and standard practice. In their place is an ethics that is inspired by empathy and reason and stated in the language of rights. We force ourselves into the shoes (or paws) of other sentient beings and consider their interests, starting with their interest in not being hurt or killed, and we ignore superficialities that may catch our eye such as race, ethnicity, gender, age, sexual orientation, and to some extent, species. (475)

Some of the instincts we must reject in order to bring about peace, however, are actually moral instincts.

Pinker is setting up a distinction here between different kinds of morality. The one based on perspective-taking—which, evidence he presents later suggests, inspires sympathy—and “stated in the language of rights” is the one he credits with transforming the world for the better. Of the idea that superficial differences shouldn’t distract us from our common humanity, he writes,

This conclusion, of course, is the moral vision of the Enlightenment and the strands of humanism and liberalism that have grown out of it. The Rights Revolutions are liberal revolutions. Each has been associated with liberal movements, and each is currently distributed along a gradient that runs, more or less, from Western Europe to the blue American states to the red American states to the democracies of Latin America and Asia and then to the more authoritarian countries, with Africa and most of the Islamic world pulling up the rear. In every case, the movements have left Western cultures with excesses of propriety and taboo that are deservedly ridiculed as political correctness. But the numbers show that the movements have reduced many causes of death and suffering and have made the culture increasingly intolerant of violence in any form. (475-6)

So you’re not allowed to play dodgeball at school or tell off-color jokes at work, but that’s a small price to pay. The most remarkable part of this passage, though, is the gradient he describes: it suggests that the most violent regions of the globe are also the ones where people are the most obsessed with morality, with things like Sharia and so-called family values. It also suggests that academic complaints about the evils of Western culture are unfounded and startlingly misguided. As Pinker casually points out in his section on Women’s Rights, “Though the United States and other Western nations are often accused of being misogynistic patriarchies, the rest of the world is immensely worse” (413).

The Better Angels of Our Nature came out about a year before Jonathan Haidt’s The Righteous Mind, but Pinker’s book beats Haidt’s to the punch by identifying a serious flaw in his reasoning. The Righteous Mind explores how liberals and conservatives conceive of morality differently, and Haidt argues that each conception is equally valid so we should simply work to understand and appreciate opposing political views. It’s not like you’re going to change anyone’s mind anyway, right? But the liberal ideal of resisting certain moral intuitions tends to bring about a rather important change wherever it’s allowed to be realized. Pinker writes that

right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence. And that retraction is precisely the agenda of classical liberalism: a freedom of individuals from tribal and authoritarian force, and a tolerance of personal choices as long as they do not infringe on the autonomy and well-being of others. (637)

Classical liberalism—which Pinker distinguishes from contemporary political liberalism—can even be viewed as an effort to move morality away from the realm of instincts and intuitions into the more abstract domains of law and reason. The perspective-taking at the heart of Enlightenment morality can be said to consist of abstracting yourself from your identifying characteristics and immediate circumstances to imagine being someone else in unfamiliar straits. A man with a job imagines being a woman who can’t get one. A white man on good terms with law enforcement imagines being a black man who gets harassed. This practice of abstracting experiences and distilling individual concerns down to universal principles is the common thread connecting Enlightenment morality to science.

            So it’s probably no coincidence, Pinker argues, that as we’ve gotten more peaceful, people in Europe and the US have been getting better at abstract reasoning as well, a trend that has been going on for as long as researchers have had tests to measure it. Over the course of the twentieth century, psychologists have had to renorm IQ tests (the average is always set at 100) by a few points every generation because raw scores on certain subsets of questions have kept going up. This steady rise in scores is known as the Flynn Effect, after psychologist James Flynn, who was one of the first researchers to realize the trend was more than methodological noise. Having posited a possible connection between scientific and moral reasoning, Pinker asks, “Could there be a moral Flynn Effect?” He explains,

We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence. And we have just seen that over the course of the 20th century, people’s reasoning abilities—particularly their ability to set aside immediate experience, detach themselves from a parochial vantage point, and think in abstract terms—were steadily enhanced. (656)

Pinker cites evidence from an array of studies showing that high-IQ people tend to have high moral IQs as well. One of them, an infamous study by psychologist Satoshi Kanazawa based on data from over twenty thousand young adults in the US, demonstrates that exceptionally intelligent people tend to hold a particular set of political views. And just as Pinker finds it necessary to distinguish between two different types of morality, he suggests we also need to distinguish between two different types of liberalism:

Intelligence is expected to correlate with classical liberalism because classical liberalism is itself a consequence of the interchangeability of perspectives that is inherent to reason itself. Intelligence need not correlate with other ideologies that get lumped into contemporary left-of-center political coalitions, such as populism, socialism, political correctness, identity politics, and the Green movement. Indeed, classical liberalism is sometimes congenial to the libertarian and anti-political-correctness factions in today’s right-of-center coalitions. (662)

And Kanazawa’s findings bear this out. It’s not liberalism in general that increases steadily with intelligence, but a particular kind of liberalism, the type focusing more on fairness than on ideology.

*******

Following the chapters devoted to historical change, from the early Middle Ages to the ongoing Rights Revolutions, Pinker includes two chapters on psychology, the first on our “Inner Demons” and the second on our “Better Angels.” Ideology gets some prime real estate in the Demons chapter, because, he writes, “the really big body counts in history pile up” when people believe they’re serving some greater good. “Yet for all that idealism,” he explains, “it’s ideology that drove many of the worst things that people have ever done to each other.” Christianity, Nazism, communism—they all “render opponents of the ideology infinitely evil and hence deserving of infinite punishment” (556). Pinker’s discussion of morality, on the other hand, is more complicated. It begins, oddly enough, in the Demons chapter, but stretches into the Angels one as well. This is how the section on morality in the Angels chapter begins:

The world has far too much morality. If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest. The human moral sense can excuse any atrocity in the minds of those who commit it, and it furnishes them with motives for acts of violence that bring them no tangible benefit. The torture of heretics and conversos, the burning of witches, the imprisonment of homosexuals, and the honor killing of unchaste sisters and daughters are just a few examples. (622)

The postmodern push to give precedence to moral and political considerations over science, reason, and fairness may seem like a good idea at first. But political ideologies can’t be defended on the grounds of their good intentions—they all have those. And morality has historically caused more harm than good. It’s only the minimalist, liberal morality that has any redemptive promise:

Though the net contribution of the human moral sense to human well-being may well be negative, on those occasions when it is suitably deployed it can claim some monumental advances, including the humanitarian reforms of the Enlightenment and the Rights Revolutions of recent decades. (622)

            One of the problems with ideologies Pinker explores is that they lend themselves too readily to for-us-or-against-us divisions which piggyback on all our tribal instincts, leading to dehumanization of opponents as a step along the path to unrestrained violence. But, we may ask, isn’t the Enlightenment just another ideology? If not, is there some reliable way to distinguish an ideological movement from a “civilizing offensive” or a “Rights Revolution”? Pinker doesn’t answer these questions directly, but it’s in his discussion of the demonic side of morality where Better Angels offers its most profound insights—and it’s also where we start to be able to piece together the larger purpose of the book. He writes,

In The Blank Slate I argued that the modern denial of the dark side of human nature—the doctrine of the Noble Savage—was a reaction against the romantic militarism, hydraulic theories of aggression, and glorification of struggle and strife that had been popular in the late 19th and early 20th centuries. Scientists and scholars who question the modern doctrine have been accused of justifying violence and have been subjected to vilification, blood libel, and physical assault. The Noble Savage myth appears to be another instance of an antiviolence movement leaving a cultural legacy of propriety and taboo. (488)

Since Pinker figured that what he and his fellow evolutionary psychologists kept running up against was akin to the repulsion people feel against poor table manners or kids winging balls at each other in gym class, he reasoned that he ought to be able to simply explain to the critics that evolutionary psychologists have no intention of justifying, or even encouraging complacency toward, the dark side of human nature. “But I am now convinced,” he writes after more than a decade of trying to explain himself, “that a denial of the human capacity for evil runs even deeper, and may itself be a feature of human nature” (488). That feature, he goes on to explain, makes us feel compelled to label as evil anyone who tries to explain evil scientifically—because evil as a cosmic force beyond the reach of human understanding plays an indispensable role in group identity.

            Pinker began to fully appreciate the nature of the resistance to letting biology into discussions of human harm-doing when he read about the work of psychologist Roy Baumeister exploring the wide discrepancies between perpetrators’ and victims’ accounts of anger-inducing incidents. The first studies looked at responses to minor offenses, but Baumeister went on to present evidence that the pattern, which Pinker labels the “Moralization Gap,” can be scaled up to describe societal attitudes toward historical atrocities. Pinker explains,

The Moralization Gap consists of complementary bargaining tactics in the negotiation for recompense between a victim and a perpetrator. Like opposing counsel in a lawsuit over a tort, the social plaintiff will emphasize the deliberateness, or at least the depraved indifference, of the defendant’s action, together with the pain and suffering the plaintiff endures. The social defendant will emphasize the reasonableness or unavoidability of the action, and will minimize the plaintiff’s pain and suffering. The competing framings shape the negotiations over amends, and also play to the gallery in a competition for their sympathy and for a reputation as a responsible reciprocator. (491)

Another of the Inner Demons Pinker suggests plays a key role in human violence is the drive for dominance, which he explains operates not just at the level of the individual but at that of the group to which he or she belongs. We want our group, however we understand it in the immediate context, to rest comfortably atop a hierarchy of other groups. What happens is that the Moralization Gap gets mingled with this drive to establish individual and group superiority. You see this dynamic playing out even in national conflicts. Pinker points out,

The victims of a conflict are assiduous historians and cultivators of memory. The perpetrators are pragmatists, firmly planted in the present. Ordinarily we tend to think of historical memory as a good thing, but when the events being remembered are lingering wounds that call for redress, it can be a call to violence. (493)

Name a conflict, and with little effort you’ll likely be able to recall disputes over the historical record associated with it.

            The outcome of the Moralization Gap being taken to the group historical level is what Pinker and Baumeister call the “Myth of Pure Evil.” Harm-doing narratives start to take on religious overtones as what began as a conflict between regular humans pursuing or defending their interests, in ways they probably reasoned were just, transforms into an eternal struggle against inhuman and sadistic agents of chaos. And Pinker has come to realize that it is this Myth of Pure Evil that behavioral scientists ineluctably end up blaspheming:

Baumeister notes that in the attempt to understand harm-doing, the viewpoint of the scientist or scholar overlaps with the viewpoint of the perpetrator. Both take a detached, amoral stance toward the harmful act. Both are contextualizers, always attentive to the complexities of the situation and how they contributed to the causation of the harm. And both believe that the harm is ultimately explicable. (495)

This is why evolutionary psychologists who study violence inspire what Pinker in The Blank Slate called “political paranoia and moral exhibitionism” (106) on the part of us naïve pomos, ravenously eager to showcase our valor by charging once more into the breach against the mythical malevolence. All the while, our impregnable assurance of our own righteousness is born of the conviction that we’re standing up for the oppressed. Pinker writes,

The viewpoint of the moralist, in contrast, is the viewpoint of the victim. The harm is treated with reverence and awe. It continues to evoke sadness and anger long after it was perpetrated. And for all the feeble ratiocination we mortals throw at it, it remains a cosmic mystery, a manifestation of the irreducible and inexplicable existence of evil in the universe. Many chroniclers of the Holocaust consider it immoral even to try to explain it. (495-6)

We simply can’t help inflating the magnitude of the crime in our attempt to convince our ideological opponents of their folly—though what we’re really inflating is our own, and our group’s, glorification. And so we can’t abide anyone puncturing our overblown conception, because doing so lends credence to the opposition and makes us look a bit foolish for all our exaggerations.

            Reading Better Angels, you get the sense that Pinker experienced some genuine surprise and some real delight in discovering more and more corroboration for the idea that rates of violence have been trending downward in nearly every domain he explored. But things get tricky as you proceed through the pages because many of his arguments take on opposing positions he avoids naming. He seems to have seen the trove of evidence for declining violence as an opportunity to outflank the critics of evolutionary psychology in leftist, postmodern academia (to use a martial metaphor). Instead of calling them out directly, he circles around to chip away at the moral case for their political mission. We see this, for example, in his discussion of rape, which psychologists get into all kinds of trouble for trying to explain. After examining how scientists seem to be taking the perspective of perpetrators, Pinker goes on to write,

The accusation of relativizing evil is particularly likely when the motive the analyst imputes to the perpetrator appears to be venial, like jealousy, status, or retaliation, rather than grandiose, like the persistence of suffering in the world or the perpetuation of race, class, or gender oppression. It is also likely when the analyst ascribes the motive to every human being rather than to a few psychopaths or to the agents of a malignant political system (hence the popularity of the doctrine of the Noble Savage). (496)

In his earlier section on Women’s Rights and the decline of rape, he attributed the difficulty in finding good data on the incidence of the crime, as well as some of the “preposterous” ideas about what motivates it, to the same kind of overextensions of anti-violence campaigns that lead to arbitrary rules about the use of silverware and proscriptions against dodgeball:

Common sense never gets in the way of a sacred custom that has accompanied a decline in violence, and today rape centers unanimously insist that “rape or sexual assault is not an act of sex or lust—it’s about aggression, power, and humiliation, using sex as the weapon. The rapist’s goal is domination.” (To which the journalist Heather MacDonald replies: “The guys who push themselves on women at keggers are after one thing only, and it’s not a reinstatement of the patriarchy.”) (406)

Jumping ahead to Pinker’s discussion of the Moralization Gap, we see that the theory that rape is about power, as opposed to the much more obvious theory that it’s about sex, is an outgrowth of the Myth of Pure Evil, an inflation of the mundane drives that lead some pathetic individuals to commit horrible crimes into eternal cosmic forces, inscrutable and infinitely punishable.

            When feminists impute political motives to rapists, they’re crossing the boundary from Enlightenment morality to the type of moral ideology that inspires dehumanization and violence. The good news is that it’s not difficult to distinguish between the two. From the Enlightenment perspective, rape is indefensibly wrong because it violates the autonomy of the victim—it’s an act of violence perpetrated by one individual against another. From the ideological perspective, every rape must be understood in the context of the historical oppression of women by men; it transcends the individuals involved as a representation of a greater evil. The rape-as-a-political-act theory also comes dangerously close to implying a type of collective guilt, which is a clear violation of individual rights.

Scholars already make the distinction between three different waves of feminism. The first two fall within Pinker’s definition of Rights Revolutions; they encompassed pushes for suffrage, marriage rights, and property rights, and then the rights to equal pay and equal opportunity in the workplace. The third wave is avowedly postmodern, its advocates committed to the ideas that gender is a pure social construct and that suggesting otherwise is an act of oppression. What you come away from Better Angels realizing, even though Pinker doesn’t say it explicitly, is that somewhere between the second and third waves feminists effectively turned against the very ideas and institutions that had been most instrumental in bringing about the historical improvements in women’s lives from the Middle Ages to the turn of the twenty-first century. And so it is with all the other ideologies on the postmodern roster.

Another misguided propaganda tactic that dogged Pinker’s efforts to identify historical trends in violence can likewise be understood as an instance of inflating the severity of crimes on behalf of a moral ideology, complete with the taboo against puncturing the bubble or vitiating the purity of evil with evidence and theories of venial motives. As he explains in the preface, “No one has ever recruited activists to a cause by announcing that things are getting better, and bearers of good news are often advised to keep their mouths shut lest they lull people into complacency” (xxii). Here again the objective researcher can’t escape the appearance of trying to minimize the evil, and therefore risks being accused of looking the other way, or even of complicity. But in an earlier section on genocide Pinker provides the quintessential Enlightenment rationale for the clear-eyed scientific approach to studying even the worst atrocities. He writes,

The effort to whittle down the numbers that quantify the misery can seem heartless, especially when the numbers serve as propaganda for raising money and attention. But there is a moral imperative in getting the facts right, and not just to maintain credibility. The discovery that fewer people are dying in wars all over the world can thwart cynicism among compassion-fatigued news readers who might otherwise think that poor countries are irredeemable hellholes. And a better understanding of what drove the numbers down can steer us toward doing things that make people better off rather than congratulating ourselves on how altruistic we are. (320)

This passage can be taken as the underlying argument of the whole book. And it gestures toward some far-reaching ramifications of the idea that exaggerated numbers are a product of the same impulse that causes us to inflate crimes to the status of pure evil.

Could it be that the nearly universal misperception that violence is getting worse all over the world, that we’re doomed to global annihilation, and that everywhere you look is evidence of the breakdown in human decency—could it be that the false impression Pinker set out to correct with Better Angels is itself a manifestation of a natural urge in all of us to seek out evil and aggrandize ourselves by unconsciously overestimating it? Pinker himself never goes as far as suggesting the mass ignorance of waning violence is a byproduct of an instinct toward self-righteousness. Instead, he writes of the “gloom” about the fate of humanity,

I think it comes from the innumeracy of our journalistic and intellectual culture. The journalist Michael Kinsley recently wrote, “It is a crushing disappointment that Boomers entered adulthood with Americans killing and dying halfway around the world, and now, as Boomers reach retirement and beyond, our country is doing the same damned thing.” This assumes that 5,000 Americans dying is the same damned thing as 58,000 Americans dying, and that a hundred thousand Iraqis being killed is the same damned thing as several million Vietnamese being killed. If we don’t keep an eye on the numbers, the programming policy “If it bleeds it leads” will feed the cognitive shortcut “The more memorable, the more frequent,” and we will end up with what has been called a false sense of insecurity. (296)

Pinker probably has a point, but the self-righteous undertone of Kinsley’s “same damned thing” is unmistakable. He’s effectively saying, I’m such an outstanding moral being the outrageous evilness of the invasion of Iraq is blatantly obvious to me—why isn’t it to everyone else? And that same message seems to underlie most of the statements people make expressing similar sentiments about how the world is going to hell.

            Though Pinker neglects to tie all the strands together, he still manages to suggest that the drive to dominance, ideology, tribal morality, and the Myth of Pure Evil are all facets of the same disastrous flaw in human nature—an instinct for self-righteousness. Progress on the moral front—real progress like fewer deaths, less suffering, and more freedom—comes from something much closer to utilitarian pragmatism than activist idealism. Yet the activist tradition is so thoroughly enmeshed in our university culture that we’re taught to exercise our powers of political righteousness even while engaging in tasks as mundane as reading books and articles. 

            If the decline in violence and the improvement of the general weal in various other areas are attributable to the Enlightenment, then many of the assumptions underlying postmodernism are turned on their heads. If social ills like warfare, racism, sexism, and child abuse exist in cultures untouched by modernism—and they in fact not only exist but tend to be much worse—then science can’t be responsible for creating them. Indeed, if they’ve all trended downward with the historical development of all the factors associated with male-dominated Western culture, including strong government, market economies, runaway technology, and scientific progress, then postmodernism not only has everything wrong but threatens the progress achieved by the very institutions it depends on, emerged from, and squanders innumerable scholarly careers maligning.

Of course some Enlightenment figures and some scientists do evil things. Of course living even in the most Enlightened of civilizations is no guarantee of safety. But postmodernism is an ideology based on the premise that we ought to discard a solution to our societal woes for not working perfectly and immediately, substituting instead remedies that have historically caused more problems than they solved by orders of magnitude. The argument that there’s a core to the Enlightenment that some of its representatives have been faithless to when they committed atrocities may seem reminiscent of apologies for Christianity based on the fact that Crusaders and Inquisitors weren’t loving their neighbors as Christ enjoined. The difference is that the Enlightenment works—in just a few centuries it’s transformed the world and brought about a reduction in violence no religion has been able to match in millennia. If anything, the big monotheistic religions brought about more violence.

Embracing Enlightenment morality or classical liberalism doesn’t mean we should give up our efforts to make the world a better place. As Pinker describes the transformation he hopes to encourage with Better Angels,

As one becomes aware of the decline of violence, the world begins to look different. The past seems less innocent; the present less sinister. One starts to appreciate the small gifts of coexistence that would have seemed utopian to our ancestors: the interracial family playing in the park, the comedian who lands a zinger on the commander in chief, the countries that quietly back away from a crisis instead of escalating to war. The shift is not toward complacency: we enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to reduce it, and so we should work to reduce the violence that remains in our time. Indeed, it is a recognition of the decline of violence that best affirms that such efforts are worthwhile. (xxvi)

Since our task for the remainder of this century is to extend the reach of science, literacy, and the recognition of universal human rights farther and farther along the Enlightenment gradient until they're able to grant the same increasing likelihood of a long peaceful life to every citizen of every nation of the globe, and since the key to accomplishing this task lies in fomenting future Rights Revolutions while at the same time recognizing, so as to be better equipped to rein in, our drive for dominance as manifested in our more deadly moral instincts, I for one am glad Steven Pinker has the courage to violate so many of the outrageously counterproductive postmodern taboos while having the grace to resist succumbing himself, for the most part, to the temptation of self-righteousness.

Also read:

THE FAKE NEWS CAMPAIGN AGAINST STEVEN PINKER AND ENLIGHTENMENT NOW

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

Dennis Junk

Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans De Waal

Frans de Waal’s work is always a joy to read, insightful, surprising, and superbly humane. Unfortunately, in his mostly wonderful book, “The Bonobo and the Atheist,” he carts out a familiar series of straw men to level an attack on modern critics of religion—with whom, if he’d been more diligent in reading their work, he’d find much common ground.

            Whenever literary folk talk about voice, that supposedly ineffable but transcendently important quality of narration, they display an exasperating penchant for vagueness, as if so lofty a dimension to so lofty an endeavor couldn’t withstand being spoken of directly—or as if they took delight in instilling panic and self-doubt into the quivering hearts of aspiring authors. What the folk who actually know what they mean by voice actually mean by it is all the idiosyncratic elements of prose that give readers a stark and persuasive impression of the narrator as a character. Discussions of what makes for stark and persuasive characters, on the other hand, are vague by necessity. It must be noted that many characters even outside of fiction are neither. As a first step toward developing a feel for how character can be conveyed through writing, we may consider the nonfiction work of real people with real character, ones who also happen to be practiced authors.

The Dutch-American primatologist Frans de Waal is one such real-life character, and his prose stands as testament to the power of written language, lonely ink on colorless pages, not only to impart information, but to communicate personality and to make a contagion of states and traits like enthusiasm, vanity, fellow-feeling, bluster, big-heartedness, impatience, and an abiding wonder. De Waal is a writer with voice. Many other scientists and science writers explore this dimension to prose in their attempts to engage readers, but few avoid the traps of being goofy or obnoxious instead of funny—a trap David Pogue, for instance, falls into routinely as he hosts NOVA on PBS—and of expending far too much effort in their attempts at being distinctive, thus failing to achieve anything resembling grace. 

The most striking quality of de Waal’s writing, however, isn’t that its good-humored quirkiness never seems strained or contrived, but that it never strays far from the man’s own obsession with getting at the stories behind the behaviors he so minutely observes—whether the characters are his fellow humans or his fellow primates, or even such seemingly unstoried creatures as rats or turtles. But to say that de Waal is an animal lover doesn’t quite capture the essence of what can only be described as a compulsive fascination marked by conviction—the conviction that when he peers into the eyes of a creature others might dismiss as an automaton, a bundle of twitching flesh powered by preprogrammed instinct, he sees something quite different, something much closer to the workings of his own mind and those of his fellow humans.

De Waal’s latest book, The Bonobo and the Atheist: In Search of Humanism among the Primates, reprises the main themes of his previous books, most centrally the continuity between humans and other primates, with an eye toward answering the questions of where morality does, and where it should, come from. Whereas in his books from the years leading up to the turn of the century he again and again had to challenge what he calls “veneer theory,” the notion that without a process of socialization that imposes rules on individuals from some outside source they’d all be greedy and selfish monsters, de Waal has noticed over the past six or so years a marked shift in the zeitgeist toward an awareness of our more cooperative and even altruistic animal urgings. Noting a sharp difference over the decades in how audiences at his lectures respond to recitations of the infamous quote by biologist Michael Ghiselin, “Scratch an altruist and watch a hypocrite bleed,” de Waal writes,

Although I have featured this cynical line for decades in my lectures, it is only since about 2005 that audiences greet it with audible gasps and guffaws as something so outrageous, so out of touch with how they see themselves, that they can’t believe it was ever taken seriously. Had the author never had a friend? A loving wife? Or a dog, for that matter? (43)

The assumption underlying veneer theory was that without civilizing influences humans’ deeper animal impulses would express themselves unchecked. The further assumption was that animals, the end products of the ruthless, eons-long battle for survival and reproduction, would reflect the ruthlessness of that battle in their behavior. De Waal’s first book, Chimpanzee Politics, which told the story of a period of intensified competition among the captive male chimps at the Arnhem Zoo for alpha status, with all the associated perks like first dibs on choice cuisine and sexually receptive females, was actually seen by many as lending credence to these assumptions. But de Waal himself was far from convinced that the primates he studied were invariably, or even predominantly, violent and selfish.

            What he observed at the zoo in Arnhem was far from the chaotic and bloody free-for-all it would have been if the chimps took the kind of delight in violence for its own sake that many people imagine them being disposed to. As he pointed out in his second book, Peacemaking among Primates, the violence is almost invariably attended by obvious signs of anxiety on the part of those participating in it, and the tension surrounding any major conflict quickly spreads throughout the entire community. The hierarchy itself is in fact an adaptation that serves as a check on the incessant conflict that would ensue if the relative status of each individual had to be worked out anew every time one chimp encountered another. “Tightly embedded in society,” he writes in The Bonobo and the Atheist, “they respect the limits it puts on their behavior and are ready to rock the boat only if they can get away with it or if so much is at stake that it’s worth the risk” (154). But the most remarkable thing de Waal observed came in the wake of the fights that couldn’t successfully be avoided. Chimps, along with primates of several other species, reliably make reconciliatory overtures toward one another after they’ve come to blows—and bites and scratches. In light of such reconciliations, primate violence begins to look like a momentary, albeit potentially dangerous, readjustment to a regularly peaceful social order rather than any ongoing melee, as individuals with increasing or waning strength negotiate a stable new arrangement.

            Part of the enchantment of de Waal’s writing is his judicious and deft balancing of anecdotes about the primates he works with on the one hand and descriptions of controlled studies he and his fellow researchers conduct on the other. In The Bonobo and the Atheist, he strikes a more personal note than he has in any of his previous books, at points stretching the bounds of the popular science genre and crossing into the realm of memoir. This attempt at peeling back the surface of that other veneer, the white-coated scientist’s posture of mechanistic objectivity and impassive empiricism, works best when de Waal is merging tales of his animal experiences with reports on the research that ultimately provides evidence for what was originally no more than an intuition. Discussing a recent, and to most people somewhat startling, experiment pitting the social against the alimentary preferences of a distant mammalian cousin, he recounts,

Despite the bad reputation of these animals, I have no trouble relating to its findings, having kept rats as pets during my college years. Not that they helped me become popular with the girls, but they taught me that rats are clean, smart, and affectionate. In an experiment at the University of Chicago, a rat was placed in an enclosure where it encountered a transparent container with another rat. This rat was locked up, wriggling in distress. Not only did the first rat learn how to open a little door to liberate the second, but its motivation to do so was astonishing. Faced with a choice between two containers, one with chocolate chips and another with a trapped companion, it often rescued its companion first. (142-3)

This experiment, conducted by Inbal Ben-Ami Bartal, Jean Decety, and Peggy Mason, actually got a lot of media coverage; Mason was even interviewed for an episode of NOVA Science NOW where you can watch a video of the rats performing the jailbreak and sharing the chocolate (and you can also see David Pogue being obnoxious). This type of coverage has probably played a role in the shift in public opinion regarding the altruistic propensities of humans and animals. But if there’s one species whose behavior can be said to have undermined the cynicism underlying veneer theory—aside from our best friend the dog of course—it would have to be de Waal’s leading character, the bonobo.

            De Waal’s 1997 book Bonobo: The Forgotten Ape, on which he collaborated with photographer Frans Lanting, introduced this charismatic, peace-loving, sex-loving primate to the masses, and in the process provided behavioral scientists with a new model for what our own ancestors’ social lives might have looked like. Bonobo females dominate the males to the point where zoos have learned never to import a strange male into a new community without the protection of his mother. But for the most part any tensions, even those over food, even those between members of neighboring groups, are resolved through genito-genital rubbing—a behavior that looks an awful lot like sex and often culminates in vocalizations and facial expressions that resemble those of humans experiencing orgasms to a remarkable degree. The implications of bonobos’ hippy-like habits have even reached into politics. After an uncharacteristically ill-researched and ill-reasoned article in the New Yorker by Ian Parker which suggested that the apes weren’t as peaceful and erotic as we’d been led to believe, conservatives couldn’t help celebrating. De Waal writes in The Bonobo and the Atheist,

Given that this ape’s reputation has been a thorn in the side of homophobes as well as Hobbesians, the right-wing media jumped with delight. The bonobo “myth” could finally be put to rest, and nature remain red in tooth and claw. The conservative commentator Dinesh D’Souza accused “liberals” of having fashioned the bonobo into their mascot, and he urged them to stick with the donkey. (63)

But most primate researchers think the behavioral differences between chimps and bonobos are pretty obvious. De Waal points out that while violence does occur among the apes on rare occasions “there are no confirmed reports of lethal aggression among bonobos” (63). Chimps, on the other hand, have been observed doing all kinds of killing. Bonobos also outperform chimps in experiments designed to test their capacity for cooperation, as in the setup that requires two individuals to pull on a rope at the same time in order for either of them to get ahold of food placed atop a plank of wood. (Incidentally, the New Yorker’s track record when it comes to anthropology is suspiciously checkered—disgraced author Patrick Tierney’s discredited book on Napoleon Chagnon, for instance, was originally excerpted in the magazine.)

            Bonobos came late to the scientific discussion of what ape behavior can tell us about our evolutionary history. The famous chimp researcher Robert Yerkes, whose name graces the facility de Waal currently directs at Emory University in Atlanta, actually wrote an entire book called Almost Human about what he believed was a rather remarkable chimp. A photograph from that period reveals that it wasn’t a chimp at all. It was a bonobo. Now, as this species is becoming better researched, and with the discovery of fossils like the 4.4 million-year-old Ardipithecus ramidus known as Ardi, a bipedal ape with fangs that are quite small when compared to the lethal daggers sported by chimps, the role of violence in our ancestry is ever more uncertain. De Waal writes,

What if we descend not from a blustering chimp-like ancestor but from a gentle, empathic bonobo-like ape? The bonobo’s body proportions—its long legs and narrow shoulders—seem to perfectly fit the descriptions of Ardi, as do its relatively small canines. Why was the bonobo overlooked? What if the chimpanzee, instead of being an ancestral prototype, is in fact a violent outlier in an otherwise relatively peaceful lineage? Ardi is telling us something, and there may exist little agreement about what she is saying, but I hear a refreshing halt to the drums of war that have accompanied all previous scenarios. (61)

De Waal is well aware of all the behaviors humans engage in that are more emblematic of chimps than of bonobos—in his 2005 book Our Inner Ape, he refers to humans as “the bipolar ape”—but the fact that our genetic relatedness to both species is exactly the same, along with the fact that chimps also have a surprising capacity for peacemaking and empathy, suggests to him that evolution has had plenty of time and plenty of raw material to instill in us the emotional underpinnings of a morality that emerges naturally—without having to be imposed by religion or philosophy. “Rather than having developed morality from scratch through rational reflection,” he writes in The Bonobo and the Atheist, “we received a huge push in the rear from our background as social animals” (17).

            In the eighth and final chapter of The Bonobo and the Atheist, titled “Bottom-Up Morality,” de Waal describes what he believes is an alternative to top-down theories that attempt to derive morals from religion on the one hand and from reason on the other. Invisible beings threatening eternal punishment can frighten us into doing the right thing, and principles of fairness might offer slight nudges in the direction of proper comportment, but we must already have some intuitive sense of right and wrong for either of these belief systems to operate on if they’re to be at all compelling. Many people assume moral intuitions are inculcated in childhood, but experiments like the one that showed rats will come to the aid of distressed companions suggest something deeper, something more ingrained, is involved. De Waal has found that a video of capuchin monkeys demonstrating "inequity aversion"—a natural, intuitive sense of fairness—does a much better job than any charts or graphs at getting past the prejudices of philosophers and economists who want to insist that fairness is too complex a principle for mere monkeys to comprehend. He writes,

This became an immensely popular experiment in which one monkey received cucumber slices while another received grapes for the same task. The monkeys had no trouble performing if both received identical rewards of whatever quality, but rejected unequal outcomes with such vehemence that there could be little doubt about their feelings. I often show their reactions to audiences, who almost fall out of their chairs laughing—which I interpret as a sign of surprised recognition. (232)

What the capuchins do when they see someone else getting a better reward is throw the measly cucumber back at the experimenter and proceed to rattle the cage in agitation. De Waal compares it to the Occupy Wall Street protests. The poor monkeys clearly recognize the insanity of the human they’re working for.

            There’s still a long way to travel, however, from helpful rats and protesting capuchins before you get to human morality. But that gap continues to shrink as researchers find new ways to explore the social behaviors of the primates that are even more closely related to us. Chimps, for instance, have been seen taking inequity aversion an important step beyond what monkeys display. Not only will certain individuals refuse to work for lesser rewards; they’ll refuse to work even for the superior rewards if they see their companions aren’t being paid equally. De Waal does acknowledge though that there still remains an important step between these behaviors and human morality. “I am reluctant to call a chimpanzee a ‘moral being,’” he writes.

This is because sentiments do not suffice. We strive for a logically coherent system and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be morally wrong. These debates are uniquely human. There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves. (17-8)

Moral intuitions can often inspire some behaviors that to people in modern liberal societies seem appallingly immoral. De Waal quotes anthropologist Christopher Boehm on the “special, pejorative moral ‘discount’ applied to cultural strangers—who often are not even considered fully human,” and he goes on to explain that “The more we expand morality’s reach, the more we need to rely on our intellect.” But the intellectual principles must be grounded in the instincts and emotions we evolved as social primates; this is what he means by bottom-up morality or “naturalized ethics” (235).

*****

             In locating the foundations of morality in our evolved emotions—propensities we share with primates and even rats—de Waal seems to be taking a firm stand against any need for religion. But he insists throughout the book that this isn’t the case. And, while the idea that people are quite capable of playing fair and treating each other with compassion without any supernatural policing may seem to land him squarely in the same camp as prominent atheists like Richard Dawkins and Christopher Hitchens, whom he calls “neo-atheists,” he contends that they’re just as misguided as, if not more so than, the people of faith who believe the rules must be handed down from heaven. “Even though Dawkins cautioned against his own anthropomorphism of the gene,” de Waal wrote all the way back in his 1996 book Good Natured: The Origins of Right and Wrong in Humans and Other Animals, “with the passage of time, carriers of selfish genes became selfish by association” (14). Thus de Waal tries to find some middle ground between religious dogmatists on one side and those who are equally dogmatic in their opposition to religion and equally mistaken in their espousal of veneer theory on the other. “I consider dogmatism a far greater threat than religion per se,” he writes in The Bonobo and the Atheist.

I am particularly curious why anyone would drop religion while retaining the blinkers sometimes associated with it. Why are the “neo-atheists” of today so obsessed with God’s nonexistence that they go on media rampages, wear T-shirts proclaiming their absence of belief, or call for a militant atheism? What does atheism have to offer that’s worth fighting for? (84)

For de Waal, neo-atheism is an empty placeholder of a philosophy, defined not by any positive belief but merely by an obstinately negative attitude toward religion. It’s hard to tell early on in his book if this view is based on any actual familiarity with the books whose titles—The God Delusion, god is not Great—he takes issue with. What is obvious, though, is that he’s trying to appeal to some spirit of moderation so that he might reach an audience who may have already been turned off by the stridency of the debates over religion’s role in society. At any rate, we can be pretty sure that Hitchens, for one, would have had something to say about de Waal’s characterization.

De Waal’s expertise as a primatologist gave him what was in many ways an ideal perspective on the selfish gene debates, as well as on sociobiology more generally, much the way Sarah Blaffer Hrdy’s expertise has done for her. The monkeys and apes de Waal works with are a far cry from the ants and wasps that originally inspired the gene-centered approach to explaining behavior. “There are the bees dying for their hive,” he writes in The Bonobo and the Atheist,

and the millions of slime mold cells that build a single, sluglike organism that permits a few among them to reproduce. This kind of sacrifice was put on the same level as the man jumping into an icy river to rescue a stranger or the chimpanzee sharing food with a whining orphan. From an evolutionary perspective, both kinds of helping are comparable, but psychologically speaking they are radically different. (33)

At the same time, though, de Waal gets to see up close almost every day how similar we are to our evolutionary cousins, and the continuities leave no question as to the wrongheadedness of blank slate ideas about socialization. “The road between genes and behavior is far from straight,” he writes, sounding a note similar to that of the late Stephen Jay Gould, “and the psychology that produces altruism deserves as much attention as the genes themselves.” He goes on to explain,

Mammals have what I call an “altruistic impulse” in that they respond to signs of distress in others and feel an urge to improve their situation. To recognize the need of others, and react appropriately, is really not the same as a preprogrammed tendency to sacrifice oneself for the genetic good. (33)

We can’t discount the role of biology, in other words, but we must keep in mind that genes are at the distant end of a long chain of cause and effect that has countless other inputs before it links to emotion and behavior. De Waal angered both the social constructivists and quite a few of the gene-centered evolutionists, but by now the balanced view his work as a primatologist helped him to arrive at has, for the most part, won the day. Now, in his other role as a scientist who studies the evolution of morality, he wants to strike a similar balance between extremists on both sides of the religious divide. Unfortunately, in this new arena, his perspective isn’t anywhere near as well informed.

             The type of religion de Waal points to as evidence that the neo-atheists’ concerns are misguided and excessive is definitely moderate. It’s not even based on any actual beliefs, just some nice ideas and stories adherents enjoy hearing and thinking about in a spirit of play. We have to wonder, though, just how prevalent this New Age, Life-of-Pi type of religion really is. I suspect the passages in The Bonobo and the Atheist discussing it are going to be offensive to atheists and people of actual faith alike. Here’s one example of the bizarre way he writes about religion:

Neo-atheists are like people standing outside a movie theater telling us that Leonardo DiCaprio didn’t really go down with the Titanic. How shocking! Most of us are perfectly comfortable with the duality. Humor relies on it, too, lulling us into one way of looking at a situation only to hit us over the head with another. To enrich reality is one of the most delightful capacities we have, from pretend play in childhood to visions of an afterlife when we grow older. (294)

He seems to be suggesting that the religious know, on some level, their beliefs aren’t true. “Some realities exist,” he writes, “some we just like to believe in” (294). The problem is that while many readers may enjoy the innuendo about humorless and inveterately over-literal atheists, most believers aren’t joking around—even the non-extremists are more serious than de Waal seems to think.

            As someone who’s been reading de Waal’s books for the past seventeen years, someone who wanted to strangle Ian Parker after reading his cheap smear piece in The New Yorker, someone who has admired the great primatologist since my days as an undergrad anthropology student, I experienced the sections of The Bonobo and the Atheist devoted to criticisms of neo-atheism, which make up roughly a quarter of this short book, as soul-crushingly disappointing. And I’ve agonized over how to write this part of the review. The middle path de Waal carves out is between a watered-down religion believers don’t really believe on one side and an egregious postmodern caricature of Sam Harris’s and Christopher Hitchens’s positions on the other. He focuses on Harris because of his book, The Moral Landscape, which explores how we might use science to determine our morals and values instead of religion, but he gives every indication of never having actually read the book and of instead basing his criticisms solely on the book’s reputation among Harris’s most hysterical detractors. And he targets Hitchens because he thinks he has the psychological key to understanding what he refers to as his “serial dogmatism.” But de Waal’s case is so flimsy a freshman journalism student could demolish it with no more than about ten minutes of internet fact-checking.

De Waal does acknowledge that we should be skeptical of “religious institutions and their ‘primates’,” but he wonders “what good could possibly come from insulting the many people who find value in religion?” (19). This is the tightrope he tries to walk throughout his book. His focus on the purely negative aspect of atheism juxtaposed with his strange conception of the role of belief seems designed to give readers the impression that if the atheists succeed society might actually suffer severe damage. He writes,

Religion is much more than belief. The question is not so much whether religion is true or false, but how it shapes our lives, and what might possibly take its place if we were to get rid of it the way an Aztec priest rips the beating heart out of a virgin. What could fill the gaping hole and take over the removed organ’s functions? (216)

The first problem is that many people who call themselves humanists, as de Waal does, might suggest that there are in fact many things that could fill the gap—science, literature, philosophy, music, cinema, human rights activism, just to name a few. But the second problem is that the militancy of the militant atheists is purely and avowedly rhetorical. In a debate with Hitchens, former British Prime Minister Tony Blair once held up the same straw man that de Waal drags through the pages of his book, the claim that neo-atheists are trying to extirpate religion from society entirely, to which Hitchens replied, “In fairness, no one was arguing that religion should or will die out of the world. All I’m arguing is that it would be better if there was a great deal more by way of an outbreak of secularism” (20:20). What Hitchens is after is an end to the deference automatically afforded religious ideas by dint of their supposed sacredness; religious ideas need to be critically weighed just like any other ideas—and when they are thus weighed they often don’t fare so well, in either logical or moral terms. It’s hard to understand why de Waal would have a problem with this view.

*****

            De Waal’s position is even more incoherent with regard to Harris’s arguments about the potential for a science of morality, since they represent an attempt to answer, at least in part, the very question of what might take the place of religion in providing guidance in our lives that he poses again and again throughout The Bonobo and the Atheist. De Waal takes issue first with the book’s title, The Moral Landscape: How Science Can Determine Human Values. The notion that science might determine any aspect of morality suggests to him a top-down approach as opposed to his favored bottom-up strategy that takes “naturalized ethics” as its touchstone. This, however, is a mischaracterization of Harris’s thesis, though de Waal seems not to realize it. Rather than engage Harris’s arguments in any direct or meaningful way, de Waal contents himself with following in the footsteps of critics who apply the postmodern strategy of holding the book to account for whatever analogies to historical evils can be drawn from it, however tenuous or tendentious. De Waal writes, for instance,

While I do welcome a science of morality—my own work is part of it—I can’t fathom calls for science to determine human values (as per the subtitle of Sam Harris’s The Moral Landscape). Is pseudoscience something of the past? Are modern scientists free from moral biases? Think of the Tuskegee syphilis study just a few decades ago, or the ongoing involvement of medical doctors in prisoner torture at Guantanamo Bay. I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden. (22)

(Great phrase, that “morality’s handmaiden.”) But Harris never argues that scientists are any more morally pure than anyone else. His argument is for applying that “science of morality,” which de Waal proudly contributes to, to the big moral issues our society faces.

            The guilt-by-association and guilt-by-historical-analogy tactics on display in The Bonobo and the Atheist extend all the way to that lodestar of postmodernism’s hysterical obsessions. We might hope that de Waal, after witnessing the frenzied insanity of the sociobiology controversy from the front row, would know better. But he doesn’t seem to grasp how toxic this type of rhetoric is to reasoned discourse and honest inquiry. After expressing his bafflement at how science and a naturalistic worldview could inspire good the way religion does (even though his main argument is that such external inspiration to do good is unnecessary), he writes,

It took Adolf Hitler and his henchmen to expose the moral bankruptcy of these ideas. The inevitable result was a precipitous drop of faith in science, especially biology. In the 1970s, biologists were still commonly equated with fascists, such as during the heated protest against “sociobiology.” As a biologist myself, I am glad those acrimonious days are over, but at the same time I wonder how anyone could forget this past and hail science as our moral savior. How did we move from deep distrust to naïve optimism? (22)

Was Nazism born of an attempt to apply science to moral questions? It’s true some people use science in evil ways, but not nearly as commonly as people are directly urged by religion to perpetrate evils like inquisitions or holy wars. When science has directly inspired evil, as in the case of eugenics, the lifespan of the mistake was measurable in years or decades rather than centuries or millennia. Not to minimize the real human costs, but science wins hands down by being self-correcting and, certain individual scientists notwithstanding, undogmatic.

Harris intended for his book to begin a debate he was prepared to actively participate in. But he quickly ran into the problem that postmodern criticisms can’t really be dealt with in any meaningful way. The following long quote from Harris’s response to his battier critics in the Huffington Post will show both that de Waal’s characterization of his argument is way off the mark, and that it is suspiciously unoriginal:

How, for instance, should I respond to the novelist Marilynne Robinson’s paranoid, anti-science gabbling in the Wall Street Journal where she consigns me to the company of the lobotomists of the mid 20th century? Better not to try, I think—beyond observing how difficult it can be to know whether a task is above or beneath you. What about the science writer John Horgan, who was kind enough to review my book twice, once in Scientific American where he tarred me with the infamous Tuskegee syphilis experiments, the abuse of the mentally ill, and eugenics, and once in The Globe and Mail, where he added Nazism and Marxism for good measure? How does one graciously respond to non sequiturs? The purpose of The Moral Landscape is to argue that we can, in principle, think about moral truth in the context of science. Robinson and Horgan seem to imagine that the mere existence of the Nazi doctors counts against my thesis. Is it really so difficult to distinguish between a science of morality and the morality of science? To assert that moral truths exist, and can be scientifically understood, is not to say that all (or any) scientists currently understand these truths or that those who do will necessarily conform to them.

And we have to ask further: what alternative source of ethical principles do the self-righteous grandstanders like Robinson and Horgan—and now de Waal—have to offer? In their eagerness to compare everyone to the Nazis, they seem to be deriving their own morality from Fox News.

De Waal makes three objections to Harris’s arguments that are of actual substance, but none of them is anywhere near as devastating to Harris’s overall case as de Waal makes out. First, Harris begins with the assumption that moral behaviors lead to “human flourishing,” but this is a presupposed value as opposed to an empirical finding of science—or so de Waal claims. But here’s de Waal himself on a level of morality sometimes seen in apes that transcends one-on-one interactions between individuals:

female chimpanzees have been seen to drag reluctant males toward each other to make up after a fight, while removing weapons from their hands. Moreover, high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and that we don’t need God to explain how we got to where we are today. (20)

The similarity between the concepts of human flourishing and community concern highlights one of the main areas of confusion de Waal could have avoided by actually reading Harris’s book. The word “determine” in the title has two possible meanings. Science can determine values in the sense that it can guide us toward behaviors that will bring about flourishing. But it can also determine our values in the sense of discovering what we already naturally value and hence what conditions need to be met for us to flourish.

De Waal performs a sleight of hand late in The Bonobo and the Atheist, substituting another “utilitarian” for Harris, justifying the trick by pointing out that utilitarians also seek to maximize human flourishing—though Harris never claims to be one. This leads de Waal to object that strict utilitarianism isn’t viable because he’s more likely to direct his resources to his own ailing mother than to any stranger in need, even if those resources would benefit the stranger more. Thus de Waal faults Harris’s ethics for overlooking the role of loyalty in human lives. His third criticism is similar; he worries that utilitarians might infringe on the rights of a minority to maximize flourishing for a majority. But how, given what we know about human nature, could we expect humans to flourish—to feel as though they were flourishing—in a society that didn’t properly honor friendship and the bonds of family? How could humans be happy in a society where they had to constantly fear being sacrificed to the whim of the majority? It is in precisely this effort to discover—or determine—under which circumstances humans flourish that Harris believes science can be of the most help. And as de Waal moves up from his mammalian foundations of morality to more abstract ethical principles the separation between his approach and Harris’s starts to look suspiciously like a distinction without a difference.

            Harris in fact points out that honoring family bonds probably leads to greater well-being on pages seventy-three and seventy-four of The Moral Landscape, and de Waal quotes from page seventy-four himself to chastise Harris for concentrating too much on “the especially low-hanging fruit of conservative Islam” (74). The incoherence of de Waal’s argument (and the carelessness of his research) is on full display here as he responds to a point about the genital mutilation of young girls by asking, “Isn’t genital mutilation common in the United States, too, where newborn males are circumcised without their consent?” (90). So cutting off the foreskin of a male’s penis is morally equivalent to cutting off a girl’s clitoris? Supposedly, the equivalence implies that there can’t be any reliable way to determine the relative moral status of religious practices. “Could it be that religion and culture interact to the point that there is no universal morality?” Perhaps, but, personally, as a circumcised male, I think this argument is a real howler.

*****

The slick scholarly laziness on display in The Bonobo and the Atheist is just as bad when it comes to the positions, and the personality, of Christopher Hitchens, whom de Waal sees fit to psychoanalyze instead of engaging his arguments in any substantive way—but whose memoir, Hitch-22, he’s clearly never bothered to read. The straw man about the neo-atheists being bent on obliterating religion entirely is, disappointingly, but not surprisingly by this point, just one of several errors and misrepresentations. De Waal’s main argument against Hitchens, that his atheism is just another dogma, just as much a religion as any other, is taken right from the list of standard talking points the most incurious of religious apologists like to recite against him. Theorizing that “activist atheism reflects trauma” (87)—by which he means that people raised under severe religions will grow up to espouse severe ideologies of one form or another—de Waal goes on to suggest that neo-atheism is an outgrowth of “serial dogmatism”:

Hitchens was outraged by the dogmatism of religion, yet he himself had moved from Marxism (he was a Trotskyist) to Greek Orthodox Christianity, then to American Neo-Conservatism, followed by an “antitheist” stance that blamed all of the world’s troubles on religion. Hitchens thus swung from the left to the right, from anti-Vietnam War to cheerleader of the Iraq War, and from pro to contra God. He ended up favoring Dick Cheney over Mother Teresa. (89)

This is truly awful rubbish, and it’s really too bad Hitchens isn’t around anymore to take de Waal to task for it himself. First, this passage allows us to catch out de Waal’s abuse of the term dogma; dogmatism is rigid adherence to beliefs that aren’t open to questioning. The test of dogmatism is whether you’re willing to adjust your views in light of new evidence or changing circumstances—it has nothing to do with how willing or eager you are to debate. What de Waal is labeling dogmatism is what we normally call outspokenness. Second, his facts are simply wrong. For one, though Hitchens was labeled a neocon by some of his fellows on the left simply because he supported the invasion of Iraq, he never considered himself one. When he was asked in an interview for the New Statesman if he was a neoconservative, he responded unequivocally, “I’m not a conservative of any kind.” Finally, can’t someone be for one war and against another, or agree with certain aspects of a religious or political leader’s policies and not others, without being shiftily dogmatic?

            De Waal never really goes into much detail about what the “naturalized ethics” he advocates might look like beyond insisting that we should take a bottom-up approach to arriving at them. This evasiveness gives him space to criticize other nonbelievers regardless of how closely their ideas might resemble his own. “Convictions never follow straight from evidence or logic,” he writes. “Convictions reach us through the prism of human interpretation” (109). He takes this somewhat banal observation (but do they really never follow straight from evidence?) as a license to dismiss the arguments of others based on silly psychologizing. “In the same way that firefighters are sometimes stealth arsonists,” he writes, “and homophobes closet homosexuals, do some atheists secretly long for the certitude of religion?” (88). We could of course just as easily turn this Freudian rhetorical trap back against de Waal and his own convictions. Is he a closet dogmatist himself? Does he secretly hold the unconscious conviction that primates are really nothing like humans and that his research is all a big sham?

            Christopher Hitchens was another real-life character whose personality shone through his writing, and like Yossarian in Joseph Heller’s Catch-22 he often found himself in a position where he knew being sane would put him at odds with the masses, thus convincing everyone of his insanity. Hitchens particularly identified with the exchange near the end of Heller’s novel in which an officer, Major Danby, says, “But, Yossarian, suppose everyone felt that way,” to which Yossarian replies, “Then I’d certainly be a damned fool to feel any other way, wouldn’t I?” (446). (The title for his memoir came from a word game he and several of his literary friends played with book titles.) It greatly saddens me to see de Waal pitting himself against such a ham-fisted caricature of a man in whom, had he taken the time to actually explore his writings, he would likely have found much to admire. Why did Hitch become such a strong advocate for atheism? He made no secret of his motivations. And de Waal, who faults Harris (wrongly) for leaving loyalty out of his moral equations, just might identify with them. It began when the theocratic dictator of Iran put a hit out on his friend, the author Salman Rushdie, because he deemed one of Rushdie’s books blasphemous. Hitchens writes in Hitch-22,

When the Washington Post telephoned me at home on Valentine’s Day 1989 to ask my opinion about the Ayatollah Khomeini’s fatwah, I felt at once that here was something that completely committed me. It was, if I can phrase it like this, a matter of everything I hated versus everything I loved. In the hate column: dictatorship, religion, stupidity, demagogy, censorship, bullying, and intimidation. In the love column: literature, irony, humor, the individual, and the defense of free expression. Plus, of course, friendship—though I like to think that my reaction would have been the same if I hadn’t known Salman at all. (268)

Suddenly, neo-atheism doesn’t seem like an empty placeholder anymore. To criticize atheists so harshly for having convictions that are too strong, de Waal has to ignore all the societal and global issues religion is on the wrong side of. But when we consider the arguments on each side of the abortion or gay marriage or capital punishment or science education debates, it’s easy to see that neo-atheists are only against religion because they feel it runs counter to the positive values of skeptical inquiry, egalitarian discourse, free society, and the ascendancy of reason and evidence.

            De Waal ends The Bonobo and the Atheist with a really corny section in which he imagines how a bonobo would lecture atheists about morality and the proper stance toward religion. “Tolerance of religion,” the bonobo says, “even if religion is not always tolerant in return, allows humanism to focus on what is most important, which is to build a better society based on natural human abilities” (237). Hitchens is of course no longer around to respond to the bonobo, but many of the same issues came up in his debate with Tony Blair (I hope no one reads this as an insult to the former PM), who at one point also argued that religion might be useful in building better societies—look at all the charity work religious organizations do, for instance. Hitch, already showing signs of physical deterioration from the treatment for the esophageal cancer that would eventually kill him, responds,

The cure for poverty has a name in fact. It’s called the empowerment of women. If you give women some control over the rate at which they reproduce, if you give them some say, take them off the animal cycle of reproduction to which nature and some doctrine, religious doctrine, condemns them, and then if you’ll throw in a handful of seeds perhaps and some credit, the floor, the floor of everything in that village, not just poverty, but education, health, and optimism, will increase. It doesn’t matter—try it in Bangladesh, try it in Bolivia. It works. It works all the time. Name me one religion that stands for that—or ever has. Wherever you look in the world and you try to remove the shackles of ignorance and disease and stupidity from women, it is invariably the clerisy that stands in the way. (23:05)

            Later in the debate, Hitch goes on to argue in a way that sounds suspiciously like an echo of de Waal’s challenges to veneer theory and his advocacy for bottom-up morality. He says,

The injunction not to do unto others what would be repulsive if done to yourself is found in the Analects of Confucius if you want to date it—but actually it’s found in the heart of every person in this room. Everybody knows that much. We don’t require divine permission to know right from wrong. We don’t need tablets administered to us ten at a time in tablet form, on pain of death, to be able to have a moral argument. No, we have the reasoning and the moral suasion of Socrates and of our own abilities. We don’t need dictatorship to give us right from wrong. (25:43)

And as a last word in his case and mine I’ll quote this very de Waalian line from Hitch: “There’s actually a sense of pleasure to be had in helping your fellow creature. I think that should be enough” (35:42).

Also read:

TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND

Dennis Junk

Napoleon Chagnon's Crucible and the Ongoing Epidemic of Moralizing Hysteria in Academia

Napoleon Chagnon was targeted by postmodern activists and anthropologists, who trumped up charges against him and hoped to sacrifice his reputation on the altar of social justice. In retrospect, his case looks like an early warning sign of what would come to be called “cancel culture.” Fortunately, Chagnon was no pushover, and there were a lot of people who saw through the lies being spread about him. “Noble Savages” is in part a great adventure story and in part his response to the tragic degradation of the field of anthropology as it succumbs to the lures of ideology.

Noble Savages by Napoleon Chagnon

    When Arthur Miller adapted the script of The Crucible, his play about the Salem Witch Trials originally written in 1953, for the 1996 film version, he enjoyed additional freedom to work with the up-close visual dimensions of the tragedy. In one added scene, the elderly and frail George Jacobs, whom we first saw lifting one of his two walking sticks to wave an unsteady greeting to a neighbor, sits before a row of assembled judges as the young Ruth Putnam stands accusing him of assaulting her. The girl, ostensibly shaken from the encounter and frightened lest some further terror ensue, dramatically recounts her ordeal, saying,

He come through my window and then he lay down upon me. I could not take breath. His body crush heavy upon me, and he say in my ear, “Ruth Putnam, I will have your life if you testify against me in court.”

This quote she delivers in a creaky imitation of the old man’s voice. When one of the judges asks Jacobs what he has to say about the charges, he responds with the glaringly obvious objection: “But, your Honor, I must have these sticks to walk with—how may I come through a window?” The problem with this defense, Jacobs comes to discover, is that the judges believe a person can be in one place physically and in another in spirit. This poor tottering old man has no defense against so-called “spectral evidence.” Indeed, as judges in Massachusetts realized the year after Jacobs was hanged, no one really has any defense against spectral evidence. That’s part of the reason why it was deemed inadmissible in their courts, and immediately thereafter convictions for the crime of witchcraft ceased entirely. 

            Many anthropologists point to the low cost of making accusations as a factor in the evolution of moral behavior. People in small societies like the ones our ancestors lived in for millennia, composed of thirty or forty profoundly interdependent individuals, would have had to balance any payoff that might come from immoral deeds against the detrimental effects to their reputations of having those deeds discovered and word of them spread. As the generations turned over and over again, human nature adapted in response to the social enforcement of cooperative norms, and individuals came to experience what we now recognize as our moral emotions—guilt which is often preëmptive and prohibitive, shame, indignation, outrage, along with the more positive feelings associated with empathy, compassion, and loyalty.

The legacy of this process of reputational selection persists in our prurient fascination with the misdeeds of others and our frenzied, often sadistic, delectation in the spreading of salacious rumors. What Miller so brilliantly dramatizes in his play is the irony that our compulsion to point fingers, which once created and enforced cohesion in groups of selfless individuals, can in some environments serve as a vehicle for our most viciously selfish and inhuman impulses. This is why it is crucial that any accusation, if we as a society are to take it at all seriously, must provide the accused with some reliable means of acquittal. Charges that can neither be proven nor disproven must be seen as meaningless—and should even be counted as strikes against the reputation of the one who levels them. 

            While this principle runs into serious complications with crimes that are as inherently difficult to prove as they are horrific, a simple rule proscribing any glib application of morally charged labels is a crucial yet all-too-commonly overlooked safeguard against unjust calumny. In this age of viral dissemination, the rapidity with which rumors spread, coupled with the absence of any reliable assurance of the validity of messages bearing on the reputations of our fellow citizens, demands that we deliberately work to establish as cultural norms the holding to account of those who make accusations based on insufficient, misleading, or spectral evidence—and the holding to account as well, to only a somewhat lesser degree, of those who help propagate rumors without doing due diligence in assessing their credibility.

            The commentary attending the publication of anthropologist Napoleon Chagnon’s memoir of his research with the Yanomamö tribespeople in Venezuela calls to mind the insidious “Teach the Controversy” PR campaign spearheaded by intelligent design creationists. Coming out against the argument that students should be made aware of competing views on the value of intelligent design inevitably gives the impression of close-mindedness or dogmatism. But only a handful of actual scientists have any truck with intelligent design, a dressed-up rehashing of the old God-of-the-Gaps argument based on the logical fallacy of appealing to ignorance—and that ignorance, it so happens, is grossly exaggerated.

Teaching the controversy would therefore falsely imply epistemological equivalence between scientific views on evolution and those that are not-so-subtly religious. Likewise, in the wake of allegations against Chagnon about mistreatment of the people whose culture he made a career of studying, many science journalists and many of his fellow anthropologists still seem reluctant to stand up for him because they fear doing so would make them appear insensitive to the rights and concerns of indigenous peoples. Instead, they take refuge in what they hope will appear a balanced position, even though the evidence on which the accusations rested has proven to be entirely spectral.

Chagnon’s Noble Savages: My Life among Two Dangerous Tribes—the Yanomamö and the Anthropologists is destined to be one of those books that garners commentary by legions of outspoken scholars and impassioned activists who never find the time to actually read it. Science writer John Horgan, for instance, has published two blog posts on Chagnon in recent weeks, and neither of them features a single quote from the book. In the first, he boasts of his resistance to bullying, via email, by five prominent sociobiologists who had caught wind of his assignment to review Patrick Tierney’s book Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon and insisted that he condemn the work and discourage anyone from reading it. Against this pressure, Horgan wrote a positive review in which he repeats several horrific accusations that Tierney makes in the book before going on to acknowledge that the author should have worked harder to provide evidence of the wrongdoings he reports on.

But Tierney went on to become an advocate for Indian rights. And his book’s faults are outweighed by its mass of vivid, damning detail. My guess is that it will become a classic in anthropological literature, sparking countless debates over the ethics and epistemology of field studies.

Horgan probably couldn’t have known at the time (though those five scientists tried to warn him) that giving Tierney credit for prompting debates about Indian rights and ethnographic research methods was a bit like praising Abigail Williams, the original source of accusations of witchcraft in Salem, for sparking discussions about child abuse. But that he stands by his endorsement today, saying,

“I have one major regret concerning my review: I should have noted that Chagnon is a much more subtle theorist of human nature than Tierney and other critics have suggested,” as balanced as that sounds, casts serious doubt on his scholarship, not to mention his judgment.

            What did Tierney falsely accuse Chagnon of? There are over a hundred specific accusations in the book (Chagnon says his friend William Irons flagged 106 [446]), but the most heinous whopper comes in the fifth chapter, titled “Outbreak.” In 1968, Chagnon was helping the geneticist James V. Neel collect blood samples from the Yanomamö—in exchange for machetes—so their DNA could be compared with that of people in industrialized societies. While they were in the middle of this project, a measles epidemic broke out. Neel had discovered through earlier research that the Indians lacked immunity to the disease, so the team immediately began trying to reach all of the Yanomamö villages to vaccinate everyone before the contagion reached them. Most people who knew about the episode considered what the scientists did heroic (and several investigations now support this view). But Tierney, by creating the appearance of pulling together multiple threads of evidence, weaves together a much different story in which Neel and Chagnon are cast as villains instead of heroes. (The version of the book I’ll quote here is somewhat incoherent because it went through some revisions in attempts to deal with holes in the evidence that were already emerging pre-publication.)

First, Tierney misinterprets some passages from Neel’s books as implying an espousal of eugenic beliefs about the Indians, namely that by remaining closer to nature and thus subject to ongoing natural selection they retain all-around superior health, including better immunity. Next, Tierney suggests that the vaccine Neel chose, Edmonston B, which is usually administered with a drug called gamma globulin to minimize reactions like fevers, is so similar to the measles virus that in the immune-suppressed Indians it actually ended up causing a suite of symptoms that was indistinguishable from full-blown measles. The implication is clear. Tierney writes,

Chagnon and Neel described an effort to “get ahead” of the measles epidemic by vaccinating a ring around it. As I have reconstructed it, the 1968 outbreak had a single trunk, starting at the Ocamo mission and moving up the Orinoco with the vaccinators. Hundreds of Yanomami died in 1968 on the Ocamo River alone. At the time, over three thousand Yanomami lived on the Ocamo headwaters; today there are fewer than two hundred. (69)

At points throughout the chapter, Tierney seems to be backing off the worst of his accusations; he writes, “Neel had no reason to think Edmonston B could become transmissible. The outbreak took him by surprise.” But even in this scenario Tierney suggests serious wrongdoing: “Still, he wanted to collect data even in the midst of a disaster” (82).

Earlier in the chapter, though, Tierney makes a much more serious charge. Pointing to a time when Chagnon showed up at a Catholic mission after having depleted his stores of gamma globulin and nearly run out of Edmonston B, Tierney suggests the shortage of drugs was part of a deliberate plan. “There were only two possibilities,” he writes,

Either Chagnon entered the field with only forty doses of virus; or he had more than forty doses. If he had more than forty, he deliberately withheld them while measles spread for fifteen days. If he came to the field with only forty doses, it was to collect data on a small sample of Indians who were meant to receive the vaccine without gamma globulin. Ocamo was a good choice because the nuns could look after the sick while Chagnon went on with his demanding work. Dividing villages into two groups, one serving as a control, was common in experiments and also a normal safety precaution in the absence of an outbreak. (60)

Thus Tierney implies that Chagnon was helping Neel test his eugenics theory and in the process became complicit in causing an epidemic, maybe deliberately, that killed hundreds of people. Tierney claims he isn’t sure how much Chagnon knew about the experiment; he concedes at one point that “Chagnon showed genuine concern for the Yanomami,” before adding, “At the same time, he moved quickly toward a cover-up” (75).

            Near the end of his “Outbreak” chapter, Tierney reports on a conversation with Mark Papania, a measles expert at the Centers for Disease Control and Prevention in Atlanta. After running his hypothesis about how Neel and Chagnon caused the epidemic with the Edmonston B vaccine by Papania, Tierney claims he responded, “Sure, it’s possible.” He goes on to say that while Papania informed him there were no documented cases of the vaccine becoming contagious he also admitted that no studies of adequate sensitivity had been done. “I guess we didn’t look very hard,” Tierney has him saying (80). But evolutionary psychologist John Tooby got a much different answer when he called Papania himself. In an article published on Slate—nearly three weeks before Horgan published his review, incidentally—Tooby writes that the epidemiologist had a very different attitude to the adequacy of past safety tests from the one Tierney reported:

it turns out that researchers who test vaccines for safety have never been able to document, in hundreds of millions of uses, a single case of a live-virus measles vaccine leading to contagious transmission from one human to another—this despite their strenuous efforts to detect such a thing. If attenuated live virus does not jump from person to person, it cannot cause an epidemic. Nor can it be planned to cause an epidemic, as alleged in this case, if it never has caused one before.

Tierney also cites Samuel Katz, the pediatrician who developed Edmonston B, at a few points in the chapter to support his case. But Katz responded to requests from the press to comment on Tierney’s scenario by saying,

the use of Edmonston B vaccine in an attempt to halt an epidemic was a justifiable, proven and valid approach. In no way could it initiate or exacerbate an epidemic. Continued circulation of these charges is not only unwarranted, but truly egregious.

Tooby included a link to Katz’s response, along with a report from science historian Susan Lindee of her investigation of Neel’s documents disproving many of Tierney’s points. It seems Horgan should’ve paid a bit more attention to those emails he was receiving.

Further investigations have shown that pretty much every aspect of Tierney’s characterization of Neel’s beliefs and research agenda was completely wrong. The report from a task force investigation by the American Society of Human Genetics gives a sense of how Tierney, while giving the impression of having conducted meticulous research, was in fact perpetrating fraud. The report states,

Tierney further suggests that Neel, having recognized that the vaccine was the cause of the epidemic, engineered a cover-up. This is based on Tierney’s analysis of audiotapes made at the time. We have reexamined these tapes and provide evidence to show that Tierney created a false impression by juxtaposing three distinct conversations recorded on two separate tapes and in different locations. Finally, Tierney alleges, on the basis of specific taped discussions, that Neel callously and unethically placed the scientific goals of the expedition above the humanitarian need to attend to the sick. This again is shown to be a complete misrepresentation, by examination of the relevant audiotapes as well as evidence from a variety of sources, including members of the 1968 expedition.

This report was published a couple of years after Tierney’s book hit the shelves. But enough evidence was already available to anyone willing to do due diligence in checking the credibility of the author and his claims that the book’s making it onto the shortlist for the National Book Award should be seen as indicative of a larger problem.

*******

With the benefit of hindsight and a perspective from outside the debate (though I’ve been following the sociobiology controversy for a decade and a half, I wasn’t aware of Chagnon’s longstanding and personal battles with other anthropologists until after Tierney’s book was published), it seems to me that once Tierney had been caught misrepresenting the evidence in support of such an atrocious accusation, his book should have been removed from the shelves, and all his reporting should have been dismissed entirely. Tierney himself should have been made to answer for his offense. But for some reason none of this happened.

The anthropologist Marshall Sahlins, for instance, to whom Chagnon has been a bête noire for decades, brushed off any concern for Tierney’s credibility in his review of Darkness in El Dorado, published a full month after Horgan’s, apparently because he couldn’t resist the opportunity to write about how much he hates his celebrated colleague. Sahlins’s review is titled “Guilty not as Charged,” which is already enough to cast doubt on his capacity for fairness or rationality. Here’s how he sums up the issue of Tierney’s discredited accusation in relation to the rest of the book:

The Kurtzian narrative of how Chagnon achieved the political status of a monster in Amazonia and a hero in academia is truly the heart of Darkness in El Dorado. While some of Tierney’s reporting has come under fire, this is nonetheless a revealing book, with a cautionary message that extends well beyond the field of anthropology. It reads like an allegory of American power and culture since Vietnam.

Sahlins apparently hasn’t read Conrad’s novel Heart of Darkness, or he’d know Chagnon is no Kurtz. And Vietnam? The next paragraph goes into more detail about this “allegory,” as if Sahlins’s conscripting of Chagnon into service as a symbol of evil somehow establishes his culpability. To get an idea of how much Chagnon actually had to do with Vietnam, we can look at a passage early in Noble Savages about how disconnected from the outside world he was while doing his field work:

I was vaguely aware when I went into the Yanomamö area in late 1964 that the United States had sent several hundred military advisors to South Vietnam to help train the South Vietnamese army. When I returned to Ann Arbor in 1966 the United States had some two hundred thousand combat troops there. (36)

But Sahlins’s review, as bizarre as it is, is important because it’s representative of the types of arguments Chagnon’s fiercest anthropological critics make against his methods and his theories, but mainly against him personally. In another recent comment on how “The Napoleon Chagnon Wars Flare Up Again,” Barbara J. King betrays a disconcerting and unscholarly complacence with quoting other, rival anthropologists’ words as evidence of Chagnon’s own thinking. Alas, King too is weighing in on the flare-up without having read the book, or anything else by the author, it seems. And she’s also at pains to appear fair and balanced, even though the sources she cites against Chagnon are neither, nor are they the least bit scientific. Of Sahlins’s review of Darkness in El Dorado, she writes,

The Sahlins essay from 2000 shows how key parts of Chagnon’s argument have been “dismembered” scientifically. In a major paper published in 1988, Sahlins says, Chagnon left out too many relevant factors that bear on Ya̧nomamö males’ reproductive success to allow any convincing case for a genetic underpinning of violence.

It’s a bit sad that King feels it’s okay to post on a site as popular as NPR and quote a criticism of a study she clearly hasn’t read—she could have downloaded the PDF of Chagnon’s landmark paper, “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” for free. Did Chagnon claim in the study that it proved violence had a genetic underpinning? It’s difficult to tell what the phrase “genetic underpinning” even means in this context.

To lend further support to Sahlins’s case, King selectively quotes another anthropologist, Jonathan Marks. The lines come from a rant on his blog (I urge you to check it out for yourself if you’re at all suspicious about the aptness of the term rant to describe the post) about a supposed takeover of anthropology by genetic determinism. But King leaves off the really interesting sentence at the end of the remark. Here’s the whole passage explaining why Marks thinks Chagnon is an incompetent scientist:

Let me be clear about my use of the word “incompetent”. His methods for collecting, analyzing and interpreting his data are outside the range of acceptable anthropological practices. Yes, he saw the Yanomamo doing nasty things. But when he concluded from his observations that the Yanomamo are innately and primordially “fierce” he lost his anthropological credibility, because he had not demonstrated any such thing. He has a right to his views, as creationists and racists have a right to theirs, but the evidence does not support the conclusion, which makes it scientifically incompetent.

What Marks is saying here is not that he has evidence of Chagnon doing poor field work; rather, Marks dismisses Chagnon merely because of his sociobiological leanings. Note too that the italicized words in the passage are not quotes. This is important because, along with the false equation of sociobiology with genetic determinism, this type of straw man underlies nearly all of the attacks on Chagnon. Finally, notice how Marks slips into the realm of morality as he tries to traduce Chagnon’s scientific credibility. In case you think the link with creationism and racism is a simple analogy—like the one I used myself at the beginning of this essay—look at how Marks ends his rant:

So on one side you’ve got the creationists, racists, genetic determinists, the Republican governor of Florida, Jared Diamond, and Napoleon Chagnon–and on the other side, you’ve got normative anthropology, and the mother of the President. Which side are you on?

How can we take this at all seriously? And why did King misleadingly quote, on a prominent news site, a criticism that seems level-headed out of context but, read in full, reveals itself as anything but? I’ll risk another analogy here and point out that Marks’s comments about genetic determinism taking over anthropology are similar in both tone and intellectual sophistication to Glenn Beck’s comments about how socialism is taking over American politics.

             King also links to a review of Noble Savages that was published in the New York Times in February, and this piece is even harsher to Chagnon. After repeating Tierney’s charge about Neel deliberately causing the 1968 measles epidemic and pointing out it was disproved, anthropologist Elizabeth Povinelli writes of the American Anthropological Association investigation that,

The committee was split over whether Neel’s fervor for observing the “differential fitness of headmen and other members of the Yanomami population” through vaccine reactions constituted the use of the Yanomamö as a Tuskegee-­like experimental population.

Since this allegation has been completely discredited by the American Society of Human Genetics, among others, Povinelli’s repetition of it is irresponsible, as was the Times’ failure to properly vet the facts in the article.

Try as I might to remain detached from either side as I continue to research this controversy (and I’ve never met any of these people), I have to say I found Povinelli’s review deeply offensive. The straw men she shamelessly erects and the quotes she shamelessly takes out of context, all in the service of an absurdly self-righteous and substanceless smear, allow no room whatsoever for anything answering to the name of compassion for a man who was falsely accused of complicity in an atrocity. And in her zeal to impugn Chagnon she propagates a colorful and repugnant insult of her own creation, which she misattributes to him. She writes,

Perhaps it’s politically correct to wonder whether the book would have benefited from opening with a serious reflection on the extensive suffering and substantial death toll among the Yanomamö in the wake of the measles outbreak, whether or not Chagnon bore any responsibility for it. Does their pain and grief matter less even if we believe, as he seems to, that they were brutal Neolithic remnants in a land that time forgot? For him, the “burly, naked, sweaty, hideous” Yanomamö stink and produce enormous amounts of “dark green snot.” They keep “vicious, underfed growling dogs,” engage in brutal “club fights” and—God forbid!—defecate in the bush. By the time the reader makes it to the sections on the Yanomamö’s political organization, migration patterns and sexual practices, the slant of the argument is evident: given their hideous society, understanding the real disaster that struck these people matters less than rehabilitating Chagnon’s soiled image.

In other words, Povinelli’s response to Chagnon’s “harrowing” ordeal is effectively to say, Maybe you’re not guilty of genocide, but you’re still guilty for not quitting your anthropology job and becoming a forensic epidemiologist. Anyone who actually reads Noble Savages will see quite clearly that the “slant” Povinelli describes, along with those caricatured “brutal Neolithic remnants,” must have flown in through her window right next to George Jacobs.

            Povinelli does characterize one aspect of Noble Savages correctly when she complains about its “Manichean rhetorical structure,” with the bad Rousseauian, Marxist, postmodernist cultural anthropologists—along with the corrupt and PR-obsessed Catholic missionaries—on one side, and the good Hobbesian, Darwinian, scientific anthropologists on the other, though it’s really just the scientific part he’s concerned with. I actually expected to find a more complicated, less black-and-white debate taking place when I began looking into the attacks on Chagnon’s work—and on Chagnon himself. But what I ended up finding was that Chagnon’s description of the division, at least with regard to the anthropologists (I haven’t researched his claims about the missionaries), is spot-on, and Povinelli’s repulsive review is a case in point.

This isn’t to say that there aren’t legitimate scientific disagreements about sociobiology. In fact, Chagnon writes about how one of his heroes is “calling into question some of the most widely accepted views” as early as his dedication page, referring to E.O. Wilson’s latest book The Social Conquest of Earth. But what Sahlins, Marks, and Povinelli offer is neither legitimate nor scientific. These commenters really are, as Chagnon suggests, representative of a subset of cultural anthropologists completely given over to a moralizing hysteria. Their scholarship is as dishonest as it is defamatory, their reasoning rests on guilt by free association and the tossing up and knocking down of the most egregious of straw men, and their tone creates the illusion of moral certainty coupled with a long-suffering exasperation with entrenched institutionalized evils. For these hysterical moralizers, it seems, any theory of human behavior that involves evolution or biology represents the same kind of threat as witchcraft did to the people of Salem in the 1690s, or as communism did to McCarthyites in the 1950s. To combat this chimerical evil, the presumed righteous ends justify the deceitful means.

The unavoidable conclusion with regard to the question of why Darkness in El Dorado wasn’t dismissed outright when it should have been is that even though it has been established that Chagnon didn’t commit any of the crimes Tierney accused him of, as far as his critics are concerned, he may as well have. Somehow cultural anthropologists have come to occupy a bizarre culture of their own in which charging a colleague with genocide doesn’t seem like a big deal. Before Tierney’s book hit the shelves, two anthropologists, Terence Turner and Leslie Sponsel, co-wrote an email to the American Anthropological Association which was later sent to several journalists. Turner and Sponsel later claimed the message was simply a warning about the “impending scandal” that would result from the publication of Darkness in El Dorado. But the hyperbole and suggestive language make it read more like a publicity notice than a warning. “This nightmarish story—a real anthropological heart of darkness beyond the imagining of even a Josef Conrad (though not, perhaps, a Josef Mengele)”—is it too much to ask of those who are so fond of referencing Joseph Conrad that they actually read his book?—“will be seen (rightly in our view) by the public, as well as most anthropologists, as putting the whole discipline on trial.” As it turned out, though, the only one who was put on trial, by the American Anthropological Association—though officially it was only an “inquiry”—was Napoleon Chagnon.

Chagnon’s old academic rivals, many of whom claim their problem with him stems from the alleged devastating impact of his research on Indians, fail to appreciate the gravity of Tierney’s accusations. Their blasé response to the author being exposed as a fraud gives the impression that their eagerness to participate in the pile-on has little to do with any concern for the Yanomamö people. Instead, they embraced Darkness in El Dorado because it provided good talking points in the campaign against their dreaded nemesis Napoleon Chagnon. Sahlins, for instance, is strikingly cavalier about the personal effects of Tierney’s accusations in the review cited by King and Horgan:

The brouhaha in cyberspace seemed to help Chagnon’s reputation as much as Neel’s, for in the fallout from the latter’s defense many academics also took the opportunity to make tendentious arguments on Chagnon’s behalf. Against Tierney’s brief that Chagnon acted as an anthro-provocateur of certain conflicts among the Yanomami, one anthropologist solemnly demonstrated that warfare was endemic and prehistoric in the Amazon. Such feckless debate is the more remarkable because most of the criticisms of Chagnon rehearsed by Tierney have been circulating among anthropologists for years, and the best evidence for them can be found in Chagnon’s writings going back to the 1960s.

Sahlins goes on to offer his own sinister interpretation of Chagnon’s writings, using the same straw man and guilt-by-free-association techniques common to anthropologists in the grip of moralizing hysteria. But I can’t help wondering why anyone would take a word he says seriously after he suggests that being accused of causing a deadly epidemic helped Neel’s and Chagnon’s reputations.

*******

Marshall Sahlins recently made news by resigning from the National Academy of Sciences in protest against the organization’s election of Chagnon to its membership and its partnerships with the military. In explaining his resignation, Sahlins insists that Chagnon, based on the evidence of his own writings, did serious harm to the people whose culture he studied. Sahlins also complains that Chagnon’s sociobiological ideas about violence are so wrongheaded that they serve to “discredit the anthropological discipline.” To back up his objections, he refers interested parties to that same review of Darkness in El Dorado King links to in her post.

Though Sahlins explains his moral and intellectual objections separately, he seems to believe that theories of human behavior based on biology are inherently immoral, as if theorizing that violence has “genetic underpinnings” is no different from claiming that violence is inevitable and justifiable. This is why Sahlins can’t discuss Chagnon without reference to Vietnam. He writes in his review,

The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.

Sahlins doesn’t provide any citations to back up this charge—he’s quite clearly not the least bit concerned with fairness or solid scholarship—and, based on what Chagnon writes in Noble Savages, this fantasy of “gaining control” originates in the mind of Sahlins, not in the writings of Chagnon.

For instance, Chagnon writes of being made the butt of an elaborate joke several Yanomamö conspired to play on him by giving him fake names for people in their village (like Hairy Cunt, Long Dong, and Asshole). When he mentions these names to people in a neighboring village, they think it’s hilarious. “My face flushed with embarrassment and anger as the word spread around the village and everybody was laughing hysterically.” And this was no minor setback: “I made this discovery some six months into my fieldwork!” (66) Contrary to the despicable caricature Povinelli provides as well, Chagnon writes admiringly of the Yanomamö’s “wicked humor,” and how “They enjoyed duping others, especially the unsuspecting and gullible anthropologist who lived among them” (67). Another gem comes from an episode in which he tries to treat a rather embarrassing fungal infection: “You can’t imagine the hilarious reaction of the Yanomamö watching the resident fieldworker in a most indescribable position trying to sprinkle foot powder onto his crotch, using gravity as a propellant” (143).

The bitterness, outrage, and outright hatred directed at Chagnon, alongside the complete absence of evidence that he’s done anything wrong, seem completely insane until you consider that this preeminent anthropologist falls afoul of all the –isms that haunt the fantastical armchair obsessions of postmodern pseudo-scholars. Chagnon stands as a living symbol of the white colonizer exploiting indigenous people and resources (colonialism); he propagates theories that can be read as supportive of fantasies about individual and racial superiority (Social Darwinism, racism); he reports on tribal warfare and cruelty toward women, with the implication that these evils are encoded in our genes (neoconservatism, sexism, biological determinism). It should be clear that all of this is nonsense: any exploitation is merely alleged and likely outweighed by efforts at vaccination against diseases introduced by missionaries and gold miners; sociobiology doesn’t focus on racial differences, and superiority is a scientifically meaningless term; and the fact that genes play a role in some behavior implies neither that the behavior is moral nor that it is inevitable. The truly evil –ism at play in the campaign against Chagnon is postmodernism—an ideology which functions as little more than a factory for the production of false accusations.

There are two main straw men that are bound to be rolled out by postmodern critics of evolutionary theories of behavior in any discussion of morally charged topics. The first is the “gene-for” misconception.

Every anthropologist, sociobiologist, and evolutionary psychologist knows that there is no gene for violence and warfare in the sense that would mean everyone born with a particular allele will inevitably grow up to be physically aggressive. Yet, in any discussion of the causes of violence, or any other issue in which biology is implicated, critics fall all over themselves trying to catch their opponents out for making this mistake, and they pretend that by doing so they’re defeating an attempt to undermine efforts to make the world more peaceful. It so happens that scientists actually have discovered a gene variation, a low-activity form of the MAOA gene known popularly as “the warrior gene,” that increases the likelihood that an individual carrying it will engage in aggressive behavior—but only if that individual experiences a traumatic childhood. Having a gene variation associated with a trait only ever means someone is more likely to express that trait, and there will almost always be other genes and several environmental factors contributing to the overall likelihood.

You can be reasonably sure that if a critic is taking a sociobiologist or an evolutionary psychologist to task for suggesting a direct one-to-one correspondence between a gene and a behavior, that critic is being either careless or purposely misleading. In trying to bring about a more peaceful world, it’s far more effective to study the actual factors that contribute to violence than it is to write moralizing criticisms of scientific colleagues. The charge that evolutionary approaches can only be used to support conservative or reactionary views of society isn’t just a misrepresentation of sociobiological theories; it’s also empirically false—surveys demonstrate that grad students in evolutionary anthropology are overwhelmingly liberal in their politics, just as liberal, in fact, as anthropology students in non-evolutionary concentrations.

Another thing anyone who has taken a freshman anthropology course knows, but that anti-evolutionary critics fall all over themselves taking sociobiologists to task for not understanding, is that people who live in foraging or tribal cultures cannot be treated as perfect replicas of our Pleistocene ancestors, or as Povinelli calls them “prehistoric time capsules.” Hunters and gatherers are not “living fossils,” because they’ve been evolving just as long as people in industrialized societies, their histories and environments are unique, and it’s almost impossible for them to avoid being impacted by outside civilizations. If you flew two groups of foragers from different regions each into the territory of the other, you would learn quite quickly that each group’s culture is intricately adapted to the environment it originally inhabited. This does not mean, however, that evidence about how foraging and tribal peoples live is irrelevant to questions about human evolution.

As different as those two groups are, they are both probably living lives much more similar to those of our ancestors than the lives of anyone in an industrialized society. What evolutionary anthropologists and psychologists tend to be most interested in are the trends that emerge when several of these cultures are compared to one another. The Yanomamö actually subsist largely on slash-and-burn agriculture, and they live in groups much larger than those of most foraging peoples. Their culture and demographic patterns may therefore provide clues to how larger and more stratified societies developed after millennia of evolution in small, mobile bands. But, again, no one is suggesting the Yanomamö are somehow interchangeable with the people who first made the historical transition to more complex social organization.

The prehistoric time-capsule straw man often goes hand-in-hand with an implication that the anthropologists supposedly making the blunder see the people whose culture they study as somehow inferior, somehow less human than people who live in industrialized civilizations. It seems like a short step from this subtle dehumanization to the kind of wholesale exploitation indigenous peoples are often made to suffer. But the sad truth is that there are already so many economic, religious, and geopolitical forces working against the preservation of indigenous cultures and the protection of indigenous people’s rights that scapegoating scientists who gather cultural and demographic information is completely unnecessary. And you can bet Napoleon Chagnon is, if anything, more outraged by the mistreatment of the Yanomamö than most of the activists who falsely accuse him of complicity, because he knows so many of them personally. Chagnon is particularly critical of Brazilian gold miners and Salesian missionaries, both of whom, it seems, have far more incentive to disrespect the Yanomamö culture (by supplanting their religion and moving them closer to civilization) and ravage the territory they inhabit. The Salesians’ reprisals for his criticisms, which entailed pulling strings to keep him out of the territory and efforts to create a public image of him as a menace, eventually provided fodder for his critics back home as well.

*******

In an article titled “Guilt by Association,” published in the journal American Anthropologist in 2004, about the American Anthropological Association’s compromised investigation of Tierney’s accusations against Chagnon, Thomas Gregor and Daniel Gross describe “chains of logic by which anthropological research becomes, at the end of an associative thread, an act of misconduct” (689). Quoting Defenders of the Truth, sociologist Ullica Segerstrale’s indispensable 2000 book on the sociobiology debate, Gregor and Gross explain that Chagnon’s postmodern accusers relied on a rhetorical strategy common among critics of evolutionary theories of human behavior—a strategy that produces something startlingly indistinguishable from spectral evidence. Segerstrale writes,

In their analysis of their target’s texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum moral guilt might be attributed to the perpetrator of this claim. (206)

She goes on to cite a “glaring” example of how a scholar drew an imaginary line from sociobiology to Nazism, and then connected it to fascist behavioral control, even though none of these links were supported by any evidence (207). Gregor and Gross describe how this postmodern version of spectral evidence was used to condemn Chagnon.

In the case at hand, for example, the Report takes Chagnon to task for an article in Science on revenge warfare, in which he reports that “Approximately 30% of Yanomami adult male deaths are due to violence”(Chagnon 1988:985). Chagnon also states that Yanomami men who had taken part in violent acts fathered more children than those who had not. Such facts could, if construed in their worst possible light, be read as suggesting that the Yanomami are violent by nature and, therefore, undeserving of protection. This reading could give aid and comfort to the opponents of creating a Yanomami reservation. The Report, therefore, criticizes Chagnon for having jeopardized Yanomami land rights by publishing the Science article, although his research played no demonstrable role in the demarcation of Yanomami reservations in Venezuela and Brazil. (689)

The task force had found that Chagnon was guilty—even though it was nominally just an “inquiry” and had no official grounds for pronouncing on any misconduct—of harming the Indians by portraying them negatively. Gregor and Gross, however, sponsored a ballot at the AAA to rescind the organization’s acceptance of the report; in 2005, it was voted on by the membership and passed by a margin of 846 to 338. “Those five years,” Chagnon writes of the time between that email warning about Tierney’s book and the vote finally exonerating him, “seem like a blurry bad dream” (450).

Anthropological fieldwork has changed dramatically since Chagnon’s early research in Venezuela. There was legitimate concern about the impact of trading manufactured goods like machetes for information, and you can read about some of the fracases it fomented among the Yanomamö in Noble Savages. The practice is now prohibited by the ethical guidelines of ethnographic field research. The dangers to isolated or remote populations from communicable diseases must also be considered while planning any expeditions to study indigenous cultures. But Chagnon was entering the Ocamo region after many missionaries and just before many gold miners. And we can’t hold him accountable for disregarding rules that didn’t exist at the time. Sahlins, however, echoing the way Tierney twisted Neel and Chagnon’s race to immunize the Indians into evidence that the two men were the source of the contagion, accuses Chagnon of causing much of the violence he witnessed and reported by spreading his goods around.

Hostilities thus tracked the always-changing geopolitics of Chagnon-wealth, including even pre-emptive attacks to deny others access to him. As one Yanomami man recently related to Tierney: “Shaki [Chagnon] promised us many things, and that’s why other communities were jealous and began to fight against us.”

Aside from the fact that some Yanomamö men had just returned from a raid the very first time Chagnon entered one of their villages, and the fact that the source of this quote has been discredited, Sahlins is basing his elaborate accusation on some pretty paltry evidence.

Sahlins also insists that the “monster in Amazonia” couldn’t possibly have figured out a way to learn the names and relationships of the people he studied without aggravating intervillage tensions (thus implicitly conceding those tensions already existed). The Yanomamö have a taboo against saying the names of other adults, similar to our own custom of addressing people we’ve just met by their titles and last names, but with much graver consequences for violations. This is why Chagnon had to confirm the names of people in one village by asking about them in another, the practice that led to his discovery of the prank that was played on him. Sahlins uses Tierney’s reporting as the only grounds for his speculations on how disruptive this was to the Yanomamö. And, in the same way he suggested there was some moral equivalence between Chagnon going into the jungle to study the culture of a group of Indians and the US military going into the jungles to engage in a war against the Vietcong, he fails to distinguish between the Nazi practice of marking Jews and Chagnon’s practice of writing numbers on people’s arms to keep track of their problematic names. Quoting Chagnon, Sahlins writes,

“I began the delicate task of identifying everyone by name and numbering them with indelible ink to make sure that everyone had only one name and identity.” Chagnon inscribed these indelible identification numbers on people’s arms—barely 20 years after World War II.

This juvenile innuendo calls to mind Jon Stewart’s observation that it’s not until someone in Washington makes the first Hitler reference that we know a real political showdown has begun (and Stewart has had to make the point a few times again since then).

One of the things that makes this type of trashy pseudo-scholarship so insidious is that it often creates an indelible impression of its own. Anyone who reads Sahlins’ essay could be forgiven for thinking that writing numbers on people might really be a sign that Chagnon was dehumanizing them. Fortunately, Chagnon’s own accounts go a long way toward dispelling this suspicion. In one passage, he describes how he made the naming and numbering into a game for this group of people who knew nothing about writing:

I had also noted after each name the item that person wanted me to bring on my next visit, and they were surprised at the total recall I had when they decided to check me. I simply looked at the number I had written on their arm, looked the number up in my field book, and then told the person precisely what he had requested me to bring for him on my next trip. They enjoyed this, and then they pressed me to mention the names of particular people in the village they would point to. I would look at the number on the arm, look it up in my field book, and whisper his name into someone’s ear. The others would anxiously and eagerly ask if I got it right, and the informant would give an affirmative quick raise of the eyebrows, causing everyone to laugh hysterically. (157)

Needless to say, this is a far cry from using the labels to efficiently herd people into cargo trains to transport them to concentration camps and gas chambers. Sahlins disgraces himself by suggesting otherwise and by not distancing himself from Tierney when it became clear that Tierney’s atrocious accusations were meritless.

Which brings us back to John Horgan. One week after the post in which he bragged about standing up to five email bullies who were urging him not to endorse Tierney’s book, and in which he took the opportunity to say he still stands by his mostly positive review, he published another post on Chagnon, this time about the irony of how close Chagnon’s views on war are to those of Margaret Mead, a towering figure in anthropology whose blank-slate theories sociobiologists often challenge. (Both of Horgan’s posts marking the occasion of Chagnon’s new book—neither of which quotes from it—were probably written for publicity; his own book on war was published last year.) As I read the post, I came across the following bewildering passage:

Chagnon advocates have cited a 2011 paper by bioethicist Alice Dreger as further “vindication” of Chagnon. But to my mind Dreger’s paper—which wastes lots of verbiage bragging about all the research that she’s done and about how close she has gotten to Chagnon–generates far more heat than light. She provides some interesting insights into Tierney’s possible motives in writing Darkness in El Dorado, but she leaves untouched most of the major issues raised by Chagnon’s career.

Horgan’s earlier post was one of the first things I’d read in years about Chagnon or about Tierney’s accusations against him. I read Alice Dreger’s report on her investigation of those accusations, and the “inquiry” by the American Anthropological Association that ensued from them, shortly afterward. I kept thinking back to Horgan’s continuing endorsement of Tierney’s book as I read the report because she cites several other reports that establish, at the very least, that there was no evidence to support the worst of the accusations. My conclusion was that Horgan simply hadn’t done his homework. How could he endorse a work featuring such horrific accusations if he knew most of them, the most horrific in particular, had been disproved? But with this second post he was revealing that he knew the accusations were false—and yet he still hasn’t recanted his endorsement.

If you only read two supplements to Noble Savages, I recommend Dreger’s report and Emily Eakin’s profile of Chagnon in the New York Times. The one qualm I have about Eakin’s piece is that she too sacrifices the principle of presuming innocence in her effort to achieve journalistic balance, quoting Leslie Sponsel, one of the authors of the appalling email that sparked the AAA’s investigation of Chagnon, as saying, “The charges have not all been disproven by any means.” It should go without saying that the burden of proof is on the accuser. It should also go without saying that once the most atrocious of Tierney’s accusations were disproven the discussion of culpability should have shifted its focus away from Chagnon onto Tierney and his supporters. That it didn’t calls to mind the scene in The Crucible when an enraged John Proctor, whose wife is being arrested, shouts in response to an assurance that she’ll be released if she’s innocent—“If she is innocent! Why do you never wonder if Parris be innocent, or Abigail? Is the accuser always holy now? Were they born this morning as clean as God’s fingers?” (73). Aside from Chagnon, Dreger is about the only one who realized Tierney himself warranted some investigating.

Eakin echoes Horgan a bit when she faults the “zealous tone” of Dreger’s report. Indeed, at one point, Dreger compares Chagnon’s trial to Galileo’s being called before the Inquisition. The fact is, though, there’s an important similarity. One of the most revealing discoveries of Dreger’s investigation was that the members of the AAA task force knew Tierney’s book was full of false accusations but continued with their inquiry anyway because they were concerned about the organization’s public image. In an email to the sociobiologist Sarah Blaffer Hrdy, Jane Hill, the head of the task force, wrote,

Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice.

How John Horgan could have read this and still claimed that Dreger’s report “generates far more heat than light” is beyond me. I can only guess that his judgment has been distorted by cognitive dissonance.

To Horgan’s other complaints, that she writes too much about her methods and that she admits to having become friends with Chagnon, Dreger might respond that there is so much real hysteria surrounding this controversy, along with so much commentary reminiscent of the ridiculous rhetoric one hears on cable news, that it was important to distinguish her report from all the groundless and recriminatory he-said-she-said. As for the friendship, it came about over the course of her investigation. This is important because, for one, it doesn’t suggest any pre-existing bias, and, for another, one of the claims by critics of Chagnon’s work is that the violence he reported was either provoked by the man himself or represented some kind of mental projection of his own bellicose character onto the people he was studying.

Dreger’s friendship with Chagnon shows that he’s not the monster portrayed by those in the grip of moralizing hysteria. And if parts of her report strike many as sententious, it’s probably owing to their unfamiliarity with how ingrained that hysteria has become. It may seem odd that anyone would need to pronounce on the importance of evidence or fairness, but these are basic principles we usually take for granted, and they were trampled in the frenzy to condemn Chagnon.

If his enemies are going to compare him to Mengele, then a comparison with Galileo seems less extreme.

Dreger, it seems to me, deserves credit for bringing a sorely needed modicum of sanity to the discussion. And she deserves credit as well for being one of the few people commenting on the controversy who understand the devastating personal impact of such vile accusations. She writes,

Meanwhile, unlike Neel, Chagnon was alive to experience what it is like to be drawn-and-quartered in the international press as a Nazi-like experimenter responsible for the deaths of hundreds, if not thousands, of Yanomamö. He tried to describe to me what it is like to suddenly find yourself accused of genocide, to watch your life’s work be twisted into lies and used to burn you.

So let’s make it clear: the scientific controversy over sociobiology and the scandal over Tierney’s discredited book are two completely separate issues. In light of the findings from all the investigations of Tierney’s claims, we should all, no matter our theoretical leanings, agree that Darkness in El Dorado is, in the words of Jane Hill, who headed a task force investigating it, “just a piece of sleaze.” We should still discuss whether it was appropriate or advisable for Chagnon to exchange machetes for information—I’d be interested to hear what he has to say himself, since he describes all kinds of frustrations the practice caused him in his book. We should also still discuss the relative threat of contagion posed by ethnographers versus missionaries, weighed of course against the benefits of inoculation campaigns.

But we shouldn’t discuss any ethical or scientific matter with reference to Darkness in El Dorado or its disgraced author aside from questions like: Why was the hysteria surrounding the book allowed to go so far? Why were so many people willing to scapegoat Chagnon? Why doesn’t anyone—except Alice Dreger—seem at all interested in bringing Tierney to justice in some way for making such outrageous accusations based on misleading or fabricated evidence? What he did is far worse than what Jonah Lehrer or James Frey did, and yet both of those men have publicly acknowledged their dishonesty while no one has put even the slightest pressure on Tierney to publicly admit wrongdoing.

There’s some justice to be found in how easy Tierney and all the self-righteous pseudo-scholars like Sahlins have made it for future (and present) historians of science to cast them as deluded and unscrupulous villains in the story of a great—but flawed, naturally—anthropologist named Napoleon Chagnon. There’s also justice to be found in how snugly the hysterical moralizers’ tribal animosity toward Chagnon, their dehumanization of him, fits within a sociobiological framework of violence and warfare. One additional bit of justice might come from a demonstration of how easily Tierney’s accusatory pseudo-reporting can be turned inside-out. Tierney at one point in his book accuses Chagnon of withholding names that would disprove the central finding of his famous Science paper, and, reading into the fact that the ascendant theories Chagnon criticized were openly inspired by Karl Marx’s ideas, he writes,

Yet there was something familiar about Chagnon’s strategy of secret lists combined with accusations against ubiquitous Marxists, something that traced back to his childhood in rural Michigan, when Joe McCarthy was king. Like the old Yanomami unokais, the former senator from Wisconsin was in no danger of death. Under the mantle of Science, Tailgunner Joe was still firing away—undefeated, undaunted, and blessed with a wealth of off-spring, one of whom, a poor boy from Port Austin, had received a full portion of his spirit. (180)

Tierney had no evidence that Chagnon kept any data out of his analysis. Nor did he have any evidence regarding Chagnon’s ideas about McCarthy aside from what he thought he could divine from knowing where Chagnon grew up (he cited no surveys of opinion from the town either). His writing is so silly it would be laughable if we didn’t know about all the anguish it caused. Tierney might just as easily have tried to divine Chagnon’s feelings about McCarthyism based on his alma mater. It turns out Chagnon began attending classes at the University of Michigan, the school where he’d write the PhD dissertation that would become the classic anthropology text The Fierce People, just two decades after another famous alumnus had studied there, one who actually stood up to McCarthy at a time when he was enjoying the success of a historical play he’d written, an allegory on the dangers of moralizing hysteria, in particular the one we now call the Red Scare. His name was Arthur Miller.

Also read

Can't Win for Losing: Why There are So Many Losers in Literature and Why It Has to Change

And

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins

And

The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy


The Feminist Sociobiologist: An Appreciation of Sarah Blaffer Hrdy Disguised as a Review of “Mothers and Others: The Evolutionary Origins of Mutual Understanding”

Sarah Blaffer Hrdy’s book “Mother Nature” was one of the first things I ever read about evolutionary psychology. With her new book, “Mothers and Others,” Hrdy lays out a theory for why humans are so cooperative compared to their ape cousins. Once again, she’s managed to pen a work that will stand the test of time, rewarding multiple readings well into the future.

One way to think of the job of anthropologists studying human evolution is to divide it into two basic components: the first is to arrive at a comprehensive and precise catalogue of the features and behaviors that make humans different from the species most closely related to us, and the second is to arrange all these differences in order of their emergence in our ancestral line. Knowing what came first is essential—though not sufficient—to the task of distinguishing between causes and effects. For instance, humans have brains that are significantly larger than those of any other primate, and we use these brains to fashion tools that are far more elaborate than the stones, sticks, leaves, and sponges used by other apes. Humans are also the only living ape that routinely walks upright on two legs. Since most of us probably give pride of place in the hierarchy of our species’ idiosyncrasies to our intelligence, we can sympathize with early Darwinian thinkers who felt sure brain expansion must have been what started our ancestors down their unique trajectory, making possible the development of increasingly complex tools, which in turn made having our hands liberated from locomotion duty ever more advantageous.

This hypothetical sequence, however, was dashed rather dramatically with the discovery in Ethiopia in 1974 of Lucy, the 3.2 million-year-old skeleton of an Australopithecus afarensis. Lucy resembles a chimpanzee in most respects, including cranial capacity, except that her bones have all the hallmarks of a creature with a bipedal gait. Anthropologists like to joke that Lucy proved butts were more important to our evolution than brains. But, though intelligence wasn’t the first of our distinctive traits to evolve, most scientists still believe it was the deciding factor behind our current dominance. At least for now, humans go into the jungle and build zoos and research facilities to study apes, not the other way around. Other apes certainly can’t compete with humans in terms of sheer numbers. Still, intelligence is a catch-all term. We must ask what exactly it is that our bigger brains can do better than those of our phylogenetic cousins.

A couple decades ago, that key capacity was thought to be language, which makes symbolic thought possible. Or is it symbolic thought that makes language possible? Either way, though a handful of ape prodigies have amassed some high vocabulary scores in labs where they’ve been taught to use pictographs or sign language, human three-year-olds accomplish similar feats as a routine part of their development. As primatologist and sociobiologist (one of the few who unabashedly uses that term for her field) Sarah Blaffer Hrdy explains in her 2009 book Mothers and Others: The Evolutionary Origins of Mutual Understanding, human language relies on abilities and interests aside from a mere reporting on the state of the outside world, beyond simply matching objects or actions with symbolic labels. Honeybees signal the location of food with their dances, vervet monkeys have distinct signals for attacks by flying versus ground-approaching predators, and the list goes on. Where humans excel when it comes to language is not just in the realm of versatility, but also in our desire to bond through these communicative efforts. Hrdy writes,

The open-ended qualities of language go beyond signaling. The impetus for language has to do with wanting to “tell” someone else what is on our minds and learn what is on theirs. The desire to psychologically connect with others had to evolve before language. (38)

The question Hrdy attempts to answer in Mothers and Others—the difference between humans and other apes she wants to place within a theoretical sequence of evolutionary developments—is how we evolved to be so docile, tolerant, and nice as to be able to cram ourselves by the dozens into tight spaces like airplanes without conflict. “I cannot help wondering,” she recalls having thought in a plane preparing for flight,

what would happen if my fellow human passengers suddenly morphed into another species of ape. What if I were traveling with a planeload of chimpanzees? Any of us would be lucky to disembark with all ten fingers and toes still attached, with the baby still breathing and unmaimed. Bloody earlobes and other appendages would litter the aisles. Compressing so many highly impulsive strangers into a tight space would be a recipe for mayhem. (3)

Over the past decade, the human capacity for cooperation, and even for altruism, has been at the center of evolutionary theorizing. Some clever experiments in the field of economic game theory have revealed several scenarios in which humans can be counted on to act against their own interest. What survival and reproductive advantages could possibly accrue to creatures given to acting for the benefit of others?

When it comes to economic exchanges, of course, human thinking isn’t tied to the here-and-now the way the thinking of other animals tends to be. To explain why humans might, say, forgo a small payment in exchange for the opportunity to punish a trading partner for withholding a larger, fairer payment, many behavioral scientists point out that humans seldom think in terms of one-off deals. Any human living in a society of other humans needs to protect his or her reputation for not being someone who abides cheating. Experimental settings are well and good, but throughout human evolutionary history individuals could never have been sure they wouldn’t encounter exchange partners a second or third time in the future. It so happens that one of the dominant theories to explain ape intelligence relies on the need for individuals within somewhat stable societies to track who owes whom favors, who is subordinate to whom, and who can successfully deceive whom. This “Machiavellian intelligence” hypothesis explains the cleverness of humans and other apes as the outcome of countless generations vying for status and reproductive opportunities in intensely competitive social groups.

One of the difficulties in trying to account for the evolution of intelligence is that its advantages seem like such a no-brainer. Isn’t it always better to be smarter? But, as Hrdy points out, the Machiavellian intelligence hypothesis runs into a serious problem. Social competition may have been an important factor in making primates brainier than other mammals, but it can’t explain why humans are brainier than other apes. She writes,

We still have to explain why humans are so much better than chimpanzees at conceptualizing what others are thinking, why we are born innately eager to interpret their motives, feelings, and intentions as well as care about their affective states and moods—in short, why humans are so well equipped for mutual understanding. Chimpanzees, after all, are at least as socially competitive as humans are. (46)

To bolster this point, Hrdy cites research showing that infant chimps have some dazzling social abilities once thought to belong solely to humans. In 1977, developmental psychologist Andrew Meltzoff published his finding that newborn humans mirror the facial expressions of adults they engage with. It was thought that this tendency in humans relied on some neurological structures unique to our lineage which provided the raw material for the evolution of our incomparable social intelligence. But then in 1996 primatologist Masako Myowa replicated Meltzoff’s findings with infant chimps. This and other research suggests that other apes have probably had much the same raw material for natural selection to act on. Yet, whereas the imitative and empathic skills flourish in maturing humans, they seem to atrophy in apes. Hrdy explains,

Even though other primates are turning out to be far better at reading intentions than primatologists initially realized, early flickerings of empathic interest—what might even be termed tentative quests for intersubjective engagement—fade away instead of developing and intensifying as they do in human children. (58)

So the question of what happened in human evolution to make us so different remains.

*****

Sarah Blaffer Hrdy exemplifies a rare, possibly unique, blend of scientific rigor and humanistic sensitivity—the vision of a great scientist and the fine observation of a novelist (or the vision of a great novelist and fine observation of a scientist). Reading her 1999 book, Mother Nature: A History of Mothers, Infants, and Natural Selection, was a watershed experience for me. In going beyond the realm of the literate into that of the literary while hewing closely to strict epistemic principle, she may surpass the accomplishments of even such great figures as Richard Dawkins and Stephen Jay Gould. In fact, since Mother Nature was one of the books through which I was introduced to sociobiology—more commonly known today as evolutionary psychology—I was a bit baffled at first by much of the criticism leveled against the field by Gould and others who claimed it was founded on overly simplistic premises and often produced theories that were politically reactionary.

The theme to which Hrdy continually returns is the too-frequently overlooked role of women and their struggles in those hypothetical evolutionary sequences anthropologists string together. For inspiration in her battle against facile biological theories whose sole purpose is to provide a cheap rationale for the political status quo, she turned, not to a scientist, but to a novelist. The man most responsible for the misapplication of Darwin’s theory of natural selection to the justification of human societal hierarchies was the philosopher Herbert Spencer, in whose eyes women were no more than what Hrdy characterizes as “Breeding Machines.” Spencer and his fellow evolutionists in the Victorian age, she explains in Mother Nature,

took for granted that being female forestalled women from evolving “the power of abstract reasoning and that most abstract of emotions, the sentiment of justice.” Predestined to be mothers, women were born to be passive and noncompetitive, intuitive rather than logical. Misinterpretations of the evidence regarding women’s intelligence were cleared up early in the twentieth century. More basic difficulties having to do with this overly narrow definition of female nature were incorporated into Darwinism proper and linger to the present day. (17)

Many women over the generations have been unable to envision a remedy for this bias in biology. Hrdy describes the reaction of a literary giant whose lead many have followed.

For Virginia Woolf, the biases were unforgivable. She rejected science outright. “Science, it would seem, is not sexless; she is a man, a father, and infected too,” Woolf warned back in 1938. Her diagnosis was accepted and passed on from woman to woman. It is still taught today in university courses. Such charges reinforce the alienation many women, especially feminists, feel toward evolutionary theory and fields like sociobiology. (xvii)

But another literary luminary much closer to the advent of evolutionary thinking had a more constructive, and combative, response to short-sighted male biologists. And it is to her that Hrdy looks for inspiration. “I fall in Eliot’s camp,” she writes, “aware of the many sources of bias, but nevertheless impressed by the strength of science as a way of knowing” (xviii). She explains that George Eliot,

whose real name was Mary Ann Evans, recognized that her own experiences, frustrations, and desires did not fit within the narrow stereotypes scientists then prescribed for her sex. “I need not crush myself… within a mould of theory called Nature!” she wrote. Eliot’s primary interest was always human nature as it could be revealed through rational study. Thus she was already reading an advance copy of On the Origin of Species on November 24, 1859, the day Darwin’s book was published. For her, “Science has no sex… the mere knowing and reasoning faculties, if they act correctly, must go through the same process and arrive at the same result.” (xvii)

Eliot’s distaste for Spencer’s idea that women’s bodies were designed to divert resources away from the brain to the womb was as personal as it was intellectual. She had in fact met and quickly fallen in love with Spencer in 1851. She went on to send him a proposal which he rejected on eugenic grounds: “…as far as posterity is concerned,” Hrdy quotes, “a cultivated intelligence based upon a bad physique is of little worth, seeing that its descendants will die out in a generation or two.” Eliot’s retort came in the form of a literary caricature—though Spencer already seems a bit like his own caricature. Hrdy writes,

In her first major novel, Adam Bede (read by Darwin as he relaxed after the exertions of preparing Origin for publication), Eliot put Spencer’s views concerning the diversion of somatic energy into reproduction in the mouth of a pedantic and blatantly misogynist old schoolmaster, Mr. Bartle: “That’s the way with these women—they’ve got no head-pieces to nourish, and so their food all runs either to fat or brats.” (17)

A mother of three and a professor emerita of anthropology at the University of California, Davis, Hrdy is eloquent on the need for intelligence—and lots of familial and societal support—if one is to balance duties and ambitions like her own. Her first contribution to ethology came when she realized that the infanticide among Hanuman langurs, which she’d gone to Mount Abu in Rajasthan, India, to study at age 26 for her doctoral thesis, had nothing to do with overpopulation, as many suspected. Instead, the pattern she observed was that whenever an outside male deposed a group’s main breeder, he immediately began exterminating all of the prior male’s offspring to induce the females to ovulate and give birth again—this time to the new male’s offspring. This was the selfish gene theory in action. But the females Hrdy was studying had an interesting response to this strategy.

In the early 1970s, it was still widely assumed by Darwinians that females were sexually passive and “coy.” Female langurs were anything but. When bands of roving males approached the troop, females would solicit them or actually leave their troop to go in search of them. On occasion, a female mated with invaders even though she was already pregnant and not ovulating (something else nonhuman primates were not supposed to do). Hence, I speculated that mothers were mating with outside males who might take over her troop one day. By casting wide the web of possible paternity, mothers could increase the prospects of future survival of offspring, since males almost never attack infants carried by females that, in the biblical sense of the word, they have “known.” Males use past relations with the mother as a cue to attack or tolerate her infant. (35)

Hrdy would go on to discover this was just one of myriad strategies primate females use to get their genes into future generations. The days of seeing females as passive vehicles while the males duke it out for evolutionary supremacy were now numbered.

I’ll never forget the Young-Goodman-Brown experience of reading the twelfth chapter of Mother Nature, titled “Unnatural Mothers,” which covers an impressive variety of evidence sources and simply devastates any notion of women as nurturing automatons, evolved for the sole purpose of serving as loving mothers. The verdict researchers arrive at whenever they take an honest look into the practices of women with newborns is that care is contingent. To give just one example, Hrdy cites the history of one of the earliest foundling homes in the world, the “Hospital of the Innocents” in Florence.

Founded in 1419, with assistance from the silk guilds, the Ospedale degli Innocenti was completed in 1445. Ninety foundlings were left there the first year. By 1539 (a famine year), 961 babies were left. Eventually five thousand infants a year poured in from all corners of Tuscany. (299)

What this means is that a troubling number of new mothers were realizing they couldn't care for their infants. Unfortunately, newborns without direct parental care seldom fare well. “Of 15,000 babies left at the Innocenti between 1755 and 1773,” Hrdy reports, “two thirds died before reaching their first birthday” (299). And there were fifteen other foundling homes in the Grand Duchy of Tuscany at the time.

The chapter amounts to a worldwide tour of infant abandonment, exposure, or killing. (I remember having a nightmare after reading it about being off-balance and unable to set a foot down without stepping on a dead baby.) Researchers studying sudden infant death syndrome in London set up hidden cameras to monitor mothers interacting with their babies but ended up videotaping some of the mothers trying to smother their infants. Cases like this have made it necessary for psychiatrists to warn doctors studying the phenomenon “that some undeterminable portion of SIDS cases might be infanticides” (292). Why do so many mothers abandon or kill their babies? Turning to the ethnographic data, Hrdy explains,

Unusually detailed information was available for some dozen societies. At a gross level, the answer was obvious. Mothers kill their own infants where other forms of birth control are unavailable. Mothers were unwilling to commit themselves and had no way to delegate care of the unwanted infant to others—kin, strangers, or institutions. History and ecological constraints interact in complex ways to produce different solutions to unwanted births. (296)

Many scholars see the contingent nature of maternal care as evidence that motherhood is nothing but a social construct. Consistent with the blank-slate view of human nature, this theory holds that every aspect of child-rearing, whether pertaining to the roles of mothers or fathers, is determined solely by culture and therefore must be learned. Others, who simply can’t let go of the idea of women as virtuous vessels, suggest that these women, as numerous as they are, must all be deranged.

Hrdy demolishes both the purely social constructivist view and the suggestion of pathology. And her account of the factors that lead women to infanticide goes to the heart of her arguments about the centrality of female intelligence in the history of human evolution. Citing the pioneering work of evolutionary psychologists Martin Daly and Margo Wilson, Hrdy writes,

How a mother, particularly a very young mother, treats one infant turns out to be a poor predictor of how she might treat another one born when she is older, or faced with improved circumstances. Even with culture held constant, observing modern Western women all inculcated with more or less the same post-Enlightenment values, maternal age turned out to be a better predictor of how effective a mother would be than specific personality traits or attitudes. Older women describe motherhood as more meaningful, are more likely to sacrifice themselves on behalf of a needy child, and mourn lost pregnancies more than do younger women. (314)

The takeaway is that a woman, to reproduce successfully, must assess her circumstances, including the level of support she can count on from kin, dads, and society. If she lacks the resources or the support necessary to raise the child, she may have to make a hard decision. But making that decision in the present unfavorable circumstances in no way precludes her from making the most of future opportunities to give birth to other children and raise them to reproductive age.

Hrdy goes on to describe an experimental intervention that took place in a hospital located across the street from a foundling home in 17th century France. The Hospice des Enfants Assistes cared for indigent women and assisted them during childbirth. It was the only place where poor women could legally abandon their babies. What the French reformers did was tell a subset of the new mothers that they had to stay with their newborns for eight days after birth.

Under this “experimental” regimen, the proportion of destitute mothers who subsequently abandoned their babies dropped from 24 to 10 percent. Neither cultural concepts about babies nor their economic circumstances had changed. What changed was the degree to which they had become attached to their breast-feeding infants. It was as though their decision to abandon their babies and their attachment to their babies operated as two different systems. (315)

Following the originator of attachment theory, John Bowlby, who set out to integrate psychiatry and developmental psychology into an evolutionary framework, Hrdy points out that the emotions underlying the bond between mothers and infants (and fathers and infants too) are as universal as they are consequential. Indeed, the mothers who are forced to abandon their infants have to be savvy enough to realize they have to do so before these emotions are engaged or they will be unable to go through with the deed.

Female strategy plays a crucial role in reproductive outcomes in several domains beyond the choice of whether or not to care for infants. Women must form bonds with other women for support, procure the protection of men (usually from other men), and lay the groundwork for their children’s own future reproductive success. And that’s just what women have to do before choosing a mate—a task that involves striking a balance between good genes and a high level of devotion—getting pregnant, and bringing the baby to term. The demographic transition that occurs when an agrarian society becomes increasingly industrialized is characterized at first by huge population increases as infant mortality drops but then levels off as women gain more control over their life trajectories. Here again, the choices women tend to make are at odds with Victorian (and modern evangelical) conceptions of their natural proclivities. Hrdy writes,

Since, formerly, status and well-being tended to be correlated with reproductive success, it is not surprising that mothers, especially those in higher social ranks, put the basics first. When confronted with a choice between striving for status and striving for children, mothers gave priority to status and “cultural success” ahead of a desire for many children. (366)

And then of course come all the important tasks and decisions associated with actually raising any children the women eventually do give birth to. One of the basic skill sets women have to master to be successful mothers is making and maintaining friendships; they must be socially savvy because, more than with any other ape, the support of helpers, whom Hrdy calls allomothers, will determine the fate of their offspring.

*****

Mother Nature is a massive work—541 pages before the endnotes—exploring motherhood through the lens of sociobiology and attachment theory. Mothers and Others is leaner, coming in at just under 300 pages, because its focus is narrower. Hrdy feels that, in the attempts over the past decade to account for humans’ prosocial impulses, the role of women and motherhood has once again been scanted. She points to the prevalence of theories focusing on competition between groups, with the edge going to those made up of the most cooperative and cohesive members. Such theories once again give the leading role to males and their conflicts, leaving half the species out of the story—unless that other half’s only role is to tend to the children and forage for food while the “band of brothers” is out heroically securing borders.

Hrdy doesn’t weigh in directly on the growing controversy over whether group selection has operated as a significant force in human evolution. The problem she sees with intertribal warfare as an explanation for human generosity and empathy is that the timing isn’t right. What Hrdy is after are the selection pressures that led to the evolution of what she calls “emotionally modern humans,” the “people preadapted to get along with one another even when crowded together on an airplane” (66). And she argues that humans must have been emotionally modern before they could have further evolved to be cognitively modern. “Brains require care more than caring requires brains” (176). Her point is that bonds of mutual interest and concern came before language and the capacity for runaway inventiveness. Humans, Hrdy maintains, would have had to begin forming these bonds long before the effects of warfare were felt.

Apart from periodic increases in unusually rich locales, most Pleistocene humans lived at low population densities. The emergence of human mind reading and gift-giving almost certainly preceded the geographic spread of a species whose numbers did not begin to really expand until the past 70,000 years. With increasing population density (made possible, I would argue, because they were already good at cooperating), growing pressure on resources, and social stratification, there is little doubt that groups with greater internal cohesion would prevail over less cooperative groups. But what was the initial payoff? How could hypersocial apes evolve in the first place? (29)

In other words, what was it that took inborn capacities like mirroring an adult’s facial expressions, present in both human and chimp infants, and through generations of natural selection developed them into the intersubjective tendencies displayed by humans today?

Like so many other anthropologists before her, Hrdy begins her attempt to answer this question by pointing to a trait present in humans but absent in our fellow apes. “Under natural conditions,” she writes, “an orangutan, chimpanzee, or gorilla baby nurses for four to seven years and at the outset is inseparable from his mother, remaining in intimate front-to-front contact 100 percent of the day and night” (68). But humans allow others to participate in the care of their babies almost immediately after giving birth to them. Who besides Sarah Blaffer Hrdy would have noticed this difference, or given it more than a passing thought? (Actually, there are quite a few candidates among anthropologists—Kristen Hawkes for instance.) Ape mothers remain in constant contact with their infants, whereas human mothers often hand over their babies to other women to hold as soon as they emerge from the womb. The difference goes far beyond physical contact. Humans are what Hrdy calls “cooperative breeders,” meaning a child will in effect have several parents aside from the primary one. Help from alloparents opens the way for an increasingly lengthy development, which is important because the more complex the trait—and human social intelligence is about as complex as they come—the longer it takes to develop in maturing individuals. Hrdy writes,

One widely accepted tenet of life history theory is that, across species, those with bigger babies relative to the mother’s body size will also tend to exhibit longer intervals between births because the more babies cost the mother to produce, the longer she will need to recoup before reproducing again. Yet humans—like marmosets—provide a paradoxical exception to this rule. Humans, who of all the apes produce the largest, slowest-maturing, and most costly babies, also breed the fastest. (101)

Those marmosets turn out to be central to Hrdy’s argument because, along with the tamarins, their cousins in the family Callitrichidae, they make up almost the totality of the primate species she classifies as “full-fledged cooperative breeders” (92). These and other similarities between humans, marmosets, and tamarins have long been overlooked because anthropologists have understandably been focused on the great apes, as well as other common research subjects like baboons and macaques.

Golden Lion Tamarins, by Sarah Landry

Callitrichidae, it so happens, engage in some uncannily human-like behaviors. Plenty of primate babies wail and shriek when they’re in distress, but infants who are frequently not in direct contact with their mothers would have to find a way to engage with them, as well as other potential caregivers, even when they aren’t in any trouble. “The repetitive, rhythmical vocalizations known as babbling,” Hrdy points out, “provided a particularly elaborate way to accomplish this” (122). But humans aren’t the only primates that babble, “if by babble we mean repetitive strings of adultlike vocalizations uttered without vocal referents”; marmosets and tamarins do it too. Some of the other human-like patterns aren’t as cute, though. Hrdy writes,

Shared care and provisioning clearly enhances maternal reproductive success, but there is also a dark side to such dependence. Not only are dominant females (especially pregnant ones) highly infanticidal, eliminating babies produced by competing breeders, but tamarin mothers short on help may abandon their own young, bailing out at birth by failing to pick up neonates when they fall to the ground or forcing clinging newborns off their bodies, sometimes even chewing on their hands or feet. (99)

It seems that the more cooperative infant care tends to be for a given species, the more conditional it is: the more likely it will be refused when the necessary support of others can’t be counted on.

Hrdy’s cooperative breeding hypothesis is an outgrowth of George Williams and Kristen Hawkes’s so-called “Grandmother Hypothesis.” For Hawkes, the important difference between humans and apes is that human females go on living for decades after menopause, whereas very few female apes—or any other mammals for that matter—live past their reproductive years. Hawkes hypothesized that the help of grandmothers made possible ever longer periods of dependent development for children, which in turn made it possible for the incomparable social intelligence of humans to evolve. Until recently, though, this theory was unconvincing to anthropologists because a renowned compendium of data compiled by George Peter Murdock in his Ethnographic Atlas revealed a strong trend toward patrilocal residence across the societies that had been studied. Since grandmothers are thought to be much more likely to help care for their daughters’ children than their sons’—owing to paternity uncertainty—the fact that most humans raise their children far from maternal grandmothers made any evolutionary role for grandmothers seem unlikely.

But then in 2004 anthropologist Helen Alvarez reexamined Murdock’s analysis of residence patterns and concluded that pronouncements about widespread patrilocality were based on a great deal of guesswork. After eliminating societies for which too little evidence existed to determine the nature of their residence practices, Alvarez calculated that the majority of the remaining societies were bilocal, which means couples move back and forth between the mother’s and the father’s groups. Citing “The Alvarez Corrective” and other evidence, Hrdy concludes,

Instead of some highly conserved tendency, the cross-cultural prevalence of patrilocal residence patterns looks less like an evolved human universal than a more recent adaptation to post-Pleistocene conditions, as hunters moved into northern climes where women could no longer gather wild plants year-round or as groups settled into circumscribed areas. (246)

Hrdy extends the cast of alloparents beyond grandmothers to include a mother’s preadult daughters, as well as fathers and their extended families, although the male contribution is highly variable across cultures (and variable too, of course, among individual men).

With the observation that human infants rely on multiple caregivers throughout development, Hrdy suggests that the mystery of why selection favored the retention and elaboration of mind-reading skills in humans but not in other apes can be solved by considering the life-and-death stakes for human babies trying to understand the intentions of mothers and others. She writes,

Babies passed around in this way would need to exercise a different skill set in order to monitor their mothers’ whereabouts. As part of the normal activity of maintaining contact both with their mothers and with sympathetic alloparents, they would find themselves looking for faces, staring at them, and trying to read what they reveal. (121)

Mothers, of course, would also have to be able to read the intentions of others whom they might consider handing their babies over to. So the selection pressure occurs on both sides of the generational divide. And now that she’s proposed her candidate for the single most pivotal transition in human evolution, Hrdy’s next task is to place it in a sequence of other important evolutionary developments.

Without a doubt, highly complex coevolutionary processes were involved in the evolution of extended lifespans, prolonged childhoods, and bigger brains. What I want to stress here, however, is that cooperative breeding was the pre-existing condition that permitted the evolution of these traits in the hominin line. Creatures may not need big brains to evolve cooperative breeding, but hominins needed shared care and provisioning to evolve big brains. Cooperative breeding had to come first. (277)

*****

Flipping through Mother Nature, a book I first read over ten years ago, I can feel some of the excitement I must have experienced as a young student of behavioral science, having graduated from the pseudoscience of Freud and Jung to the more disciplined—and in its way far more compelling—efforts of John Bowlby. I was on a path, I was sure, to becoming a novelist, and I was setting off into this newly emerging field with the help of a great scientist who saw the value of incorporating literature and art into her arguments, not merely as incidental illustrations retrofitted to recently proposed principles, but as sources of data in their own right, and even as inspiration potentially lighting the way to future discovery. To perceive, to comprehend, we must first imagine. And stretching the mind to dimensions never before imagined is what art is all about.

Yet there is an inescapable drawback to massive books like Mother Nature—for writers and readers alike—which is that any effort to grasp and convey such a vast array of findings and theories comes with the risk of casual distortion, since the minutiae mastered by the experts in any subdiscipline will almost inevitably be heeded insufficiently in the attempt to conscript what appear to be basic points into the service of a broader perspective. Even more discouraging is the certainty that any intricate tapestry woven of myriad empirical threads will eventually be unraveled by ongoing research. Your tapestry is really a snapshot, taken from a distance, of a field in flux, and no sooner does the shutter close than the beast continues along the path of its stubbornly unpredictable evolution.

When Mothers and Others was published just four years ago in 2009, for instance, reasoning based on the theory of kin selection led most anthropologists to assume, as Hrdy states, that “forager communities are composed of flexible assemblages of close and more distant blood relations and kin by marriage” (132).

This assumption seems to have been central to the thinking that led to the principal theory she lays out in the book, as she explains that “in foraging contexts the majority of children alloparents provision are likely to be cousins, nephews, and nieces rather than unrelated children” (158). But as theories evolve, old assumptions come under new scrutiny, and in an article published in the journal Science in March of 2011, anthropologist Kim Hill and his colleagues report that, after analyzing the residence and relationship patterns of 32 modern foraging societies, they conclude that “most individuals in residential groups are genetically unrelated” (1286). In science, two years can make a big difference. This same study does, however, bolster a different pillar of Hrdy’s argument by demonstrating that men relocate to their wives’ groups as often as women relocate to their husbands’, lending further support to Alvarez’s corrective of Murdock’s data.

Even if every last piece of evidence she marshals in her case for how pivotal the transition to cooperative breeding was in the evolution of mutual understanding in humans is overturned, Hrdy’s painstaking efforts to develop her theory and lay it out so comprehensively, so compellingly, and so artfully, will not have been wasted. Darwin once wrote that “all observation must be for or against some view to be of any service,” but many scientists, trained as they are to keep their eyes on the data and to avoid the temptation of building grand edifices on foundations of inference and speculation, look askance at colleagues who dare to comment publicly on fields outside their specialties, especially in cases like Jared Diamond’s where their efforts end up winning them Pulitzers and guaranteed audiences for their future works.

But what use are legions of researchers with specialized knowledge hermetically partitioned by narrowly focused journals and conferences of experts with homogeneous interests? Science is contentious by nature, so whenever a book gains renown among a nonscientific audience we can count on groaning from the author’s colleagues as they rush to assure us what we’ve read is a misrepresentation of their field. But stand-alone findings, no matter how numerous, no matter how central they are to researchers’ daily concerns, can’t compete with the grand holistic visions of the Diamonds, Hrdys, or Wilsons, imperfect and provisional as they must be, when it comes to inspiring the next generation of scientists. Nor can any number of correlation coefficients or regression analyses spark anything like the same sense of wonder that comes from even a glimmer of understanding about how a new discovery fits within, and possibly transforms, our conception of life and the universe in which it evolved. The trick, I think, is to read and ponder books like the ones Sarah Blaffer Hrdy writes as soon as they’re published—but to be prepared all the while, as soon as you’re finished reading them, to read and ponder the next one, and the one after that.

Also read:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

Dennis Junk

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education

Maria Konnikova’s book “Mastermind: How to Think Like Sherlock Holmes” got me really excited because when the science of psychology is brought up in discussions of literature at all, it’s usually by way of the pseudoscience of Sigmund Freud. Konnikova, whose blog went a long way toward remedying that tragedy, wanted to offer up an alternative approach. However, though the book shows great promise, it’s ultimately disappointing.

Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.

In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knew that the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.

What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.

There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,

the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.

The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s razor rather than Freud’s ideology, at once insanely elaborate and absurdly reductionist, we might guess that what prompted Tarantino to let the camera linger discomfortingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”

The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.

I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible.

Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.

The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether or not a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.

Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.

Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.

In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,

learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)

This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.

It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. One of the most widely known demonstrations of this kind of effect is the finding that chess experts have much better recall for the positions of pieces on a board than novices do. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?

The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement of curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.

If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes,

How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)

So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980s. People who read a description of a man named Bill suggesting he lacks imagination tended to feel it was less likely that Bill was an accountant than that he was an accountant who plays jazz for a hobby—even though the two points of information in that second description make it inherently less likely than the one point of information in the first. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People judged the two-item option to be more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
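
For what it’s worth, the probability rule the respondents were violating can be stated in a single line. This is my own minimal formalization, using the Linda problem’s labels, not anything drawn from Konnikova’s book or Kahneman’s:

$$ P(\text{bank teller} \wedge \text{active feminist}) \leq P(\text{bank teller}) $$

Every version of Linda who is both a bank teller and an active feminist is also, trivially, a bank teller, so the conjunction can never be the more probable option, no matter how well it fits the description.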

The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (referring to Tom instead of Bill) can seem more likely, despite the actual probabilities, than shorter ones. He writes,

The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)

So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:

Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)

But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.

As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged based on the likelihood that readers will come away with a greater ability to recognize and avoid cognitive biases, as well as the ability to sustain the conviction to stay motivated and remain alert. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.

Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings in that domain, and these same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly, organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,

How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?

The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates to creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.

Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,

Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)

The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).

That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character casts a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:

Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.

But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:

You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)

Hard to disagree, isn’t it?

To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says: he got tricked because he didn’t do what he knew how to do; it can happen to any of us; so be careful you don’t let it happen to you. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.

Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. The tragic blunder she made was eschewing nuanced examinations of how stories work, how people relate to characters, and how authors create them, in favor of a shallow and one-dimensional attempt at suggesting a hundred-year-old fictional character somehow divined groundbreaking research findings from the end of the twentieth and the beginning of the twenty-first centuries. It calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.

You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.

Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.

A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”

I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.

Also read

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

And

THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS

And

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

Also apropos is

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

Sweet Tooth is a Strange Loop: An Aid to Some of the Dimmer Reviewers of Ian McEwan's New Novel

Ian McEwan is one of my literary heroes. “Atonement” and “Saturday” are among my favorite books. But a lot of readers trip over the more experimental aspects of his work. With “Sweet Tooth,” he once again offers up a gem of a story, one that a disconcerting number of reviewers missed the point of.

(I've done my best to avoid spoilers.)

Anytime a character in Ian McEwan’s new novel Sweet Tooth enthuses about a work of literature, another character can be counted on to come along and pronounce that same work dreadful. So there’s a delightful irony in the declaration at the end of a silly review in The Irish Independent, which begins by begrudging McEwan his “reputation as the pulse-taker of the social and political Zeitgeist,” that the book’s ending “might be enough to send McEwan acolytes scurrying back through the novel to see how he did it, but it made me want to throw the book out the window.” Citing McEwan’s renown among the reading public before gleefully launching into critiques that are as difficult to credit as they are withering seems to be the pattern. The notice in The Economist, for instance, begins,

At 64, with a Hollywood film, a Man Booker prize and a gong from the queen, Ian McEwan has become a grand old man of British letters. Publication of his latest novel, “Sweet Tooth”, was announced on the evening news. A reading at the Edinburgh book festival was introduced by none other than the first minister, Alex Salmond.

But, warns the unnamed reviewer, “For all the attendant publicity, ‘Sweet Tooth’ is not Mr. McEwan’s finest book.” My own take on the novel—after seeking out all the most negative reviews I could find (most of them are positive)—is that the only readers who won’t appreciate it, aside from the reviewers who can’t stand how much the reading public celebrates McEwan’s offerings, are the ones whose prior convictions about what literature is and what it should do blind them to even the possibility that a novel can successfully operate on as many levels as McEwan folds into his narrative. For these readers, the mere fact of an author’s moving from one level to the next somehow invalidates whatever gratification they got from the most straightforward delivery of the story.

At the most basic level, Sweet Tooth is the first-person account of how Serena Frome is hired by MI5 and assigned to pose as a representative for an arts foundation offering the writer Thomas Haley a pension that will allow him to quit his teaching job so he can work on a novel. The book’s title refers to the name of a Cold War propaganda initiative to support authors whose themes Serena’s agency superiors expect will bolster the cause of the Non-Communist Left. Though Sweet Tooth is fictional, there actually were programs like it that supported authors like George Orwell. Serena is an oldest-sibling type, with an appreciation for the warm security of established traditions and longstanding institutions, along with an attraction to, and an eagerness to please, authority figures. These are exactly the traits that lead to her getting involved in the project of approaching Tom under false pretenses, an arrangement which becomes a serious dilemma for her as the two begin a relationship and she falls deeper and deeper in love with him. Looking back on the affair at the peak of the tension, she admits,

For all the mess I was in, I didn’t know how I could have done things differently. If I hadn’t joined MI5, I wouldn’t have met Tom. If I’d told him who I worked for at our very first meeting—and why would I tell a stranger that?—he would’ve shown me the door. At every point along the way, as I grew fonder of him, then loved him, it became harder, riskier to tell him the truth even as it became more important to do so. (266)

This plot has many of the markings of genre fiction, the secret-burdened romance, the spy thriller. But even on this basic level there’s a crucial element separating the novel from its pulpier cousins: the stakes are actually quite low. The nation isn’t under threat. No one’s life is in danger. The risks are only to jobs and relationships.

James Lasdun, in an otherwise favorable review for The Guardian, laments these low stakes, suggesting that the novel’s early references to big political issues of the 1970s lead readers to the thwarted expectation of more momentous themes. He writes,

I couldn't help feeling like Echo in the myth when Narcissus catches sight of himself in the pool. “What about the IRA?” I heard myself bleating inwardly as the book began fixating on its own reflection. What about the PLO? The cold war? Civilisation and barbarity? You promised!

But McEwan really doesn’t make any such promise in the book’s opening. Lasdun simply makes the mistake of anticipating Sweet Tooth will be more like McEwan’s earlier novel Saturday. In fact, the very first lines of the book reveal what the main focus of the story will be:

My name is Serena Frome (rhymes with plume) and almost forty years ago I was sent on a secret mission for the British Security Service. I didn’t return safely. Within eighteen months of joining I was sacked, having disgraced myself and ruined my lover, though he certainly had a hand in his own undoing. (1)

That “I didn’t return safely” sets the tone—overly dramatic, mock-heroic, but with a smidgen of self-awareness that suggests she’s having some fun at her own expense. Indeed, all the book’s promotional material referring to her as a spy notwithstanding, Serena is more of a secretary or a clerk than a secret agent. Her only field mission is to offer funds to a struggling writer, not exactly worthy of Ian Fleming.

When Lasdun finally begins to pick up on the lighthearted irony and the overall impish tone of the novel, his disappointment has him admitting that all the playfulness is enjoyable but longing nonetheless for it to serve some greater end. Such longing betrays a remarkable degree of obliviousness to the fact that the final revelation of the plot actually does serve an end, a quite obvious one. Lasdun misses it, apparently because the point is moral as opposed to political. A large portion of the novel’s charm stems from the realization, which I’m confident most readers will come to early on, that Sweet Tooth, for all the big talk about global crises and intrigue, is an intimately personal story about a moral dilemma and its outcomes—at least at the most basic level. The novel’s scope expands beyond this little drama by taking on themes that present various riddles and paradoxes. But whereas countless novels in the postmodern tradition have us taking for granted that literary riddles won’t have answers and plot paradoxes won’t have points, McEwan is going for an effect that’s much more profound.

The most serious criticism I came across was at the end of the Economist review. The unnamed critic doesn’t appreciate the surprise revelation that comes near the end of the book, insisting that afterward, “it is hard to feel much of anything for these heroes, who are all notions and no depth.” What’s interesting is that the author presents this not as an observation but as a logical conclusion. I’m aware of how idiosyncratic responses to fictional characters are, and I accept that my own writing here won’t settle the issue, but I suspect most readers will find the assertion that Sweet Tooth’s characters are “all notions” absurd. I even have a feeling that the critic him- or herself sympathized with Serena right up until the final chapter—as the critic from The Irish Independent must have. Why else would they be so frustrated as to want to throw the book out of the window? Several instances of Serena jumping into life from the page suggest themselves for citation, but here’s one I found particularly endearing. It comes as she’s returning to her parents’ house for Christmas after a long absence and is greeted by her father, an Anglican Bishop, at the door:

“Serena!” He said my name with a kindly, falling tone, with just a hint of mock surprise, and put his arms about me. I dropped my bag at my feet and let myself be enfolded, and as I pressed my face into his shirt and caught the familiar scent of Imperial Leather soap, and of church—of lavender wax—I started to cry. I don’t know why, it just came from nowhere and I turned to water. I don’t cry easily and I was as surprised as he was. But there was nothing I could do about it. This was the copious hopeless sort of crying you might hear from a tired child. I think it was his voice, the way he said my name, that set me off. (217)

This scene reminds me of when I heard my dad had suffered a heart attack several years ago: even though at the time I was so pissed off at the man I’d been telling myself I’d be better off never seeing him again, I barely managed two steps after hanging up the phone before my knees buckled and I broke down sobbing—so deep are these bonds we carry on into adulthood even when we barely see our parents, so shocking when their strength is made suddenly apparent. (Fortunately, my dad recovered after a quintuple bypass.)

But, if the critic for the Economist concluded that McEwan’s characters must logically be mere notions despite having encountered them as real people until the end of the novel, what led to that clearly mistaken deduction? I would be willing to wager that McEwan shares with me a fondness for the writing of the cognitive scientist Douglas Hofstadter, in particular Gödel, Escher, Bach and I Am a Strange Loop, both of which set about arriving at an intuitive understanding of the mystery of how consciousness arises from the electrochemical mechanisms of our brains, offering as analogies several varieties of paradoxical, self-referencing feedback loops, like cameras pointing at the TV screens they feed into. What McEwan has engineered—there’s no better word for it—with his novel is a multilevel, self-referential structure that transforms and transcends its own processes and premises as it folds back on itself.

One of the strange loops Hofstadter explores, M.C. Escher’s 1960 lithograph Ascending and Descending, can give us some helpful guidance in understanding what McEwan has done. If you look at the square staircase in Escher’s lithograph a section at a time, you see that each step continues either ascending or descending, depending on the point of view you take. And, according to Hofstadter in Strange Loop,

A person is a point of view—not only a physical point of view (looking out of certain eyes in a certain physical space in the universe), but more importantly a psyche’s point of view: a set of hair-trigger associations rooted in a huge bank of memories. (234)

Importantly, many of those associations are made salient with emotions, so that certain thoughts affect us in powerful ways we might not be able to anticipate, as when Serena cries at the sound of her father’s voice, or when I collapsed at the news of my father’s heart attack. These emotionally tagged thoughts form a strange loop when they turn back on the object, now a subject, doing the thinking. The neuron forms the brain that produces the mind that imagines the neuron, in much the same way as each stair in the picture takes a figure both up and down the staircase. What happened for the negative reviewers of Sweet Tooth is that they completed a circuit of the stairs and realized they couldn’t possibly have been going up (or down), even though at each step along the way they were probably convinced.

McEwan, interviewed by Daniel Zalewski for the New Yorker in 2009, said, “When I’m writing I don’t really think about themes,” admitting that instead he follows Nabokov’s dictum to “Fondle details.”

Writing is a bottom-up process, to borrow a term from the cognitive world. One thing that’s missing from the discussion of literature in the academy is the pleasure principle. Not only the pleasure of the reader but also of the writer. Writing is a self-pleasuring act.

The immediate source of pleasure then for McEwan, and he probably assumes for his readers as well, comes at the level of the observations and experiences he renders through prose.

Sweet Tooth is full of great lines like, “Late October brought the annual rite of putting back the clocks, tightening the lid of darkness over our afternoons, lowering the nation’s mood further” (179). But McEwan would know quite well that writing is also a top-down process; at some point themes and larger conceptual patterns come into play. In his novel Saturday, the protagonist, a neurosurgeon named Henry Perowne, is listening to Angela Hewitt’s performance of Bach’s strangely loopy “Goldberg” Variations. He writes,

Well over an hour has passed, and Hewitt is already at the final Variation, the Quodlibet—uproarious and jokey, raunchy even, with its echoes of peasant songs of food and sex. The last exultant chords fade away, a few seconds’ silence, then the Aria returns, identical on the page, but changed by all the variations that have come before. (261-2)

Just as an identical Aria or the direction of ascent or descent in an image of stairs can be transformed by a shift in perspective, details about a character, though they may be identical on the page, can have radically different meanings, serve radically different purposes depending on your point of view.

Though in the novel Serena is genuinely enthusiastic about Tom’s fiction, the two express their disagreements about what constitutes good literature at several points. “I thought his lot were too dry,” Serena writes, “he thought mine were wet” (183). She likes sentimental endings and sympathetic characters; he admires technical élan. Even when they agree that a particular work is good, it’s for different reasons: “He thought it was beautifully formed,” she says of a book they both love, “I thought it was wise and sad” (183). Responding to one of Tom’s stories that features a talking ape who turns out never to have been real, Serena says,

I instinctively distrusted this kind of fictional trick. I wanted to feel the ground beneath my feet. There was, in my view, an unwritten contract with the reader that the writer must honor. No single element of an imagined world or any of its characters should be allowed to dissolve on authorial whim. The invented had to be as solid and as self-consistent as the actual. This was a contract founded on mutual trust. (183)

A couple of the reviewers suggested that the last chapter of Sweet Tooth revealed that Serena had been made to inhabit precisely the kind of story that she’d been saying all along she hated. But a moment’s careful reflection would have made them realize this isn’t true at all. What’s brilliant about McEwan’s narrative engineering is that it would satisfy the tastes of both Tom and Serena. Despite the surprise revelation at the end—the trick—not one of the terms of Serena’s contract is broken. The plot works as a trick, but it also works as an intimate story about real people in a real relationship. To get a taste of how this can work, consider the following passage:

Tom promised to read me a Kingsley Amis poem, “A Bookshop Idyll,” about men’s and women’s divergent tastes. It went a bit soppy at the end, he said, but it was funny and true. I said I’d probably hate it, except for the end. (175)

The self-referentiality of the idea makes of it a strange loop, so it can be thought of at several levels, each of which is consistent and solid, but none of which captures the whole meaning.

Sweet Tooth is a fun novel to read, engrossing and thought-provoking, combining the pleasures of genre fiction with some of the mind-expanding thought experiments of the best science writing. The plot centers on a troubling but compelling moral dilemma, and, astonishingly, the surprise revelation at the end actually represents a solution to this dilemma. I do have to admit, however, that I agree with the Economist that it’s not McEwan’s best novel. The conceptual plot devices bear several similarities to those in his earlier novel Atonement, and that novel is much more serious, its stakes much higher.

Sweet Tooth is nowhere near as haunting as Atonement. But it doesn’t need to be.

Also read:

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And:

MUDDLING THROUGH "LIFE AFTER LIFE": A REFLECTION ON PLOT AND CHARACTER IN KATE ATKINSON’S NEW NOVEL

Dennis Junk

The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins – Part 3 of A Crash Course in Multilevel Selection Theory

In “Moral Origins,” anthropologist Christopher Boehm lays out the mind-blowing theory that humans evolved to be cooperative in large part by developing mechanisms to keep powerful men’s selfish impulses in check. These mechanisms included, in rare instances, capital punishment. Once the free-rider problem was addressed, groups could function more as a unit than as a collection of individuals.

In a 1969 account of her time in Labrador studying the culture of the Montagnais-Naskapi people, anthropologist Eleanor Leacock describes how a man named Thomas, who was serving as her guide and informant, responded to two men they encountered while far from home on a hunting trip. The men, whom Thomas recognized but didn’t know very well, were on the brink of starvation. Even though it meant ending the hunting trip early and hence bringing back fewer furs to trade, Thomas gave the hungry men all the flour and lard he was carrying. Leacock figured that Thomas must have felt at least somewhat resentful for having to cut short his trip and that he was perhaps anticipating some return favor from the men in the future. But Thomas didn’t seem the least bit reluctant to help or frustrated by the setback. Leacock kept pressing him for an explanation until he got annoyed with her probing. She writes,

This was one of the very rare times Thomas lost patience with me, and he said with deep, if suppressed anger, “suppose now, not to give them flour, lard—just dead inside.” More revealing than the incident itself were the finality of his tone and the inference of my utter inhumanity in raising questions about his action. (Quoted in Boehm 219)

The phrase “just dead inside” expresses how deeply internalized the ethic of sympathetic giving is for people like Thomas, who live in cultures more similar than our own to the ones our earliest human ancestors created around 45,000 years ago, when they began leaving evidence of engaging in all the unique behaviors that are the hallmarks of our species. The Montagnais-Naskapi don’t qualify as an example of what anthropologist Christopher Boehm labels Late Pleistocene Appropriate, or LPA, cultures because they had been involved in fur trading with people from industrialized communities going back long before their culture was first studied by ethnographers. But Boehm includes Leacock’s description in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame because he believes Thomas’s behavior is in fact typical of nomadic foragers and because, infelicitously for his research, standard ethnographies seldom cover encounters like the one Thomas had with those hungry acquaintances of his.

In our modern industrialized civilization, people donate blood, volunteer to fight in wars, sign over percentages of their income to churches, and pay to keep organizations like Doctors without Borders and Human Rights Watch in operation even though the people they help live in far-off countries most of us will never visit. One approach to explaining how this type of extra-familial generosity could have evolved is to suggest people who live in advanced societies like ours are, in an important sense, not in their natural habitat. Among evolutionary psychologists, it has long been assumed that in humans’ ancestral environments, most of the people individuals encountered would either be close kin who carried many genes in common, or at the very least members of a moderately stable group they could count on running into again, at which time they would be disposed to repay any favors. Once you take kin selection and reciprocal altruism into account, the consensus held, there was not much left to explain. Whatever small acts of kindness weren’t directed toward kin or done with an expectation of repayment were, in such small groups, probably performed for the sake of impressing all the witnesses and thus improving the social status of the performer. As the biologist Michael Ghiselin once famously put it, “Scratch an altruist and watch a hypocrite bleed.” But this conception of what evolutionary psychologists call the Environment of Evolutionary Adaptedness, or EEA, never sat right with Boehm.

One problem with the standard selfish gene scenario that has just recently come to light is that modern hunter-gatherers, no matter where in the world they live, tend to form bands made up of high percentages of non-related or distantly related individuals. In an article published in Science in March of 2011, anthropologist Kim Hill and his colleagues report the findings of their analysis of thirty-two hunter-gatherer societies. The main conclusion of the study is that the members of most bands are not closely enough related for kin selection to sufficiently account for the high levels of cooperation ethnographers routinely observe. Assuming present-day forager societies are representative of the types of groups our Late Pleistocene ancestors lived in, we can rule out kin selection as a likely explanation for altruism of the sort displayed by Thomas or by modern philanthropists in complex civilizations. Boehm offers us a different scenario, one that relies on hypotheses derived from ethological studies of apes and archeological records of our human prehistory as much as on any abstract mathematical accounting of the supposed genetic payoffs of behaviors.

In three cave paintings discovered in Spain that probably date to the dawn of the Holocene epoch around 12,000 years ago, groups of men are depicted with what appear to be bows lifted above their heads in celebration while another man lies dead nearby with one arrow from each of them sticking out of his body. We can only speculate about what these images might have meant to the people who created them, but Boehm points out that all extant nomadic foraging peoples, no matter what part of the world they live in, are periodically forced to reenact dramas that resonate uncannily well with these scenes portrayed in ancient cave art. “Given enough time,” he writes, “any band society is likely to experience a problem with a homicide-prone unbalanced individual. And predictably band members will have to solve the problem by means of execution” (253). One of the more gruesome accounts of such an incident he cites comes from Richard Lee’s ethnography of the !Kung Bushmen. After a man named /Twi had killed two men, Lee writes, “A number of people decided that he must be killed.” According to Lee’s informant, a man named =Toma (the symbols before the names represent clicks), the first attempt to kill /Twi was botched, allowing him to return to his hut, where a few people tried to help him. But he ended up becoming so enraged that he grabbed a spear and stabbed a woman in the face with it. When the woman’s husband came to her aid, /Twi shot him with a poisoned arrow, killing him and bringing his total body count to four. =Toma continues the story,

Now everyone took cover, and others shot at /Twi, and no one came to his aid because all those people had decided he had to die. But he still chased after some, firing arrows, but he didn’t hit any more…Then they all fired on him with poisoned arrows till he looked like a porcupine. Then he lay flat. All approached him, men and women, and stabbed his body with spears even after he was dead. (261-2)

The two most important elements of this episode for Boehm are the fact that the death sentence was arrived at through a partial group consensus which ended up being unanimous, and that it was carried out with weapons that had originally been developed for hunting. But this particular case of collectively enacted capital punishment was unusual, and not just in how clumsy it was. Boehm writes,

In this one uniquely detailed description of what seems to begin as a delegated execution and eventually becomes a fully communal killing, things are so chaotic that it’s easy to understand why with hunter-gatherers the usual mode of execution is to efficiently delegate a kinsman to quickly kill the deviant by ambush. (261)

The prevailing wisdom among evolutionary psychologists has long been that any appearance of group-level adaptation, like the collective killing of a dangerous group member, must be an illusory outcome caused by selection at the level of individuals or families. As Steven Pinker explains, “If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate.” To demonstrate that some trait or behavior humans reliably engage in really is for the sake of the group as opposed to the individual engaging in it, there would have to be some conflict between the two motives—serving the group would have to entail incurring some kind of cost for the individual. Pinker explains,

It’s only when humans display traits that are disadvantageous to themselves while benefiting their group that group selection might have something to add. And this brings us to the familiar problem which led most evolutionary biologists to reject the idea of group selection in the 1960s. Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.

The ever-present potential for cooperative or altruistic group norms to be subverted by selfish individuals keen on exploitation is known in game theory as the free rider problem. To see how strong selfish individuals can lord it over groups of their conspecifics, we can look to the hierarchically organized bands great apes naturally form.

In groups of chimpanzees, for instance, an alpha male gets to eat his fill of the most nutritious food, even going so far at times as seizing meat from the subordinates who hunted it down. The alpha chimp also works to secure, as best he can, sole access to reproductively receptive females. For a hierarchical species like this, status is a winner-take-all competition, and so genes for dominance and cutthroat aggression proliferate. Subordinates tolerate being bullied because they know the more powerful alpha will probably kill them if they try to stand up for themselves. If instead of mounting some ill-fated resistance, however, they simply bide their time, they may eventually grow strong enough to more effectively challenge for the top position. Meanwhile, they can also try to sneak off with females to couple behind the alpha’s back. Boehm suggests that two competing motives keep hierarchies like this in place: one is a strong desire for dominance and the other is a penchant for fear-based submission. What this means is that subordinates only ever submit ambivalently. They even have a recognizable vocalization, which Boehm transcribes as the “waa,” that they use to signal their discontent. In his 1999 book Hierarchy in the Forest: The Evolution of Egalitarian Behavior, Boehm explains,

When an alpha male begins to display and a subordinate goes screaming up a tree, we may interpret this as a submissive act of fear; but when that same subordinate begins to waa as the display continues, it is an open, hostile expression of insubordination. (167)

Since the distant ancestor humans shared with chimpanzees likely felt this same ambivalence toward alphas, Boehm theorizes that it served as a preadaptation for the type of treatment modern human bullies can count on in every society of nomadic foragers anthropologists have studied. “I believe,” he writes, “that a similar emotional and behavioral orientation underlies the human moral community’s labeling of domination behaviors as deviant” (167).

Boehm has found accounts of subordinate chimpanzees, bonobos, and even gorillas banding together with one or more partners to take on an excessively domineering alpha—though there was only one case in which this happened with gorillas, and the animals in question lived in captivity. But humans are much better at this type of coalition building. Two of the most crucial developments in our own lineage that led to the differences in social organization between ourselves and the other apes were likely to have been an increased capacity for coordinated hunting and the invention of weapons designed to kill big game. As Boehm explains,

Weapons made possible not only killing at a distance, but far more effective threat behavior; brandishing a projectile could turn into an instant lethal attack with relatively little immediate risk to the attacker. (175)

Deadly weapons fundamentally altered the dynamic between lone would-be bullies and those they might try to dominate. As Boehm points out, “after weapons arrived, the camp bully became far more vulnerable” (177). With the advent of greater coalition-building skills and the invention of tools for efficient killing, the opportunities for an individual to achieve alpha status quickly vanished.

It’s dangerous to assume that any one group of modern people provides the key to understanding our Pleistocene ancestors, but when every group living with technology and subsistence methods similar to those of our ancestors follows the same pattern, it’s much more suggestive. “A distinctively egalitarian political style,” Boehm writes, “is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-6). This egalitarianism must be vigilantly guarded because “A potential bully always seems to be waiting in the wings” (68). Boehm explains what he believes is the underlying motivation,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

The methods used to prevent powerful or influential individuals from acquiring too much control include such collective behaviors as gossiping, ostracism, banishment, and even, in extreme cases, execution. “In egalitarian hierarchies the pyramid of power is turned upside down,” Boehm explains, “with a politically united rank and file dominating the alpha-male types” (66).

The implications for theories about our ancestors are profound. The groups humans were living in as they evolved the traits that made them what we recognize today as human were highly motivated and well-equipped both to prevent and, when necessary, to punish the type of free-riding that evolutionary psychologists and other selfish gene theorists insist would undermine group cohesion. Boehm makes this point explicit, writing,

The overall hypothesis is straightforward: basically, the advent of egalitarianism shifted the balance of forces within natural selection so that within-group selection was substantially debilitated and between-group selection was amplified. At the same time, egalitarian moral communities found themselves uniquely positioned to suppress free-riding… at the level of phenotype. With respect to the natural selection of behavior genes, this mechanical formula clearly favors the retention of altruistic traits. (199)

This is the point where he picks up the argument again in Moral Origins. The story of the homicidal man named /Twi is an extreme example of the predictable results of overly aggressive behaviors. Any nomadic forager who intransigently tries to throw his weight around the way alpha male chimpanzees do will probably end up getting “porcupined” (158) like /Twi and the three men depicted in the Magdalenian cave art in Spain.

Murder is an extreme example of the types of free-riding behavior that nomadic foragers reliably sanction. Any politically overbearing treatment of group mates, particularly the issuing of direct commands, is considered a serious moral transgression. But alongside this disapproval of bossy or bullying behavior there exists an ethic of sharing and generosity, so people who are thought to be stingy are equally disliked. As Boehm writes in Hierarchy in the Forest, “Politically egalitarian foragers are also, to a significant degree, materially egalitarian” (70). The image many of us grew up with of the lone prehistoric male hunter going out to stalk his prey, bringing it back as a symbol of his prowess in hopes of impressing beautiful and fertile females, turns out to be completely off-base. In most hunter-gatherer groups, the males hunt in teams, and whatever they kill gets turned over to someone else who distributes the meat evenly among all the men so each of their families gets an equal portion. In some cultures, “the hunter who made the kill gets a somewhat larger share,” Boehm writes in Moral Origins, “perhaps as an incentive to keep him at his arduous task” (185). But every hunter knows that most of the meat he procures will go to other group members—and the sharing is done without any tracking of who owes whom a favor. Boehm writes,

The models tell us that the altruists who are helping nonkin more than they are receiving help must be “compensated” in some way, or else they—meaning their genes—will go out of business. What we can be sure of is that somehow natural selection has managed to work its way around these problems, for surely humans have been sharing meat and otherwise helping others in an unbalanced fashion for at least 45,000 years. (184)

Following biologist Richard Alexander, Boehm sees this type of group-beneficial generosity as an example of “indirect reciprocity.” And he believes it functions as a type of insurance policy, or, as anthropologists call it, “variance reduction.” It’s often beneficial for an individual’s family to pay in, as it were, but much of the time people contribute knowing full well the returns will go to others.
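To get a feel for what “variance reduction” means here, consider a minimal sketch with made-up numbers of my own (ten hunters, each with a one-in-five chance of a kill on a given day): equal sharing leaves each hunter’s average intake unchanged while smoothing out the feast-or-famine swings a lone hunter would face.

    import random
    random.seed(1)

    hunters, days, success_rate, meat_per_kill = 10, 1000, 0.2, 10.0

    solo, shared = [], []
    for _ in range(days):
        kills = [meat_per_kill if random.random() < success_rate else 0.0
                 for _ in range(hunters)]
        solo.append(kills[0])                # one hunter keeping whatever he kills
        shared.append(sum(kills) / hunters)  # the same hunter under equal sharing

    def mean(xs):
        return sum(xs) / len(xs)

    def variance(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    print(f"average daily intake: solo {mean(solo):.2f} vs. shared {mean(shared):.2f}")
    print(f"variance of daily intake: solo {variance(solo):.1f} vs. shared {variance(shared):.1f}")

On these assumptions the average is roughly the same either way; what sharing buys is protection against the many days when any individual hunter comes home with nothing.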

Less extreme cases than the psychopaths who end up porcupined involve what Boehm calls “meat-cheaters.” A prominent character in Moral Origins is an Mbuti Pygmy man named Cephu, whose story was recounted in rich detail by the anthropologist Colin Turnbull. One of the cooperative hunting strategies the Pygmies use has them stretching a long net through the forest while other group members create a ruckus to scare animals into it. Each net holder is entitled to whatever runs into his section of the net, which he promptly spears to death. What Cephu did was sneak farther ahead of the other men to improve his chances of having an animal run into his section of the net before the others. Unfortunately for him, everyone quickly realized what was happening. Returning to the camp after depositing his ill-gotten gains in his hut, Cephu heard someone call out that he was an animal. Beyond that, everyone was silent. Turnbull writes,

Cephu walked into the group, and still nobody spoke. He went to where a youth was sitting in a chair. Usually he would have been offered a seat without his having to ask, and now he did not dare to ask, and the youth continued to sit there in as nonchalant a manner as he could muster. Cephu went to another chair where Amabosu was sitting. He shook it violently when Amabosu ignored him, at which point he was told, “Animals lie on the ground.” (Quoted 39)

Thus began the accusations. Cephu burst into tears and tried to claim that his repositioning himself in the line was an accident. No one bought it. Next, he made the even bigger mistake of trying to suggest he was entitled to his preferential position. “After all,” Turnbull writes, “was he not an important man, a chief, in fact, of his own band?” At this point, Manyalibo, who was taking the lead in bringing Cephu to task, decided that the matter was settled. He said that

there was obviously no use prolonging the discussion. Cephu was a big chief, and Mbuti never have chiefs. And Cephu had his own band, of which he was chief, so let him go with it and hunt elsewhere and be a chief elsewhere. Manyalibo ended a very eloquent speech with “Pisa me taba” (“Pass me the tobacco”). Cephu knew he was defeated and humiliated. (40)

The guilty verdict Cephu had to accept to avoid being banished from the band came with the sentence that he had to relinquish all the meat he brought home that day. His attempt at free-riding therefore resulted not only in a loss of food but also in a much longer-lasting blow to his reputation.

Boehm has built a large database from ethnographic studies like Lee’s and Turnbull’s, and it shows that in their handling of meat-cheaters and self-aggrandizers nomadic foragers all over the world use strategies similar to those of the Pygmies. First comes the gossip about your big ego, your dishonesty, or your cheating. Soon you’ll recognize a growing reluctance of others to hunt with you, or you’ll have a tough time wooing a mate. Next, you may be directly confronted by someone delegated by a quorum of group members. If you persist in your free-riding behavior, especially if it entails murder or serious attempts at domination, you’ll probably be ambushed and turned into a porcupine. Alexander put forth the idea of “reputational selection,” whereby individuals benefit in terms of survival and reproduction from being held in high esteem by their group mates. Boehm prefers the term “social selection,” however, because it encompasses the idea that people are capable of figuring out what’s best for their groups and codifying it in their culture. How well an individual internalizes a group’s norms has profound effects on his or her chances for survival and reproduction. Boehm’s theory is that our consciences are the mechanisms we’ve evolved for such internalization.

Though there remain quite a few chicken-or-egg conundrums to work out, Boehm has cobbled together archeological evidence from butchering sites, primatological evidence from observations of apes in the wild and in captivity, and quantitative analyses of ethnographic records to put forth a plausible history of how our consciences evolved and how we became so concerned for the well-being of people we may barely know. As humans began hunting larger game, demanding greater coordination and more effective long-distance killing tools, an already extant resentment of alphas expressed itself in collective suppression of bullying behavior. And as our developing capacity for language made it possible to keep track of each other’s behavior long-term, it started to become important for everyone to maintain a reputation for generosity, cooperativeness, and even-temperedness. Boehm writes,

Ultimately, the social preferences of groups were able to affect gene pools profoundly, and once we began to blush with shame, this surely meant that the evolution of conscientious self-control was well under way. The final result was a full-blown, sophisticated modern conscience, which helps us to make subtle decisions that involve balancing selfish interests in food, power, sex, or whatever against the need to maintain a decent personal moral reputation in society and to feel socially valuable as a person. The cognitive beauty of having such a conscience is that it directly facilitates making useful social decisions and avoiding negative social consequences. Its emotional beauty comes from the fact that we in effect bond with the values and rules of our groups, which means we can internalize our group’s mores, judge ourselves as well as others, and, hopefully, end up with self-respect. (173)

Social selection is actually a force that acts on individuals, selecting for those who can most strategically suppress their own selfish impulses. But in establishing a mechanism that guards the group norm of cooperation against free riders, it increased the potential of competition between groups and quite likely paved the way for altruism of the sort Leacock’s informant Thomas displayed. Boehm writes,

Thomas surely knew that if he turned down the pair of hungry men, they might “bad-mouth” him to people he knew and thereby damage his reputation as a properly generous man. At the same time, his costly generosity might very well be mentioned when they arrived back in their camp, and through the exchange of favorable gossip he might gain in his public esteem in his own camp. But neither of these socially expedient personal considerations would account for the “dead” feeling he mentioned with such gravity. He obviously had absorbed his culture’s values about sharing and in fact had internalized them so deeply that being selfish was unthinkable. (221)

In response to Ghiselin’s cynical credo, “Scratch an altruist and watch a hypocrite bleed,” Boehm points out that the best way to garner the benefits of kindness and sympathy is to actually be kind and sympathetic. He points out further that if altruism is being selected for at the level of phenotypes (the end-products of genetic processes) we should expect it to have an impact at the level of genes. In a sense, we’ve bred altruism into ourselves. Boehm writes,

If such generosity could be readily faked, then selection by altruistic reputation simply wouldn’t work. However, in an intimate band of thirty that is constantly gossiping, it’s difficult to fake anything. Some people may try, but few are likely to succeed. (189)

The result of the social selection dynamic that began all those millennia ago is that today generosity is in our bones. There are of course circumstances that can keep our generous impulses from manifesting themselves, and those impulses have a sad tendency to be directed toward members of our own cultural groups and no one else. But Boehm offers a slightly more optimistic formula than Ghiselin’s:

I do acknowledge that our human genetic nature is primarily egoistic, secondarily nepotistic, and only rather modestly likely to support acts of altruism, but the credo I favor would be “Scratch an altruist, and watch a vigilant and successful suppressor of free riders bleed. But watch out, for if you scratch him too hard, he and his group may retaliate and even kill you.” (205)

Read Part 1:

A CRASH COURSE IN MULTI-LEVEL SELECTION THEORY: PART 1-THE GROUNDWORK LAID BY DAWKINS AND GOULD

And Part 2:

A CRASH COURSE IN MULTILEVEL SELECTION THEORY PART 2: STEVEN PINKER FALLS PREY TO THE AVERAGING FALLACY SOBER AND WILSON TRIED TO WARN HIM ABOUT

Also of interest:

THE FEMINIST SOCIOBIOLOGIST: AN APPRECIATION OF SARAH BLAFFER HRDY DISGUISED AS A REVIEW OF “MOTHERS AND OTHERS: THE EVOLUTIONARY ORIGINS OF MUTUAL UNDERSTANDING”

Dennis Junk

A Crash Course in Multilevel Selection Theory part 2: Steven Pinker Falls Prey to the Averaging Fallacy Sober and Wilson Tried to Warn Him about

Elliott Sober and David Sloan Wilson’s “Unto Others” lays out a theoretical framework for how selection at the level of the group could have led to the evolution of greater cooperation among humans. They point out the mistake many theorists make in thinking that, because evolution can be defined as changes in gene frequencies, only genes matter. But that definition leaves aside the question of how traits and behaviors evolve, i.e. what dynamics lead to the changes in gene frequencies. Steven Pinker failed to grasp their point.

If you were a woman applying to graduate school at the University of California at Berkeley in 1973, you would have had a 35 percent chance of being accepted. If you were a man, your chances would have been significantly better. Forty-four percent of male applicants got accepted that year. Apparently, at this early stage of the feminist movement, even a school as notoriously progressive as Berkeley still discriminated against women. But not surprisingly, when confronted with these numbers, the women of the school were ready to take action to right the supposed injustice. After a lawsuit was filed charging admissions offices with bias, however, a department-by-department examination was conducted which produced a curious finding: not a single department admitted a significantly higher percentage of men than women. In fact, there was a small but significant trend in the opposite direction—a bias against men.

What this means is that somehow the aggregate probability of being accepted into grad school was dramatically different from the probabilities worked out through disaggregating the numbers with regard to important groupings, in this case the academic departments housing the programs assessing the applications. This discrepancy called for an explanation, and statisticians had had one on hand since 1951.

This paradoxical finding fell into place when it was noticed that women tended to apply to departments with low acceptance rates. To see how this can happen, imagine that 90 women and 10 men apply to a department with a 30 percent acceptance rate. This department does not discriminate and therefore accepts 27 women and 3 men. Another department, with a 60 percent acceptance rate, receives applications from 10 women and 90 men. This department doesn’t discriminate either and therefore accepts 6 women and 54 men. Considering both departments together, 100 men and 100 women applied, but only 33 women were accepted, compared with 57 men. A bias exists in the two departments combined, despite the fact that it does not exist in any single department, because the departments contribute unequally to the total number of applicants who are accepted. (25)

This is how the counterintuitive statistical phenomenon known as Simpson’s Paradox is explained by philosopher Elliott Sober and biologist David Sloan Wilson in their 1998 book Unto Others: The Evolution and Psychology of Unselfish Behavior, in which they argue that the same principle can apply to the relative proliferation of organisms in groups with varying percentages of altruists and selfish actors. In this case, the benefit to the group of having more altruists is analogous to the higher acceptance rates for grad school departments which tend to receive a disproportionate number of applications from men. And the counterintuitive outcome is that, in an aggregated population of groups, altruists have an advantage over selfish actors—even though within each of those groups selfish actors outcompete altruists.  
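For readers who want to see the arithmetic laid bare, here is a quick check of the two-department example quoted above; the numbers are Sober and Wilson’s, and the short script below is just bookkeeping.

    departments = [
        # (women applying, men applying, acceptance rate)
        (90, 10, 0.30),
        (10, 90, 0.60),
    ]

    women_applied = men_applied = women_accepted = men_accepted = 0
    for women, men, rate in departments:
        women_applied += women
        men_applied += men
        women_accepted += women * rate  # neither department discriminates:
        men_accepted += men * rate      # the same rate applies to everyone

    print(f"women: {women_accepted:.0f} of {women_applied} accepted ({women_accepted / women_applied:.0%})")
    print(f"men: {men_accepted:.0f} of {men_applied} accepted ({men_accepted / men_applied:.0%})")
    # women: 33 of 100 accepted (33%)
    # men: 57 of 100 accepted (57%)

The bias in the aggregate comes entirely from where the applicants concentrated, not from how any department treated them.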

            Sober and Wilson caution that this assessment is based on certain critical assumptions about the population in question. “This model,” they write, “requires groups to be isolated as far as the benefits of altruism are concerned but nevertheless to compete in the formation of new groups” (29). It also requires that altruists and nonaltruists somehow “become concentrated in different groups” (26) so the benefits of altruism can accrue to one while the costs of selfishness accrue to the other. One type of group that follows this pattern is a family, whose members resemble each other in terms of their traits—including a propensity for altruism—because they share many of the same genes. In humans, families tend to be based on pair bonds established for the purpose of siring and raising children, forming a unit that remains stable long enough for the benefits of altruism to be of immense importance. As the children reach adulthood, though, they disperse to form their own family groups. Therefore, assuming families live in a population with other families, group selection ought to lead to the evolution of altruism.
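A minimal sketch of this kind of trait-group model can make the within-group versus between-group tension concrete. The parameter values below are hypothetical, not Sober and Wilson’s: altruists pay a cost to confer a benefit on their groupmates, they lose to selfish types inside each of two isolated groups, and yet their frequency in the combined population rises after one generation because the altruist-heavy group out-reproduces the other.

    baseline, benefit, cost = 10.0, 5.0, 1.0  # hypothetical fitness parameters

    def fitnesses(altruists, group_size):
        """Within-group fitness of one altruist and one selfish individual."""
        others = group_size - 1
        altruist_fitness = baseline + benefit * (altruists - 1) / others - cost
        selfish_fitness = baseline + benefit * altruists / others
        return altruist_fitness, selfish_fitness

    groups = [(80, 20), (20, 80)]  # (altruists, selfish) in two isolated groups

    altruist_offspring = total_offspring = 0.0
    for altruists, selfish in groups:
        fa, fs = fitnesses(altruists, altruists + selfish)
        print(f"group with {altruists} altruists: altruist fitness {fa:.2f} < selfish fitness {fs:.2f}")
        altruist_offspring += altruists * fa
        total_offspring += altruists * fa + selfish * fs

    before = sum(a for a, s in groups) / sum(a + s for a, s in groups)
    after = altruist_offspring / total_offspring
    print(f"global altruist frequency: {before:.3f} before, {after:.3f} after")

Within each group the selfish types do better, yet across the whole population the altruists gain ground, which is the same Simpson’s paradox structure as the admissions example.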

            Sober and Wilson wrote Unto Others to challenge the prevailing approach to solving mysteries in evolutionary biology, which was to focus strictly on competition between genes. In place of this exclusive attention on gene selection, they advocate a pluralistic approach that takes into account the possibility of selection occurring at multiple levels, from genes to individuals to groups. This is where the term multilevel selection comes from. In certain instances, focusing on one level instead of another amounts to a mere shift in perspective. Looking at families as groups, for instance, leads to many of the same conclusions as looking at them in terms of vehicles for carrying genes.

William D. Hamilton, whose thinking inspired both Richard Dawkins’s Selfish Gene and E.O. Wilson’s Sociobiology, long ago explained altruism within families by setting forth the theory of kin selection, which posits that family members will at times behave in ways that benefit each other even at their own expense because the genes underlying the behavior don’t make any distinction between the bodies which happen to be carrying copies of themselves. Sober and Wilson write,

As we have seen, however, kin selection is a special case of a more general theory—a point that Hamilton was among the first to appreciate. In his own words, “it obviously makes no difference if altruists settle with altruists because they are related… or because they recognize fellow altruists as such, or settle together because of some pleiotropic effect of the gene on habitat preference.” We therefore need to evaluate human social behavior in terms of the general theory of multilevel selection, not the special case of kin selection. When we do this, we may discover that humans, bees, and corals are all group-selected, but for different reasons. (134)

A general proclivity toward altruism based on selection at the level of family groups may look somewhat different from kin-selected altruism targeted solely at those who are recognized as close relatives. For obvious reasons, the possibility of group selection becomes even more important when it comes to explaining the evolution of altruism among unrelated individuals.

We have to bear in mind that Dawkins’s selfish genes are selfish only in the sense that they concern themselves with nothing but ensuring their own continued existence—by calling them selfish he never meant to imply they must always be associated with selfishness as a trait of the bodies they provide the blueprints for. Selfish genes, in other words, can sometimes code for altruistic behavior, as in the case of kin selection. So the question of what level selection operates on is much more complicated than it would be if the gene-focused approach predicted selfishness while the multilevel approach predicted altruism. But many strict gene selection advocates argue that because selfish gene theory can account for altruism in myriad ways there’s simply no need to resort to group selection. Evolution is, after all, changes over time in gene frequencies. So why should we look to higher levels?

Sober and Wilson demonstrate that if you focus on individuals in their simple model of predominantly altruistic groups competing against predominantly selfish groups you will conclude that altruism is adaptive because it happens to be the trait that ends up proliferating. You may add the qualifier that it’s adaptive in the specified context, but the upshot is that from the perspective of individual selection altruism outcompetes selfishness. The problem is that this is the same reasoning underlying the misguided accusations against Berkeley; for any individual in that aggregate population, it was advantageous to be a male—but there was never any individual selection pressure against females. Sober and Wilson write,

The averaging approach makes “individual selection” a synonym for “natural selection.” The existence of more than one group and fitness differences between the groups have been folded into the definition of individual selection, defining group selection out of existence. Group selection is no longer a process that can occur in theory, so its existence in nature is settled a priori. Group selection simply has no place in this semantic framework. (32)

Thus, a strict focus on individuals, though it may appear to fully account for the outcome, necessarily obscures a crucial process that went into producing it. The same logic might be applicable to any analysis based on gene-level accounting. Sober and Wilson write that

if the point is to understand the processes at work, the resultant is not enough. Simpson’s paradox shows how confusing it can be to focus only on net outcomes without keeping track of the component causal factors. This confusion is carried into evolutionary biology when the separate effects of selection within and between groups are expressed in terms of a single quantity. (33)

They go on to label this approach “the averaging fallacy.” Acknowledging that nobody explicitly insists that group selection is somehow impossible by definition, they still find countless instances in which it is defined out of existence in practice. They write,

Even though the averaging fallacy is not endorsed in its general form, it frequently occurs in specific cases. In fact, we will make the bold claim that the controversy over group selection and altruism in biology can be largely resolved simply by avoiding the averaging fallacy. (34)

            Unfortunately, this warning about the averaging fallacy continues to go unheeded by advocates of strict gene selection theories. Even intellectual heavyweights of the caliber of Steven Pinker fall into the trap. In a severely disappointing essay published just last month at Edge.org called “The False Allure of Group Selection,” Pinker writes

If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate. Individual human traits evolved in an environment that includes other humans, just as they evolved in environments that include day-night cycles, predators, pathogens, and fruiting trees.

Multilevel selectionists wouldn’t disagree with this point; they would readily explain traits that benefit everyone in the group at no cost to the individuals possessing them as arising through individual selection. But Pinker here shows his readiness to fold the process of group competition into some generic “context.” The important element of the debate, of course, centers on traits that benefit the group at the expense of the individual. Pinker writes,

Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.

But, as Sober and Wilson demonstrate, those self-sacrificial traits wouldn’t necessarily be selected against in the population. In fact, self-sacrifice would be selected for if that population is an aggregation of competing groups. Pinker fails to even consider this possibility because he’s determined to stick with the definition of natural selection as occurring at the level of genes.

            Indeed, the centerpiece of Pinker’s argument against group selection in this essay is his definition of natural selection. Channeling Dawkins, he writes that evolution is best understood as competition between “replicators” to continue replicating. The implication is that groups, and even individuals, can’t be the units of selection because they don’t replicate themselves. He writes,

The theory of natural selection applies most readily to genes because they have the right stuff to drive selection, namely making high-fidelity copies of themselves. Granted, it's often convenient to speak about selection at the level of individuals, because it’s the fate of individuals (and their kin) in the world of cause and effect which determines the fate of their genes. Nonetheless, it’s the genes themselves that are replicated over generations and are thus the targets of selection and the ultimate beneficiaries of adaptations.

The underlying assumption is that, because genes rely on individuals as “vehicles” to replicate themselves, individuals can sometimes be used as shorthand for genes when discussing natural selection. Since gene competition within an individual would be to the detriment of all the genes that individual carries and strives to pass on, the genes collaborate to suppress conflicts amongst themselves. The further assumption underlying Pinker’s and Dawkins’s reasoning is that groups make for poor vehicles because suppressing within-group conflict would be too difficult. But, as Sober and Wilson write,

This argument does not evaluate group selection on a trait-by-trait basis. In addition, it begs the question of how individuals became such good vehicles of selection in the first place. The mechanisms that currently limit within-individual selection are not a happy coincidence but are themselves adaptations that evolved by natural selection. Genomes that managed to limit internal conflict presumably were more fit than other genomes, so these mechanisms evolve by between-genome selection. Being a good vehicle as Dawkins defines it is not a requirement for individual selection—it’s a product of individual selection. Similarly, groups do not have to be elaborately organized “superorganisms” to qualify as a unit of selection with respect to particular traits. (97)

The idea of a “trait-group” is exemplified by the simple altruistic group versus selfish group model they used to demonstrate the potential confusion arising from Simpson’s paradox. As long as individuals with the altruism trait interact with enough regularity for the benefits to be felt, they can be defined as a group with regard to that trait.

Pinker makes several other dubious points in his essay, most of them based on the reasoning that group selection isn’t “necessary” to explain this or that trait, only justifying his prejudice in favor of gene selection with reference to the selfish gene definition of evolution. Of course, it may be possible to imagine gene-level explanations for behaviors humans engage in predictably, like punishing cheaters in economic interactions even when doing so means the punisher incurs some cost to him or herself. But Pinker is so caught up with replicators he overlooks the potential of this type of punishment to transform groups into functional vehicles. As Sober and Wilson demonstrate, group competition can lead to the evolution of altruism on its own. But once altruism reaches a certain threshold group selection can become even more powerful because the altruistic group members will, by definition, be better at behaving as a group. And one of the mechanisms we might expect to evolve through an ongoing process of group selection would operate to curtail within-group conflict and exploitation. The costly punishment Pinker dismisses as possibly explicable through gene selection is much more likely to have arisen through group selection. Sober and Wilson delight in the irony that, “The entire language of social interactions among individuals in groups has been borrowed to describe genetic interactions within individuals; ‘outlaw’ genes, ‘sheriff’ genes, ‘parliaments’ of genes, and so on” (147).

Unto Others makes such a powerful case against strict gene-level explanations and for the potentially crucial role of group selection that anyone who undertakes to argue that the allure of multilevel selection is somehow false without even mentioning the book risks serious embarrassment. Published fourteen years ago, it still contains a remarkably effective rebuttal to Pinker’s essay:

In short, the concept of genes as replicators, widely regarded as a decisive argument against group selection, is in fact totally irrelevant to the subject. Selfish gene theory does not invoke any processes that are different from the ones described in multilevel selection theory, but merely looks at the same processes in a different way. Those benighted group selectionists might be right in every detail; group selection could have evolved altruists that sacrifice themselves for the benefit of others, animals that regulate their numbers to avoid overexploiting their resources, and so on. Selfish gene theory calls the genes responsible for these behaviors “selfish” for the simple reason that they evolved and therefore replicated more successfully than other genes. Multilevel selection theory, on the other hand, is devoted to showing how these behaviors evolve. Fitness differences must exist somewhere in the biological hierarchy—between individuals within groups, between groups in the global population, and so on. Selfish gene theory can’t even begin to explore these questions on the basis of the replicator concept alone. The vehicle concept is its way of groping toward the very issues that multilevel selection theory was developed to explain. (88)

Sober and Wilson, in opening the field of evolutionary studies to forces beyond gene competition, went a long way toward vindicating Stephen Jay Gould, who throughout his career held that selfish gene theory was too reductionist—he even incorporated their arguments into his final book. But Sober and Wilson are still working primarily in the abstract realm of evolutionary modeling, although in the second half of Unto Others they cite multiple psychological and anthropological sources. A theorist even more after Gould’s own heart, one who synthesizes both models and evidence from multiple fields, from paleontology to primatology to ethnography, into a hypothetical account of the natural history of human evolution, from the ancestor we share with the great apes to modern nomadic foragers and beyond, is the anthropologist Christopher Boehm, whose work we’ll be exploring in part 3.

Read Part 1 of

A CRASH COURSE IN MULTI-LEVEL SELECTION THEORY: PART 1-THE GROUNDWORK LAID BY DAWKINS AND GOULD

And Part 3:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

A Crash Course in Multi-Level Selection Theory: Part 1-The Groundwork Laid by Dawkins and Gould

What is the unit of selection? Richard Dawkins famously argues that it’s genes that are selected for over the course of evolutionary change. Stephen Jay Gould, meanwhile, maintained that it must be individuals and even sometimes groups of individuals. In their fascinating back and forth lies the foundation of today’s debates about multi-level selection theory.

Responding to Stephen Jay Gould’s criticisms of his then most infamous book, Richard Dawkins writes in a footnote to the 1989 edition of The Selfish Gene, “I find his reasoning wrong but interesting, which, incidentally, he has been kind enough to tell me, is how he usually finds mine” (275). Dawkins’s idea was that evolution is, at its core, competition between genes with success measured in continued existence. Genes are replicators. Evolution is therefore best thought of as the outcome of this competition between replicators to keep on replicating. Gould’s response was that natural selection can’t possibly act on genes because genes are always buried in bodies. Those replicators always come grouped with other replicators and have only indirect effects on the bodies they ultimately serve as blueprints for. Natural selection, as Gould suggests, can’t “see” genes; it can only see, and act on, individuals.

The image of individual genes, plotting the course of their own survival, bears little relationship to developmental genetics as we understand it. Dawkins will need another metaphor: genes caucusing, forming alliances, showing deference for a chance to join a pact, gauging probable environments. But when you amalgamate so many genes and tie them together in hierarchical chains of action mediated by environments, we call the resultant object a body. (91)

Dawkins’s rebuttal, in both later editions of The Selfish Gene and in The Extended Phenotype, is, essentially, Duh—of course genes come grouped together with other genes and only ever evolve in context. But the important point is that individuals never replicate themselves. Bodies don’t create copies of themselves. Genes, on the other hand, do just that. Bodies are therefore best thought of as vehicles for these replicators.

As a subtle hint of his preeminent critic’s unreason, Dawkins quotes himself in his response to Gould, citing a passage Gould must’ve missed, in which the genes making up an individual organism’s genome are compared to the members of a rowing team. Each contributes to the success or failure of the team, but it’s still the individual members that are important. Dawkins describes how the concept of an “Evolutionarily Stable Strategy” can be applied to a matter

arising from the analogy of oarsmen in a boat (representing genes in a body) needing a good team spirit. Genes are selected, not as “good” in isolation, but as good at working against the background of the other genes in the gene pool. A good gene must be compatible with, and complementary to, the other genes with whom it has to share a long succession of bodies. A gene for plant-grinding teeth is a good gene in the gene pool of a herbivorous species, but a bad gene in the gene pool of a carnivorous species. (84)

Gould, in other words, isn’t telling Dawkins anything he hasn’t already considered. But does that mean Gould’s point is moot? Or does the rowing team analogy actually support his reasoning? In any case, they both agree that the idea of a “good gene” is meaningless without context.

The selfish gene idea has gone on to become the linchpin of research in many subfields of evolutionary biology, its main appeal being the ease with which it lends itself to mathematical modeling. If you want to know what traits are the most likely to evolve, you create a simulation in which individuals with various traits compete. Run the simulation and the outcome allows you to determine the relative probability of a given trait evolving in the context of individuals with other traits. You can then compare the statistical outcomes derived from the simulation with experimental data on how the actual animals behave. This sort of analysis relies on the assumption that the traits in question are discrete and can be selected for, and this reasoning usually rests on the further assumption that the traits are, beyond a certain threshold probability, the end-product of chemical processes set in motion by a particular gene or set of genes. In reality, everyone acknowledges that this one-to-one correspondence between gene and trait—or constellation of genes and trait—seldom occurs. All genes can do is make their associated traits more likely to develop in specific environments. But if the sample size is large enough, meaning that the population you’re modeling is large enough, and if the interactions go through enough iterations, the complicating nuances will cancel out in the final statistical averaging.
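As an illustration of the kind of simulation being described, here is a bare-bones version of the classic hawk-dove model; the payoff values are my own illustrative choices rather than anything from Dawkins. Two heritable strategies compete, and the frequency of the aggressive “hawk” trait settles toward the evolutionarily stable mix of V/C.

    V, C = 4.0, 6.0  # value of the contested resource, cost of serious injury
    baseline = 10.0  # background fitness, keeps all fitness values positive

    def payoff(strategy, opponent):
        """Average payoff from one contest in the hawk-dove game."""
        if strategy == "hawk":
            return (V - C) / 2 if opponent == "hawk" else V
        return 0.0 if opponent == "hawk" else V / 2

    p = 0.1  # initial frequency of hawks in the population
    for generation in range(500):
        hawk_fitness = baseline + p * payoff("hawk", "hawk") + (1 - p) * payoff("hawk", "dove")
        dove_fitness = baseline + p * payoff("dove", "hawk") + (1 - p) * payoff("dove", "dove")
        mean_fitness = p * hawk_fitness + (1 - p) * dove_fitness
        p = p * hawk_fitness / mean_fitness  # traits with above-average fitness spread

    print(f"hawk frequency after 500 generations: {p:.3f} (stable mix predicted at V/C = {V / C:.3f})")

The point of such models is just what the paragraph above describes: whether a trait counts as “good” depends on the mix of traits it finds itself competing against.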

            Gould’s longstanding objection to this line of research—as productive as he acknowledged it could be—was that processes, and even events, like large-scale natural catastrophes, that occur at higher levels of analysis can be just as or more important than the shuffling of gene frequencies at the lowest level. It’s hardly irrelevant that Dawkins and most of his fellow ethologists who rely on his theories primarily study insects—relatively simple-bodied species that produce huge populations and have rapid generational turnover. Gould, on the other hand, focused his research on the evolution of snail shells. And he kept his eye throughout his career on the big picture of how evolution worked over vast periods of time. As a paleontologist, he found himself looking at trends in the fossil record that didn’t seem to follow the expected patterns of continual, gradual development within species. In fact, the fossil records of most lineages seem to be characterized by long periods of slow or no change followed by sudden disruptions—a pattern he and Niles Eldredge refer to as punctuated equilibrium. In working out an explanation for this pattern, Eldredge and Gould did Dawkins one better: sure, genes are capable of a sort of immortality, they reasoned, but then so are species. Evolution then isn’t just driven by competition between genes or individuals; something like species selection must also be taking place.

            Dawkins accepted this reasoning up to a point, seeing that it probably even goes some way toward explaining the patterns that often emerge in the fossil record. But whereas Gould believed there was so much randomness at play in large populations that small differences would tend to cancel out, and that “speciation events”—periods when displacement or catastrophe led to smaller group sizes—were necessary for variations to take hold in the population, Dawkins thought it unlikely that variations really do cancel each other out even in large groups. This is because he knows of several examples of “evolutionary arms races,” multigenerational exchanges in which a small change leads to a big advantage, which in turn leads to a ratcheting up of the trait in question as all the individuals in the population are now competing in a changed context. Sexual selection, based on competition for reproductive access to females, is a common cause of arms races. That’s why extreme traits in the form of plumage or body size or antlers are easy to point to. Once you allow for this type of change within populations, you are forced to conclude that gene-level selection is much more powerful and important than species-level selection. As Dawkins explains in The Extended Phenotype,

Accepting Eldredge and Gould’s belief that natural selection is a general theory that can be phrased on many levels, the putting together of a certain quantity of evolutionary change demands a certain minimum number of selective replicator-eliminations. Whether the replicators that are selectively eliminated are genes or species, a simple evolutionary change requires only a few replicator substitutions. A large number of replicator substitutions, however, are needed for the evolution of a complex adaptation. The minimum replacement cycle time when we consider the gene as replicator is one individual generation, from zygote to zygote. It is measured in years or months, or smaller time units. Even in the largest organisms it is measured in only tens of years. When we consider the species as replicator, on the other hand, the replacement cycle time is the interval from speciation event to speciation event, and may be measured in thousands of years, tens of thousands, hundreds of thousands. In any given period of geological time, the number of selective species extinctions that can have taken place is many orders of magnitude less than the number of selective allele replacements that can have taken place. (106)

This reasoning, however, applies only to features and traits that are under intense selection pressure. So in determining whether a given trait arose through a process of gene selection or species selection you would first have to know certain features about the nature of that trait: how much of an advantage it confers if any, how widely members of the population vary in terms of it, and what types of countervailing forces might cancel out or intensify the selection pressure.
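To make the orders-of-magnitude claim in the Dawkins passage above concrete, here is a back-of-the-envelope comparison using hypothetical round numbers for generation time and speciation interval.

    span_years = 1_000_000         # an arbitrary slice of geological time
    gene_cycle_years = 10          # assumed generation time for a large-bodied animal
    species_cycle_years = 100_000  # assumed interval between speciation events

    print(span_years // gene_cycle_years)     # 100000 gene-level replacement cycles
    print(span_years // species_cycle_years)  # 10 species-level replacement cycles

On these assumptions gene-level selection gets four orders of magnitude more replacement cycles to work with, which is the force of Dawkins’s argument about complex adaptations.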

            The main difference between Dawkins’s and Gould’s approaches to evolutionary questions is that Dawkins prefers to frame answers in terms of the relative success of competing genes while Gould prefers to frame them in terms of historical outcomes. Dawkins would explain a wasp’s behavior by pointing out that behaving that way ensures copies of the wasp’s genes will persist in the population. Gould would explain the shape of some mammalian skull by pointing out how contingent that shape is on the skulls of earlier creatures in the lineage. Dawkins knows history is important. Gould knows gene competition is important. The difference is in the relative weights given to each. Dawkins might challenge Gould, “Gene selection explains self-sacrifice for the sake of close relatives, who carry many of the same genes”—an idea known as kin selection—“what does your historical approach say about that?” Gould might then point to the tiny forelimbs of a tyrannosaurus, or the original emergence of feathers (which were probably sported by some other dinosaur) and challenge Dawkins, “Account for that in terms of gene competition.”

            The area where these different perspectives came into the most direct conflict was sociobiology, which later developed into evolutionary psychology. This is a field in which theorists steeped in selfish gene thinking look at human social behavior and see in it the end product of gene competition. Behaviors are treated as traits, traits are assumed to have a genetic basis, and, since the genes involved exist because they outcompeted other genes producing other traits, their continuing existence suggests that the traits are adaptive, i.e. that they somehow make the continued existence of the associated genes more likely. The task of the evolutionary psychologist is to work out how. This was in fact the approach ethologists had been applying, primarily to insects, for decades.

E.O. Wilson, a renowned specialist on ant behavior, was the first to apply it to humans in his book Sociobiology, and in a later book, On Human Nature, which won him the Pulitzer. But the assumption that human behavior is somehow fixed to genes and that it always serves to benefit those genes was anathema to Gould. If ever there were a creature for whom the causal chain from gene to trait or behavior was too long and complex for the standard ethological approaches to yield valid insights, it had to be humans.

Gould famously compared evolutionary psychological theories to the “Just-so” stories of Kipling, suggesting they relied on far too many shaky assumptions and made use of far too little evidence. From Gould’s perspective, any observable trait, in humans or any other species, was just as likely to have no effect on fitness at all as it was to be adaptive. For one thing, the trait could be a byproduct of some other trait that’s adaptive; it could have been selected for indirectly. Or it could emerge from essentially random fluctuations in gene frequencies that take hold in populations because they neither help nor hinder survival and reproduction. And in humans of course there are things like cultural traditions, forethought, and technological intervention (as when a gene for near-sightedness is rendered moot with contact lenses). The debate got personal and heated, but in the end evolutionary psychology survived Gould’s criticisms. Outsiders could even be forgiven for suspecting that Gould actually helped the field by highlighting some of its weaknesses. He, in fact, didn’t object in principle to the study of human behavior from the perspective of biological evolution; he just believed the earliest attempts were far too facile. Still, there are grudges being harbored to this day.

            Another way to look at the debate between Dawkins and Gould, one which lies at the heart of the current debate over group selection, is that Dawkins favored reductionism while Gould preferred holism. Dawkins always wants to get down to the most basic unit. His “‘central theorem’ of the extended phenotype” is that “An animal’s behaviour tends to maximize the survival of genes ‘for’ that behaviour, whether or not those genes happen to be in the body of the particular animal performing it” (233). Reductionism, despite its bad name, is an extremely successful approach to arriving at explanations, and it has a central role in science. Gould’s holistic approach, while more inclusive, is harder to quantify and harder to model. But there are several analogues to natural selection that suggest ways in which higher-order processes might be important for changes at lower orders. Regular interactions between bodies—or even between groups or populations of bodies—may be crucial in accounting for changes in gene frequencies the same way software can impact the functioning of hardware or symbolic thoughts can determine patterns of neural connections.

            The question becomes whether or not higher-level processes operate regularly enough that their effects can’t safely be assumed to average out over time. One pitfall of selfish gene thinking is that it lends itself to the conflation of definitions and explanations. Evolution can be defined as changes in gene frequencies. But assuming a priori that competition at the level of genes causes those changes means running the risk of overlooking measurable outcomes of processes at higher levels. The debate, then, isn’t over whether evolution occurs at the level of genes—it has to—but rather over what processes lead to the changes. It could be argued that Gould, in his magnum opus The Structure of Evolutionary Theory, which was finished shortly before his death, forced Dawkins into making just this mistake. Responding to the book in an essay in his own book A Devil’s Chaplain, Dawkins writes,

Gould saw natural selection as operating on many levels in the hierarchy of life. Indeed it may, after a fashion, but I believe that such selection can have evolutionary consequences only when the entities selected consist of “replicators.” A replicator is a unit of coded information, of high fidelity but occasionally mutable, with some causal power over its own fate. Genes are such entities… Biological natural selection, at whatever level we may see it, results in evolutionary effects only insofar as it gives rise to changes in gene frequencies in gene pools. Gould, however, saw genes only as “book-keepers,” passively tracking the changes going on at other levels. In my view, whatever else genes are, they must be more than book-keepers, otherwise natural selection cannot work. If a genetic change has no causal influence on bodies, or at least on something that natural selection can “see,” natural selection cannot favour or disfavour it. No evolutionary change will result. (221-222)

Thus we come full circle as Dawkins comes dangerously close to acknowledging Gould’s original point about the selfish gene idea. With the book-keeper metaphor, Gould wasn’t suggesting that genes are perfectly inert. Of course, they cause something—but they don’t cause natural selection. Genes build bodies and influence behaviors, but natural selection acts on bodies and behaviors. Genes are the passive book-keepers with regard to the effects of natural selection, even though they’re active agents with regard to bodies. Again, the question becomes, do the processes that happen at higher levels of analysis operate with enough regularity to produce measurable changes in gene frequencies that a strict gene-level analysis would miss or obscure? Yes, evolution is genetic change. But the task of evolutionary biologists is to understand how those changes come about.
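To see what's at stake in that question, it helps to run the numbers on the kind of toy scenario multilevel selection theorists like Sober and Wilson are fond of. The sketch below is mine, in Python, and every figure in it is an arbitrary illustrative choice, not anything drawn from Dawkins or Gould: altruists lose ground to selfish individuals inside every group, yet gain ground in the population as a whole, because groups with more altruists produce more offspring. Pool everyone into one ledger and all you see is a modest shift in gene frequencies; keep the groups on separate ledgers and the group-level process becomes visible.

def fitnesses(altruists, selfish, base=10.0, cost=1.0, benefit=5.0):
    # Each altruist pays `cost` and spreads `benefit` evenly over its groupmates;
    # selfish members collect the benefit without paying anything.
    n = altruists + selfish
    w_altruist = base - cost + benefit * (altruists - 1) / (n - 1)
    w_selfish = base + benefit * altruists / (n - 1)
    return w_altruist, w_selfish

# Two isolated groups of 100: one mostly selfish, one mostly altruistic.
groups = [(20, 80), (80, 20)]

altruist_offspring_total = 0.0
offspring_total = 0.0
for a, s in groups:
    w_a, w_s = fitnesses(a, s)
    altruist_offspring = a * w_a
    group_offspring = a * w_a + s * w_s
    # The altruist share falls within each group (20% -> ~18%, 80% -> ~79%)...
    print(f"group ({a}, {s}): {a / (a + s):.1%} -> {altruist_offspring / group_offspring:.1%}")
    altruist_offspring_total += altruist_offspring
    offspring_total += group_offspring

# ...yet rises in the pooled population (50% -> ~52%), because the altruist-heavy
# group out-reproduces the altruist-poor one.
print(f"population: 50.0% -> {altruist_offspring_total / offspring_total:.1%}")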

            Gould died in May of 2002, in the middle of a correspondence he had been carrying on with Dawkins regarding how best to deal with an emerging creationist propaganda campaign called intelligent design, a set of ideas they both agreed were contemptible nonsense. These men were in many ways the opposing generals of the so-called Darwin Wars in the 1990s, but, as exasperated as they clearly got with each other’s writing at times, they always seemed genuinely interested in and amused by what the other had to say. In his essay on Gould’s final work, Dawkins writes,

The Structure of Evolutionary Theory is such a massively powerful last word, it will keep us all busy replying to it for years. What a brilliant way for a scholar to go. I shall miss him. (222)

[I’ve narrowed the scope of this post to make the ideas as manageable as possible. This account of the debate leaves out many important names and is by no means comprehensive. A good first step if you’re interested in Dawkins’s and Gould’s ideas is to read The Selfish Gene and Full House.]  

Read Part 2:

A CRASH COURSE IN MULTILEVEL SELECTION THEORY PART 2: STEVEN PINKER FALLS PREY TO THE AVERAGING FALLACY SOBER AND WILSON TRIED TO WARN HIM ABOUT

And Part 3:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini

If you can convince people you know how to broaden the contours of selfhood and show them the world as they’ve never experienced it before, if you can give them the sense that their world is expanding, they’ll at the very least want to keep talking to you so they can keep the feeling going and maybe learn what your secret is. Much of this desire to better ourselves is focused on our social lives, and that’s why duping delight is so seductive—it gives us a taste of what it’s like to be the charismatic and irresistible characters we always expected ourselves to become.

Psychologist Paul Ekman is renowned for his research on facial expressions, and he frequently studies and consults with law enforcement agencies, legal scholars, and gamblers on the topic of reading people who don’t want to be read. In his book Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage, Ekman focuses on three emotions whose subtle expressions would-be lie detectors should be on the lookout for. The first two—detection apprehension and deception guilt—are pretty obvious. But the third is more interesting. Many people actually enjoy deceiving others because, for one thing, the threat of detection is more thrilling to them than terrifying, and, for another, being able to pull off the deception successfully can give them a sense of “pride in the achievement, or feelings of smug contempt toward the target” (76). Ekman calls this “Duping Delight,” and he suggests it leads many liars to brag about their crimes, which in turn leads to them being caught.

The takeaway insight is that knowing something others don’t, or having the skill to trick or deceive others, can give us an inherently rewarding feeling of empowerment.

Alex Stone, in his new book Fooling Houdini: Magicians, Mentalists, Math Geeks & the Hidden Powers of the Mind, suggests that duping delight is what drives the continuing development of the magician’s trade. The title refers to a bit of lore that has reached the status of founding myth among aficionados of legerdemain. Houdini used to boast that he could figure out the secret behind any magic trick if he saw it performed three times. Time and again, he backed up his claim, sending defeated tricksters away to nurse their wounded pride. But then came Dai Vernon, who performed for Houdini what he called the Ambitious Card, a routine in which a card signed by a volunteer and then placed in the middle of a deck mysteriously appears at the top. After watching Vernon go through the routine seven times, Houdini turned around and walked away in a huff. Vernon went on to become a leading figure in twentieth-century magic, and every magician today has his (they’re almost all male) own version of the Ambitious Card, which serves as a type of signature.

In Fooling Houdini, Stone explains that for practiced magicians, tricking the uninitiated loses its thrill over time. So they end up having to up the ante, and in the process novitiates find themselves getting deeper and deeper into the practice, tradition, culture, and society of magic and magicians. He writes,

Sure, it’s fun to fool laypeople, but they’re easy prey. It’s far more thrilling to hunt your own kind. As a result, magicians are constantly engineering new ways to dupe one another. A hierarchy of foolmanship, a who-fooled-whom pecking order, rules the conjuror’s domain. This gladiatorial spirit in turn drives considerable evolution in the art. (173)

Stone’s own story begins with a trip to Belgium to compete in the 2006 Magic Olympics. His interest in magic was, at the time, little more than an outgrowth of his interest in science. He’d been an editor at Discover magazine and had since gone on to graduate school in physics at Columbia University. But after the Magic Olympics, where he performed dismally and was left completely humiliated and averse to the thought of ever doing magic again, he gradually came to realize that one way or another he would have to face his demons by mastering the art he’d only so far dabbled in.

Fooling Houdini chronicles how Stone became obsessed with developing his own personalized act and tweaking it to perfection, and how he went from being a pathetic amateur to a respectable semi-professional. The progress of a magician, Stone learns from Jeff McBride, follows “four cardinal stations of magic: Trickster, Sorcerer, Oracle, and Sage” (41). And the resultant story as told in Stone’s book follows an eminently recognizable narrative course, from humiliation and defeat to ever-increasing mastery and self-discovery.

Fooling Houdini will likely appeal to the same audience as did Moonwalking with Einstein, Joshua Foer’s book about how he ended up winning the U.S. Memory Championships. Foer, in fact, makes a guest appearance in Fooling Houdini when Stone seeks out his expertise to help him memorize a deck of cards for an original routine of his own devising. (He also gave the book a nice plug for the back cover.) The appeal of both books comes not just from the conventional narrative arc but also from the promise of untapped potential, a sense that greater mastery, and even a better life, lie just beyond reach, accessible to anyone willing to put in those enchanted ten thousand hours of training made famous by Malcolm Gladwell. It’s the same thing people seem to love about TED lectures, the idea that ideas will almost inevitably change our lives. Nathan Heller, in a recent New Yorker article, attempts to describe the appeal of TED conferences and lectures in terms that apply uncannily well to books like Foer’s and Stone’s. Heller writes,

Debby Ruth, a Long Beach attendee, told me that she started going to TED after reaching a point in her life when “nothing excited me anymore”; she returns now for a yearly fix. TED may present itself as an ideas conference, but most people seem to watch the lectures not so much for the information as for how they make them feel. (73)

The way they make us feel is similar to the way a good magic show can make us feel—like anything is possible, like on the other side of this great idea that breaks down the walls of our own limitations is a better, fuller, more just, and happier life. “Should we be grateful to TED for providing this form of transcendence—and on the Internet, of all places?” Heller asks.

Or should we feel manipulated by one more product attempting to play on our emotions? It’s tricky, because the ideas undergirding a TED talk are both the point and, for viewers seeking a generic TED-type thrill, essentially beside it: the appeal of TED comes as much from its presentation as from its substance. (73-4)

At their core, Fooling Houdini and Moonwalking with Einstein—and pretty much every TED lecture—are about transforming yourself, and to a somewhat lesser degree the world, either with new takes on deep-rooted traditions, reconnection with ancient wisdom, or revolutionary science.

Foer presumably funded the epic journey recounted in Moonwalking with his freelance articles and maybe with expense accounts from the magazines he wrote for. Still, it seems you could train to become a serviceable “mental athlete” without spending all that much money. Not so with magic. Stone’s prose is much more quirky and slightly more self-deprecatory than Foer’s, and in one of his funniest and most revealing chapters he discusses some of the personal and financial costs associated with his obsession. The title, “It’s Annoying and I Asked You to Stop,” is a quote from a girlfriend who was about to dump him. The chapter begins,

One of my biggest fears is that someday I’ll be audited. Not because my taxes aren’t in perfect order—I’m very OCD about saving receipts and keeping track of my expenses, a habit I learned from my father—but because it would bring me face-to-face with a very difficult and decidedly lose-lose dilemma in which I’d have to choose between going to jail for tax fraud and disclosing to another adult, in naked detail, just how much money I’ve spent on magic over the years. (That, and I’d have to fess up to eating at Arby’s multiple times while traveling to magic conventions.) (159)

Having originally found magic fun and mentally stimulating, Stone ends up being seduced into spending astronomical sums by the terrible slight he received from the magic community followed by a regimen of Pavlovian conditioning based on duping delight. Both Foer’s and Stone’s stories are essentially about moderately insecure guys who try to improve themselves by learning a new skill set.

The market for a renewed sense of limitless self-potential is booming. As children, it seems every future we can imagine for ourselves is achievable—that we can inhabit them all simultaneously—so whatever singular life we find ourselves living as adults inevitably falls short of our dreams. We may have good jobs, good relationships, good health, but we can’t help sometimes feeling like we’re missing out on something, like we’re trapped in overscheduled rutted routines of workaday compromise. After a while, it becomes more and more difficult to muster any enthusiasm for much of anything beyond the laziest indulgences like the cruises we save up for and plan months or years in advance, the three-day weekend at the lake cottage, a shopping date with an old friend, going out to eat with the gang. By modern, middle-class standards, this is the good life. What more can we ask for?

What if I told you, though, that there’s a training regimen that will make you so much more creative and intelligent that you’ll wonder after a few months how you ever managed to get by with a mind as dull as yours is now? What if I told you there’s a revolutionary diet and exercise program that is almost guaranteed to make you so much more attractive that even your friends won’t recognize you? What if I told you there’s a secret set of psychological principles that will allow you to seduce almost any member of the opposite sex, or prevail in any business negotiation you ever engage in? What if I told you you’ve been living in a small dark corner of the world, and that I know the way to a boundless life of splendor?

If you can convince people you know how to broaden the contours of selfhood and show them the world as they’ve never experienced it before, if you can give them the sense that their world is expanding, they’ll at the very least want to keep talking to you so they can keep the feeling going and maybe learn what your secret is. Much of this desire to better ourselves is focused on our social lives, and that’s why duping delight is so seductive—it gives us a taste of what it’s like to be the charismatic and irresistible characters we always expected ourselves to become. This is how Foer writes about his mindset at the outset of his memory training, after he’d read about the mythic feats of memory champion Ben Pridmore:

What would it mean to have all that otherwise-lost knowledge at my fingertips? I couldn’t help but think that it would make me more persuasive, more confident and, in some fundamental sense, smarter. Certainly I’d be a better journalist, friend, and boyfriend. But more than that, I imagined that having a memory like Ben Pridmore’s would make me an altogether more attentive, perhaps even wiser, person. To the extent that experience is the sum of our memories and wisdom the sum of experience, having a better memory would mean knowing not only more about the world, but also more about myself. (7)

Stone strikes a similar chord when he’s describing what originally attracted him to magic back when he was an awkward teenager. He writes,

In my mind, magic was also a disarming social tool, a universal language that transcended age and gender and culture. Magic would be a way out of my nerdiness. That’s right, I thought magic would make me less nerdy. Or at the very least it would allow me to become a different, more interesting nerd. Through magic, I would be able to navigate awkward social situations, escape the boundaries of culture and class, connect with people from all walks of life, and seduce beautiful women. In reality, I ended up spending most of my free time with pasty male virgins. (5)

This last line echoes Foer’s observation that the people you find at memory competitions are “indistinguishable from those” you’d find at a “‘Weird Al’ Yankovic (five of spades) concert” (189).

Though Stone’s openness about his nerdiness at times shades into some obnoxious playing up of his nerdy credentials, it does lend itself to some incisive observations. One of the lessons he has to learn to become a better magician is that his performances have to be more about the audiences than they are about the tricks—less about duping and more about connecting. What this means is that magic isn’t the key to becoming more confident that he hoped it would be; instead, he has to be more confident before he can be good at magic. For Stone, this means embracing, not trying to overcome, his nerdish tendencies. He writes,

Magicians like to pretend that they’re cool and mysterious, cultivating the image of the smooth operator, the suave seducer. Their stage names are always things like the Great Tomsoni or the Amazing Randi or the International Man of Mystery—never Alex the Magical Superdoofus or the Incredible Nerdini. But does all this posturing really make them look cooler? Or just more ridiculous for trying to hide their true stripes? Why couldn’t more magicians own up to their own nerdiness? Magic was geeky. And that was okay. (243)

Stone reaches this epiphany largely through the inspiration of a clown who, in a surprising twist, ends up stealing the show from many of the larger-than-life characters featured in Fooling Houdini. In an effort to improve his comfort while performing before crowds and thus to increase his stage presence, Stone works with his actress girlfriend, takes improv classes, and attends a “clown workshop” led by “a handsome man in his early forties named Christopher Bayes,” who begins by telling the class that “The clown is the physical manifestation of the unsocialized self… It’s the essence of the playful spirit before you were defeated by society, by the world” (235). Stone immediately recognizes the connection with magic. Here you have that spark of joyful spontaneity, that childish enthusiasm before a world where everything is new and anything is possible.

“The main trigger for laughter is surprise,” Bayes told us, speaking of how the clown gets his laughs. “There’s lots of ways to find that trigger. Some of them are tricks. Some of them are math. And some of them come from building something with integrity and then smashing it. So you smash the expectation of what you think is going to happen.” (239)

In smashing something you’ve devoted considerable energy to creating, you’re also asserting your freedom to walk away from what you’ve invested yourself in, to reevaluate your idea of what’s really important, to change your life on a whim. And surprise, as Bayes describes it, isn’t just the essential tool for comedians. Stone explains,

The same goes for the magician. Magic transports us to an absurd universe, parodying the laws of physics in a whimsical toying of cause and effect. “Magic creates tension,” says Juan Tamariz, “a logical conflict that it does not resolve. That’s why people often laugh after a trick, even when we haven’t done anything funny.” Tamariz is also fond of saying that magic holds a mirror up to our impossible desires. We all would like to fly, see the future, know another’s thoughts, mend what has been broken. Great magic is anchored to a symbolic logic that transcends its status as an illusion. (239)

Stone’s efforts to become a Sage magician have up till this point in the story entailed little more than a desperate stockpiling of tricks. But he comes to realize that not all tricks jibe with his personality, and if he tries to go too far outside of character his performances come across as strained and false. This is the stock ironic trope that this type of story turns on—he starts off trying to become something great only to discover that he first has to accept who he is. He goes on, relaying the lessons of the clown workshop,

“Try to proceed with a kind of playful integrity,” Chris Bayes told us. “Because in that integrity we actually find more possibility of surprise than we do in an idea of how to trick us into laughing. You bring it from yourself. And we see this little gift that you brought for us, which is the gift of your truth. Not an idea of your truth, but the gift of your real truth. And you can play forever with that, because it’s infinite.” (244)

What’s most charming about the principle of proceeding with playful integrity is that it applies not just to clowning and magic, but to almost every artistic and dramatic endeavor—and even to science. “Every truly great idea, be it in art or science,” Stone writes, “is a kind of magic” (289). Aspirants may initially be lured into any of these creative domains by the promise of greater mastery over other people, but the true sages end up being the ones who are the most appreciative and the most susceptible to the power of the art to produce in them that sense of playfulness and boundless exuberance.

Being fooled is fun, too, because it’s a controlled way of experiencing a loss of control. Much like a roller coaster or a scary movie, it lets you loosen your grip on reality without actually losing your mind. This is strangely cathartic, and when it’s over you feel more in control, less afraid. For magicians, watching magic is about chasing this feeling—call it duped delight, the maddening ecstasy of being a layperson again, a novice, if only for a moment. (291)

“Just before Vernon,” the Man Who Fooled Houdini, “died,” Stone writes, “comedian and amateur magician Dick Cavett asked him if there was anything he wished for.” Vernon answered, “I wish somebody could fool me one more time” (291). Stone uses this line to wrap up his book, neatly bringing the story full-circle.

Fooling Houdini is unexpectedly engrossing. It reads quite a bit differently from the 2010 book on magic by neuroscientists Stephen Macknik and Susana Martinez-Conde, which they wrote with Sandra Blakeslee, Sleights of Mind: What the Neuroscience of Magic Reveals about Our Everyday Deceptions. For one thing, Stone focuses much more on the people he comes to know on his journey and less on the underlying principles. And, though Macknik and Martinez-Conde also use their own education in the traditions and techniques of magic as a narrative frame, Stone gets much more personal. One underlying message of both books is that our sense of being aware of what’s going on around us is illusory, and that illusion makes us ripe for the duping. But Stone conveys more of the childlike wonder of magic, despite his efforts at coming across as a stylish hipster geek. Unfortunately, when I got to the end I was reminded of the title of a TED lecture that’s perennially on the most-watched list, Brene Brown’s “The Power of Vulnerability,” a talk I came away from scratching my head over because it seemed utterly nonsensical.

It’s interesting that duping delight is a feeling few anticipate and many fail to recognize even as they’re experiencing it. It is the trick that ends up being played on the trickster. Like most magic, it takes advantage of a motivational system that most of us are only marginally aware of, if at all. But there’s another motivational system and another magic trick that makes things like TED lectures so thrilling. It’s also the trick that makes books like Moonwalking with Einstein and Fooling Houdini so engrossing. Arthur and Elaine Aron use what’s called “The Self-Expansion Model” to explain what attracts us to other people, and even what attracts us to groups of people we end up wanting to identify with. The basic idea is that we’re motivated to increase our efficacy, not with regard to achieving any specific goals but in terms of our general potential. One of the main ways we seek to augment our potential is by establishing close relationships with other people who have knowledge or skills or resources that would contribute to our efficacy. Foer learns countless mnemonic techniques from guys like Ed Cooke. Stone learns magic from guys like Wes James. Meanwhile, we readers are getting a glimpse of all of it through our connection with Foer and Stone.

Self-expansion theory is actually somewhat uplifting in its own right because it offers a more romantic perspective on our human desire to associate with high-status individuals and groups. But the triggers for a sense of self-expansion are probably pretty easy to mimic, even for those who lack the wherewithal or the intention to truly increase our potential or genuinely broaden our horizons. Indeed, it seems as though self-expansion has become the main psychological principle exploited by politicians, marketers, and P.R. agents. This isn’t to say we should discount every book or lecture that we find uplifting, but we should keep in mind that there are genuine experiences of self-expansion and counterfeit ones. Heller’s observation about how TED lectures are more about presentation than substance, for instance, calls to mind an experiment done in the early 1970s in which Dr. Myron Fox gave a lecture titled “Mathematical Game Theory as Applied to Physician Education.” His audience included psychologists, psychiatrists, educators, and graduate students, virtually all of whom rated his ideas and his presentation highly. But Dr. Fox wasn’t actually a doctor; he was the actor Michael Fox. And his lecture was designed to be completely devoid of meaningful content but high on expressiveness and audience connection.

The Dr. Fox Effect is the term now used to describe our striking inability to recognize nonsense coming from the mouths of charismatic speakers.

         And if keeping our foolability in mind somehow makes that sense of self-transcendence elude us today, "tomorrow we will run faster, stretch out our arms farther....

And one fine morning—"

Also read:

PERCY FAWCETT’S 2 LOST CITIES

Dennis Junk

Stories, Social Proof, & Our Two Selves

Robert Cialdini describes the phenomenon of social proof, whereby we look to how others are responding to something before we form an opinion ourselves. What are the implications of social proof for our assessments of literature? Daniel Kahneman describes two competing “selves,” the experiencing self and the remembering self. Which one should we trust to let us know how we truly feel about a story?

            You’ll quickly come up with a justification for denying it, but your response to a story is influenced far more by other people’s responses to it than by your moment-to-moment experience of reading or watching it. The impression that we either enjoy an experience or we don’t, that our enjoyment or disappointment emerges directly from the scenes, sensations, and emotions of the production itself, results from our cognitive blindness to several simultaneously active processes that go into our final verdict. We’re only ever aware of the output of the various algorithms, never the individual functions.

            None of us, for instance, directly experiences the operation of what psychologist and marketing expert Robert Cialdini calls social proof, but its effects on us are embarrassingly easy to measure. Even the way we experience pain depends largely on how we perceive others to be experiencing it. Subjects receiving mild shocks not only report them to be more painful when they witness others responding to them more dramatically, but they also show physiological signs of being in greater distress.

            Cialdini opens the chapter on social proof in his classic book Influence: Science and Practice by pointing to the bizarre practice of setting television comedies to laugh tracks. Most people you talk to will say canned laughter is annoying—and they’ll emphatically deny the mechanically fake chuckles and guffaws have any impact on how funny the jokes seem to them. The writers behind those jokes, for their part, probably aren’t happy about the implicit suggestion that their audiences need to be prompted to laugh at the proper times. So why do laugh tracks accompany so many shows? “What can it be about canned laughter that is so attractive to television executives?” Cialdini asks.

Why are these shrewd and tested people championing a practice that their potential watchers find disagreeable and their most creative talents find personally insulting? The answer is both simple and intriguing: They know what the research says. (98)

As with all the other “weapons of influence” Cialdini writes about in the book, social proof seems as obvious to people as it is dismissible. “I understand how it’s supposed to work,” we all proclaim, “but you’d have to be pretty stupid to fall for it.” And yet it still works—and it works on pretty much every last one of us. Cialdini goes on to discuss the finding that even suicide rates increase after a highly publicized story of someone killing themselves. The simple, inescapable reality is that when we see someone else doing something, we become much more likely to do it ourselves, whether it be writhing in genuine pain, laughing in genuine hilarity, or finding life genuinely intolerable.

            Another factor that complicates our responses to stories is that, unlike momentary shocks or the telling of jokes, they usually last long enough to place substantial demands on working memory. Movies last a couple hours. Novels can take weeks. What this means is that when we try to relate to someone else what we thought of a movie or a book we’re relying on a remembered abstraction as opposed to a real-time recording of how much we enjoyed the experience. In his book Thinking, Fast and Slow, Daniel Kahneman suggests that our memories of experiences can diverge so much from our feelings at any given instant while actually having those experiences that we effectively have two selves: the experiencing self and the remembering self. To illustrate, he offers the example of a man who complains that a scratch at the end of a disc of his favorite symphony ruined the listening experience for him. “But the experience was not actually ruined, only the memory of it,” Kahneman points out. “The experiencing self had had an experience that was almost entirely good, and the bad end could not undo it, because it had already happened” (381). But the distinction usually only becomes apparent when the two selves disagree—and such disagreements usually require some type of objective recording to discover. Kahneman explains,

Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experiences. This is the tyranny of the remembering self. (381)

Kahneman suggests the priority we can’t help but give to the remembering self explains why tourists spend so much time taking pictures. The real objective of a vacation is not to have a pleasurable or fun experience; it’s to return home with good vacation stories.

            Kahneman reports the results of a landmark study he designed with Don Redelmeier that compared moment-to-moment pain recordings of men undergoing colonoscopies to global pain assessments given by the patients after the procedure. The outcome demonstrated that the remembering self was remarkably unaffected by the duration of the procedure or the total sum of pain experienced, as gauged by adding up the scores given moment-to-moment during the procedure. Men who actually experienced more pain nevertheless rated the procedure as less painful when the discomfort tapered off gradually as opposed to dropping off precipitously after reaching a peak. The remembering self is reliably guilty of what Kahneman calls “duration neglect,” and it assesses experiences based on a “peak-end rule,” whereby the “global retrospective rating” will be “well predicted by the average level of pain reported at the worst moment of the experience and at its end” (380). Duration neglect and the peak-end rule probably account for the greater risk of addiction for users of certain drugs like heroin or crystal meth, which result in rapid, intense highs and precipitous drop-offs, as opposed to drugs like marijuana whose effects are longer-lasting but less intense.
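A back-of-the-envelope illustration may help here. The pain ratings below are invented, not Redelmeier and Kahneman’s data; they’re only meant to show how a procedure with more total pain can nonetheless be remembered as milder if it ends gently, which is all duration neglect and the peak-end rule amount to.

# Moment-to-moment pain ratings, 0-10, one per minute (invented numbers).
short_procedure = [2, 4, 7, 8]            # ends abruptly at its peak
long_procedure = [2, 4, 7, 8, 5, 3, 1]    # same peak, then tapers off

def total_pain(ratings):
    # What the experiencing self actually went through, minute by minute.
    return sum(ratings)

def remembered_pain(ratings):
    # Peak-end rule: the retrospective rating is well predicted by the average
    # of the worst moment and the final moment; duration drops out entirely.
    return (max(ratings) + ratings[-1]) / 2

for name, ratings in [("short", short_procedure), ("long", long_procedure)]:
    print(name, "total:", total_pain(ratings), "remembered:", remembered_pain(ratings))

# short  total: 21  remembered: 8.0
# long   total: 30  remembered: 4.5  (more total pain, but remembered as milder)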

            We’ve already seen that pain in real time can be influenced by how other people are responding to it, and we can probably extrapolate and assume that the principle applies to pleasurable experiences as well. How does the divergence between experience and memory factor into our response to stories as expressed by our decisions about further reading or viewing, or in things like reviews or personal recommendations? For one thing, we can see that most good stories are structured in a way that serves not so much as a Jamesian “direct impression of life,” i.e. as reports from the experiencing self, but much more like the tamed abstractions Stevenson described in his “Humble Remonstrance” to James. As Kahneman explains,

A story is about significant events and memorable moments, not about time passing. Duration neglect is normal in a story, and the ending often defines its character. The same core features appear in the rules of narratives and in the memories of colonoscopies, vacations, and films. This is how the remembering self works: it composes stories and keeps them for future reference. (387)

            Now imagine that you’re watching a movie in a crowded theater. Are you influenced by the responses of your fellow audience members? Are you more likely to laugh if everyone else is laughing, wince if everyone else is wincing, cheer if everyone else is cheering? These are the effects on your experiencing self. What happens, though, in the hours and days and weeks after the movie is over—or after you’re done reading the book? Does your response to the story start to become intertwined with and indistinguishable from the cognitive schema you had in place before ever watching or reading it? Are your impressions influenced by the opinions of critics or friends whose opinions you respect? Do you give a celebrated classic the benefit of the doubt, assuming it has some merit even if you enjoyed it much less than some less celebrated work? Do you read into it virtues whose real source may be external to the story itself? Do you miss virtues that actually are present in less celebrated stories?

             Taken to its extreme, this focus on social proof leads to what’s known as social constructivism. In the realm of stories, this would be the idea that there are no objective criteria at all with which to assess merit; it’s all based on popular opinion or the dictates of authorities. Much of the dissatisfaction with the so-called canon is based on this type of thinking. If we collectively decide some work of literature is really good and worth celebrating, the reasoning goes, then it magically becomes really good and worth celebrating. There’s an undeniable kernel of truth to this—and there’s really no reason to object to the idea that one of the things that makes a work of art attention-worthy is that a lot of people are attending to it. Art serves a social function after all; part of the fun comes from sharing the experience and having conversations about it. But I personally can’t credit the absolutist version of social constructivism. I don’t think you’re anything but a literary tourist until you can make a convincing case for why a few classics don’t deserve the distinction—even though I acknowledge that any such case will probably be based largely on the ideas of other people.

            The research on the experiencing versus the remembering self also suggests a couple criteria we can apply to our assessments of stories so that they’re more meaningful to people who haven’t been initiated into the society and culture of highbrow literature. Too often, the classics are dismissed as works only English majors can appreciate. And too often, they're written in a way that justifies that dismissal. One criterion should be based on how well the book satisfies the experiencing self: I propose that a story should be considered good insofar as it induces a state of absorption. You forget yourself and become completely immersed in the plot. Mihaly Csikszentmihalyi calls this state flow, and has found that the more time people spend in it the happier and more fulfilled they tend to be. But the total time a reader or viewer spends in a state of flow will likely be neglected if the plot never reaches a peak of intensity, or if it ends on a note of tedium. So the second criterion should be how memorable the story is. Assessments based on either of these criteria are of course inevitably vulnerable to social proof and idiosyncratic factors of the individual audience member (whether I find Swann’s Way tedious or absorbing depends on how much sleep and caffeine I’ve had). And yet knowing what the effects are that make for a good aesthetic experience, in real time and in our memories, can help us avoid the trap of merely academic considerations. And knowing that our opinions will always be somewhat contaminated by outside influences shouldn’t keep us from trying to be objective any more than knowing that surgical theaters can never be perfectly sanitized should keep doctors from insisting they be as well scrubbed and scoured as possible.

Also read:

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And:

TOO PSYCHED FOR SHERLOCK: A REVIEW OF MARIA KONNIKOVA’S “MASTERMIND: HOW TO THINK LIKE SHERLOCK HOLMES”—WITH SOME THOUGHTS ON SCIENCE EDUCATION

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

Why Shakespeare Nauseated Darwin: A Review of Keith Oatley's "Such Stuff as Dreams"

Does practicing science rob one of humanity? Why is it that, if reading fiction trains us to take the perspective of others, English departments are rife with pettiness and selfishness? Keith Oatley is trying to make the study of literature more scientific, and he provides hints to these riddles and many others in his book “Such Stuff as Dreams.”

Late in his life, Charles Darwin lost his taste for music and poetry. “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he laments in his autobiography, and for many of us the temptation to place all men and women of science into a category of individuals whose minds resemble machines more than living and emotionally attuned organs of feeling and perceiving is overwhelming. In the 21st century, we even have a convenient psychiatric diagnosis for people of this sort. Don’t we just assume Sheldon in The Big Bang Theory has autism, or at least the milder version of it known as Asperger’s? It’s probably even safe to assume the show’s writers had the diagnostic criteria for the disorder in mind when they first developed his character. Likewise, Dr. Watson in the BBC’s new and obscenely entertaining Sherlock series can’t resist a reference to the quintessential evidence-crunching genius’s own supposed Asperger’s.

In Darwin’s case, however, the move away from the arts couldn’t have been due to any congenital deficiency in his finer human sentiments because it occurred only in adulthood. He writes,

I have said that in one respect my mind has changed during the last twenty or thirty years. Up to the age of thirty, or beyond it, poetry of many kinds, such as the works of Milton, Gray, Byron, Wordsworth, Coleridge, and Shelley, gave me great pleasure, and even as a schoolboy I took intense delight in Shakespeare, especially in the historical plays. I have also said that formerly pictures gave me considerable, and music very great delight. But now for many years I cannot endure to read a line of poetry: I have tried lately to read Shakespeare, and found it so intolerably dull that it nauseated me. I have also almost lost my taste for pictures or music. Music generally sets me thinking too energetically on what I have been at work on, instead of giving me pleasure.

We could interpret Darwin here as suggesting that casting his mind too doggedly into his scientific work somehow ruined his capacity to appreciate Shakespeare. But, like all thinkers and writers of great nuance and sophistication, his ideas are easy to mischaracterize through selective quotation (or, if you’re Ben Stein or any of the other unscrupulous writers behind creationist propaganda like the pseudo-documentary Expelled, you can just lie about what he actually wrote).

One of the most charming things about Darwin is that his writing is often more exploratory than merely informative. He writes in search of answers he has yet to discover. In a wider context, the quote about his mind becoming a machine, for instance, reads,

This curious and lamentable loss of the higher aesthetic tastes is all the odder, as books on history, biographies, and travels (independently of any scientific facts which they may contain), and essays on all sorts of subjects interest me as much as ever they did. My mind seems to have become a kind of machine for grinding general laws out of large collections of facts, but why this should have caused the atrophy of that part of the brain alone, on which the higher tastes depend, I cannot conceive. A man with a mind more highly organised or better constituted than mine, would not, I suppose, have thus suffered; and if I had to live my life again, I would have made a rule to read some poetry and listen to some music at least once every week; for perhaps the parts of my brain now atrophied would thus have been kept active through use. The loss of these tastes is a loss of happiness, and may possibly be injurious to the intellect, and more probably to the moral character, by enfeebling the emotional part of our nature.

His concern for his lost aestheticism notwithstanding, Darwin’s humanism, his humanity, radiates in his writing with a warmth that belies any claim about thinking like a machine, just as the intelligence that shows through it gainsays his humble deprecations about the organization of his mind.

           In this excerpt, Darwin, perhaps inadvertently, even manages to put forth a theory of the function of art. Somehow, poetry and music not only give us pleasure and make us happy—enjoying them actually constitutes a type of mental exercise that strengthens our intellect, our emotional awareness, and even our moral character. Novelist and cognitive psychologist Keith Oatley explores this idea of human betterment through aesthetic experience in his book Such Stuff as Dreams: The Psychology of Fiction. This subtitle is notably underwhelming given the long history of psychoanalytic theorizing about the meaning and role of literature. However, whereas psychoanalysis has fallen into disrepute among scientists because of its multiple empirical failures and a general methodological hubris common among its practitioners, the work of Oatley and his team at the University of Toronto relies on much more modest, and at the same time much more sophisticated, scientific protocols. One of the tools these researchers use, The Reading the Mind in the Eyes Test, was in fact first developed to research our new category of people with machine-like minds. What the researchers find bolsters Darwin’s impression that art, at least literary art, functions as a kind of exercise for our faculty of understanding and relating to others.

           Reasoning that “fiction is a kind of simulation of selves and their vicissitudes in the social world” (159), Oatley and his colleague Raymond Mar hypothesized that people who spent more time trying to understand fictional characters would be better at recognizing and reasoning about other, real-world people’s states of mind. So they devised a test to assess how much fiction participants in their study read based on how well they could categorize a long list of names according to which ones belonged to authors of fiction, which to authors of nonfiction, and which to non-authors. They then had participants take the Mind-in-the-Eyes Test, which consists of matching close-up pictures of peoples’ eyes with terms describing their emotional state at the time they were taken. The researchers also had participants take the Interpersonal Perception Test, which has them answer questions about the relationships of people in short video clips featuring social interactions. An example question might be “Which of the two children, or both, or neither, are offspring of the two adults in the clip?”  (Imagine Sherlock Holmes taking this test.) As hypothesized, Oatley writes, “We found that the more fiction people read, the better they were at the Mind-in-the-Eyes Test. A similar relationship held, though less strongly, for reading fiction and the Interpersonal Perception Test” (159).

            One major shortcoming of this study is that it fails to establish causality; people who are naturally better at reading emotions and making sound inferences about social interactions may gravitate to fiction for some reason. So Mar set up an experiment in which he had participants read either a nonfiction article from an issue of the New Yorker or a work of short fiction chosen to be the same length and require the same level of reading skills. When the two groups then took a test of social reasoning, the ones who had read the short story outperformed the control group. Both groups also took a test of analytic reasoning as a further control; on this variable there was no difference in performance between the groups. The outcome of this experiment, Oatley stresses, shouldn’t be interpreted as evidence that reading one story will increase your social skills in any meaningful and lasting way. But reading habits established over long periods likely explain the more significant differences between individuals found in the earlier study. As Oatley explains,

Readers of fiction tend to become more expert at making models of others and themselves, and at navigating the social world, and readers of non-fiction are likely to become more expert at genetics, or cookery, or environmental studies, or whatever they spend their time reading. Raymond Mar’s experimental study on reading pieces from the New Yorker is probably best explained by priming. Reading a fictional piece puts people into a frame of mind of thinking about the social world, and this is probably why they did better at the test of social reasoning. (160)

Connecting these findings to real-world outcomes, Oatley and his team also found that “reading fiction was not associated with loneliness,” as the stereotype suggests, “but was associated with what psychologists call high social support, being in a circle of people whom participants saw a lot, and who were available to them practically and emotionally” (160).

            These studies by the University of Toronto team have received wide publicity, but the people who should be the most interested in them have little or no idea how to go about making sense of them. Most people simply either read fiction or they don’t. If you happen to be of the tribe who studies fiction, then you were probably educated in a way that engendered mixed feelings—profound confusion really—about science and how it works. In his review of The Storytelling Animal, a book in which Jonathan Gottschall incorporates the Toronto team’s findings into the theory that narrative serves the adaptive function of making human social groups more cooperative and cohesive, Adam Gopnik sneers,

Surely if there were any truth in the notion that reading fiction greatly increased our capacity for empathy then college English departments, which have by far the densest concentration of fiction readers in human history, would be legendary for their absence of back-stabbing, competitive ill-will, factional rage, and egocentric self-promoters; they’d be the one place where disputes are most often quickly and amiably resolved by mutual empathetic engagement. It is rare to see a thesis actually falsified as it is being articulated.

Oatley himself is well aware of the strange case of university English departments. He cites a report by Willie van Peer on a small study he did comparing students in the natural sciences to students in the humanities. Oatley explains,

There was considerable scatter, but on average the science students had higher emotional intelligence than the humanities students, the opposite of what was expected; van Peer indicts teaching in the humanities for often turning people away from human understanding towards technical analyses of details. (160)

Oatley suggests in a footnote that an earlier study corroborates van Peer’s indictment. It found that high school students who show more emotional involvement with short stories—the type of connection that would engender greater empathy—did proportionally worse on standard academic assessments of English proficiency. The clear implication of these findings is that the way literature is taught in universities and high schools is long overdue for an in-depth critical analysis.

            The idea that literature has the power to make us better people is not new; indeed, it was the very idea on which the humanities were originally founded. We have to wonder what people like Gopnik believe the point of celebrating literature is if not to foster greater understanding and empathy. If you either enjoy it or you don’t, and it has no beneficial effects on individuals or on society in general, why bother encouraging anyone to read? Why bother writing essays about it in the New Yorker? Tellingly, many scholars in the humanities began doubting the power of art to inspire greater humanity around the same time they began questioning the value and promise of scientific progress. Oatley writes,

Part of the devastation of World War II was the failure of German citizens, one of the world’s most highly educated populations, to prevent their nation’s slide into Nazism. George Steiner has famously asserted: “We know that a man can read Goethe or Rilke in the evening, that he can play Bach and Schubert, and go to his day’s work at Auschwitz in the morning.” (164)

Postwar literary theory and criticism has, perversely, tended toward the view that literature and language in general serve as a vessel for passing on all the evils inherent in our western, patriarchal, racist, imperialist culture. The purpose of literary analysis then becomes to sift out these elements and resist them. Unfortunately, such accusatory theories leave unanswered the question of why, if literature inculcates oppressive ideologies, we should bother reading it at all. As van Peer muses in the report Oatley cites, “The Inhumanity of the Humanities,”

Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)

           Oatley and van Peer point out, moreover, that the evidence for concentration camp workers having any degree of literary or aesthetic sophistication is nonexistent. According to the best available evidence, most of the greatest atrocities were committed by soldiers who never graduated high school. The suggestion that some type of cozy relationship existed between Nazism and an enthusiasm for Goethe runs afoul of recorded history. As Oatley points out,

Apart from propensity to violence, nationalism, and anti-Semitism, Nazism was marked by hostility to humanitarian values in education. From 1933 onwards, the Nazis replaced the idea of self-betterment through education and reading by practices designed to induce as many as possible into willing conformity, and to coerce the unwilling remainder by justified fear. (165)

Oatley also cites the work of historian Lynn Hunt, whose book Inventing Human Rights traces the original social movement for the recognition of universal human rights to the mid-1700s, when what we recognize today as novels were first being written. Other scholars like Steven Pinker have pointed out too that, while it’s hard not to dwell on tragedies like the Holocaust, even atrocities of that magnitude are resoundingly overmatched by the much larger post-Enlightenment trend toward peace, freedom, and the wider recognition of human rights. It’s sad that one of the lasting legacies of all the great catastrophes of the 20th Century is a tradition in humanities scholarship that has the people who are supposed to be the custodians of our literary heritage hell-bent on teaching us all the ways that literature makes us evil.

            Because Oatley is a central figure in what we can only hope is a movement to end the current reign of self-righteous insanity in literary studies, it pains me not to be able to recommend Such Stuff as Dreams to anyone but dedicated specialists. Oatley writes in the preface that he has “imagined the book as having some of the qualities of fiction. That is to say I have designed it to have a narrative flow” (x), and it may simply be that this suggestion set my expectations too high. But the book is poorly edited, the prose is bland and often rolls over itself into graceless tangles, and a couple of the chapters seem like little more than haphazardly collated reports of studies and theories, none exactly off-topic, none completely without interest, but all lacking any central progression or theme. The book often reads more like an annotated bibliography than a story. Oatley’s scholarly range is impressive, however, bearing not just on cognitive science and literature through the centuries but extending as well to the work of important literary theorists. The book is never unreadable, never opaque, but it’s not exactly a work of art in its own right.

            Insofar as Such Stuff as Dreams is organized around a central idea, it is that fiction ought to be thought of not as “a direct impression of life,” as Henry James suggests in his famous essay “The Art of Fiction,” and as many contemporary critics—notably James Wood—seem to think of it. Rather, Oatley agrees with Robert Louis Stevenson’s response to James’s essay, “A Humble Remonstrance,” in which he writes that

Life is monstrous, infinite, illogical, abrupt and poignant; a work of art in comparison is neat, finite, self-contained, rational, flowing, and emasculate. Life imposes by brute energy, like inarticulate thunder; art catches the ear, among the far louder noises of experience, like an air artificially made by a discreet musician. (qtd on pg 8)

Oatley theorizes that stories are simulations, much like dreams, that go beyond mere reflections of life to highlight through defamiliarization particular aspects of life, to cast them in a new light so as to deepen our understanding and experience of them. He writes,

Every true artistic expression, I think, is not just about the surface of things. It always has some aspect of the abstract. The issue is whether, by a change of perspective or by a making the familiar strange, by means of an artistically depicted world, we can see our everyday world in a deeper way. (15)

Critics of high-brow literature like Wood appreciate defamiliarization at the level of description; Oatley is suggesting here though that the story as a whole functions as a “metaphor-in-the-large” (17), a way of not just making us experience as strange some object or isolated feeling, but of reconceptualizing entire relationships, careers, encounters, biographies—what we recognize in fiction as plots. This is an important insight, and it topples verisimilitude from its ascendant position atop the hierarchy of literary values while rendering complaints about clichéd plots potentially moot. Didn’t Shakespeare recycle plots after all?

            The theory of fiction as a type of simulation to improve social skills and possibly to facilitate group cooperation is emerging as the frontrunner in attempts to explain narrative interest in the context of human evolution. It is to date, however, impossible to rule out the possibility that our interest in stories is not directly adaptive but instead emerges as a byproduct of other traits that confer more immediate biological advantages. The finding that readers track actions in stories with the same brain regions that activate when they witness similar actions in reality, or when they engage in them themselves, is important support for the simulation theory. But the function of mirror neurons isn’t well enough understood yet for us to determine from this study how much engagement with fictional stories depends on the reader's identifying with the protagonist. Oatley’s theory is more consonant with direct and straightforward identification. He writes,

A very basic emotional process engages the reader with plans and fortunes of a protagonist. This is what often drives the plot and, perhaps, keeps us turning the pages, or keeps us in our seat at the movies or at the theater. It can be enjoyable. In art we experience the emotion, but with it the possibility of something else, too. The way we see the world can change, and we ourselves can change. Art is not simply taking a ride on preoccupations and prejudices, using a schema that runs as usual. Art enables us to experience some emotions in contexts that we would not ordinarily encounter, and to think of ourselves in ways that usually we do not. (118)

Much of this change, Oatley suggests, comes from realizing that we too are capable of behaving in ways that we might not like. “I am capable of this too: selfishness, lack of sympathy” (193), is what he believes we think in response to witnessing good characters behave badly.

            Oatley’s theory has a lot to recommend it, but William Flesch’s theory of narrative interest, which suggests we don’t identify with fictional characters directly but rather track them and anxiously hope for them to get whatever we feel they deserve, seems much more plausible in the context of our response to protagonists behaving in surprisingly selfish or antisocial ways. When I see Ed Norton as Tyler Durden beating Angel Face half to death in Fight Club, for instance, I don’t think, hey, that’s me smashing that poor guy’s face with my fists. Instead, I think, what the hell are you doing? I had you pegged as a good guy. I know you’re trying not to be as much of a pushover as you used to be but this is getting scary. I’m anxious that Angel Face doesn’t get too damaged—partly because I imagine that would be devastating to Tyler. And I’m anxious lest this incident be a harbinger of worse behavior to come.

            The issue of identification is just one of several interesting questions that lend themselves to further research. Oatley and Mar’s studies are not enormous in terms of sample size, and their subjects were mostly young college students. What types of fiction work best to foster empathy? What types of reading strategies might we encourage students to apply to reading literature—apart from trying to remove obstacles to emotional connections with characters? But, aside from the Big-Bad-Western Empire myth that currently has humanities scholars grooming successive generations of deluded ideologues to be little more than culture vultures presiding over the creation and celebration of Loser Lit, the other main challenge to moving literary theory onto firmer empirical ground is the assumption that the arts in general and literature in particular demand a wholly different type of thinking to create and appreciate than the type that goes into the intricate mechanics and intensely disciplined practices of science.

As Oatley and the Toronto team have shown, people who enjoy fiction tend to have the opposite of autism. And people who do science are, well, Sheldon. Interestingly, though, the writers of The Big Bang Theory, for whatever reason, included some contraindications for a diagnosis of autism or Asperger’s in Sheldon’s character. Like the other scientists in the show, he’s obsessed with comic books, which require at least some understanding of facial expression and body language to follow. As Simon Baron-Cohen, the autism researcher who designed the Mind-in-the-Eyes test, explains, “Autism is an empathy disorder: those with autism have major difficulties in 'mindreading' or putting themselves into someone else’s shoes, imagining the world through someone else’s feelings” (137). Baron-Cohen has coined the term “mindblindness” to describe the central feature of the disorder, and many have posited that the underlying cause is abnormal development of the brain regions devoted to perspective taking and understanding others, what cognitive psychologists refer to as our Theory of Mind.

            To follow comic book plotlines, Sheldon would have to make ample use of his own Theory of Mind. He’s also given to absorption in various science fiction shows on TV. If he were only interested in futuristic gadgets, as an autistic would be, he could just as easily get more scientifically plausible versions of them in any number of nonfiction venues. By Baron-Cohen’s definition, Sherlock Holmes can’t possibly have Asperger’s either because his ability to get into other people’s heads is vastly superior to pretty much everyone else’s. As he explains in “The Musgrave Ritual,”

You know my methods in such cases, Watson: I put myself in the man’s place, and having first gauged his intelligence, I try to imagine how I should myself have proceeded under the same circumstances.

            What about Darwin, though, that demigod of science who openly professed to being nauseated by Shakespeare? Isn’t he a prime candidate for entry into the surprisingly unpopulated ranks of heartless, data-crunching scientists whose thinking lends itself so conveniently to cooptation by oppressors and committers of wartime atrocities? It turns out that though Darwin held many of the same racist views as nearly all educated men of his time, his ability to empathize across racial and class divides was extraordinary. Darwin was not himself a Social Darwinist; Social Darwinism was a theory devised by Herbert Spencer to justify inequality, and it still has currency today among political conservatives. And Darwin was also a passionate abolitionist, as is clear in the following excerpts from The Voyage of the Beagle:

On the 19th of August we finally left the shores of Brazil. I thank God, I shall never again visit a slave-country. To this day, if I hear a distant scream, it recalls with painful vividness my feelings, when passing a house near Pernambuco, I heard the most pitiable moans, and could not but suspect that some poor slave was being tortured, yet knew that I was as powerless as a child even to remonstrate.

Darwin is responding to cruelty in a way no one around him at the time would have. And note how deeply it pains him, how profound and keenly felt his sympathy is.

I was present when a kind-hearted man was on the point of separating forever the men, women, and little children of a large number of families who had long lived together. I will not even allude to the many heart-sickening atrocities which I authentically heard of;—nor would I have mentioned the above revolting details, had I not met with several people, so blinded by the constitutional gaiety of the negro as to speak of slavery as a tolerable evil.

            The question arises, not whether Darwin had sacrificed his humanity to science, but why he had so much more humanity than many other intellectuals of his day.

It is often attempted to palliate slavery by comparing the state of slaves with our poorer countrymen: if the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin; but how this bears on slavery, I cannot see; as well might the use of the thumb-screw be defended in one land, by showing that men in another land suffered from some dreadful disease.

And finally we come to the matter of Darwin’s Theory of Mind, which was quite clearly in no way deficient.

Those who look tenderly at the slave owner, and with a cold heart at the slave, never seem to put themselves into the position of the latter;—what a cheerless prospect, with not even a hope of change! picture to yourself the chance, ever hanging over you, of your wife and your little children—those objects which nature urges even the slave to call his own—being torn from you and sold like beasts to the first bidder! And these deeds are done and palliated by men who profess to love their neighbours as themselves, who believe in God, and pray that His Will be done on earth! It makes one's blood boil, yet heart tremble, to think that we Englishmen and our American descendants, with their boastful cry of liberty, have been and are so guilty; but it is a consolation to reflect, that we at least have made a greater sacrifice than ever made by any nation, to expiate our sin. (530-31)

            I suspect that Darwin’s distaste for Shakespeare was born of oversensitivity. He doesn’t say music failed to move him; he says he didn’t like it because it made him think “too energetically.” And as aesthetically pleasing as Shakespeare is, existentially speaking, his plays tend to be pretty harsh, even the comedies. When Prospero says, "We are such stuff / as dreams are made on" in Act 4 of The Tempest, he's actually talking not about characters in stories, but about how ephemeral and insignificant real human lives are. But why, beyond some likely nudge from his inherited temperament, was Darwin so sensitive? Why was he so empathetic even to those so vastly different from him? After admitting he’d lost his taste for Shakespeare, paintings, and music, he goes on to say,

On the other hand, novels which are works of the imagination, though not of a very high order, have been for years a wonderful relief and pleasure to me, and I often bless all novelists. A surprising number have been read aloud to me, and I like all if moderately good, and if they do not end unhappily—against which a law ought to be passed. A novel, according to my taste, does not come into the first class unless it contains some person whom one can thoroughly love, and if a pretty woman all the better.

Also read

STORIES, SOCIAL PROOF, & OUR TWO SELVES

And:

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

[Check out the Toronto group's blog at onfiction.ca]

Read More
Dennis Junk Dennis Junk

Percy Fawcett’s 2 Lost Cities

David Grann’s “The Lost City of Z,” about Percy Fawcett’s expeditions to find a legendary pre-Columbian city, is an absolute joy to read. But it raises questions about what it is we hope our favorite explorers find in the regrown jungles.

In his surprisingly profound, insanely fun book The Lost City of Z: A Tale of Deadly Obsession in the Amazon, David Grann writes about his visit to a store catering to outdoorspeople in preparation for his trip to research, and to some degree retrace, the last expedition of renowned explorer Percy Fawcett. Grann, a consummate New Yorker, confesses he’s not at all the outdoors type, but once he’s on the trail of a story he does manifest a certain few traits in common with adventurers like Fawcett. Wandering around the store after having been immersed in the storied history of the Royal Geographical Society, Grann observes,

Racks held magazines like Hooked on the Outdoors and Backpacker: The Outdoors at Your Doorstep, which had articles titled “Survive a Bear Attack!” and “America’s Last Wild Places: 31 Ways to Find Solitude, Adventure—and Yourself.” Wherever I turned, there were customers, or “gear heads.” It was as if the fewer opportunities for genuine exploration, the greater the means were for anyone to attempt it, and the more baroque the ways—bungee cording, snowboarding—that people found to replicate the sensation. Exploration, however, no longer seemed aimed at some outward discovery; rather, it was directed inward, to what guidebooks and brochures called “camping and wilderness therapy” and “personal growth through adventure.” (76)

Why do people feel such a powerful attraction to wilderness? And has there really been a shift from outward to inward discovery at the heart of our longings to step away from the paved roads and noisy bustle of civilization? As the element of the extreme makes clear, part of the pull comes from the thrill of facing dangers of one sort or another. But can people really be wired in such a way that many of them are willing to risk dying for the sake of a brief moment of accelerated heart-rate and a story they can lovingly exaggerate into their old age?

The catalogue of dangers Fawcett and his companions routinely encountered in the Amazon is difficult to read about without experiencing a viscerally unsettling glimmer of the sensations associated with each affliction. The biologist James Murray, who had accompanied Ernest Shackleton on his mission to Antarctica in 1907, joined Fawcett’s team for one of its journeys into the South American jungle four years later. This very different type of exploration didn’t turn out nearly as well for him. One of Fawcett’s sturdiest companions, Henry Costin, contracted malaria on that particular expedition and became delirious with the fever. “Murray, meanwhile,” Grann writes,

seemed to be literally coming apart. One of his fingers grew inflamed after brushing against a poisonous plant. Then the nail slid off, as if someone had removed it with pliers. Then his right hand developed, as he put it, a “very sick, deep suppurating wound,” which made it “agony” even to pitch his hammock. Then he was stricken with diarrhea. Then he woke up to find what looked like worms in his knee and arm. He peered closer. They were maggots growing inside him. He counted fifty around his elbow alone. “Very painful now and again when they move,” Murray wrote. (135)

The thick clouds of mosquitoes leave every traveler pocked and swollen and nearly all of them get sick sooner or later. On these journeys, according to Fawcett, “the healthy person was regarded as a freak, an exception, extraordinary” (100). This observation was somewhat boastful; Fawcett himself remained blessedly immune to contagion throughout most of his career as an explorer.

Hammocks are required at night to avoid poisonous or pestilence-carrying ants. Pit vipers abound. The men had to sleep with nets draped over them to ward off the incessantly swarming insects. Fawcett and his team even fell prey to vampire bats. “We awoke to find our hammocks saturated with blood,” he wrote, “for any part of our persons touching the mosquito-nets or protruding beyond them were attacked by the loathsome animals” (127). Such wounds, they knew, could spell their doom the next time they waded into the water of the Amazon. “When bathing,” Grann writes, “Fawcett nervously checked his body for boils and cuts. The first time he swam across the river, he said, ‘there was an unpleasant sinking feeling in the pit of my stomach.’ In addition to piranhas, he dreaded candirus and electric eels, or puraques” (91).

Candirus are tiny catfish notorious for squirming their way up human orifices like the urethra, where they remain lodged to parasitize the bloodstream (although this tendency of theirs turns out to be a myth). But piranhas and eels aren’t even the most menacing monsters in the Amazon. As Grann writes,

One day Fawcett spied something along the edge of the sluggish river. At first it looked like a fallen tree, but it began undulating toward the canoes. It was bigger than an electric eel, and when Fawcett’s companions saw it they screamed. Fawcett lifted his rifle and fired at the object until smoke filled the air. When the creature ceased to move, the men pulled a canoe alongside it. It was an anaconda. In his reports to the Royal Geographical Society, Fawcett insisted that it was longer than sixty feet. (92)

This was likely an exaggeration, since the longest documented anaconda measured just under 27 feet—and yet the men considered their mission a scientific one and would’ve striven for objectivity. Fawcett even unsheathed his knife to slice off a piece of the snake’s flesh for a specimen jar, but as he broke the skin it jolted back to life and made a lunge at the men in the canoe, who panicked and pulled desperately at the oars. Fawcett couldn’t convince his men to return for another attempt.

Though Fawcett had always been fascinated by stories of hidden treasures and forgotten civilizations, the ostensible purpose of his first trip into the Amazon Basin was a surveying mission. As an impartial member of the British Royal Geographical Society, he’d been commissioned by the Bolivian and Brazilian governments to map out their borders so they could avoid a land dispute. But over time another purpose began to consume Fawcett. “Inexplicably,” he wrote, “amazingly—I knew I loved that hell. Its fiendish grasp had captured me, and I wanted to see it again” (116). In 1911, the archeologist Hiram Bingham, with the help of local guides, discovered the colossal ruins of Machu Picchu high in the Peruvian Andes. News of the discovery “fired Fawcett’s imagination” (168), according to Grann, and he began cobbling together evidence he’d come across in the form of pottery shards and local folk histories into a theory about a lost civilization deep in the Amazon, in what many believed to be a “counterfeit paradise,” a lush forest that seemed abundantly capable of sustaining intense agriculture but in reality could only support humans who lived in sparsely scattered tribes.

Percy Harrison Fawcett’s character was in many ways an embodiment of some of the most paradoxical currents of his age. A white explorer determined to conquer unmapped regions, he was nonetheless appalled by his fellow Englishmen’s treatment of indigenous peoples in South America. At the time, rubber was for the Amazon what ivory was for the Belgian Congo, what oil is today in the Middle East, and what diamonds are in many parts of central and western Africa. When the Peruvian Amazon Company, a rubber outfit whose shares were sold on the London Stock Exchange, attempted to enslave Indians for cheap labor, it led to violent resistance, which culminated in widespread torture and massacre.

Sir Roger Casement, a British consul general who conducted an investigation of the PAC’s practices, determined that this one rubber company alone was responsible for the deaths of thirty thousand Indians. Grann writes,

Long before the Casement report became public, in 1912, Fawcett denounced the atrocities in British newspaper editorials and in meetings with government officials. He once called the slave traders “savages” and “scum.” Moreover, he knew that the rubber boom had made his own mission exceedingly more difficult and dangerous. Even previously friendly tribes were now hostile to foreigners. Fawcett was told of one party of eighty men in which “so many of them were killed with poisoned arrows that the rest abandoned the trip and retired”; other travelers were found buried up to their waists and left to be eaten alive by fire ants, maggots, and bees. (90)

Fawcett, despite the ever looming threat of attack, was equally appalled by many of his fellow explorers’ readiness to resort to shooting at Indians who approached them in a threatening manner. He had much more sympathy for the Indian Protection Service, whose motto was, “Die if you must, but never kill” (163), but he prided himself on being able to come up with clever ways to entice tribesmen to let his teams pass through their territories without violence. Once, when arrows started raining down on his team’s canoes from the banks, he ordered his men not to flee and instead had one of them start playing his accordion while the rest of them sang to the tune—and it actually worked (148).

But Fawcett was no softy. He was notorious for pushing ahead at a breakneck pace and showing nothing but contempt for members of his own team who couldn’t keep up owing to a lack of conditioning or fell behind owing to sickness. James Murray, the veteran of Shackleton’s Antarctic expedition whose flesh had become infested with maggots, experienced Fawcett’s monomania for maintaining progress firsthand. “This calm admission of the willingness to abandon me,” Murray wrote, “was a queer thing to hear from an Englishman, though it did not surprise me, as I had gauged his character long before” (137). Eventually, Fawcett did put his journey on hold to search out a settlement where they might find help for the dying man. When they came across a frontiersman with a mule, they got him to agree to carry Murray out of the jungle, allowing the rest of the team to continue with their expedition. To everyone’s surprise, Murray, after disappearing for a while, turned up alive—and furious. “Murray accused Fawcett of all but trying to murder him,” Grann writes, “and was incensed that Fawcett had insinuated that he was a coward” (139).

The theory of a lost civilization crystallized in the explorer’s mind when, while rummaging through old records at the National Library of Brazil, he found a document written by a Portuguese bandeirante—a soldier of fortune—describing “a large, hidden, and very ancient city… discovered in the year 1753” (180). As Grann explains,

Fawcett narrowed down the location. He was sure that he had found proof of archaeological remains, including causeways and pottery, scattered throughout the Amazon. He even believed that there was more than a single ancient city—the one the bandeirante described was most likely, given the terrain, near the eastern Brazilian state of Bahia. But Fawcett, consulting archival records and interviewing tribesmen, had calculated that a monumental city, along with possibly even remnants of its population, was in the jungle surrounding the Xingu River in the Brazilian Mato Grosso. In keeping with his secretive nature, he gave the city a cryptic and alluring name, one that, in all his writings and interviews, he never explained. He called it simply Z. (182)

Fawcett was planning a mission for the specific purpose of finding Z when he was called by the Royal Geographical Society to serve in the First World War. The case for Z had been up till that point mostly based on scientific curiosity, though there was naturally a bit of the Indiana Jones dyad—“fortune and glory”—sharpening his already keen interest. Ever since Hernan Cortes marched into the Aztec city of Tenochtitlan in 1519, and Francisco Pizarro conquered Cuzco, the capital of the Inca Empire, fourteen years later, there had been rumors of a city overflowing with gold called El Dorado, literally “the gilded man,” after an account by the sixteenth century chronicler Gonzalo Fernandez de Oviedo of a king who covered his body every day in gold dust only to wash it away again at night (169-170). It’s impossible to tell how many thousands of men died while searching for that particular lost city.

Fawcett, however, when faced with the atrocities of industrial-scale war, began to imbue Z with an altogether different sort of meaning. As a young man, he and his older brother Edmund had been introduced to Buddhism and the occult by a controversial figure named Helena Petrovna Blavatsky. To her followers, she was simply Madame Blavatsky. “For a moment during the late nineteenth century,” Grann writes, “Blavatsky, who claimed to be psychic, seemed on the threshold of founding a lasting religious movement” (46). It was called theosophy—“wisdom of the gods.” “In the past, Fawcett’s interest in the occult had been largely an expression of his youthful rebellion and scientific curiosity,” Grann explains, “and had contributed to his willingness to defy the prevailing orthodoxies of his own society and to respect tribal legends and religions.” In the wake of horrors like the Battle of the Somme, though, he started taking otherworldly concerns much more seriously. According to Grann, at this point,

his approach was untethered from his rigorous RGS training and acute powers of observation. He imbibed Madame Blavatsky’s most outlandish teachings about Hyperboreans and astral bodies and Lords of the Dark Face and keys to unlocking the universe—the Other World seemingly more tantalizing than the present one. (190)

It was even rumored that Fawcett was basing some of his battlefield tactics on his use of a Ouija board.

Brian Fawcett, Percy’s son and the compiler of his diaries and letters into the popular volume Expedition Fawcett, began considering the implications of his father’s shift away from science years after Percy and Brian’s older brother Jack had failed to return from the last mission in search of Z. Grann writes,

Brian started questioning some of the strange papers that he had found among his father’s collection, and never divulged. Originally, Fawcett had described Z in strictly scientific terms and with caution: “I do not assume that ‘The City’ is either large or rich.” But by 1924 Fawcett had filled his papers with reams of delirious writings about the end of the world and about a mystical Atlantean kingdom, which resembled the Garden of Eden. Z was transformed into “the cradle of all civilizations” and the center of one of Blavatsky’s “White Lodges,” where a group of higher spiritual beings help to direct the fate of the universe. Fawcett hoped to discover a White Lodge that had been there since “the time of Atlantis,” and to attain transcendence. Brian wrote in his diary, “Was Daddy’s whole conception of ‘Z,’ a spiritual objective, and the manner of reaching it a religious allegory?” (299)

Grann suggests that the success of Blavatsky and others like her was a response to the growing influence of science and industrialization. “The rise of science in the nineteenth century had had a paradoxical effect,” he writes:

while it undermined faith in Christianity and the literal word of the Bible, it also created an enormous void for someone to explain the mysteries of the universe that lay beyond microbes and evolution and capitalist greed… The new powers of science to harness invisible forces often made these beliefs seem more credible, not less. If phonographs could capture human voices, and if telegraphs could send messages from one continent to the other, then couldn’t science eventually peel back the Other World? (47)

Even Arthur Conan Doyle, who was a close friend of Fawcett and whose book The Lost World was inspired by Fawcett’s accounts of his expeditions in the Amazon, was an ardent supporter of investigations into the occult. Grann quotes him as saying, “I suppose I am Sherlock Holmes, if anybody is, and I say that the case for spiritualism is absolutely proved” (48).

But pseudoscience—equal parts fraud and self-delusion—was at least a century old by the time H.P. Blavatsky began peddling it, and, tragically, ominously, it’s alive and well today. In the 1780s, electricity and magnetism were the invisible forces whose nature was being brought to light by science. The German physician Franz Anton Mesmer, from whom we get the term “mesmerize,” took advantage of these discoveries by positing a force called “animal magnetism” that runs through the bodies of all living things. Mesmer spent most of the decade in Paris, and in 1784 King Louis XVI was persuaded to appoint a committee to investigate Mesmer’s claims. One of the committee members, Benjamin Franklin, you’ll recall, knew something about electricity. Mesmer in fact liked to use one of Franklin’s own inventions, the glass harmonica (not that type of harmonica), as a prop for his dramatic demonstrations. The chemist and pioneer of science Antoine Lavoisier was the lead investigator, though. (Ten years after serving on the committee, Lavoisier would fall victim to the invention of yet another member, Dr. Guillotin.)

Mesmer claimed that illnesses were caused by blockages in the flow of animal magnetism through the body, and he carried around a stack of printed testimonials on the effectiveness of his cures. If the idea of energy blockages as the cause of sickness sounds familiar to you, so too will Mesmer’s methods for clearing them. He, or one of his “adepts,” would establish some kind of physical contact so they could find the body’s magnetic poles. It usually involved prolonged eye contact and would eventually lead to a “crisis,” which meant the subject would fall back and begin to shake all over until she (they were predominantly women) lost consciousness. If you’ve seen scenes of faith healers in action, you have the general idea. After several exposures to this magnetic treatment, each culminating in a crisis, the patient’s suffering would supposedly abate and the mesmerist would chalk up another cure. Tellingly, when Mesmer caught wind of some of the experimental methods the committee planned to use, he refused to participate. But then a man named Charles Deslon, one of Mesmer’s chief disciples, stepped up.

The list of ways Lavoisier devised to test the effectiveness of Deslon’s ministrations is long and amusing. At one point, he blindfolded a woman Deslon had treated before, telling her she was being magnetized right then and there, even though Deslon wasn’t even in the room. The suggestion alone was nonetheless sufficient to induce a classic crisis. In another experiment, the men replaced a door in Franklin’s house with a paper partition and had a seamstress who was supposed to be especially sensitive to magnetic effects sit in a chair with its back against the paper. For half an hour, an adept on the other side of the partition attempted to magnetize her through the paper, but all the while she just kept chatting amiably with the gentlemen in the room. When the adept finally revealed himself, though, he was able to induce a crisis in her immediately. The ideas of animal magnetism and magnetic cures were declared a total sham.

Lafayette, who brought French reinforcements to the Americans in the early 1780s, hadn’t heard about the debunking and tried to introduce the practice of mesmerism to the newly born country. But another prominent student of the Enlightenment, Thomas Jefferson, would have none of it.

Madame Blavatsky was cagey enough never to let the supernatural abilities she claimed be put to the test. But around the same time Fawcett was exploring the Amazon, another of Conan Doyle’s close friends, the magician and escape artist Harry Houdini, was busy conducting explorations of his own into the realm of spirits. They began in 1913 when Houdini’s mother died and, grief-stricken, he turned to mediums in an effort to reconnect with her. What happened instead was that, one after another, he caught out every medium in some type of trickery and found he was able to explain the deceptions behind all the supposedly supernatural occurrences of the séances he attended. Seeing the spiritualists as fraudsters exploiting the pain of their marks, Houdini became enraged. He ended up attending hundreds of séances, usually disguised as an old lady, and as soon as he caught the medium performing some type of trickery he would stand up, remove the disguise, and proclaim, “I am Houdini, and you are a fraud.”

Houdini went on to write an exposé, A Magician among the Spirits, and he liked to incorporate common elements of séances into his stage shows to demonstrate how easy they were for a good magician to recreate. In 1922, two years before Fawcett disappeared with his son Jack while searching for Z, Scientific American magazine asked Houdini to serve on a committee to further investigate the claims of spiritualists. The magazine even offered a cash prize to anyone who could meet some basic standards of evidence to establish the validity of their claims. The prize went unclaimed. After Houdini declared one of Conan Doyle's favorite mediums a fraud, the two men had a bitter falling out, the latter declaring the former an enemy of his cause. (Conan Doyle was convinced Houdini himself must've had supernatural powers and was inadvertently using them to sabotage the mediums.) The James Randi Educational Foundation, whose founder also began as a magician but then became an investigator of paranormal claims, currently offers a considerably larger cash prize (a million dollars) to anyone who can pass some well-designed test and prove they have psychic powers. To date, a thousand applicants have tried to win the prize, but none have made it through preliminary testing.

So Percy Fawcett was searching, it seems, for two very different cities; one was based on evidence of a pre-Columbian society and the other was a product of his spiritual longing. Grann writes about a businessman who insists Fawcett disappeared because he actually reached this second version of Z, where he transformed into some kind of pure energy, just as James Redfield suggests happened to the entire Mayan civilization in his New Age novel The Celestine Prophecy. Apparently, you can take pilgrimages to a cave where Fawcett found this portal to the Other World. The website titled “The Great Web of Percy Harrison Fawcett” enjoins visitors: “Follow your destiny to Ibez where Colonel Fawcett lives an everlasting life.”

           Today’s spiritualists and pseudoscientists rely more heavily on deliberately distorted and woefully dishonest references to quantum physics than they do on magnetism. But the differences are only superficial. The fundamental shift that occurred with the advent of science was that ideas could now be divided—some with more certainty than others—into two categories: those supported by sound methods and a steadfast devotion to following the evidence wherever it leads and those that emerge more from vague intuitions and wishful thinking. No sooner had science begun to resemble what it is today than people started trying to smuggle their favorite superstitions across the divide.

Not much separates New Age thinking from spiritualism—or either of them from long-established religion. They all speak to universal and timeless human desires. Following the evidence wherever it leads often means having to reconcile yourself to hard truths. As Carl Sagan writes in his indispensable paean to scientific thinking, The Demon-Haunted World,

Pseudoscience speaks to powerful emotional needs that science often leaves unfulfilled. It caters to fantasies about personal powers we lack and long for… In some of its manifestations, it offers satisfaction for spiritual hungers, cures for disease, promises that death is not the end. It reassures us of our cosmic centrality and importance… At the heart of some pseudoscience (and some religion also, New Age and Old) is the idea that wishing makes it so. How satisfying it would be, as in folklore and children’s stories, to fulfill our heart’s desire just by wishing. How seductive this notion is, especially when compared with the hard work and good luck usually required to achieve our hopes. (14)

As the website for one of the most recent New Age sensations, The Secret, explains, “The Secret teaches us that we create our lives with every thought every minute of every day.” (It might be fun to compare The Secret to Madame Blavatsky’s magnum opus The Secret Doctrine—but not my kind of fun.)

That spiritualism and pseudoscience satisfy emotional longings raises the question: what’s the harm in entertaining them? Isn’t it a little cruel for skeptics like Lavoisier, Houdini, and Randi to go around taking the wrecking ball to people’s beliefs, which they presumably depend on for consolation, meaning, and hope? Indeed, even the wildfire spread of credulity, charlatanry, and consumerist epistemology—whereby you’re encouraged to believe whatever makes you look and feel best—is no justification for hostility toward believers. The hucksters, self-deluded or otherwise, who profit from promulgating nonsense do, however, deserve, in my opinion, to be very publicly humiliated. Sagan points out too that when we simply keep quiet in response to other people making proclamations we know to be absurd, “we abet a general climate in which skepticism is considered impolite, science tiresome, and rigorous thinking somehow stuffy and inappropriate” (298). In such a climate,

Spurious accounts that snare the gullible are readily available. Skeptical treatments are much harder to find. Skepticism does not sell well. A bright and curious person who relies entirely on popular culture to be informed about something like Atlantis is hundreds or thousands of times more likely to come upon a fable treated uncritically than a sober and balanced assessment. (5)

Consumerist epistemology is also the reason why creationism and climate change denialism are immune to refutation—and it is likely responsible for the difficulty we face in trying to bridge the political divide. No one can decide what should constitute evidence when everyone is following some inner intuitive light to the truth. On a more personal scale, you forfeit any chance you have at genuine discovery—either outward or inward—when you drastically lower the bar for acceptable truths to make sure all the things you really want to be true can easily clear it.

On the other hand, there are also plenty of people out there given to rolling their eyes anytime they’re informed of strangers’ astrological signs moments after meeting them (the last woman I met is a Libra). It’s not just skeptics and trained scientists who sense something flimsy and immature in the characters of New Agers and the trippy hippies. That’s probably why people are so eager to take on burdens and experience hardship in the name of their beliefs. That’s probably at least part of the reason too why people risk their lives exploring jungles and wildernesses. If a dude in a tie-dye shirt says he discovered some secret, sacred truth while tripping on acid, you’re not going to take him anywhere near as seriously as you do people like Joseph Conrad, who journeyed into the heart of darkness, or Percy Fawcett, who braved the deadly Amazon in search of ancient wisdom.

The story of the Fawcett mission undertaken in the name of exploration and scientific progress actually has a happy ending—one you don’t have to be a crackpot or a dupe to appreciate. Fawcett himself may not have had the benefit of modern imaging and surveying tools, but he was also probably too distracted by fantasies of White Lodges to see much of the evidence at his feet. David Grann made a final stop on his own Amazon journey to seek out the Kuikuro Indians and the archeologist who was staying with them, Michael Heckenberger. Grann writes,

Altogether, he had uncovered twenty pre-Columbian settlements in the Xingu, which had been occupied roughly between A.D. 800 and A.D. 1600. The settlements were about two to three miles apart and were connected by roads. More astounding, the plazas were laid out along cardinal points, from east to west, and the roads were positioned at the same geometric angles. (Fawcett said that Indians had told him legends that described “many streets set at right angles to one another.”) (313)

These were the types of settlements Fawcett had discovered real evidence for. They probably wouldn’t have been of much interest to spiritualists, but their importance to the fields of archeology and anthropology is immense. Grann records from his interview:

“Anthropologists,” Heckenberger said, “made the mistake of coming into the Amazon in the twentieth century and seeing only small tribes and saying, ‘Well, that’s all there is.’ The problem is that, by then, many Indian populations had already been wiped out by what was essentially a holocaust from European contact. That’s why the first Europeans in the Amazon described such massive settlements that, later, no one could ever find.” (317)

Carl Sagan describes a “soaring sense of wonder” as a key ingredient of both good science and bad. Pseudoscience triggers our wonder switches with heedless abandon. But every once in a while, findings backed up by solid evidence are just as satisfying. “For a thousand years,” Heckenberger explains to Grann,

the Xinguanos had maintained artistic and cultural traditions from this highly advanced, highly structured civilization. He said, for instance, that the present-day Kuikuro village was still organized along the east and west cardinal points and its paths were aligned at right angles, though its residents no longer knew why this was the preferred pattern. Heckenberger added that he had taken a piece of pottery from the ruins and shown it to a local maker of ceramics. It was so similar to present-day pottery, with its painted exterior and reddish clay, that the potter insisted it had been made recently…. “To tell you the honest-to-God truth, I don’t think there is anywhere in the world where there isn’t written history where the continuity is so clear as right here,” Heckenberger said. (318)

[The PBS series "Secrets of the Dead" devoted a show to Fawcett.]

Also read

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And:

The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini

Read More
Dennis Junk Dennis Junk

The Storytelling Animal: a Light Read with Weighty Implications

The Storytelling Animal is not groundbreaking. But the style of the book contributes something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams, through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be. The effect is that we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe.

A review of Jonathan Gottschall's The Storytelling Animal: How Stories Make Us Human

Vivian Paley, like many other preschool and kindergarten teachers in the 1970s, was disturbed by how her young charges always separated themselves by gender at playtime. She was further disturbed by how closely the play of each gender group hewed to the old stereotypes about girls and boys. Unlike most other teachers, though, Paley tried to do something about it. Her 1984 book Boys and Girls: Superheroes in the Doll Corner demonstrates in microcosm how quixotic social reforms inspired by the assumption that all behaviors are shaped solely by upbringing and culture can be. Eventually, Paley realized that it wasn’t the children who needed to learn new ways of thinking and behaving, but herself. What happened in her classrooms in the late 70s, developmental psychologists have reliably determined, is the same thing that happens when you put kids together anywhere in the world. As Jonathan Gottschall explains,

Dozens of studies across five decades and a multitude of cultures have found essentially what Paley found in her Midwestern classroom: boys and girls spontaneously segregate themselves by sex; boys engage in more rough-and-tumble play; fantasy play is more frequent in girls, more sophisticated, and more focused on pretend parenting; boys are generally more aggressive and less nurturing than girls, with the differences being present and measurable by the seventeenth month of life. (39)

Paley’s study is one of several you probably wouldn’t expect to find discussed in a book about our human fascination with storytelling. But, as Gottschall makes clear in The Storytelling Animal: How Stories Make Us Human, there really aren’t many areas of human existence that aren’t relevant to a discussion of the role stories play in our lives. Those rowdy boys in Paley’s classes were playing recognizable characters from current action and sci-fi movies, and the fantasies of the girls were right out of Grimm’s fairy tales (it’s easy to see why people might assume these cultural staples were to blame for the sex differences). And the play itself was structured around one of the key ingredients—really the key ingredient—of any compelling story, trouble, whether in the form of invading pirates or people trying to poison babies.

The Storytelling Animal is the book to start with if you have yet to cut your teeth on any of the other recent efforts to bring the study of narrative into the realm of cognitive and evolutionary psychology. Gottschall covers many of the central themes of this burgeoning field without getting into the weedier territories of game theory or selection at multiple levels. While readers accustomed to more technical works may balk at wading through all the author’s anecdotes about his daughters, Gottschall’s keen sense of measure and the light touch of his prose keep the book from getting bogged down in frivolousness. This applies as well to the sections in which he succumbs to the temptation any writer faces when trying to explain one or another aspect of storytelling by making a few forays into penning abortive, experimental plots of his own.

None of the central theses of The Storytelling Animal is groundbreaking. But the style and layout of the book contribute something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion, the way most science books do. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams—which contra Freud are seldom centered on wish-fulfillment—through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be (or actually is, if you’ve read any of D.F. Wallace’s last novel, about an IRS clerk). The effect is that instead of simply having a new idea to toss around we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe. And we appreciate just how integral story is to almost everything we do.

This gloss of Gottschall’s approach gives a sense of what is truly original about The Storytelling Animal—it doesn’t seal off narrative as discrete from other features of human existence but rather shows how stories permeate every aspect of our lives, from our dreams to our plans for the future, even our sense of our own identity. In a chapter titled “Life Stories,” Gottschall writes,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all of the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television, while they eat pork rinds dipped in Miracle Whip. (171)

If you find this observation a tad unsettling, imagine it situated on a page underneath a mug shot of John Wayne Gacy with a caption explaining how he thought of himself “more as a victim than as a perpetrator.” For the most part, though, stories follow an easily identifiable moral logic, which Gottschall demonstrates with a short plot of his own based on the hypothetical situations Jonathan Haidt designed to induce moral dumbfounding. This almost inviolable moral underpinning of narratives suggests to Gottschall that one of the functions of stories is to encourage a sense of shared values and concern for the wider community, a role similar to the one D.S. Wilson sees religion as having played, and continuing to play, in human evolution.

Though Gottschall stays away from the inside baseball stuff for the most part, he does come down firmly on one issue in opposition to at least one of the leading lights of the field. Gottschall imagines a future “exodus” from the real world into virtual story realms that are much closer to the holodecks of Star Trek than to current World of Warcraft interfaces. The assumption here is that people’s emotional involvement with stories results from audience members imagining themselves to be the protagonist. But interactive videogames are probably much closer to actual wish-fulfillment than the more passive approaches to attending to a story—hence the god-like powers and grandiose speechifying.

William Flesch challenges the identification theory in his own (much more technical) book Comeuppance. He points out that films that have experimented with a first-person approach to camera work failed to capture audiences (think of the complicated contraption that filmed Will Smith’s face as he was running from the zombies in I Am Legend). Flesch writes, “If I imagined I were a character, I could not see her face; thus seeing her face means I must have a perspective on her that prevents perfect (naïve) identification” (16). One of the ways we sympathize with other people, though, is to mirror them—to feel, at least to some degree, their pain. That makes the issue a complicated one. Flesch believes our emotional involvement comes not from identification but from a desire to see virtuous characters come through the troubles of the plot unharmed, vindicated, maybe even rewarded. Attending to a story therefore entails tracking characters' interactions to see if they are in fact virtuous, then hoping desperately to see their virtue rewarded.

Gottschall does his best to avoid dismissing the typical obsessive Larper (live-action role player) as the “stereotypical Dungeons and Dragons player” who “is a pimply, introverted boy who isn’t cool and can’t play sports or attract girls” (190). And he does his best to end his book on an optimistic note. But the exodus he writes about may be an example of another phenomenon he discusses. First the optimism:

Humans evolved to crave story. This craving has, on the whole, been a good thing for us. Stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures. Stories have been a great boon to our species. (197)

But he then makes an analogy with food cravings, which likewise evolved to serve a beneficial function yet in the modern world are wreaking havoc with our health. Just as there is junk food, so there is such a thing as “junk story,” possibly leading to what Brian Boyd, another luminary in evolutionary criticism, calls a “mental diabetes epidemic” (198). In the context of America’s current education woes, and with how easy it is to conjure images of glassy-eyed zombie students, the idea that video games and shows like Jersey Shore are “the story equivalent of deep-fried Twinkies” (197) makes an unnerving amount of sense.

Here, as in the section on how our personal histories are more fictionalized rewritings than accurate recordings, Gottschall manages to achieve something the playful tone and off-handed experimentation don't prepare you for. The surprising accomplishment of this unassuming little book (200 pages) is that it never stops being a light read even as it takes on discoveries with extremely weighty implications. The temptation to eat deep-fried Twinkies is only going to get more powerful as story-delivery systems become more technologically advanced. Might we have already begun the zombie apocalypse without anyone noticing—and, if so, are there already heroes working to save us whom we won’t recognize until long after the struggle has ended and we’ve begun weaving its history into a workable narrative, a legend?

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

Read More