Napoleon Chagnon's Crucible and the Ongoing Epidemic of Moralizing Hysteria in Academia

    When Arthur Miller adapted the script of The Crucible, his play about the Salem Witch Trials originally written in 1953, for the 1996 film version, he enjoyed additional freedom to work with the up-close visual dimensions of the tragedy. In one added scene, the elderly and frail George Jacobs, whom we first saw lifting one of his two walking sticks to wave an unsteady greeting to a neighbor, sits before a row of assembled judges as the young Ruth Putnam stands accusing him of assaulting her. The girl, ostensibly shaken from the encounter and frightened lest some further terror ensue, dramatically recounts her ordeal, saying,

He come through my window and then he lay down upon me. I could not take breath. His body crush heavy upon me, and he say in my ear, “Ruth Putnam, I will have your life if you testify against me in court.”

This quote she delivers in a creaky imitation of the old man’s voice. When one of the judges asks Jacobs what he has to say about the charges, he responds with the glaringly obvious objection: “But, your Honor, I must have these sticks to walk with—how may I come through a window?” The problem with this defense, Jacobs comes to discover, is that the judges believe a person can be in one place physically and in another in spirit. This poor tottering old man has no defense against so-called “spectral evidence.” Indeed, as judges in Massachusetts realized the year after Jacobs was hanged, no one really has any defense against spectral evidence. That’s part of the reason why it was deemed inadmissible in their courts, and immediately thereafter convictions for the crime of witchcraft ceased entirely. 

William Preston as George Jacobs
            Many anthropologists point to the low cost of making accusations as a factor in the evolution of moral behavior. People in small societies like the ones our ancestors lived in for millennia, composed of thirty or forty profoundly interdependent individuals, would have had to balance any payoff that might come from immoral deeds against the detrimental effects to their reputations of having those deeds discovered and word of them spread. As the generations turned over and over again, human nature adapted in response to the social enforcement of cooperative norms, and individuals came to experience what we now recognize as our moral emotions—guilt which is often preëmptive and prohibitive, shame, indignation, outrage, along with the more positive feelings associated with empathy, compassion, and loyalty.

The legacy of this process of reputational selection persists in our prurient fascination with the misdeeds of others and our frenzied, often sadistic, delectation in the spreading of salacious rumors. What Miller so brilliantly dramatizes in his play is the irony that our compulsion to point fingers, which once created and enforced cohesion in groups of selfless individuals, can in some environments serve as a vehicle for our most viciously selfish and inhuman impulses. This is why it is crucial that any accusation, if we as a society are to take it at all seriously, must provide the accused with some reliable means of acquittal. Charges that can neither be proven nor disproven must be seen as meaningless—and should even be counted as strikes against the reputation of the one who levels them. 

            While this principle runs into serious complications with crimes that are as inherently difficult to prove as they are horrific, a simple rule proscribing any glib application of morally charged labels is a crucial yet all-too-often overlooked safeguard against unjust calumny. In this age of viral dissemination, the rapidity with which rumors spread, coupled with the absence of any reliable assurance of the validity of messages bearing on the reputations of our fellow citizens, demands that we deliberately work to establish as cultural norms the holding to account of those who make accusations based on insufficient, misleading, or spectral evidence—and the holding to account as well, to only a somewhat lesser degree, of those who help propagate rumors without doing due diligence in assessing their credibility.

Napoleon Chagnon with a Yanomamö man
            The commentary attending the publication of anthropologist Napoleon Chagnon’s memoir of his research with the Yanomamö tribespeople in Venezuela calls to mind the insidious “Teach the Controversy” PR campaign spearheaded by intelligent design creationists. Coming out against the argument that students should be made aware of competing views on the value of intelligent design inevitably gives the impression of closed-mindedness or dogmatism. But only a handful of actual scientists have any truck with intelligent design, a dressed-up rehashing of the old God-of-the-Gaps argument based on the logical fallacy of appealing to ignorance—and that ignorance, it so happens, is grossly exaggerated. Teaching the controversy would therefore falsely imply epistemological equivalence between scientific views on evolution and those that are not-so-subtly religious. Likewise, in the wake of allegations against Chagnon about mistreatment of the people whose culture he made a career of studying, many science journalists and many of his fellow anthropologists still seem reluctant to stand up for him because they fear doing so would make them appear insensitive to the rights and concerns of indigenous peoples. Instead, they take refuge in what they hope will appear a balanced position, even though the evidence on which the accusations rested has proven to be entirely spectral.

John Horgan
Chagnon’s Noble Savages: My Life among Two Dangerous Tribes—the Yanomamö and the Anthropologists is destined to be one of those books that garners commentary from legions of outspoken scholars and impassioned activists who never find the time to actually read it. Science writer John Horgan, for instance, has published two blog posts on Chagnon in recent weeks, and neither of them features a single quote from the book. In the first, he boasts of his resistance to bullying, via email, by five prominent sociobiologists who had caught wind of his assignment to review Patrick Tierney’s book Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon and insisted that he condemn the work and discourage anyone from reading it. Against this pressure, Horgan wrote a positive review in which he repeats several horrific accusations that Tierney makes in the book before going on to acknowledge that the author should have worked harder to provide evidence of the wrongdoings he reports on.

But Tierney went on to become an advocate for Indian rights. And his book’s faults are outweighed by its mass of vivid, damning detail. My guess is that it will become a classic in anthropological literature, sparking countless debates over the ethics and epistemology of field studies.

Disgraced Author Patrick Tierney
Horgan probably couldn’t have known at the time (though those five scientists tried to warn him) that giving Tierney credit for prompting debates about Indian rights and ethnographic research methods was a bit like praising Abigail Williams, the original source of accusations of witchcraft in Salem, for sparking discussions about child abuse. But that he stands by his endorsement today, saying, “I have one major regret concerning my review: I should have noted that Chagnon is a much more subtle theorist of human nature than Tierney and other critics have suggested,” as balanced as that sounds, casts serious doubt on his scholarship, not to mention his judgment.

            What did Tierney falsely accuse Chagnon of? There are over a hundred specific accusations in the book (Chagnon says his friend William Irons flagged 106 [446]), but the most heinous whopper comes in the fifth chapter, titled “Outbreak.” In 1968, Chagnon was helping the geneticist James V. Neel collect blood samples from the Yanomamö—in exchange for machetes—so their DNA could be compared with that of people in industrialized societies. While they were in the middle of this project, a measles epidemic broke out, and Neel had discovered through earlier research that the Indians lacked immunity to this disease, so the team immediately began trying to reach all of the Yanomamö villages to vaccinate everyone before the contagion reached them. Most people who knew about the episode considered what the scientists did heroic (and several investigations now support this view). But Tierney, by creating the appearance of pulling together multiple threads of evidence, weaves a very different story in which Neel and Chagnon are cast as villains instead of heroes. (The version of the book I’ll quote here is somewhat incoherent because it went through some revisions in attempts to deal with holes in the evidence that were already emerging pre-publication.)

First, Tierney misinterprets some passages from Neel’s books as implying an espousal of eugenic beliefs about the Indians, namely that by remaining closer to nature and thus subject to ongoing natural selection they retain all-around superior health, including better immunity. Next, Tierney suggests that the vaccine Neel chose, Edmonston B, which is usually administered with a drug called gamma globulin to minimize reactions like fevers, is so similar to the measles virus that in the immune-suppressed Indians it actually ended up causing a suite of symptoms that was indistinguishable from full-blown measles. The implication is clear. Tierney writes,

Chagnon and Neel described an effort to “get ahead” of the measles epidemic by vaccinating a ring around it. As I have reconstructed it, the 1968 outbreak had a single trunk, starting at the Ocamo mission and moving up the Orinoco with the vaccinators. Hundreds of Yanomami died in 1968 on the Ocamo River alone. At the time, over three thousand Yanomami lived on the Ocamo headwaters; today there are fewer than two hundred. (69)

At points throughout the chapter, Tierney seems to be backing off the worst of his accusations; he writes, “Neel had no reason to think Edmonston B could become transmissible. The outbreak took him by surprise.” But even in this scenario Tierney suggests serious wrongdoing: “Still, he wanted to collect data even in the midst of a disaster” (82).

Earlier in the chapter, though, Tierney makes a much more serious charge. Pointing to a time when Chagnon showed up at a Catholic mission after having depleted his stores of gamma globulin and nearly run out of Edmonston B, Tierney suggests the shortage of drugs was part of a deliberate plan. “There were only two possibilities,” he writes,

Either Chagnon entered the field with only forty doses of virus; or he had more than forty doses. If he had more than forty, he deliberately withheld them while measles spread for fifteen days. If he came to the field with only forty doses, it was to collect data on a small sample of Indians who were meant to receive the vaccine without gamma globulin. Ocamo was a good choice because the nuns could look after the sick while Chagnon went on with his demanding work. Dividing villages into two groups, one serving as a control, was common in experiments and also a normal safety precaution in the absence of an outbreak. (60)

Thus Tierney implies that Chagnon was helping Neel test his eugenics theory and in the process became complicit in causing an epidemic, maybe deliberately, that killed hundreds of people. Tierney claims he isn’t sure how much Chagnon knew about the experiment; he concedes at one point that “Chagnon showed genuine concern for the Yanomami,” before adding, “At the same time, he moved quickly toward a cover-up” (75).

            Near the end of his “Outbreak” chapter, Tierney reports on a conversation with Mark Papania, a measles expert at the Centers for Disease Control and Prevention in Atlanta. After running his hypothesis about how Neel and Chagnon caused the epidemic with the Edmonston B vaccine by Papania, Tierney claims he responded, “Sure, it’s possible.” He goes on to say that while Papania informed him there were no documented cases of the vaccine becoming contagious he also admitted that no studies of adequate sensitivity had been done. “I guess we didn’t look very hard,” Tierney has him saying (80). But evolutionary psychologist John Tooby got a much different answer when he called Papania himself. In an article published on Slate—nearly three weeks before Horgan published his review, incidentally—Tooby writes that the epidemiologist had a very different attitude to the adequacy of past safety tests from the one Tierney reported:

it turns out that researchers who test vaccines for safety have never been able to document, in hundreds of millions of uses, a single case of a live-virus measles vaccine leading to contagious transmission from one human to another—this despite their strenuous efforts to detect such a thing. If attenuated live virus does not jump from person to person, it cannot cause an epidemic. Nor can it be planned to cause an epidemic, as alleged in this case, if it never has caused one before.

Tierney also cites Samuel Katz, the pediatrician who developed Edmonston B, at a few points in the chapter to support his case. But Katz responded to requests from the press to comment on Tierney’s scenario by saying,

the use of Edmonston B vaccine in an attempt to halt an epidemic was a justifiable, proven and valid approach. In no way could it initiate or exacerbate an epidemic. Continued circulation of these charges is not only unwarranted, but truly egregious.

Tooby included a link to Katz’s response, along with a report by science historian Susan Lindee on her investigation of Neel’s documents, which disproved many of Tierney’s points. It seems Horgan should’ve paid a bit more attention to those emails he was receiving.

James V. Neel with some Yanomamö men
Further investigations have shown that pretty much every aspect of Tierney’s characterization of Neel’s beliefs and research agenda was completely wrong. The report from a task force investigation by the American Society of Human Genetics gives a sense of how Tierney, while giving the impression of having conducted meticulous research, was in fact perpetrating fraud. The report states,

Tierney further suggests that Neel, having recognized that the vaccine was the cause of the epidemic, engineered a cover-up. This is based on Tierney’s analysis of audiotapes made at the time. We have reexamined these tapes and provide evidence to show that Tierney created a false impression by juxtaposing three distinct conversations recorded on two separate tapes and in different locations. Finally, Tierney alleges, on the basis of specific taped discussions, that Neel callously and unethically placed the scientific goals of the expedition above the humanitarian need to attend to the sick. This again is shown to be a complete misrepresentation, by examination of the relevant audiotapes as well as evidence from a variety of sources, including members of the 1968 expedition.

This report was published a couple of years after Tierney’s book hit the shelves. But enough evidence was already available, to anyone willing to do due diligence in checking out the author’s credibility, to warrant suspicion of his claims; that the book nevertheless made it onto the shortlist for the National Book Award is indicative of a larger problem.

With the benefit of hindsight and a perspective from outside the debate (though I’ve been following the sociobiology controversy for a decade and a half, I wasn’t aware of Chagnon’s longstanding and personal battles with other anthropologists until after Tierney’s book was published), it seems to me that once Tierney had been caught misrepresenting the evidence in support of such an atrocious accusation, his book should have been removed from the shelves and all his reporting dismissed entirely. Tierney himself should have been made to answer for his offense. But for some reason none of this happened.

Marshall Sahlins
The anthropologist Marshall Sahlins, for instance, to whom Chagnon has been a bête noire for decades, brushed off any concern for Tierney’s credibility in his review of Darkness in El Dorado, published a full month after Horgan’s, apparently because he couldn’t resist the opportunity to write about how much he hates his celebrated colleague. Sahlins’s review is titled “Guilty not as Charged,” which is already enough to cast doubt on his capacity for fairness or rationality. Here’s how he sums up the issue of Tierney’s discredited accusation in relation to the rest of the book:

The Kurtzian narrative of how Chagnon achieved the political status of a monster in Amazonia and a hero in academia is truly the heart of Darkness in El Dorado. While some of Tierney’s reporting has come under fire, this is nonetheless a revealing book, with a cautionary message that extends well beyond the field of anthropology. It reads like an allegory of American power and culture since Vietnam.

Sahlins apparently hasn’t read Conrad’s novel Heart of Darkness or he’d know Chagnon is no Kurtz. And Vietnam? The next paragraph goes into more detail about this “allegory,” as if Sahlins’s conscripting of him into service as a symbol of evil somehow establishes his culpability. To get an idea of how much Chagnon actually had to do with Vietnam, we can look at a passage early in Noble Savages about how disconnected from the outside world he was while doing his field work:

I was vaguely aware when I went into the Yanomamö area in late 1964 that the United States had sent several hundred military advisors to South Vietnam to help train the South Vietnamese army. When I returned to Ann Arbor in 1966 the United States had some two hundred thousand combat troops there. (36)

Barbara King
But Sahlins’s review, as bizarre as it is, is important because it’s representative of the types of arguments Chagnon’s fiercest anthropological critics make against his methods, his theories, but mainly against him personally. In another recent comment on how “The Napoleon Chagnon Wars Flare Up Again,” Barbara J. King betrays a disconcerting and unscholarly complacency in quoting other, rival anthropologists’ words as evidence of Chagnon’s own thinking. Alas, King too is weighing in on the flare-up without having read the book, or anything else by the author it seems. She’s also at pains to appear fair and balanced, even though the sources she cites against Chagnon are neither, nor are they the least bit scientific. Of Sahlins’s review of Darkness in El Dorado, she writes,

The Sahlins essay from 2000 shows how key parts of Chagnon’s argument have been “dismembered” scientifically. In a major paper published in 1988, Sahlins says, Chagnon left out too many relevant factors that bear on Ya̧nomamö males’ reproductive success to allow any convincing case for a genetic underpinning of violence.

It’s a bit sad that King feels it’s okay to post on a site as popular as NPR and quote a criticism of a study she clearly hasn’t read—she could have downloaded the pdf of Chagnon’s landmark paper “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” for free. Did Chagnon claim in the study that it proved violence had a genetic underpinning? It’s difficult to tell what the phrase “genetic underpinning” even means in this context.

Jonathan Marks
To lend further support to Sahlins’s case, King selectively quotes another anthropologist, Jonathan Marks. The lines come from a rant on his blog (I urge you to check it out for yourself if you’re at all suspicious about the aptness of the term rant to describe the post) about a supposed takeover of anthropology by genetic determinism. But King leaves off the really interesting sentence at the end of the remark. Here’s the whole passage explaining why Marks thinks Chagnon is an incompetent scientist:

Let me be clear about my use of the word “incompetent”. His methods for collecting, analyzing and interpreting his data are outside the range of acceptable anthropological practices. Yes, he saw the Yanomamo doing nasty things. But when he concluded from his observations that the Yanomamo are innately and primordially “fierce” he lost his anthropological credibility, because he had not demonstrated any such thing. He has a right to his views, as creationists and racists have a right to theirs, but the evidence does not support the conclusion, which makes it scientifically incompetent.

What Marks is saying here is not that he has evidence of Chagnon doing poor field work; rather, Marks dismisses Chagnon merely because of his sociobiological leanings. Note too that the italicized words in the passage are not quotes. This is important because, along with the false equation of sociobiology with genetic determinism, this type of straw man underlies nearly all of the attacks on Chagnon. Finally, notice how Marks slips into the realm of morality as he tries to undermine Chagnon’s scientific credibility. In case you think the link with creationism and racism is a simple analogy—like the one I used myself at the beginning of this essay—look at how Marks ends his rant:

So on one side you’ve got the creationists, racists, genetic determinists, the Republican governor of Florida, Jared Diamond, and Napoleon Chagnon–and on the other side, you’ve got normative anthropology, and the mother of the President. Which side are you on?

How can we take this at all seriously? And why did King misleadingly quote, on a prominent news site, such a seemingly level-headed criticism which in context reveals itself as anything but level-headed? I’ll risk another analogy here and point out that Marks’s comments about genetic determinism taking over anthropology are similar in both tone and intellectual sophistication to Glenn Beck’s comments about how socialism is taking over American politics.

Elizabeth Povinelli
             King also links to a review of Noble Savages that was published in the New York Times in February, and this piece is even harsher to Chagnon. After repeating Tierney’s charge about Neel deliberately causing the 1968 measles epidemic and pointing out it was disproved, anthropologist Elizabeth Povinelli writes of the American Anthropological Association investigation that,

The committee was split over whether Neel’s fervor for observing the “differential fitness of headmen and other members of the Yanomami population” through vaccine reactions constituted the use of the Yanomamö as a Tuskegee-­like experimental population.

Since this allegation has been completely discredited by the American Society of Human Genetics, among others, Povinelli’s repetition of it is irresponsible, as was the failure of the Times to properly vet the facts in the article.

Try as I might to remain detached from either side as I continue to research this controversy (and I’ve never met any of these people), I have to say I found Povinelli’s review deeply offensive. The straw men she erects and the quotes she shamelessly takes out of context, all in the service of an absurdly self-righteous and substanceless smear, allow no room whatsoever for anything answering to the name of compassion for a man who was falsely accused of complicity in an atrocity. And in her zeal to impugn Chagnon she propagates a colorful and repugnant insult of her own creation, which she misattributes to him. She writes,

Perhaps it’s politically correct to wonder whether the book would have benefited from opening with a serious reflection on the extensive suffering and substantial death toll among the Yanomamö in the wake of the measles outbreak, whether or not Chagnon bore any responsibility for it. Does their pain and grief matter less even if we believe, as he seems to, that they were brutal Neolithic remnants in a land that time forgot? For him, the “burly, naked, sweaty, hideous” Yanomamö stink and produce enormous amounts of “dark green snot.” They keep “vicious, underfed growling dogs,” engage in brutal “club fights” and—God forbid!—defecate in the bush. By the time the reader makes it to the sections on the Yanomamö’s political organization, migration patterns and sexual practices, the slant of the argument is evident: given their hideous society, understanding the real disaster that struck these people matters less than rehabilitating Chagnon’s soiled image.

In other words, Povinelli’s response to Chagnon’s “harrowing” ordeal is to effectively say, Maybe you’re not guilty of genocide, but you’re still guilty for not quitting your anthropology job and becoming a forensic epidemiologist. Anyone who actually reads Noble Savages will see quite clearly that the “slant” Povinelli describes, along with those caricatured “brutal Neolithic remnants,” must have flown in through her window right next to George Jacobs.

            Povinelli does characterize one aspect of Noble Savages correctly when she complains about its “Manichean rhetorical structure,” with the bad Rousseauian, Marxist, postmodernist cultural anthropologists—along with the corrupt and PR-obsessed Catholic missionaries—on one side, and the good Hobbesian, Darwinian, scientific anthropologists on the other, though it’s really just the scientific part he’s concerned with. I actually expected to find a more complicated, less black-and-white debate taking place when I began looking into the attacks on Chagnon’s work—and on Chagnon himself. But what I ended up finding was that Chagnon’s description of the division, at least with regard to the anthropologists (I haven’t researched his claims about the missionaries) is spot-on, and Povinelli’s repulsive review is a case in point.

E.O. Wilson
This isn’t to say that there aren’t legitimate scientific disagreements about sociobiology. In fact, Chagnon writes about how one of his heroes is “calling into question some of the most widely accepted views” as early as his dedication page, referring to E.O. Wilson’s latest book The Social Conquest of Earth. But what Sahlins, Marks, and Povinelli offer is neither legitimate nor scientific. These commentators really are, as Chagnon suggests, representative of a subset of cultural anthropologists completely given over to a moralizing hysteria. Their scholarship is as dishonest as it is defamatory, their reasoning rests on guilt by free-association and the tossing up and knocking down of the most egregious of straw men, and their tone creates the illusion of moral certainty coupled with a longsuffering exasperation with entrenched institutionalized evils. For these hysterical moralizers, it seems any theory of human behavior that involves evolution or biology represents the same kind of threat as witchcraft did to the people of Salem in the 1690s, or as communism did to McCarthyites in the 1950s. To combat this chimerical evil, the presumed righteous ends justify the deceitful means.

Terence Turner
The unavoidable conclusion with regard to the question of why Darkness in El Dorado wasn’t dismissed outright when it should have been is that even though it has been established that Chagnon didn’t commit any of the crimes Tierney accused him of, as far as his critics are concerned, he may as well have. Somehow cultural anthropologists have come to occupy a bizarre culture of their own in which charging a colleague with genocide doesn’t seem like a big deal. Before Tierney’s book hit the shelves, two anthropologists, Terence Turner and Leslie Sponsel, co-wrote an email to the American Anthropological Association which was later sent to several journalists. Turner and Sponsel later claimed the message was simply a warning about the “impending scandal” that would result from the publication of Darkness in El Dorado. But the hyperbole and suggestive language make it read more like a publicity notice than a warning. “This nightmarish story—a real anthropological heart of darkness beyond the imagining of even a Josef Conrad (though not, perhaps, a Josef Mengele)”—is it too much to ask of those who are so fond of referencing Joseph Conrad that they actually read his book?—“will be seen (rightly in our view) by the public, as well as most anthropologists, as putting the whole discipline on trial.” As it turned out, though, the only one who was put on trial, by the American Anthropological Association—though officially it was only an “inquiry”—was Napoleon Chagnon.

Leslie Sponsel
Chagnon’s old academic rivals, many of whom claim their problem with him stems from the alleged devastating impact of his research on Indians, fail to appreciate the gravity of Tierney’s accusations. Their blasé response to the author being exposed as a fraud gives the impression that their eagerness to participate in the pile-on has little to do with any concern for the Yanomamö people. Instead, they embraced Darkness in El Dorado because it provided good talking points in the campaign against their dreaded nemesis Napoleon Chagnon. Sahlins, for instance, is strikingly cavalier about the personal effects of Tierney’s accusations in the review cited by King and Horgan:

The brouhaha in cyberspace seemed to help Chagnon’s reputation as much as Neel’s, for in the fallout from the latter’s defense many academics also took the opportunity to make tendentious arguments on Chagnon’s behalf. Against Tierney’s brief that Chagnon acted as an anthro-provocateur of certain conflicts among the Yanomami, one anthropologist solemnly demonstrated that warfare was endemic and prehistoric in the Amazon. Such feckless debate is the more remarkable because most of the criticisms of Chagnon rehearsed by Tierney have been circulating among anthropologists for years, and the best evidence for them can be found in Chagnon’s writings going back to the 1960s.

Sahlins goes on to offer his own sinister interpretation of Chagnon’s writings, using the same straw man and guilt-by-free-association techniques common to anthropologists in the grip of moralizing hysteria. But I can’t help wondering why anyone would take a word he says seriously after he suggests that being accused of causing a deadly epidemic helped Neel’s and Chagnon’s reputations.

            Marshall Sahlins recently made news by resigning from the National Academy of Sciences in protest against the organization’s election of Chagnon to its membership and its partnerships with the military. In explaining his resignation, Sahlins insists that Chagnon, based on the evidence of his own writings, did serious harm to the people whose culture he studied. Sahlins also complains that Chagnon’s sociobiological ideas about violence are so wrongheaded that they serve to “discredit the anthropological discipline.” To back up his objections, he refers interested parties to that same review of Darkness in El Dorado King links to on her post. Though Sahlins explains his moral and intellectual objections separately, he seems to believe that theories of human behavior based on biology are inherently immoral, as if theorizing that violence has “genetic underpinnings” is no different from claiming that violence is inevitable and justifiable. This is why Sahlins can’t discuss Chagnon without reference to Vietnam. He writes in his review,

The ‘60s were the longest decade of the 20th century, and Vietnam was the longest war. In the West, the war prolonged itself in arrogant perceptions of the weaker peoples as instrumental means of the global projects of the stronger. In the human sciences, the war persists in an obsessive search for power in every nook and cranny of our society and history, and an equally strong postmodern urge to “deconstruct” it. For his part, Chagnon writes popular textbooks that describe his ethnography among the Yanomami in the 1960s in terms of gaining control over people.

Sahlins doesn’t provide any citations to back up this charge—he’s quite clearly not the least bit concerned with fairness or solid scholarship—and based on what Chagnon writes in Noble Savages this fantasy of “gaining control” originates in the mind of Sahlins, not in the writings of Chagnon.

For instance, Chagnon writes of being made the butt of an elaborate joke several Yanomamö conspired to play on him by giving him fake names for people in their village (like Hairy Cunt, Long Dong, and Asshole). When he mentions these names to people in a neighboring village, they think it’s hilarious. “My face flushed with embarrassment and anger as the word spread around the village and everybody was laughing hysterically.” And this was no minor setback: “I made this discovery some six months into my fieldwork!” (66) Contrary to the despicable caricature Povinelli provides as well, Chagnon writes admiringly of the Yanomamö’s “wicked humor,” and how “They enjoyed duping others, especially the unsuspecting and gullible anthropologist who lived among them” (67). Another gem comes from an episode in which he tries to treat a rather embarrassing fungal infection: “You can’t imagine the hilarious reaction of the Yanomamö watching the resident fieldworker in a most indescribable position trying to sprinkle foot powder onto his crotch, using gravity as a propellant” (143).

            The bitterness, outrage, and outright hatred directed at Chagnon, alongside the utter absence of evidence that he’s done anything wrong, seem completely insane until you consider that this preeminent anthropologist falls afoul of all the –isms that haunt the fantastical armchair obsessions of postmodern pseudo-scholars. Chagnon stands as a living symbol of the white colonizer exploiting indigenous people and resources (colonialism); he propagates theories that can be read as supportive of fantasies about individual and racial superiority (Social Darwinism, racism); and he reports on tribal warfare and cruelty toward women, with the implication that these evils are encoded in our genes (neoconservatism, sexism, biological determinism). It should be clear that all of this is nonsense: any exploitation is merely alleged and likely outweighed by efforts at vaccination against diseases introduced by missionaries and gold miners; sociobiology doesn’t focus on racial differences, and superiority is a scientifically meaningless term; and the fact that genes play a role in some behavior implies neither that the behavior is moral nor that it is inevitable. The truly evil –ism at play in the campaign against Chagnon is postmodernism—an ideology which functions as little more than a factory for the production of false accusations.

            There are two main straw men that are bound to be rolled out by postmodern critics of evolutionary theories of behavior in any discussion of morally charged topics. The first is the gene-for misconception. Every anthropologist, sociobiologist, and evolutionary psychologist knows that there is no gene for violence and warfare in the sense that would mean everyone born with a particular allele will inevitably grow up to be physically aggressive. Yet, in any discussion of the causes of violence, or any other issue in which biology is implicated, critics fall all over themselves trying to catch their opponents out for making this mistake, and they pretend by doing so they’re defeating an attempt to undermine efforts to make the world more peaceful. It so happens that scientists actually have discovered a gene variation, known popularly as “the warrior gene,” that increases the likelihood that an individual carrying it will engage in aggressive behavior—but only if that individual experiences a traumatic childhood. Having a gene variation associated with a trait only ever means someone is more likely to express that trait, and there will almost always be other genes and several environmental factors contributing to the overall likelihood.

You can be reasonably sure that if a critic is taking a sociobiologist or an evolutionary psychologist to task for suggesting a direct one-to-one correspondence between a gene and a behavior, that critic is being either careless or purposely misleading. In trying to bring about a more peaceful world, it’s far more effective to study the actual factors that contribute to violence than it is to write moralizing criticisms of scientific colleagues. The charge that evolutionary approaches can only be used to support conservative or reactionary views of society isn’t just a misrepresentation of sociobiological theories; it’s also empirically false—surveys demonstrate that grad students in evolutionary anthropology are overwhelmingly liberal in their politics, just as liberal in fact as anthropology students in non-evolutionary concentrations.

Another thing anyone who has taken a freshman anthropology course knows, but that anti-evolutionary critics fall all over themselves taking sociobiologists to task for not understanding, is that people who live in foraging or tribal cultures cannot be treated as perfect replicas of our Pleistocene ancestors, or as Povinelli calls them “prehistoric time capsules.” Hunters and gatherers are not “living fossils,” because they’ve been evolving just as long as people in industrialized societies, their histories and environments are unique, and it’s almost impossible for them to avoid being impacted by outside civilizations. If you flew two groups of foragers from different regions each into the territory of the other, you would learn quite quickly that each group’s culture is intricately adapted to the environment it originally inhabited. This does not mean, however, that evidence about how foraging and tribal peoples live is irrelevant to questions about human evolution.

As different as those two groups are, they are both probably living lives much more similar to those of our ancestors than anyone in an industrialized society is. What evolutionary anthropologists and psychologists tend to be most interested in are the trends that emerge when several of these cultures are compared to one another. The Yanomamö actually subsist largely on slash-and-burn agriculture, and they live in groups much larger than those of most foraging peoples. Their culture and demographic patterns may therefore provide clues to how larger and more stratified societies developed after millennia of evolution in small, mobile bands. But, again, no one is suggesting the Yanomamö are somehow interchangeable with the people who first made this historical transition to more complex social organization.

The prehistoric time-capsule straw man often goes hand-in-hand with an implication that the anthropologists supposedly making the blunder see the people whose culture they study as somehow inferior, somehow less human than people who live in industrialized civilizations. It seems like a short step from this subtle dehumanization to the kind of wholesale exploitation indigenous peoples are often made to suffer. But the sad truth is that there are plenty of economic, religious, and geopolitical forces working against the preservation of indigenous cultures and the protection of indigenous people’s rights to make scapegoating scientists who gather cultural and demographic information completely unnecessary. And you can bet Napoleon Chagnon is, if anything, more outraged by the mistreatment of the Yanomamö than most of the activists who falsely accuse him of complicity, because he knows so many of them personally. Chagnon is particularly critical of Brazilian gold miners and Salesian missionaries, both of whom, it seems, have far more incentive to disrespect the Yanomamö culture (by supplanting their religion and moving them closer to civilization) and ravage the territory they inhabit. The Salesians’ reprisals for his criticisms, which entailed pulling strings to keep him out of the territory and efforts to create a public image of him as a menace, eventually provided fodder for his critics back home as well.

Thomas Gregor
In an article published in the journal American Anthropologist in 2004 titled “Guilt by Association,” about the American Anthropological Association’s compromised investigation of Tierney’s accusations against Chagnon, Thomas Gregor and Daniel Gross describe “chains of logic by which anthropological research becomes, at the end of an associative thread, an act of misconduct” (689). Quoting Defenders of the Truth, sociologist Ullica Segerstrale’s indispensable 2000 book on the sociobiology debate, Gregor and Gross explain that Chagnon’s postmodern accusers relied on a rhetorical strategy common among critics of evolutionary theories of human behavior—a strategy that produces something startlingly indistinguishable from spectral evidence. Segerstrale writes,

Ullica Segerstrale
In their analysis of their target’s texts, the critics used a method I call moral reading. The basic idea behind moral reading was to imagine the worst possible political consequences of a scientific claim. In this way, maximum moral guilt might be attributed to the perpetrator of this claim. (206)

She goes on to cite a “glaring” example of how a scholar drew an imaginary line from sociobiology to Nazism, and then connected it to fascist behavioral control, even though none of these links were supported by any evidence (207). Gregor and Gross describe how this postmodern version of spectral evidence was used to condemn Chagnon.

In the case at hand, for example, the Report takes Chagnon to task for an article in Science on revenge warfare, in which he reports that “Approximately 30% of Yanomami adult male deaths are due to violence”(Chagnon 1988:985). Chagnon also states that Yanomami men who had taken part in violent acts fathered more children than those who had not. Such facts could, if construed in their worst possible light, be read as suggesting that the Yanomami are violent by nature and, therefore, undeserving of protection. This reading could give aid and comfort to the opponents of creating a Yanomami reservation. The Report, therefore, criticizes Chagnon for having jeopardized Yanomami land rights by publishing the Science article, although his research played no demonstrable role in the demarcation of Yanomami reservations in Venezuela and Brazil. (689)

The task force had found that Chagnon was guilty—even though it was nominally just an “inquiry” and had no official grounds for pronouncing on any misconduct—of harming the Indians by portraying them negatively. Gregor and Gross, however, sponsored a ballot at the AAA to rescind the organization’s acceptance of the report; in 2005, it was voted on by the membership and passed by a margin of 846 to 338. “Those five years,” Chagnon writes of the time between that email warning about Tierney’s book and the vote finally exonerating him, “seem like a blurry bad dream” (450).

            Anthropological fieldwork has changed dramatically since Chagnon’s early research in Venezuela. There was legitimate concern about the impact of trading manufactured goods like machetes for information, and you can read about some of the fracases it fomented among the Yanomamö in Noble Savages. The practice is now prohibited by the ethical guidelines of ethnographic field research. The dangers to isolated or remote populations from communicable diseases must also be considered while planning any expeditions to study indigenous cultures. But Chagnon was entering the Ocamo region after many missionaries and just before many gold miners. And we can’t hold him accountable for disregarding rules that didn’t exist at the time. Sahlins, however, echoing Tierney’s perversion of Neel and Chagnon’s race to immunize the Indians so that the two men appeared to be the source of contagion, accuses Chagnon of causing much of the violence he witnessed and reported by spreading around his goods.

Hostilities thus tracked the always-changing geopolitics of Chagnon-wealth, including even pre-emptive attacks to deny others access to him. As one Yanomami man recently related to Tierney: “Shaki [Chagnon] promised us many things, and that’s why other communities were jealous and began to fight against us.”

Aside from the fact that some Yanomamö men had just returned from a raid the very first time he entered one of their villages, and the fact that the source of this quote has been discredited, Sahlins is basing his elaborate accusation on some pretty paltry evidence.

            Sahlins also insists that the “monster in Amazonia” couldn’t possibly have figured out a way to learn the names and relationships of the people he studied without aggravating intervillage tensions (thus implicitly conceding those tensions already existed). The Yanomamö have a taboo against saying the names of other adults, similar to our own custom of addressing people we’ve just met by their titles and last names, but with much graver consequences for violations. This is why Chagnon had to confirm the names of people in one tribe by asking about them in another, the practice that led to his discovery of the prank that was played on him. Sahlins uses Tierney’s reporting as the only grounds for his speculations on how disruptive this was to the Yanomamö. And, in the same way he suggested there was some moral equivalence between Chagnon going into the jungle to study the culture of a group of Indians and the US military going into the jungles to engage in a war against the Vietcong, he fails to distinguish between the Nazi practice of marking Jews and Chagnon’s practice of writing numbers on people’s arms to keep track of their problematic names. Quoting Chagnon, Sahlins writes,

“I began the delicate task of identifying everyone by name and numbering them with indelible ink to make sure that everyone had only one name and identity.” Chagnon inscribed these indelible identification numbers on people’s arms—barely 20 years after World War II.

This juvenile innuendo calls to mind Jon Stewart’s observation that it’s not until someone in Washington makes the first Hitler reference that we know a real political showdown has begun (and Stewart has had to make the point a few times again since then).

One of the things that makes this type of trashy pseudo-scholarship so insidious is that it often creates an indelible impression of its own. Anyone who reads Sahlins’ essay could be forgiven for thinking that writing numbers on people might really be a sign that Chagnon was dehumanizing them. Fortunately, Chagnon’s own accounts go a long way toward dispelling this suspicion. In one passage, he describes how he made the naming and numbering into a game for this group of people who knew nothing about writing:

I had also noted after each name the item that person wanted me to bring on my next visit, and they were surprised at the total recall I had when they decided to check me. I simply looked at the number I had written on their arm, looked the number up in my field book, and then told the person precisely what he had requested me to bring for him on my next trip. They enjoyed this, and then they pressed me to mention the names of particular people in the village they would point to. I would look at the number on the arm, look it up in my field book, and whisper his name into someone’s ear. The others would anxiously and eagerly ask if I got it right, and the informant would give an affirmative quick raise of the eyebrows, causing everyone to laugh hysterically. (157)

Needless to say, this is a far cry from using the labels to efficiently herd people into cargo trains to transport them to concentration camps and gas chambers. Sahlins disgraces himself by suggesting otherwise and by not distancing himself from Tierney when it became clear that his atrocious accusations were meritless.

            Which brings us back to John Horgan. One week after the post in which he bragged about standing up to five email bullies who were urging him not to endorse Tierney’s book and took the opportunity to say he still stands by the mostly positive review, he published another post on Chagnon, this time about the irony of how close Chagnon’s views on war are to those of Margaret Mead, a towering figure in anthropology whose blank-slate theories sociobiologists often challenge. (Both of Horgan’s posts marking the occasion of Chagnon’s new book—neither of which quote from it—were probably written for publicity; his own book on war was published last year.) As I read the post, I came across the following bewildering passage: 

Alice Dreger
Chagnon advocates have cited a 2011 paper by bioethicist Alice Dreger as further “vindication” of Chagnon. But to my mind Dreger’s paper—which wastes lots of verbiage bragging about all the research that she’s done and about how close she has gotten to Chagnon–generates far more heat than light. She provides some interesting insights into Tierney’s possible motives in writing Darkness in El Dorado, but she leaves untouched most of the major issues raised by Chagnon’s career.

Horgan’s earlier post was one of the first things I’d read in years about Chagnon and Tierney’s accusations against him. I read Alice Dreger’s report on her investigation of those accusations, and the “inquiry” by the American Anthropological Association that ensued from them, shortly afterward. I kept thinking back to Horgan’s continuing endorsement of Tierney’s book as I read the report because she cites several other reports that establish, at the very least, that there was no evidence to support the worst of the accusations. My conclusion was that Horgan simply hadn’t done his homework. How could he endorse a work featuring such horrific accusations if he knew most of them, the most horrific in particular, had been disproved? But with this second post he was revealing that he knew the accusations were false—and yet he still hasn’t recanted his endorsement.

            If you only read two supplements to Noble Savages, I recommend Dreger’s report and Emily Eakin’s profile of Chagnon in the New York Times. The one qualm I have about Eakin’s piece is that she too sacrifices the principle of presuming innocence in her effort to achieve journalistic balance, quoting Leslie Sponsel, one of the authors of the appalling email that sparked the AAA’s investigation of Chagnon, as saying, “The charges have not all been disproven by any means.” It should go without saying that the burden of proof is on the accuser. It should also go without saying that once the most atrocious of Tierney’s accusations were disproven the discussion of culpability should have shifted its focus away from Chagnon onto Tierney and his supporters. That it didn’t calls to mind the scene in The Crucible when an enraged John Proctor, whose wife is being arrested, shouts in response to an assurance that she’ll be released if she’s innocent—“If she is innocent! Why do you never wonder if Parris be innocent, or Abigail? Is the accuser always holy now? Were they born this morning as clean as God’s fingers?” (73). Aside from Chagnon himself, Dreger is about the only one who realized Tierney warranted some investigating.

            Eakin echoes Horgan a bit when she faults the “zealous tone” of Dreger’s report. Indeed, at one point, Dreger compares Chagnon’s trial to Galileo’s being called before the Inquisition. The fact is, though, there’s an important similarity. One of the most revealing discoveries of Dreger’s investigation was that the members of the AAA task force knew Tierney’s book was full of false accusations but continued with their inquiry anyway because they were concerned about the organization’s public image. In an email to the sociobiologist Sarah Blaffer Hrdy, Jane Hill, the head of the task force, wrote,

Burn this message. The book is just a piece of sleaze, that’s all there is to it (some cosmetic language will be used in the report, but we all agree on that). But I think the AAA had to do something because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations, and silence on the part of the AAA would have been interpreted as either assent or cowardice.

How John Horgan could have read this and still claimed that Dreger’s report “generates more heat than light” is beyond me. I can only guess that his judgment has been distorted by cognitive dissonance.

        To Horgan's other complaints, that she writes too much about her methods and that she admits to having become friends with Chagnon, she might respond that there was so much real hysteria surrounding this controversy, along with so much commentary reminiscent of the ridiculous rhetoric one hears on cable news, that it was important to distinguish her report from all the groundless and recriminatory he-said-she-said. As for the friendship, it came about over the course of Dreger’s investigation. This is important because, for one, it doesn’t suggest any pre-existing bias, and, for another, one of the claims made by critics of Chagnon’s work is that the violence he reported was either provoked by the man himself or represented some kind of mental projection of his own bellicose character onto the people he was studying.

Dreger’s friendship with Chagnon shows that he’s not the monster portrayed by those in the grip of moralizing hysteria. And if parts of her report strike many as sententious, it’s probably owing to their unfamiliarity with how ingrained that hysteria has become. It seems odd that anyone would need to pronounce on the importance of evidence or fairness—but basic principles we usually take for granted were trampled in the frenzy to condemn Chagnon. If his enemies are going to compare him to Mengele, then a comparison with Galileo seems less extreme. Dreger, it seems to me, deserves credit for bringing a sorely needed modicum of sanity to the discussion. And she deserves credit as well for being one of the only people commenting on the controversy who understands the devastating personal impact of such vile accusations. She writes,

Meanwhile, unlike Neel, Chagnon was alive to experience what it is like to be drawn-and-quartered in the international press as a Nazi-like experimenter responsible for the deaths of hundreds, if not thousands, of Yanomamö. He tried to describe to me what it is like to suddenly find yourself accused of genocide, to watch your life’s work be twisted into lies and used to burn you.

So let’s make it clear: the scientific controversy over sociobiology and the scandal over Tierney’s discredited book are two completely separate issues. In light of the findings from all the investigations of Tierney’s claims, we should all, no matter our theoretical leanings, agree that Darkness in El Dorado is, in the words of Jane Hill, who headed a task force investigating it, “just a piece of sleaze.” We should still discuss whether it was appropriate or advisable for Chagnon to exchange machetes for information—I’d be interested to hear what he has to say himself, since he describes all kinds of frustrations the practice caused him in his book. We should also still discuss the relative threat of contagion posed by ethnographers versus missionaries, weighed of course against the benefits of inoculation campaigns.

But we shouldn’t discuss any ethical or scientific matter with reference to Darkness in El Dorado or its disgraced author aside from questions like: Why was the hysteria surrounding the book allowed to go so far? Why were so many people willing to scapegoat Chagnon? Why doesn’t anyone—except Alice Dreger—seem at all interested in bringing Tierney to justice in some way for making such outrageous accusations based on misleading or fabricated evidence? What he did is far worse than what Jonah Lehrer or James Frey did, and yet both of those men have publicly acknowledged their dishonesty while no one has put even the slightest pressure on Tierney to publicly admit wrongdoing.

            There’s some justice to be found in how easy Tierney and all the self-righteous pseudo-scholars like Sahlins have made it for future (and present) historians of science to cast them as deluded and unscrupulous villains in the story of a great—but flawed, naturally—anthropologist named Napoleon Chagnon. There’s also justice to be found in how snugly the hysterical moralizers’ tribal animosity toward Chagnon, their dehumanization of him, fits within a sociobiological framework of violence and warfare. One additional bit of justice might come from a demonstration of how easily Tierney’s accusatory pseudo-reporting can be turned inside-out. Tierney at one point in his book accuses Chagnon of withholding names that would disprove the central finding of his famous Science paper, and reading into the fact that the ascendant theories Chagnon criticized were openly inspired by Karl Marx’s ideas, he writes,

Yet there was something familiar about Chagnon’s strategy of secret lists combined with accusations against ubiquitous Marxists, something that traced back to his childhood in rural Michigan, when Joe McCarthy was king. Like the old Yanomami unokais, the former senator from Wisconsin was in no danger of death. Under the mantle of Science, Tailgunner Joe was still firing away—undefeated, undaunted, and blessed with a wealth of off-spring, one of whom, a poor boy from Port Austin, had received a full portion of his spirit. (180)

Tierney had no evidence that Chagnon kept any data out of his analysis. Nor did he have any evidence regarding Chagnon’s ideas about McCarthy aside from what he thought he could divine from knowing where he grew up (he cited no surveys of opinion from the town either). His writing is so silly it would be laughable if we didn’t know about all the anguish it caused. Tierney might just as easily have tried to divine Chagnon’s feelings about McCarthyism from his alma mater. It turns out Chagnon began attending classes at the University of Michigan, the school where he’d write the famous PhD dissertation that would become the classic anthropology text The Fierce People, just two decades after another famous alumnus, a man who had actually stood up to McCarthy while enjoying the success of a historical play he’d written, an allegory on the dangers of moralizing hysteria, in particular the episode we now call the Red Scare. His name was Arthur Miller.

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education

            Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.

            In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knows the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.
            What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.
There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,
the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.
The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s Razor rather than Freud’s at once insanely elaborate and absurdly reductionist ideology, we might guess that what prompted Tarantino to let the camera linger discomfortingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”  The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.
Maria Konnikova
            I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible. Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.
     The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.
            Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.
Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.
            In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,
learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)
This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.
            It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. One of the most widely known replications of this finding was a study showing that chess experts have much better recall for the positions of pieces on a board than novices. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?
The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement for curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.
If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes, 
How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)
So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980s. People who read a description of a man named Bill suggesting he lacks imagination tended to feel it was less likely that Bill was an accountant than that he was an accountant who plays jazz for a hobby—even though the two points of information in that second description make it inherently less likely than the one point of information in the first. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People mistook the two-item choice as more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
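The arithmetic behind the fallacy is worth making explicit, since Konnikova never pauses over it: a conjunction can never be more probable than either of its conjuncts, because the second condition can only narrow the possibilities. Here is a minimal sketch in Python, using made-up probabilities chosen purely for illustration, not figures from Tversky and Kahneman’s study:

```python
# Conjunction rule: P(A and B) = P(A) * P(B | A), which can never
# exceed P(A) since P(B | A) is at most 1. The values below are
# hypothetical, chosen only to show the arithmetic.
p_teller = 0.05                  # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # assumed P(feminist | bank teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller

# Adding the "feminist" detail makes the story more vivid,
# but the event is strictly less probable.
assert p_teller_and_feminist <= p_teller
print(round(p_teller_and_feminist, 4))  # 0.015
```

Whatever numbers you plug in, the assertion holds; the vividness of the richer description is exactly what lures people into ranking it as more likely.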
            The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (referring to Tom instead of Bill) can seem more likely, despite the actual probabilities, than shorter ones. He writes,
The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)
So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:
Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)
But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.
            As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged based on the likelihood that readers will come away with a greater ability to recognize and avoid cognitive biases, as well as the discipline to stay motivated and alert. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.
            Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings even in that domain, and these same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly, organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,
How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?
The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates to creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.
Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,
Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)
The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).
            That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character casts a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:
Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.
But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:
You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)
Hard to disagree, isn’t it?
            To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says he got tricked because he didn’t do what he knew how to do, it can happen to any of us, so be careful you don’t let it happen to you. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.
            Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. Her tragic blunder was to eschew nuanced examinations of how stories work, how people relate to characters, and how authors create them in favor of a shallow, one-dimensional attempt to suggest that a hundred-year-old fictional character somehow divined groundbreaking research findings from the turn of the twenty-first century. That blunder calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.
You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.
Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.
A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”
I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.

Freud: The Falsified Cipher

[As I'm hard at work on a story, I thought I'd post an essay from my first course as a graduate student on literary criticism. It was in the fall of 2009, and I was shocked and appalled that not only were Freud's ideas still being taught but there was no awareness whatsoever that psychology had moved beyond them. This is my attempt at righting the record while keeping my tone in check.]

The matter of epistemology in literary criticism is closely tied to the question of what end the discipline is supposed to serve. How critics decide what standard of truth to adhere to is determined by the role they see their work playing, both in academia and beyond. Freud stands apart as a literary theorist, professing in his works a commitment to scientific rigor in a field that generally regards belief in even the possibility of objectivity as at best naïve and at worst bourgeois or fascist. For the postmodernists, both science and literature are suspiciously shot through with the ideological underpinnings of capitalist European male hegemony, which they take it as their duty to undermine. Their standard of truth, therefore, seems to be whether a theory or application effectively exposes one or another element of that ideology to “interrogation.” Admirable as the values underlying this patently political reading of texts are, the science-minded critic might worry lest such an approach merely lead straight back to the a priori assumptions from which it set forth. Now, a century after Freud revealed the theory and practice of psychoanalysis, his attempt to interpret literature scientifically seems like one possible route of escape from the circularity (and obscurantism) of postmodernism. Unfortunately, Freud’s theories have suffered multiple devastating empirical failures, and Freud himself has been shown to be less a committed scientist than an ingenious fabulist, but it may be possible to salvage from the failures of psychoanalysis some key to a viable epistemology of criticism.

            A text dating from early in the development of psychoanalysis shows both the nature of Freud’s methods and some of the most important substance of his supposed discoveries. Describing his theory of the Oedipus complex in The Interpretation of Dreams, Freud refers vaguely to “observations on normal children,” to which he compares his experiences with “psychoneurotics” to arrive at his idea that both display, to varying degrees, “feelings of love and hatred to their parents” (920). There is little to object to in this rather mundane observation, but Freud feels compelled to write that his

discovery is confirmed by a legend…a legend whose profound and universal power to move can only be understood if the hypothesis I have put forward in regard to the psychology of children has an equally universal validity. (920)

He proceeds to relate the Sophocles drama from which his theory gets its name. In the story, Oedipus is tricked by fate into killing his father and marrying his mother. Freud takes this as evidence that the love and hatred he has observed in children are of a particular kind. According to his theory, any male child is fated to “direct his first sexual impulse towards his mother” and his “first murderous wish against his father” (921). But Freud originally poses this idea as purely hypothetical. What settles the issue is evidence he gleans from dream interpretations. “Our dreams,” he writes, “convince us that this is so” (921). Many men, it seems, confided to him that they dreamt of having sex with their mothers and killing their fathers.

            Freud’s method, then, was to seek a thematic confluence between men’s dreams, the stories they find moving, and the behaviors they display as children, which he knew mostly through self-reporting years after the fact. Indeed, the entire edifice of psychoanalysis is purported to have been erected on this epistemic foundation. In a later essay on “The Uncanny,” Freud makes the sources of his ideas even more explicit. “We know from psychoanalytic experience,” he writes, “that the fear of damaging or losing one’s eyes is a terrible one in children” (35). A few lines down, he claims that, “A study of dreams, phantasies and myths has taught us that anxiety about one’s eyes…is a substitute for the dread of being castrated” (36). Here he’s referring to another facet of the Oedipus complex which theorizes that the child keeps his sexual desire for his mother in check because of the threat of castration posed by his jealous father. It is through this fear of his father, which transforms into grudging respect, and then into emulation, that the boy learns his role as a male in society. And it is through the act of repressing his sexual desire for his mother that he first develops his unconscious, which will grow into a general repository of unwanted desires and memories (Eagleton 134).

            But what led Freud to this theory of repression, which suggests that we have the ability to willfully forget troubling incidents and drive urges to some portion of our minds to which we have no conscious access? He must have arrived at an understanding of this process in the same stroke that led to his conclusions about the Oedipus complex, because, in order to put forth the idea that as children we all hated one parent and wanted to have sex with the other, he had to contend with the fact that most people find the idea repulsive. What accounts for the dramatic shift between childhood desires and those of adults? What accounts for our failure to remember the earlier stage? The concept of repression had to be firmly established before Freud could make such claims. Of course, he could have simply imported the idea from another scientific field, but there is no evidence he did so. So it seems that he relied on the same methods—psychoanalysis, dream interpretation, and the study of myths and legends—to arrive at his theories as he did to test them. Inspiration and confirmation were one and the same.

            Notwithstanding Freud’s claim that the emotional power of the Oedipus legend “can only be understood” if his hypothesis about young boys wanting to have sex with their mothers and kill their fathers has “universal validity,” there is at least one alternative hypothesis which has the advantage of not being bizarre. It could be that the point of Sophocles’s drama was that fate is so powerful it can bring about exactly the eventualities we most desire to avoid. What moves audiences and readers is not any sense of recognition of repressed desires, but rather compassion for the man who despite, even because of, his heroic efforts fell into this most horrible of traps. (Should we assume that the enduring popularity of W.W. Jacobs’s story, “The Monkey’s Paw,” which tells a similar story of fate about a couple who inadvertently wish their son dead, proves that all parents want to kill their children?) The story could be moving because it deals with events we would never want to happen. It is true, however, that this hypothesis fails to account for why people enjoy watching such a tragedy being enacted—but then so does Freud’s. If we have spent our conscious lives burying the memory of our childhood desires because they are so unpleasant to contemplate, it makes little sense that we should find pleasure in seeing those desires acted out on stage. And assuming this alternative hypothesis is at least as plausible as Freud’s, we are left with no evidence whatsoever to support his theory of repressed childhood desires.

            To be fair, Freud did look beyond the dreams and myths of men of European descent to test the applicability of his theories. In his book Totem and Taboo he inventories “savage” cultures and adduces the universality among them of a taboo against incest as further proof of the Oedipus complex. He even goes so far as to cite a rival theory put forth by a contemporary:

            Westermarck has explained the horror of incest on the ground that “there is an innate aversion to sexual intercourse between persons living very closely together from early youth, and that, as such persons are in most cases related by blood, this feeling would naturally display itself in custom and law as a horror of intercourse between near kin.” (152)

To dismiss Westermarck’s theory, Freud cites J. G. Frazer, who argues that laws exist only to prevent us from doing things we would otherwise do or prod us into doing what we otherwise would not. That there is a taboo against incest must therefore signal that there is no innate aversion but rather a proclivity for incest. Here it must be noted that the incest Freud had in mind includes not just lust for the mother but for sisters as well. “Psychoanalysis has taught us,” he writes, again vaguely referencing his clinical method, “that a boy’s earliest choice of objects for his love is incestuous and that those objects are forbidden ones—his mother and sister” (22). Frazer’s argument is compelling, but Freud’s test of the applicability of his theories is not the same as a test of their validity (though it seems customary in literary criticism to conflate the two).

            As linguist and cognitive neuroscientist Steven Pinker explains in How the Mind Works, in tests of validity Westermarck beats Freud hands down. Citing the research of Arthur Wolf, he explains that without setting out to do so, several cultures have conducted experiments on the nature of incest aversion. Israeli kibbutzim, in which children grew up in close proximity to several unrelated agemates, and the Chinese and Taiwanese practice of adopting future brides for sons and raising them together as siblings are just two that Wolf examined. When children from the kibbutzim reached sexual maturity, even though there was no discouragement from adults for them to date or marry, they showed a marked distaste for each other as romantic partners. And compared to more traditional marriages, those in which the bride and groom grew up in conditions mimicking siblinghood were overwhelmingly “unhappy, unfaithful, unfecund, and short” (459). The effect of proximity in early childhood seems to apply to parents as well, at least when it comes to fathers’ sexual feelings for their daughters. Pinker cites research that shows the fathers who sexually abuse their daughters tend to be the ones who have spent the least time with them as infants, while the stepdads who actually do spend a lot of time with their stepdaughters are no more likely to abuse them than their biological parents. These studies not only favor Westermarck’s theory; they also provide a counter to Frazer’s objection to it. Human societies are so complex that we often grow up in close proximity with people who are unrelated, or don’t grow up with people who are, and therefore it is necessary for there to be a cultural proscription—a taboo—against incest in addition to the natural mechanism of aversion.

            Among biologists and anthropologists, what is now called the Westermarck effect has displaced Freud’s Oedipus complex as the best explanation for incest avoidance. Since Freud’s theory of childhood sexual desires has been shown to be false, the question arises of where this leaves his concept of repression. According to literary critic—and critic of literary criticism—Frederick Crews, repression came, in the 1980s and ’90s, to serve a role equivalent to the “spectral evidence” used in the Salem witch trials. Several psychotherapists latched on to the idea that children can store reliable information in their memories, especially when that information is too terrible for them to consciously handle. And the testimony of these therapists has led to many convictions and prison sentences. But the evidence for this notion of repression is solely clinical—modern therapists base their conclusions on interactions with patients, just as Freud did. Unfortunately, researchers outside the clinical setting are unable to find any phenomenon answering to the description of repressed but retrievable memories. Crews points out that there are plenty of people who are known to have survived traumatic experiences: “Holocaust survivors make up the most famous class of such subjects, but whatever group or trauma is chosen, the upshot of well-conducted research is always the same” (158). That upshot:

            Unless a victim received a physical shock to the brain or
            was so starved or sleep deprived as to be thoroughly
            disoriented at the time, those experiences are typically
            better remembered than ordinary ones. (159, emphasis in
            original)

It seems here, as with incest aversion, that Freud got the matter exactly wrong—and with devastating fallout for countless families and communities. But Freud was vague about whether what gets repressed are memories of actual events or just fantasies. The crux of his argument was that we repress unacceptable and inappropriate drives and desires.

            And the concept of repressed desires is integral to the use of psychoanalysis in literary criticism. In The Interpretation of Dreams, Freud distinguishes between the manifest content of dreams and their latent content. Having been exiled from consciousness, troublesome desires press against the bounds of the ego, Freud’s notional agent in charge of tamping down uncivilized urges. In sleep, the ego relaxes, allowing the desires of the id, from whence all animal drives emerge, an opportunity for free play. Even in dreams, though, full transparency of the id would be too disconcerting for the conscious mind to accept, so the ego disguises all the elements which surface with a kind of code. Breaking this code is the work of psychoanalytic dream interpretation. It is also the basis for Freud’s analysis of myths and the underlying principle of Freudian literary criticism. (In fact, the distinction between manifest and latent content is fundamental to many schools of literary criticism, though they each have their own version of the true nature of the latent content.) Science writer Steven Johnson compares Freud’s conception of repressed impulses to compressed gas seeping through the cracks of the ego’s defenses, emerging as slips of the tongue or baroque dream imagery. “Build up enough pressure in the chamber, though, and the whole thing explodes—into uncontrolled hysteria, anxiety, madness” (191). The release of pressure, as it were, through dreams and through various artistic media, is sanity-saving.
Steven Johnson
            Johnson’s book, Mind Wide Open: Your Brain and the Neuroscience of Everyday Life, takes the popular currency of Freud’s ideas as a starting point for his exploration of modern science. The subtitle is an homage to Freud’s influential work The Psychopathology of Everyday Life. Perhaps because he is not a working scientist, Johnson is able to look past the shaky methodological foundations of psychoanalysis and examine how accurately its tenets map onto the modern findings of neuroscience. Though he sees areas of convergence, like the idea of psychic conflict and that of the unconscious in general, he has to admit in his conclusion that “the actual unconscious doesn’t quite look like the one Freud imagined” (194). Rather than a repository of repressed fantasies, the unconscious is more of a store of implicit, or procedural, knowledge. Johnson explains, “Another word for unconscious is ‘automated’—the things you do so well you don’t even notice doing them” (195). And what happens to all the pressurized psychic energy resulting from our repression of urges? “This is one of those places,” Johnson writes, “where Freud’s metaphoric scaffolding ended up misleading him” (198). Instead of a steam engine, neuroscientists view the brain as a type of ecosystem, with each module competing for resources; if a module goes unused—its neurons failing to fire—the strength of its connections diminishes.

            What are the implications of this new conception of how the mind works for the interpretation of dreams and works of art? Without the concept of repressed desires, is it still possible to maintain a distinction between the manifest and latent content of mental productions? Johnson suggests that there are indeed meaningful connections that can be discovered in dreams and slips of the tongue. To explain them, he points again to the neuronal ecosystem, and to the theory that “Neurons that fire together wire together.” He writes:

            These connections are not your unconscious speaking in
            code. They’re much closer to free-associating. These
            revelations aren’t the work of some brilliant cryptographer
            trying to get a message to the frontlines without enemy
            detection. They’re more like echoes, reverberations. One
            neuronal group fires, and a host of others join in the
            chorus. (200-201)

Mind Wide Open represents Johnson’s attempt to be charitable to the century-old, and now popularly recognized, ideas of psychoanalysis. But in this description of the shortcomings of Freud’s understanding of the unconscious and how it reveals itself, he effectively discredits the epistemological underpinnings of any application of psychoanalysis to art. It’s not only the content of the unconscious that Freud got outrageously wrong, but the very nature of its operations. And if Freud could so confidently look into dreams and myths and legends and find in them material that simply wasn’t there, it is cause for us to marvel at the power of his preconceptions to distort his perceptions.

            Ultimately, psychoanalysis failed to move from the realm of proto-science to that of methodologically well-founded science, relegated instead to the back channel of pseudoscience by the hubris of its founder. And yet, if Freud had relied on good science, his program of interpreting literature in terms of the basic themes of human nature, and even his willingness to let literature inform his understanding of those themes, might have matured into a critical repertoire free of the obscurantist excesses and reality-denying absurdities of postmodernism. (Anthropologist Clifford Geertz once answered a postmodernist critic of his work by acknowledging that perfect objectivity is indeed impossible, but then so is a perfectly germ-free operating room; that shouldn’t stop us from trying to be as objective and as sanitary as our best methods allow.)
            Critics could feasibly study the production of novels by not just one or a few authors, but a large enough sample—possibly extending across cultural divides—to analyze statistically. They could pose questions systematically to even larger samples of readers. And they could identify the themes in any poem or novel which demonstrate the essential (in the statistical sense) concerns of humanity that have been studied by behavioral scientists, themes like status-seeking, pair-bonding, jealousy, and even the overwhelming strength of the mother-infant bond. “The human race has produced only one successfully validated epistemology,” writes Frederick Crews (362). That epistemology encompasses a great variety of specific research practices, but they all hold as inviolable the common injunction “to make a sharp separation between hypothesis and evidence” (363). Despite his claims to scientific legitimacy, Freud failed to distinguish himself from other critical theorists because he relied too heavily on his own intuitive powers, a reliance that all but guarantees succumbing to the natural human tendency to discover in complex fields precisely what you’ve come to them seeking.

Also read Absurdities and Atrocities in Literary Criticism

A Crash Course in Multilevel Selection Theory part 2: Steven Pinker Falls Prey to the Averaging Fallacy Sober and Wilson Tried to Warn Him about

Read Part 1

            If you were a woman applying to graduate school at the University of California at Berkeley in 1973, you would have had a 35 percent chance of being accepted. If you were a man, your chances would have been significantly better: 44 percent of male applicants were accepted that year. Apparently, at this early stage of the feminist movement, even a school as notoriously progressive as Berkeley still discriminated against women. Not surprisingly, when confronted with these numbers, the women of the school were ready to take action to right the supposed injustice. After a lawsuit was filed charging admissions offices with bias, however, a department-by-department examination produced a curious finding: not a single department admitted a significantly higher percentage of men than women. In fact, there was a small but significant trend in the opposite direction—a bias against men.
What this means is that somehow the aggregate probability of being accepted into grad school was dramatically different from the probabilities worked out through disaggregating the numbers with regard to important groupings, in this case the academic departments housing the programs assessing the applications. This discrepancy called for an explanation, and statisticians had had one on hand since 1951.
This paradoxical finding fell into place when it was noticed that women tended to apply to departments with low acceptance rates. To see how this can happen, imagine that 90 women and 10 men apply to a department with a 30 percent acceptance rate. This department does not discriminate and therefore accepts 27 women and 3 men. Another department, with a 60 percent acceptance rate, receives applications from 10 women and 90 men. This department doesn’t discriminate either and therefore accepts 6 women and 54 men. Considering both departments together, 100 men and 100 women applied, but only 33 women were accepted, compared with 57 men. A bias exists in the two departments combined, despite the fact that it does not exist in any single department, because the departments contribute unequally to the total number of applicants who are accepted. (25)
This is how the counterintuitive statistical phenomenon known as Simpson’s Paradox is explained by philosopher Elliott Sober and biologist David Sloan Wilson in their 1998 book Unto Others: The Evolution and Psychology of Unselfish Behavior, in which they argue that the same principle can apply to the relative proliferation of organisms in groups with varying percentages of altruists and selfish actors. In this case, the benefit to the group of having more altruists is analogous to the higher acceptance rates for grad school departments which tend to receive a disproportionate number of applications from men. And the counterintuitive outcome is that, in an aggregated population of groups, altruists have an advantage over selfish actors—even though within each of those groups selfish actors outcompete altruists.  
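The arithmetic of the two-department example above can be checked with a short script—a minimal sketch using only the illustrative figures from the quoted passage:

```python
# Two departments, neither of which discriminates: each accepts
# men and women at exactly the same rate.
depts = [
    {"rate": 0.30, "women": 90, "men": 10},  # low-acceptance dept; mostly women apply
    {"rate": 0.60, "women": 10, "men": 90},  # high-acceptance dept; mostly men apply
]

women_in = sum(round(d["rate"] * d["women"]) for d in depts)  # 27 + 6
men_in = sum(round(d["rate"] * d["men"]) for d in depts)      # 3 + 54

# 100 women and 100 men applied overall, so the aggregate rates
# diverge even though no single department is biased.
print(women_in, men_in)  # 33 57
```

Aggregated, women are accepted at 33 percent and men at 57 percent, reproducing the apparent bias that vanishes the moment you disaggregate by department.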
            Sober and Wilson caution that this assessment is based on certain critical assumptions about the population in question. “This model,” they write, “requires groups to be isolated as far as the benefits of altruism are concerned but nevertheless to compete in the formation of new groups” (29). It also requires that altruists and nonaltruists somehow “become concentrated in different groups” (26) so the benefits of altruism can accrue to one while the costs of selfishness accrue to the other. One type of group that follows this pattern is a family, whose members resemble each other in terms of their traits—including a propensity for altruism—because they share many of the same genes. In humans, families tend to be based on pair bonds established for the purpose of siring and raising children, forming a unit that remains stable long enough for the benefits of altruism to be of immense importance. As the children reach adulthood, though, they disperse to form their own family groups. Therefore, assuming families live in a population with other families, group selection ought to lead to the evolution of altruism.
(pg. 24) Darker area represents altruists and shrinks in both groups—but notice the right circle gets bigger.
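The same logic can be made concrete with a one-generation toy model of two groups. The numbers here are assumptions for illustration, not Sober and Wilson's: two groups of 100, a baseline fitness of 10, and an altruistic act that costs the actor 1 fitness unit while spreading 5 units over the other group members.

```python
# Within each group the altruist share falls, yet the global
# altruist share rises, because the altruist-heavy group grows more.
def fitnesses(a, n, base=10.0, b=5.0, c=1.0):
    """Fitness of an altruist and a nonaltruist in a group with a
    altruists out of n members: each altruist pays cost c and spreads
    benefit b evenly over the other n - 1 members."""
    w_alt = base - c + b * (a - 1) / (n - 1)
    w_sel = base + b * a / (n - 1)
    return w_alt, w_sel

# (altruists, size): one mostly altruistic group, one mostly selfish
groups = [(80, 100), (20, 100)]

total_alt = total_all = 0.0
for a, n in groups:
    w_alt, w_sel = fitnesses(a, n)
    alt_off = a * w_alt                  # offspring produced by altruists
    all_off = alt_off + (n - a) * w_sel  # offspring produced by whole group
    print(f"within-group altruist share: {a/n:.2f} -> {alt_off/all_off:.2f}")
    total_alt += alt_off
    total_all += all_off

print(f"global altruist share: 0.50 -> {total_alt/total_all:.3f}")
# Prints 0.80 -> 0.79 and 0.20 -> 0.18 within the groups,
# but 0.50 -> 0.516 globally.
```

Selfishness wins inside every group, yet altruism gains in the aggregated population—Simpson's paradox doing evolutionary work.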
            Sober and Wilson wrote Unto Others to challenge the prevailing approach to solving mysteries in evolutionary biology, which was to focus strictly on competition between genes. In place of this exclusive attention on gene selection, they advocate a pluralistic approach that takes into account the possibility of selection occurring at multiple levels, from genes to individuals to groups. This is where the term multilevel selection comes from. In certain instances, focusing on one level instead of another amounts to a mere shift in perspective. Looking at families as groups, for instance, leads to many of the same conclusions as looking at them in terms of vehicles for carrying genes. William D. Hamilton, whose thinking inspired both Richard Dawkins’ Selfish Gene and E.O. Wilson’s Sociobiology, long ago explained altruism within families by setting forth the theory of kin selection, which posits that family members will at times behave in ways that benefit each other even at their own expense because the genes underlying the behavior don’t make any distinction between the bodies which happen to be carrying copies of themselves. Sober and Wilson write,
As we have seen, however, kin selection is a special case of a more general theory—a point that Hamilton was among the first to appreciate. In his own words, “it obviously makes no difference if altruists settle with altruists because they are related… or because they recognize fellow altruists as such, or settle together because of some pleiotropic effect of the gene on habitat preference.” We therefore need to evaluate human social behavior in terms of the general theory of multilevel selection, not the special case of kin selection. When we do this, we may discover that humans, bees, and corals are all group-selected, but for different reasons. (134)
A general proclivity toward altruism based on selection at the level of family groups may look somewhat different from kin-selected altruism targeted solely at those who are recognized as close relatives. For obvious reasons, the possibility of group selection becomes even more important when it comes to explaining the evolution of altruism among unrelated individuals.
Elliott Sober
            We have to bear in mind that Dawkins’s selfish genes are only selfish with regard to concerning themselves with nothing but ensuring their own continued existence—by calling them selfish he never meant to imply they must always be associated with selfishness as a trait of the bodies they provide the blueprints for. Selfish genes, in other words, can sometimes code for altruistic behavior, as in the case of kin selection. So the question of what level selection operates on is much more complicated than it would be if the gene-focused approach predicted selfishness while the multilevel approach predicted altruism. But many strict gene selection advocates argue that because selfish gene theory can account for altruism in myriad ways there’s simply no need to resort to group selection. Evolution is, after all, changes over time in gene frequencies. So why should we look to higher levels?
David Sloan Wilson
            Sober and Wilson demonstrate that if you focus on individuals in their simple model of predominantly altruistic groups competing against predominantly selfish groups you will conclude that altruism is adaptive because it happens to be the trait that ends up proliferating. You may add the qualifier that it’s adaptive in the specified context, but the upshot is that from the perspective of individual selection altruism outcompetes selfishness. The problem is that this is the same reasoning underlying the misguided accusations against Berkeley; for any individual in that aggregate population, it was advantageous to be a male—but there was never any department-level bias against females. Sober and Wilson write,
The averaging approach makes “individual selection” a synonym for “natural selection.” The existence of more than one group and fitness differences between the groups have been folded into the definition of individual selection, defining group selection out of existence. Group selection is no longer a process that can occur in theory, so its existence in nature is settled a priori. Group selection simply has no place in this semantic framework. (32)
Thus, a strict focus on individuals, though it may appear to fully account for the outcome, necessarily obscures a crucial process that went into producing it. The same logic might be applicable to any analysis based on gene-level accounting. Sober and Wilson write that
if the point is to understand the processes at work, the resultant is not enough. Simpson’s paradox shows how confusing it can be to focus only on net outcomes without keeping track of the component causal factors. This confusion is carried into evolutionary biology when the separate effects of selection within and between groups are expressed in terms of a single quantity. (33)
They go on to label this approach “the averaging fallacy.” Acknowledging that nobody explicitly insists that group selection is somehow impossible by definition, they still find countless instances in which it is defined out of existence in practice. They write,
Even though the averaging fallacy is not endorsed in its general form, it frequently occurs in specific cases. In fact, we will make the bold claim that the controversy over group selection and altruism in biology can be largely resolved simply by avoiding the averaging fallacy. (34)
            Unfortunately, this warning about the averaging fallacy continues to go unheeded by advocates of strict gene selection theories. Even intellectual heavyweights of the caliber of Steven Pinker fall into the trap. In a severely disappointing essay published just last month at Edge.org called “The False Allure of Group Selection,” Pinker writes
If a person has innate traits that encourage him to contribute to the group’s welfare and as a result contribute to his own welfare, group selection is unnecessary; individual selection in the context of group living is adequate. Individual human traits evolved in an environment that includes other humans, just as they evolved in environments that include day-night cycles, predators, pathogens, and fruiting trees.
Steven Pinker
Multilevel selectionists wouldn’t disagree with this point; they would readily explain traits that benefit everyone in the group at no cost to the individuals possessing them as arising through individual selection. But Pinker here shows his readiness to fold the process of group competition into some generic “context.” The important element of the debate, of course, centers on traits that benefit the group at the expense of the individual. Pinker writes,
Except in the theoretically possible but empirically unlikely circumstance in which groups bud off new groups faster than their members have babies, any genetic tendency to risk life and limb that results in a net decrease in individual inclusive fitness will be relentlessly selected against. A new mutation with this effect would not come to predominate in the population, and even if it did, it would be driven out by any immigrant or mutant that favored itself at the expense of the group.
But, as Sober and Wilson demonstrate, those self-sacrificial traits wouldn’t necessarily be selected against in the population. In fact, self-sacrifice would be selected for if that population is an aggregation of competing groups. Pinker fails to even consider this possibility because he’s determined to stick with the definition of natural selection as occurring at the level of genes.
            Indeed, the centerpiece of Pinker’s argument against group selection in this essay is his definition of natural selection. Channeling Dawkins, he writes that evolution is best understood as competition between “replicators” to continue replicating. The implication is that groups, and even individuals, can’t be the units of selection because they don’t replicate themselves. He writes,
The theory of natural selection applies most readily to genes because they have the right stuff to drive selection, namely making high-fidelity copies of themselves. Granted, it's often convenient to speak about selection at the level of individuals, because it’s the fate of individuals (and their kin) in the world of cause and effect which determines the fate of their genes. Nonetheless, it’s the genes themselves that are replicated over generations and are thus the targets of selection and the ultimate beneficiaries of adaptations.
The underlying assumption is that, because genes rely on individuals as “vehicles” to replicate themselves, individuals can sometimes be used as shorthand for genes when discussing natural selection. Since gene competition within an individual would be to the detriment of all the genes that individual carries and strives to pass on, the genes collaborate to suppress conflicts amongst themselves. The further assumption underlying Pinker’s and Dawkins’s reasoning is that groups make for poor vehicles because suppressing within group conflict would be too difficult. But, as Sober and Wilson write,
This argument does not evaluate group selection on a trait-by-trait basis. In addition, it begs the question of how individuals became such good vehicles of selection in the first place. The mechanisms that currently limit within-individual selection are not a happy coincidence but are themselves adaptions that evolved by natural selection. Genomes that managed to limit internal conflict presumably were more fit than other genomes, so these mechanisms evolve by between-genome selection. Being a good vehicle as Dawkins defines it is not a requirement for individual selection—it’s a product of individual selection. Similarly, groups do not have to be elaborately organized “superorganisms” to qualify as a unit of selection with respect to particular traits. (97)
The idea of a “trait-group” is exemplified by the simple altruistic group versus selfish group model they used to demonstrate the potential confusion arising from Simpson’s paradox. As long as individuals with the altruism trait interact with enough regularity for the benefits to be felt, they can be defined as a group with regard to that trait.
            Pinker makes several other dubious points in his essay, most of them based on the reasoning that group selection isn’t “necessary” to explain this or that trait, justifying his preference for gene selection only by reference to the selfish gene definition of evolution. Of course, it may be possible to imagine gene-level explanations for behaviors humans engage in predictably, like punishing cheaters in economic interactions even when doing so means the punisher incurs some cost to him or herself. But Pinker is so caught up with replicators that he overlooks the potential of this type of punishment to transform groups into functional vehicles. As Sober and Wilson demonstrate, group competition can lead to the evolution of altruism on its own. But once altruism reaches a certain threshold, group selection can become even more powerful because the altruistic group members will, by definition, be better at behaving as a group. And one of the mechanisms we might expect to evolve through an ongoing process of group selection would operate to curtail within-group conflict and exploitation. The costly punishment Pinker dismisses as possibly explicable through gene selection is much more likely to have arisen through group selection. Sober and Wilson delight in the irony that, “The entire language of social interactions among individuals in groups has been borrowed to describe genetic interactions within individuals; ‘outlaw’ genes, ‘sheriff’ genes, ‘parliaments’ of genes, and so on” (147).
            Unto Others makes such a powerful case against strict gene-level explanations and for the potentially crucial role of group selection that anyone who undertakes to argue that the appeal of multilevel selection theory is somehow false without even mentioning it risks serious embarrassment. Published fourteen years ago, it still contains a remarkably effective rebuttal to Pinker’s essay:  
In short, the concept of genes as replicators, widely regarded as a decisive argument against group selection, is in fact totally irrelevant to the subject. Selfish gene theory does not invoke any processes that are different from the ones described in multilevel selection theory, but merely looks at the same processes in a different way. Those benighted group selectionists might be right in every detail; group selection could have evolved altruists that sacrifice themselves for the benefit of others, animals that regulate their numbers to avoid overexploiting their resources, and so on. Selfish gene theory calls the genes responsible for these behaviors “selfish” for the simple reason that they evolved and therefore replicated more successfully than other genes. Multilevel selection theory, on the other hand, is devoted to showing how these behaviors evolve. Fitness differences must exist somewhere in the biological hierarchy—between individuals within groups, between groups in the global population, and so on. Selfish gene theory can’t even begin to explore these questions on the basis of the replicator concept alone. The vehicle concept is its way of groping toward the very issues that multilevel selection theory was developed to explain. (88)
Sober and Wilson, in opening the field of evolutionary studies to forces beyond gene competition, went a long way toward vindicating Stephen Jay Gould, who throughout his career held that selfish gene theory was too reductionist—he even incorporated their arguments into his final book. But Sober and Wilson are still working primarily in the abstract realm of evolutionary modeling, although in the second half of Unto Others they cite multiple psychological and anthropological sources. A theorist even more after Gould’s own heart, one who synthesizes both models and evidence from multiple fields, from paleontology to primatology to ethnography, into a hypothetical account of the natural history of human evolution, from the ancestor we share with the great apes to modern nomadic foragers and beyond, is the anthropologist Christopher Boehm, whose work we’ll be exploring in part 3.
Read Part 1 of A Crash Course in Multilevel Selection Theory: The Groundwork Laid by Dawkins and Gould
And Part 3: The People Who Evolved Our Genes for Us: Christopher Boehm on Moral Origins.

The Mental Illness Zodiac: Why the DSM 5 Won't Be Anything But More Pseudoscience

            Thinking you can diagnose psychiatric disorders using checklists of symptoms means taking for granted a naïve model of the human mind and human behavior. How discouraging to those in emotional distress, or to those doubting their own sanity, that the guides they turn to for help and put their faith in to know what’s best for them embrace this model. The DSM has taken it for granted since its inception, and the latest version, the DSM 5, due out next year, despite all the impediments to practical usage it does away with, despite all the streamlining, and despite all the efforts to adhere to common sense, only perpetuates the mistake. That the diagnostic categories are necessarily ambiguous and can’t be tied to any objective criteria like biological markers has been much discussed, as have the corruptions of the mental health industry, including pharmaceutical companies’ reluctance to publish failed trials for their blockbuster drugs, and clinical researchers who make their livings treating the same disorders they lobby to have included in the list of official diagnoses. Indeed, there’s good evidence that prognoses for mental disorders have actually gotten worse over the past century. What’s not being discussed, however, is the propensity in humans to take on roles, to play parts, even tragic ones, even horrific ones, without being able to recognize they’re doing so.

            In his lighthearted, mildly satirical, but severely important book on self-improvement, 59 Seconds: Change Your Life in Under a Minute, psychologist Richard Wiseman describes an experiment he conducted for the British TV show The People Watchers. A group of students spending an evening in a bar with their friends was given a series of tests along with access to an open bar. The tests included memorizing a list of numbers, walking along a line on the floor, and catching a dropped ruler as quickly as possible—measures of memory, balance, and reaction time, all areas in which performance predictably diminishes as we drink. The outcomes of the tests were well in keeping with expectations as they were repeated over the course of the evening: all the students did progressively worse the more they drank, and the effects of the alcohol were consistent throughout the entire group. It turns out, however, that only half of them were drinking alcohol.

At the start of the study, Wiseman had given half the participants a blue badge and the other half a red badge. The bartenders poured regular drinks for everyone with red badges, but for those with blue ones they made drinks which looked, smelled, and tasted like their alcoholic counterparts but were actually non-alcoholic. Now, were the students with the blue badges faking their drunkenness? They may have been hamming it for the cameras, but that would be true of the ones who were actually drinking too. What they were doing instead was taking on the role—you might even say taking on the symptoms—of being drunk. As Wiseman explains,

Our participants believed that they were drunk, and so they thought and acted in a way that was consistent with their beliefs. Exactly the same type of effect has emerged in medical experiments when people exposed to fake poison ivy developed genuine rashes, those given caffeine-free coffee became more alert, and patients who underwent a fake knee operation reported reduced pain from their “healed” tendons. (204)

After being told they hadn’t actually consumed any alcohol, the students in the blue group “laughed, instantly sobered up, and left the bar in an orderly and amused fashion.” But not all the natural role-playing humans engage in is this innocuous and short-lived.

            In placebo studies like the one Wiseman conducted, participants are deceived. You could argue that actually drinking a convincing replica of alcohol or taking a realistic-looking pill is the important factor behind the effects. People who seek treatment for psychiatric disorders aren’t tricked in this way, so what would cause them to take on the role associated with, say, depression or bipolar disorder? Plenty of research shows that pills or potions aren’t necessary. We take on different roles in different settings and circumstances all the time. We act much differently at football games and rock concerts than we do at work or school. These shifts are deliberate, though, and we’re aware of them, at least to some degree, when they occur. But many cues are more subtle. It turns out that just being made aware of the symptoms of a disease can make you suspect that you have it. What’s called Medical Student Syndrome afflicts those studying both medical and psychiatric diagnoses. For the most part, you either have a biological disease or you don’t, so the belief that you have one is contingent on the heightened awareness that comes from studying the symptoms. But is there a significant difference between believing you’re depressed and having depression? The answer, according to checklist diagnosis, is no.

            In America, we all know the symptoms of depression because we’re bombarded with commercials, like the one that uses squiggly circle faces to explain that it’s caused by a deficit of the neurotransmitter serotonin—a theory that had already been ruled out by the time that commercial began to air. More insidious though are the portrayals of psychiatric disorders in movies, TV series, or talk shows—more insidious because they embed the role-playing instructions in compelling stories. These shows profess to be trying to raise awareness so more people will get help to end their suffering. They profess to be trying to remove the stigma so people can talk about their problems openly. They profess to be trying to help people cope. But, from a perspective of human behavior that acknowledges the centrality of role-playing to our nature, all these shows are actually doing is shilling for the mental health industry, and they are probably helping to cause much of the suffering they claim to be trying to assuage.

            Multiple Personality Disorder, or Dissociative Identity Disorder as it’s now called, was an exceedingly rare diagnosis until the late 1970s and early 1980s, when its incidence spiked drastically. Before the spike, there had only ever been around a hundred cases. Between 1985 and 1995, there were around 40,000 new cases. What happened? There was a book and then a miniseries called Sybil, starring Sally Field, that aired in 1976. Much of the real-life story on which Sybil was based has been cast into doubt through further investigation (or has been shown to be completely fabricated). But if you’re one to give credence to the validity of the DID diagnosis (and you shouldn’t), then we can look at another strange behavioral phenomenon whose incidence spiked after a certain movie hit the box office in the 1970s. Prior to the release of The Exorcist, the Catholic church had pretty much consigned the eponymous ritual to the dustbin of history. Lately, though, they’ve had to dust it off. The Skeptic’s Dictionary says of a TV series on the Sci-Fi channel devoted to the exorcism ritual (or, rather, to the performance of it),

The exorcists' only prop is a Bible, which is held in one hand while they talk down the devil in very dramatic episodes worthy of Jerry Springer or Jenny Jones. The “possessed” could have been mentally ill, actors, mentally ill actors, drug addicts, mentally ill drug addicts, or they may have been possessed, as the exorcists claimed. All the participants shown being exorcized seem to have seen the movie “The Exorcist” or one of the sequels. They all fell into the role of husky-voiced Satan speaking from the depths, who was featured in the film. The similarities in speech and behavior among the “possessed” has led some psychologists such as Nicholas Spanos to conclude that both “exorcist” and “possessed” are engaged in learned role-playing.

If people can somehow inadvertently fall into the role of having multiple personalities or being possessed by demons, it’s not hard to imagine them hearing about, say, bipolar, briefly worrying that they may have some of the symptoms, and then subsequently taking on the role, even the identity of someone battling bipolar disorder.

            Psychologist Dan McAdams theorizes that everyone creates his or her own “personal myth,” which serves to give life meaning and trajectory. The character we play in our own myth is what we recognize as our identity, what we think of when we try to answer the question “Who am I?” in all its profundity. But, as McAdams explains in The Stories We Live By: Personal Myths and the Making of the Self,

Stories are less about facts and more about meanings. In the subjective and embellished telling of the past, the past is constructed—history is made. History is judged to be true or false not solely with respect to its adherence to empirical fact. Rather, it is judged with respect to such narrative criteria as “believability” and “coherence.” There is a narrative truth in life that seems quite removed from logic, science, and empirical demonstration. It is the truth of a “good story.” (28-9)
Dan McAdams

The problem when it comes to diagnosing psychiatric disorders is that the checklist approach tries to use objective, scientific criteria, when the only answers they’ll ever get will be in terms of narrative criteria. But why, if people are prone to taking on roles, wouldn’t they take on something pleasant, like kings or princesses?

            Since our identities are made up of the stories we tell about ourselves—even to ourselves—it’s important that those stories be compelling. And if nothing ever goes wrong in the stories we tell, well, they’d be pretty boring. As Jonathan Gottschall writes in The Storytelling Animal: How Stories Make Us Human,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television. (171)

Listen to the way talk show hosts like Oprah talk about mental disorders, and count how many times in an episode she congratulates the afflicted guests for their bravery in keeping up the struggle. Sometimes the word hero is even bandied about. Troublingly, the people who cast themselves as heroes spreading awareness, countering stigmas, and helping people cope even like to do really counterproductive things like publishing lists of celebrities who supposedly suffer from the disorder in question. Think you might have bipolar? Kay Redfield Jamison thinks you’re in good company. In her book Touched with Fire, she suggests everyone from rocker Kurt Cobain to fascist Mel Gibson is in that same boatful of heroes.

            The reason medical researchers insist a drug must not only be shown to make people feel better but must also be shown to work better than a placebo is that even a sham treatment will make people report feeling better between 60 and 90% of the time, depending on several well-documented factors. What psychiatrists fail to acknowledge is that the placebo dynamic can be turned on its head—you can give people illnesses, especially mental illnesses, merely by suggesting they have the symptoms, or even by increasing their awareness of and attention to those symptoms past a certain threshold. If you tell people a fact about themselves, they’ll usually believe it, especially if you claim that a test or an official diagnostic manual allowed you to determine it. This is how frauds convince people they’re psychics. An experiment you can do yourself involves handing out horoscopes to a group of people and asking how true each reading rings. After most of them endorse theirs, reveal that you swapped the labels and they all in fact read the wrong sign’s description.  

            Psychiatric diagnoses, to be considered at all valid, would need to be double-blind, just like drug trials: the patient shouldn’t know the diagnosis being considered; the rater shouldn’t know the diagnosis being considered; only a final scorer, who has no contact with the patient, should determine the diagnosis. The categories themselves are, however, equally problematic. In order to be properly established as valid, they need to have predictive power. Trials would have to be conducted in which subjects assigned to the prospective categories using double-blind protocols were monitored for long periods of time to see if their behavior adheres to what’s expected of the disorder. For instance, bipolar is supposedly marked by cyclical mood swings. Where are the mood diary studies? (The last time I looked for them was six months ago, so if you know of any, please send a link.) Smartphones offer all kinds of possibilities for monitoring and recording behaviors. Why aren’t they being used to do actual science on mental disorders?
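            Even a first pass at the mood-diary science described above would be easy to run. Here is a purely illustrative sketch (the daily ratings, the 60-day cycle, and the autocorrelation test are all my assumptions, not any published protocol) of how a year of hypothetical diary entries could be checked for the cyclicity a bipolar diagnosis presupposes:

```python
import math
import random

def autocorrelation(series, lag):
    """Pearson correlation between the series and itself shifted by `lag` days."""
    n = len(series) - lag
    x, y = series[:n], series[lag:]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
days = 365
# Hypothetical diary: a 60-day mood cycle buried in day-to-day noise.
cyclic = [math.sin(2 * math.pi * d / 60) + random.gauss(0, 0.5) for d in range(days)]
# A diary with no cycle at all: pure noise.
noise = [random.gauss(0, 1) for _ in range(days)]

# A genuine cycle shows up as a strong positive autocorrelation at its period;
# a cycle-free diary shows nothing there.
print(round(autocorrelation(cyclic, 60), 2))  # strongly positive
print(round(autocorrelation(noise, 60), 2))   # near zero
```

A diagnostic category with real predictive power would separate the two time series cleanly; the point is that this is a testable, quantitative claim, not a matter of clinical impression.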

            To research the role-playing dimension of mental illness, one (completely unethical) approach would be to design from scratch a really bizarre disorder, publicize its symptoms, maybe make a movie starring Mel Gibson, and monitor incidence rates. Let’s call it Puppy Pregnancy Disorder. We all know dog saliva is chock-full of gametes, right? So let’s say the disorder is caused when a canine, in a state of sexual arousal of course, bites the victim, thus impregnating her—or even him. Let’s say it affects men too. Wouldn’t that be funny? The symptoms would be abdominal pain, and something just totally out there, like, say, small pieces of puppy feces showing up in your urine. Now, this might be too outlandish, don’t you think? There’s no way we could get anyone to believe this. Unfortunately, I didn’t really make this up. There are real people in India who believe they have Puppy Pregnancy Disorder.

Intuition vs. Science: What's Wrong with Your Thinking, Fast and Slow

From Completely Useless to Moderately Useful
            In 1955, a twenty-one-year-old Daniel Kahneman was assigned the formidable task of creating an interview procedure to assess the fitness of recruits for the Israeli army. Kahneman’s only qualification was his bachelor’s degree in psychology, but the state of Israel had only been around for seven years at the time so the Defense Forces were forced to satisfice. In the course of his undergraduate studies, Kahneman had discovered the writings of a psychoanalyst named Paul Meehl, whose essays he would go on to “almost memorize” as a graduate student. Meehl’s work gave Kahneman a clear sense of how he should go about developing his interview technique.

If you polled psychologists today to get their predictions for how successful a young lieutenant inspired by a book written by a psychoanalyst would be in designing a personality assessment protocol—assuming you left out the names—you would probably get some dire forecasts. But Paul Meehl wasn’t just any psychoanalyst, and Daniel Kahneman has gone on to become one of the most influential psychologists in the world. The book whose findings Kahneman applied to his interview procedure was Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, which Meehl lovingly referred to as “my disturbing little book.” Kahneman explains,

Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule. In a typical study, trained counselors predicted the grades of freshmen at the end of the school year. The counselors interviewed each student for forty-five minutes. They also had access to high school grades, several aptitude tests, and a four-page personal statement. The statistical algorithm used only a fraction of this information: high school grades and one aptitude test. (222)

Daniel Kahneman
The findings for this prototypical study are consistent with those arrived at by researchers over the decades since Meehl released his book:

The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented. (223)       
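You can get a feel for why a rigid rule beats expert judgment with a toy simulation. Everything here is stylized (the weights, the noise levels, and the "clinician" model are my invention, not Meehl's data), but it captures the structural point: weighting the same cues inconsistently from case to case, and attending to a vivid but uninformative interview impression, erodes predictive validity.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

students = 500
grades   = [random.gauss(0, 1) for _ in range(students)]
aptitude = [random.gauss(0, 1) for _ in range(students)]
# Freshman-year outcome driven partly by grades and aptitude, plus everything else.
outcome  = [0.6 * g + 0.3 * a + random.gauss(0, 0.7)
            for g, a in zip(grades, aptitude)]

# Meehl-style statistical rule: a fixed combination of two scores, nothing else.
rule = [g + 0.5 * a for g, a in zip(grades, aptitude)]

# Stylized clinician: sees the same scores, but weights them differently for
# every student and adds a vivid, uninformative interview impression.
clinician = [random.uniform(0.2, 1.0) * g + random.uniform(0.0, 0.6) * a
             + 0.8 * random.gauss(0, 1)  # the interview impression
             for g, a in zip(grades, aptitude)]

print(round(corr(rule, outcome), 2))       # the rule's predictive validity
print(round(corr(clinician, outcome), 2))  # noticeably lower
```

The clinician here isn't stupid; the simulated judgments track the outcome. The fixed rule simply tracks it better, for the mechanical reason that it never dilutes the valid cues.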

            Kahneman designed the interview process by coming up with six traits he thought would have direct bearing on a soldier’s success or failure, and he instructed the interviewers to assess the recruits on each dimension in sequence. His goal was to make the process as systematic as possible, thus reducing the role of intuition. The response of the recruitment team will come as no surprise to anyone: “The interviewers came close to mutiny” (231). They complained that their knowledge and experience were being given short shrift, that they were being turned into robots. Eventually, Kahneman was forced to compromise, creating a final dimension that was holistic and subjective. The scores on this additional scale, however, seemed to be highly influenced by scores on the previous scales.

When commanding officers evaluated the new recruits a few months later, the team compared the evaluations with their predictions based on Kahneman’s six scales. “As Meehl’s book had suggested,” he writes, “the new interview procedure was a substantial improvement over the old one… We had progressed from ‘completely useless’ to ‘moderately useful’” (231).   

Amos Tversky
            Kahneman recalls this story at about the midpoint of his magnificent, encyclopedic book Thinking, Fast and Slow. It is just one in a long series of run-ins with people who don’t understand or can’t accept the research findings he presents to them, and it is neatly woven into his discussions of those findings. Each topic and each chapter feature a short test that allows you to see where you fall in relation to the experimental subjects. The remaining thread in the tapestry is the one most readers familiar with Kahneman’s work most anxiously anticipated—his friendship with Amos Tversky, his collaborator on the work for which Kahneman won the Nobel Prize in economics in 2002.

Most of the ideas that led to experiments that led to theories which made the two famous, and which contributed to the founding of an entire new field, behavioral economics, were born of casual but thrilling conversations both men found rewarding in their own right. Reading this book, as intimidating as it appears at a glance, you get glimmers of Kahneman’s wonder at the bizarre intricacies of his own and others’ minds, flashes of frustration at how obstinately or casually people avoid the implications of psychology and statistics, and intimations of the deep fondness and admiration he felt toward Tversky, who died in 1996 at the age of 59.

Pointless Punishments and Invisible Statistics

            When Kahneman begins a chapter by saying, “I had one of the most satisfying eureka experiences of my career while teaching flight instructors in the Israeli Air Force about the psychology of effective training” (175), it’s hard to avoid imagining how he might have relayed the incident to Amos years later. It’s also hard to avoid speculating about what the book might’ve looked like, or if it ever would have been written, if he were still alive. The eureka experience Kahneman had in this chapter came about, as many of them apparently did, when one of the instructors objected to his assertion, in this case that “rewards for improved performance work better than punishment of mistakes.” The instructor insisted that over the long course of his career he’d routinely witnessed pilots perform worse after praise and better after being screamed at. “So please,” the instructor said with evident contempt, “don’t tell us that reward works and punishment does not, because the opposite is the case.” Kahneman, characteristically charming and disarming, calls this “a joyous moment of insight” (175).

            The epiphany came from connecting a familiar statistical observation with the perceptions of an observer, in this case the flight instructor. The problem is that we all have a tendency to discount the role of chance in success or failure. Kahneman explains that the instructor’s observations were correct, but his interpretation couldn’t have been more wrong.

Francis Galton, who first described regression to the mean
What he observed is known as regression to the mean, which in that case was due to random fluctuations in the quality of performance. Naturally, he only praised a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and therefore likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into the cadet’s earphones only when the cadet’s performance was unusually bad and therefore likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process. (175-6)

The roster of domains in which we fail to account for regression to the mean is disturbingly long. Even after you’ve learned about the phenomenon, it’s still difficult to recognize the situations your understanding of it should be applied to. Kahneman quotes statistician David Freedman to the effect that whenever regression becomes pertinent in a civil or criminal trial, the side that has to explain it will pretty much always lose the case. Not understanding regression, and not appreciating how it distorts our impressions, has implications for even the minutest details of our daily experiences. “Because we tend to be nice to other people when they please us,” Kahneman writes, “and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty” (176). Probability is a bitch.
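The flight-instructor episode is easy to reproduce numerically. In this sketch (the skill and luck distributions and the praise/scream cutoffs are invented for illustration), feedback has no effect on performance whatsoever, yet the instructor's observations emerge anyway:

```python
import random

random.seed(2)

# Each landing is the cadet's stable skill plus that attempt's luck.
def landing(skill):
    return skill + random.gauss(0, 1)

after_praise, after_scream = [], []
for _ in range(10_000):
    skill = random.gauss(0, 1)
    first, second = landing(skill), landing(skill)
    # The instructor praises only unusually good landings...
    if first > 1.5:
        after_praise.append(second - first)
    # ...and screams only after unusually bad ones. Neither changes anything.
    elif first < -1.5:
        after_scream.append(second - first)

# Average change in performance on the very next attempt:
print(round(sum(after_praise) / len(after_praise), 2))  # negative: "praise hurts"
print(round(sum(after_scream) / len(after_scream), 2))  # positive: "screaming works"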

The Illusion of Skill in Stock-Picking

            Probability can be expensive too. Kahneman recalls being invited to give a lecture to advisers at an investment firm. To prepare, he asked for some data on the advisers’ performance and was given a spreadsheet of investment outcomes over eight years. When he analyzed the numbers statistically, he found that none of the advisers was consistently more successful than the others; the correlation between outcomes from year to year was nil. When he attended a dinner the night before the lecture “with some of the top executives of the firm, the people who decide on the size of bonuses,” he knew from experience how tough a time he would have convincing them that “at least when it came to building portfolios, the firm was rewarding luck as if it were a skill.” Still, he was amazed by the execs’ lack of shock:

We all went on calmly with our dinner, and I have no doubt that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions—and thereby threaten people’s livelihood and self-esteem—are simply not absorbed. (216)
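Kahneman's spreadsheet exercise is easy to mimic. If adviser outcomes really were pure luck, correlating each pair of years across advisers, as he describes doing, would find nothing (the firm's numbers are simulated here; only the method follows his description):

```python
import random

random.seed(3)

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

advisers, years = 25, 8
# Each cell: one adviser's result for one year, driven entirely by luck.
results = [[random.gauss(0, 1) for _ in range(years)] for _ in range(advisers)]

# Correlate every pair of years across advisers and average the 28 coefficients.
pairs = [corr([row[i] for row in results], [row[j] for row in results])
         for i in range(years) for j in range(i + 1, years)]
print(round(sum(pairs) / len(pairs), 2))  # hovers near zero
```

Year-end bonuses handed out on the basis of such a spreadsheet are, in effect, a lottery with extra paperwork.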

The scene that follows echoes the first chapter of Carl Sagan’s classic paean to skepticism, The Demon-Haunted World, in which Sagan recounts being bombarded with questions about science by a driver taking him from the airport to an auditorium where he was to give a lecture. He found himself explaining to the driver again and again that what the man thought was science—Atlantis, aliens, crystals—was, in fact, not. “As we drove through the rain,” Sagan writes, “I could see him getting glummer and glummer. I was dismissing not just some errant doctrine, but a precious facet of his inner life” (4). In Kahneman’s recollection of his drive back to the airport after his lecture, he writes of a conversation with his own driver, one of the execs he’d dined with the night before. 

He told me, with a trace of defensiveness, “I have done very well for the firm and no one can take that away from me.” I smiled and said nothing. But I thought, “Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?” (216)

Blinking at the Power of Intuitive Thinking

Malcolm Gladwell
            It wouldn’t surprise Kahneman at all to discover how much stories like these resonate. Indeed, he must’ve considered it a daunting challenge to conceive of a sensible, cognitively easy way to get all of his vast knowledge of biases, heuristics, and unconscious, automatic thinking into a book worthy of the science—and worthy too of his own reputation—while at the same time tying it all together with some intuitive overarching theme, something that would make it read more like a novel than an encyclopedia. Malcolm Gladwell faced a similar challenge in writing Blink: The Power of Thinking Without Thinking, but he had the advantages of a less scholarly readership, no obligation to be comprehensive, and the freedom afforded to someone writing about a field he isn’t one of the acknowledged leaders and creators of. Ultimately, Gladwell’s book painted a pleasing if somewhat incoherent picture of intuitive thinking. The power referred to in the title is power over the thoughts and actions of the thinker—not, as many must have presumed, the power to arrive at accurate conclusions.

It’s entirely possible that Gladwell’s misleading title came about deliberately, since there’s a considerable market for the message that intuition reigns supreme over science and critical thinking. But there are points in his book where it seems like Gladwell himself is confused. Robert Cialdini, Steve Martin, and Noah Goldstein cover some of the same research Kahneman and Gladwell do, but their book Yes!: 50 Scientifically Proven Ways to Be Persuasive is arranged in a list format, with each chapter serving as its own independent mini-essay.

Robert Cialdini
Early in Thinking, Fast and Slow, Kahneman introduces us to two characters, System 1 and System 2, who pass the controls of our minds back and forth between themselves according to the expertise and competency demanded by the current exigency or enterprise. System 1 is the more intuitive, easygoing guy, the one who does what Gladwell refers to as “thin-slicing”—the fast thinking of the title. System 2 works deliberately and takes effort on the part of the thinker. Most people find having to engage their System 2—to multiply 17 by 24, for instance—unpleasant to one degree or another.

The middle part of the book introduces readers to two other characters, ones whose very names serve as a challenge to the field of economics. Econs are the beings market models and forecasts are based on. They are rational, selfish, and difficult to trick. Humans, the other category, show inconsistent preferences, changing their minds depending on how choices are worded or presented, are much more sensitive to the threat of loss than the promise of gain, are sometimes selfless, and not only can be tricked with ease but routinely trick themselves. Finally, Kahneman introduces us to our “Two Selves,” the two ways we have of thinking about our lives: either moment-to-moment—experiences he, along with Mihaly Csikszentmihalyi (author of Flow), pioneered the study of—or in abstract hindsight. It’s not surprising at this point that there are important ways in which the two selves tend to disagree.

Intuition and Cerebration

 The Econs versus Humans distinction, with its rhetorical purpose embedded in the terms, is plenty intuitive. The two selves idea, despite being a little too redolent of psychoanalysis, also works well. But the discussions about System 1 and System 2 are never anything but ethereal and abstruse. Kahneman’s stated goal was to discuss each of the systems as if they were characters in a plot, but he’s far too concerned with scientifically precise definitions to run with the metaphor. The term system is too bloodless and too suggestive of computer components; it’s too much of the realm of System 2 to be at all satisfying to System 1. The collection of characteristics Thinking links to the first system (see a list below) is lengthy and fascinating and not easily summed up or captured in any neat metaphor. But we all know what Kahneman is talking about. We could use mythological figures, perhaps Achilles or Orpheus for System 1 and Odysseus or Hephaestus for System 2, but each of those characters comes with his own narrative baggage. Not everyone’s System 1 is full of rage like Achilles, or musical like Orpheus. Maybe we could assign our System 1s idiosyncratic totem animals.

Mihaly Csikszentmihalyi
But I think the most familiar and most versatile term we have for System 1 is intuition. It is a hairy and unpredictable beast, but we all recognize it. System 2 is actually the harder one to name, because people so often mistake their intuitions for logical thought. Kahneman explains why this is the case—because our cognitive resources are limited, our intuition often offers up simple questions as substitutes for more complicated ones—but we still need a term that doesn’t suggest complete independence from intuition and that doesn’t imply deliberate thinking operates flawlessly, like a calculator. I propose cerebration. The cerebral cortex rests on a substrate of other complex neurological structures. It’s more developed in humans than in any other animal. And the way cerebration rolls trippingly off the tongue is as eminently appropriate as the swish of intuition. Both terms work well as verbs too: you can intuit, or you can cerebrate. And when your intuition works in integrated harmony with your cerebration, you are likely in the state of flow Csikszentmihalyi pioneered the study of.

While Kahneman’s division of thought into two systems never really resolves into an intuitively manageable dynamic, something he does throughout the book, which I initially thought was silly, now seems a clever stroke. Kahneman has no faith in our ability to clean up our own thinking. He’s an expert on all the ways thinking goes awry, and even he catches himself making the common mistakes time and again. In the introduction, he proposes a way around the impenetrable wall of cognitive illusion and self-justification: if all the people gossiping around the water cooler are well versed in the language describing biases, heuristics, and errors of intuition, we may all benefit, because anticipating gossip can have a profound effect on behavior. No one wants to be spoken of as the fool.

Kahneman writes, “it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own.” It’s not easy to tell from his straightforward prose, but I imagine him writing lines like that with a wry grin on his face. He goes on,

Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and at home. (3)

So we encourage the education of others to trick ourselves into trying to be smarter in their eyes. To that end, Kahneman closes each chapter with a list of sentences in quotation marks—lines you might overhear passing that water cooler if everyone where you work read his book. I think he’s being overly ambitious. At some point in the future, you may hear lines like “They’re counting on denominator neglect” (333) in a boardroom—where people are trying to impress colleagues and superiors—but I seriously doubt you’ll hear them in the break room. Really, what he’s hoping is that people will start talking more like behavioral economists. Though some undoubtedly will, Thinking, Fast and Slow probably won’t ever be as widely read as, say, Freud’s lurid, pseudoscientific The Interpretation of Dreams. That’s a tragedy.

Still, it’s pleasant to think about a group of friends and colleagues talking about something other than football and American Idol.

Characteristics of System 1 (105): Try to come up with a good metaphor.

·         generates impressions, feelings, and inclinations; when endorsed by System 2 these become beliefs, attitudes, and intentions
·         operates automatically and quickly, with little or no effort, and no sense of voluntary control
·         can be programmed by System 2 to mobilize attention when particular patterns are detected (search)
·         executes skilled responses and generates skilled intuitions, after adequate training
·         creates a coherent pattern of activated ideas in associative memory
·         links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance
·         distinguishes the surprising from the normal
·         infers and invents causes and intentions
·         neglects ambiguity and suppresses doubt
·         is biased to believe and confirm
·         exaggerates emotional consistency (halo effect)
·         focuses on existing evidence and ignores absent evidence (WYSIATI)
·         generates a limited set of basic assessments
·         represents sets by norms and prototypes, does not integrate
·         matches intensities across scales (e.g., size and loudness)
·         computes more than intended (mental shotgun)
·         sometimes substitutes an easier question for a difficult one (heuristics)
·         is more sensitive to changes than to states (prospect theory)
·         overweights low probabilities
·         shows diminishing sensitivity to quantity (psychophysics)
·         responds more strongly to losses than to gains (loss aversion)
·         frames decision problems narrowly, in isolation from one another

Why I Am Not a Feminist—and You Shouldn’t Be Either part 1: Earnings

From a Georgetown University study called "Education, Occupation, and Lifetime Earnings"
           In order to establish beyond all doubt the continuing urgency of the battle for women’s equality, feminists rely heavily on data demonstrating an earnings discrepancy between the genders. Women make less money in America; therefore, women are not yet equal. If women aren’t making as much as men who work in the same industry, if women aren’t making as much as men with the same level of education, isn’t that an injustice? So how can I claim something is wrong with feminism, a movement seeking equal rights and equal treatment and equal pay for half the population of the country?

            There’s a point at which dwelling on the crimes committed against a group of people becomes a subtle form of bigotry toward other groups. Jews like to rehearse their long history of persecution for a reason. Focusing on anti-Semitism can bolster solidarity among Jews—if for no other reason than that it fosters suspicion of gentiles. This is not to minimize the true horrors and hatreds faced by God’s chosen people, but rather to point out that no matter how horrible a group’s past is, it doesn’t justify atrocities against other groups of people.

            I’m not writing merely to bemoan male-bashing, and I'm not suggesting feminists are guilty of atrocities (though a case could be made that they are). I’m writing because the good cause of equal rights and equal pay shades with distressing frequency into sloppy thinking and unscientific, perfervid preaching. Feminism has become a free-floating ideology, a cause inspiring blind frenzies and impassioned pronouncements about mysterious evils unlikely to exist in the world of living, breathing humans. And, yes, it is unfair to men, mean to boys, and counterproductive to women.

            I am an advocate of universal human rights, and many of my positions overlap with those of feminists. A pregnant woman has the right to choose whether or not to carry her baby to term. Any type of legal or educational enforcement of gender roles is a violation of the right of individuals to choose their own lifestyles, educational trajectories, careers, and the nature of their relationships. But this freedom in regard to gender roles also means that girls and boys, women and men, have just as much of a right to choose to be traditional or stereotypical in any of these domains. Any law or educational policy that goes after any aspect of gender freely chosen or naturally occurring is just as much of an injustice as one that forces individuals to take on roles that don’t fit them.
From a 2011 Gallup Report
          If it were true that the figures showing earnings discrepancies in fact represented compelling evidence of hiring or promoting biases favoring men, I would support the cause of reform—not in the name of women’s rights, but in the name of human rights, in the name of fairness. As stark an image as they paint, however, the results of the studies these figures come from are no more proof of bias than a study showing boys win more often in school sports would be proof of cheating. Just as you would have to address the question of how many girls are even playing sports, you have to ask how many women are applying for top-paying positions. Fortunately, several studies have looked at the application and hiring process directly—at least in academic fields.
From a 2011 CDC Report
            Before discussing those results, though, I’d like to point out (only somewhat flippantly) that earnings aren’t the only area in which reliable gender differences occur. Men have more heart attacks than women. And men tend to die at an earlier age than women, heart disease being the single most common cause of death. One of the main concerns of feminists is the so-called objectification of women and, more specifically, the theory that media portrayals of underweight actresses and models instill in young girls the conviction that they must be dangerously skinny to be attractive. Might it also be the case that media portrayals of extremely wealthy men instill in boys the notion that in order to be attractive they must make extremely large incomes, incomes they go to dangerous lengths to secure, say, by working long hours, spending little time with family and friends, ignoring their health, stressing themselves out, and working themselves into early graves?

            A 2010 study published in the Proceedings of the National Academy of Sciences by Daniel Kahneman and Angus Deaton begins its discussion of results thus:

            More money does not necessarily buy more happiness, but less money is associated with emotional pain. Perhaps $75,000 is a threshold beyond which further increases in income no longer improve individuals’ ability to do what matters most to their emotional well-being, such as spending time with people they like, avoiding pain and disease, and enjoying leisure. According to the ACS, mean (median) US household income was $71,500 ($52,000) in 2008, and about a third of households were above the $75,000 threshold. It also is likely that when income rises beyond this value, the increased ability to purchase positive experiences is balanced, on average, by some negative effects. A recent psychological study using priming methods provided suggestive evidence of a possible association between high income and a reduced ability to savor small pleasures. (4)

Perhaps a monomaniacal lusting after money is a pathology, one that men suffer from in much greater numbers than women. But my point isn’t that I think we should try to do something to protect these men from harm; it’s rather that income is not necessarily an absolute good. So why should it be a benchmark for women’s rights that they make dollar for dollar what men make? We have to at least consider the possibility that women already have it as good as or better than men today.

            Still, if a woman wants to go toe-to-toe with her male counterparts to see who can earn more, there should be no institutional barriers hampering her ability to compete. Before we look at those earnings charts and imagine sinister cabals of Scotch-swigging conspirators, however, we must determine whether or not the numbers result from choices freely made by women. “Gender Differences at Critical Transitions in the Careers of Science, Math, and Engineering Faculty” is the 2010 report of a task force established to investigate this very question. The main finding:

For the most part, male and female faculty in science, engineering, and mathematics have enjoyed comparable opportunities within the university, and gender does not appear to have been a factor in a number of important career transitions and outcomes. (153)

How does the study account for the underrepresentation of women in these fields? “Women accounted for about 17 percent of applications for both tenure-track and tenured positions in the departments surveyed” (154). So the plain fact is that women apply for these positions less frequently. Could it be because they despair of their chances for getting an interview? It turns out that “The percentage of women who were interviewed for tenure-track or tenured positions was higher than the percentage of women who applied” (157), which does sound a bit like discrimination—against men. And it gets better (or worse): “For all disciplines the percentage of tenure-track women who received the first job offer was greater than the percentage in the interview pool” (157). Fewer women applying to positions in these fields, not discriminatory hiring or promoting, explains their underrepresentation.

            Reviewing this and several other research programs, Stephen Ceci and Wendy Williams, in a report likewise published in the Proceedings of the National Academy of Sciences titled "Understanding current causes of women's underrepresentation in science", explain that 

            Despite frequent assertions that women’s current underrepresentation in math-intensive fields is caused by sex discrimination by grant agencies, journal reviewers, and search committees, the evidence shows women fare as well as men in hiring, funding, and publishing (given comparable resources). That women tend to occupy positions offering fewer resources is not due to women being bypassed in interviewing and hiring or being denied grants and journal publications because of their sex. It is due primarily to factors surrounding family formation and childrearing, gendered expectations, lifestyle choices, and career preferences—some originating before or during adolescence—and secondarily to sex differences at the extreme right tail of mathematics performance on tests used as gateways to graduate school admission. As noted, women in math-intensive fields are interviewed and hired slightly in excess of their representation among PhDs applying for tenure-track positions. The primary factors in women’s underrepresentation are preferences and choices—both freely made and constrained: “Women choose at a young age not to pursue math-intensive careers, with few adolescent girls expressing desires to be engineers or physicists, preferring instead to be medical doctors, veterinarians, biologists, psychologists, and lawyers. Females make this choice despite earning higher math and science grades than males throughout schooling”. (5)

These "math-intensive" fields (Wall Street?) are central to our economy and accordingly tend to mean higher pay for those who choose them. Since the study that compared incomes by gender and education level failed to account for what field the education or the career was in, the differences in fields chosen probably explain the difference in pay. The PNAS study authors cite a Government Accountability Office report whose findings accorded well with this explanation. Ceci and Williams write that

            the GAO report mentions studies of pay differentials, demonstrating that nearly all current salary differences can be accounted for by factors other than discrimination, such as women being disproportionately employed at teaching-intensive institutions paying less and providing less time for research. (4)

Conservatives are fond of the principle that equality of opportunity doesn’t mean equality of outcome. Though they are demonstrably wrong when it comes to economic inequality in general (since inequality and mobility are negatively correlated), the principle is completely sound. I have no doubt that some men are barring the doors of employment to some women in America today. There are probably places where the reverse is true as well. But feminism is a body of facile assumptions that leads to ready conclusions of questionable validity. The assumption of discrimination when faced with earnings discrepancies is just one example.

Feminism is the political and social effort to attain equality between the sexes. While this sounds perfectly innocuous, even admirable, it frames relations between women and men as fundamentally antagonistic; it’s us versus them. Even a whiff of tribalism tends to make otherwise admirable efforts take tragic turns. How many relationships have been undermined by the idea that difference means inequality means oppression, by the notion that within every man lurks the impulse to dehumanize and dominate women?

In future posts, I’m going to look at the faulty assumptions inspired by feminism in the realms of sex and attraction—i.e. the bizarre notion of objectification—and in the upbringing of children, where so much pointless hand-wringing takes place over whether gender stereotypes are being subtly imposed. For now, I’m going to close with some questions from a graduate-level textbook, Theory into Practice: An Introduction to Literary Criticism by Ann Dobie. They’re from a section devoted to helping burgeoning scholars learn to write feminist essays about literature. The idea is to pose these questions to yourself as you’re reading. See if you can spot the assumptions. See if you think they’re valid or fair.

-What stereotypes of women do you find? Are they oversimplified, demeaning, untrue? For example, are all blondes understood to be dumb?
-Examine the roles women play in a work. Are they minor, supportive, powerless, obsequious? Or are they independent and influential?
-How do the male characters talk about the female characters?
-How do the male characters treat the female characters?
-How do the female characters act toward the male characters?
-Who are the socially and politically powerful characters?
-What attitudes toward women are suggested by the answers to these questions?
-Do the answers to these questions indicate that the work lends itself more naturally to a study of differences between the male and female characters, a study of power imbalances between the sexes, or a study of unique female experience? (121-2)

In case you missed it, let me quote from the first page of the chapter: "The premise that unites those who call themselves feminist critics is the assumption that Western culture is fundamentally patriarchal, creating an imbalance of power which marginalizes women and their work" (104). While I acknowledge the assumption was historically justified, I have a feeling people will keep making it long after its promise of a better tomorrow is exhausted.
Read part 2: The Objectionable Concept of Objectification
and part 3: Engendering Gender Madness
Read my response to commenters.

Gravitating Toward Tribal: The Danger of Free-Floating Ideologies

Image from the movie Zardoz. Courtesy of Thersic.com
          Ideologies are usually conceived through a coupling of comfortable tradition with a calculation of self-interest. But they can also be born of good faith efforts at understanding. More important than their origin and development is the degree to which they are grounded. If you work out a comprehensive and adequately complex ideology that serves to explain an otherwise incomprehensible phenomenon and possibly even offers some guidance for dealing with an otherwise chaotic and frightening dynamic, you’ve created a theory that will appeal to human minds desperate for understanding and a sense, no matter how meager, of control. But does the ideology match up with reality? That’s an entirely different question.

            Free-floating ideologies, those that persist solely owing to the comforts they provide and the conveniences they secure, survive confrontations with reality and subsist despite vast lacunae in empirical support because human perception operates through a process of cross-referencing sensory inputs with prior knowledge. What we see is largely determined by what we’re looking for, and how we see it by what we believe about it. Patterns arising in what ought to be random incidents often sustain beliefs—even though in most contexts humans are terrible at calculating probabilities. A natural confirmation bias has us perceiving and remembering all the times predictions arising from our theories come to fruition while missing or forgetting all the times they fail. We tend to enjoy the company of like-minded others, and rather idiotically have our convictions bolstered by their common acceptance among those with whom we’ve chosen to associate.

            Unmoored ideologies gravitate toward certain predictable tracks in human cognition. We like to think there’s some sort of agency behind everything, an intelligence governing the universe. To think that no one’s in charge of all the swirling and colliding galaxies is variously unsettling and terrifying to us. So we take in the sublime beauty of quiet sunsets and wonder at the beneficence of the creator. Or we note coincidences in our lives, the way they fall together in a meaningful, beneficial way, and we feel a need to express gratitude to the guiding divinity. This is mostly innocent. Though it can lead to complacence and willful ignorance of entire regions where this supposedly beneficent guide has deigned never to set foot, and it can add an extra layer of grief in response to catastrophe, the comfort of believing in an invisible protector and guide has little immediate cost.

            Much more worrying is the gravitation of free-floating ideologies toward tribalism. The pseudo-scientific cult that has arisen around certain varieties of psychotherapy has bequeathed to our culture the horrifying belief that an unknown portion of the population, predominantly male, can induce the modern equivalent of demonic possession, severe psychological trauma, through an inverted laying-on of hands. The ideology has made monsters of men. The fetishizing of free markets likewise entails a belief in a loathsome variety of sub-humans. The economy, true believers assert, is a battle between the makers and the moochers, the producers and the parasites. As a conservative friend put it, in a discussion of healthcare reform, “Giving insurance to the slugs will just make them bigger slugs.”

            If you challenge someone’s beliefs by suggesting theirs is an ideology divorced from reality, as everyone does who advocates for one set of beliefs in opposition to another, the predictable response is to insist that the ideology emerged from an awareness of facts through inductive reasoning. But sunsets, no matter how sublime, don’t really provide any evidence for the existence of an intelligent agency behind the curtain of the cosmos. Troubled young women with histories of abuse don’t prove that sexual experiences in childhood cause a wild assortment of psychological maladjustments. And the higher incarceration rate for impoverished groups doesn’t in any way establish some fundamental divide between good and bad types of people.

            Once ideologies reach a certain stage of development, they become all but immune to contradictory evidence. When the facts cooperate, they are trumpeted. When they don’t, the devout have recourse to principles. I’ve referred advocates of particular varieties of psychotherapy to evidence that they’re ineffective. In response, I didn’t get references to other bodies of evidence supporting the beliefs and practices in question; rather, I got an explanation of how the therapeutic techniques were supposed to work. Present a free market purist with evidence that market competition doesn’t lead to innovation, or leads to detrimental innovations, and you’ll likely get a lecture explaining the principles behind how it’s supposed to work, according to the free market ideology, rather than evidence that it does, in fact, work in the theorized way. This convenient toggling back and forth between inductive and deductive reasoning allows us to explain away disconnects between our ideologies and the world.

            It is the tendency of free-floating ideologies toward tribalism that leads me to advocate a strict adherence to science in matters of public concern. It wasn’t merely coincidence that the Enlightenment represented the inception of both the traditions of science and universal human rights, which have suffered through a traumatic childhood of their own, and are now living out a tumultuous adolescence. The tendency toward tribalism is also why I’m wary of commercial fiction, which almost invariably makes characters represent ideas and personal qualities, only to pit the good guys against the bad. J.K. Rowling can claim all she wants that the Harry Potter books teach kids the evils of bigotry, but any work with goodies and baddies taps into tribal instincts. Literary fiction, on the other hand, at its best, is an exercise in empathy.

Beliefs that Make You Feel Good Make You Look Good Too—But You’re a Total Asshole if You Let That Influence You

Imagine you are among a group of around thirty people on an island and over the past few weeks you’ve learned of the presence of another group living on the same island, one which has been showing signs of hostility toward your own group. Because of your wisdom, your group has appointed you the task of convening a selective gathering to devise a strategy for dealing with the looming threat. Among your group there happen to be several people with military training as well as some with experience in diplomacy. There are also individuals claiming psychic powers and religious authority. You understand that the composition of the gathering will be among the most important factors determining the consensus strategy it will arrive at. Who do you invite to participate? Who do you exclude?

(Full disclosure: the first strategy that occurs to me is to find a way to get the rival group’s attention and then execute the psychics and religious authorities for them to witness, letting them know afterward this treatment is what they can expect from us should they decide to continue their hostility.)

Beliefs have consequences. A psychic in our hypothetical group may be convinced that he’s seen the future and in it the home group stands victorious, having suffered no casualties, over the rival group. This vision allows an otherwise outvoted military aggressor to persuade everyone else a violent raid is the best course of action. A religious leader may feel it incumbent on her to serve as a missionary to the savages. This may lead to an attempt at diplomacy which backfires by offending the rival group’s own religious sensibilities. The fate of the home group is at stake. Whose opinions do you seek?

This imaginary scenario is meant to illustrate the point that an individual’s beliefs inevitably contribute to the culture and ultimately influence the fate of societies. While it is true that the larger the society the smaller the impact of any one person’s ideas, it is likewise the case that through a mechanism called social proof the stated ideas of individuals have multiplier effects far beyond what any one person believes. Social norms are a major determiner of what people accept as true. And many people may never question a given piece of conventional wisdom simply because it has never occurred to them to do so—at least not until they encounter someone who espouses wisdom of an unconventional strain.

This point may seem obvious enough, and yet it represents a major departure from the dominant approach to considering beliefs in American culture. When confronted with a new idea Americans automatically and unconsciously apply a rigid formula to assessing its merits: they ask, first, how would believing this idea make me feel, and, second, how would believing this idea make me look to others? The order of these questions may be reversed, but no other questions ever enter the equation. The foundation of our culture is an ethic of consumerism, and so people decide what to believe exactly the same way they decide what music they want to claim as their favorite, and the same way they decide what type of t-shirt they’ll wear to advertise their personal style.

Savvy marketers, public relations experts, and profiteering charlatan shitbags are well aware of the extent to which consumerism determines our beliefs and behaviors. There’s no shortage of people in this country who will have nothing to do with politics because the topic is just not sexy at all; they know politicians are considered dishonest, petty, and even corrupt. Who would want to associate themselves with that? This general distaste for government and its policy disputes derives much of its fuel from each party’s attempts to brand the other in as off-putting a way as possible. I haven’t seen a survey that establishes the link, but I’d wager where people fall on the political spectrum is largely determined by whether they'd find it less acceptable to be thought of as naïve and effete or to be thought of as callous and lacking in compassion.

I try, as much as possible, to adhere to the Enlightenment values of devotion to science and championing of universal human rights. When people of the consumerist mindset discuss their beliefs with me, they are often baffled as to why I would insist on scientific skepticism with regard to supernatural ideas and pop culture myths. Science is so dry and mechanical. So, when I tell people what I believe, I usually get one of three responses: the first is to assume that my knowledge about research on some issue must be completely independent of my beliefs, because beliefs are personal and science is not. “Okay, you’ve told me what you know about the results of some experiments. But what do you really believe?”

The second response, equally in keeping with the consumerist ethic, is to assume that anyone so devoted to science must be a dry and mechanical person, the type who is incapable of tapping into his intuition, who insists on cold hard facts and bloodless statistics. After all, the reasoning goes, this guy chose his beliefs based on how he wanted to represent himself, so if he’s spouting off stats and experimental results he must have a pretty limited and robotic personality. It should go without saying—but unfortunately it doesn’t—that this reasoning is based on a gross misunderstanding of science and statistics alike. But the other mistake implicit in this response is the assumption that people can only decide what to believe according to how they want to represent themselves to others.

And yet it’s the third response that’s the most troubling. When you listen to someone’s beliefs about, say, supply-side economics, or religion, or alternative medicine and then start going into detail about why those beliefs are almost certainly wrong, many people will immediately conclude that there’s an ulterior motive behind your scientific skepticism. Because you have such a strong tendency to reject other people’s beliefs, they reason, you must simply be the type of person who enjoys making other people feel and look stupid. It’s not enough to wear your own favorite brand of t-shirt; you have to ridicule other people’s fashion sense. People who respond this way—you know who you are—can be counted on to violently assert themselves when you challenge them. They take your arguments very personally.

The true reason I’m devoted to science, though, is that I take responsibility for the consequences of my beliefs. What you believe has a direct impact on the culture around you, and an indirect impact on the course of society at large. If you like the fit of supply-side economics, if you explain to anyone who’ll listen how wealth at the top trickles down, and if you vote for conservative politicians, then you’re responsible for the results, positive or negative, of the implementation of those policies. In point of fact, the most reliable outcome of these policies is greater income inequality, which is associated with a host of societal ills from increased violent crime to higher infant mortality. I would argue that those signing on to the conservative agenda after these facts were established are complicit in the perpetuation of these social problems.

The position you take on any issue with broader social implications inevitably becomes more than a personal choice. And it’s more difficult than you may assume to come up with issues that don’t have broader social implications. Where, for instance, was your t-shirt made? What were the conditions the people who made it were working under? What effects did its manufacture have on the surrounding ecosystems? The plain fact is that any pure application of the consumerist ethic, whether to your choice of clothing or to what religion or political party you support, is profoundly irresponsible.

In my first novel, which I just recently completed, the characters address issues concerning recovered memories of child abuse. This is a topic I began researching as an undergrad studying psychology. It turns out the best research rules out the theory of repressed trauma with a high degree of certainty. Now, it shouldn’t require any great deal of trust on your part to believe I have no desire to associate myself in any way with the issue of child abuse, especially in any way that entails a risk of being perceived as wanting to defend or advocate it. But there are men in prison today convicted solely on the basis of evidence from recovered memories. If I simply toed the conventional line and neglected to thoroughly research the issue, or worse, if I ignored the products of that research, I would be complicit in the imprisonment of innocent men. This complicity extends to the seemingly innocent act of remaining silent when others around me are expressing views I know to be in error.

The tendency to rely on pure consumerism to assess ideas and to fail to take responsibility for their consequences is a trap all too easy to fall into. I can almost guarantee the shirt on your back right now was made in a third world country under conditions you’d literally kill to keep your own children safe from. But most Americans are blithely ignorant of this. And I can attest it is exceedingly difficult and prohibitively expensive to limit your purchases to products made under more humane conditions. Manufacturers depend on American consumers being ignorant and irresponsible. And yet, under some circumstances, people’s reasoning becomes eminently more practical. When your child gets sick, the sexiness of holistic medicine doesn’t lure you away from doctors trained in scientific medicine—though you may backslide if that first visit fails to cure them.

But how, you may ask, do you express your individuality if you are so committed to science? Alternatively, how can others assess your personality through your beliefs if they’re all based on some scientist’s research? Well, even if research were to prove somehow that it’s better to be extroverted than introverted, people have little control over such things. So it is with most personality traits. Science may also offer some hints about characteristics I ought to look for in a romantic partner, but ultimately which woman I pair up with will be determined by factors beyond the scope of any research project. Not every personal decision you make has wider societal consequences. Anyway, there’s plenty of room for individuality even for those of us thoroughly committed to taking responsibility for our actions and beliefs.

More of a Near Miss--Response to "Collision"

The documentary Collision is an attempt at irony. The title is spelled on the box with a bloody slash for the i coming in the middle of the word. The film opens with hard rock music and Christopher Hitchens throwing down the gauntlet: "One of us will have to admit he's wrong. And I think it should be him." There are jerky closeups and dramatic pullaways. The whole thing is made to resemble one of those pre-event commercials on pay-per-view for boxing matches or UFC fights.

The big surprise, which I don't think I'm ruining, is that evangelical Christian Douglas Wilson and anti-theist Christopher Hitchens--even in the midst of their heated disagreement--seem to like and respect each other. At several points they stop debating and simply chat with one another. They even trade Wodehouse quotes (and here I thought you had to be English to appreciate that humor). Some of the best scenes have the two men disagreeing without any detectable bitterness, over drinks in a bar, as they ride side by side in a car, and each even giving signs of being genuinely curious about what the other is saying. All this bonhomie takes place despite the fact that neither changes his position at all over the course of their book tour.

I guess for some this may come as a surprise, but I've been arguing religion and science and politics with people I like, or even love, since I was in my early teens. One of the things that got me excited about the movie was that my oldest brother, a cancer biologist whose professed Christianity I suspect is a matter of marital expediency (just kidding), once floated the idea of collaborating on a book similar to Wilson and Hitchens's. So I was more disappointed than pleasantly surprised that the film focused more on the two men's mutual respect than on the substance of the debate.

There were some parts of the argument that came through though. The debate wasn't over whether God exists but whether belief in him is beneficial to the world. Either the director or the editors seemed intent on making the outcome an even wash. Wilson took on Hitchens's position that morality is innate, based on an evolutionary need for "human solidarity," by pointing out, validly, that so is immorality and violence. He suggested that Hitchens's own morality was in fact derived from Christianity, even though Hitchens refuses to acknowledge as much. If both morality and its opposite come from human nature, Wilson argues, then you need a third force to compel you in one direction over the other. Hitchens, if he ever answered this point, wasn't shown doing so in the documentary. He does point out, though, that Christianity hasn't been any better historically at restricting human nature to acting on behalf of its better angels.

Wilson's argument is fundamentally postmodern. He explains at one point that he thinks rationalists giving reasons for their believing what they do is no different from him quoting a Bible verse to explain his belief in the Bible. All epistemologies are circular. None are to be privileged. This is nonsense. And it would have been nice to see Hitchens take him to task for it. For one thing, the argument is purely negative--it attempts to undermine rationalism but offers no positive arguments on behalf of Christianity. To the degree that it effectively casts doubt on nonreligious thinking, it casts the same amount of doubt on religion. For another, the analogy strains itself to the point of absurdity. Reason supporting reason is a whole different animal from the Bible supporting the Bible for the same reason that a statement arrived at by deduction is different from a statement made at random. Two plus two equals four isn't the same as there's an invisible being in the sky and he's pissed.

Of course, two plus two equals four is tautological. It's circular. But science isn't based on rationalism alone; it's rationalism cross-referenced with empiricism. If Wilson's postmodern arguments had any validity (and they don't), they still wouldn't provide him with any basis for being a Christian as opposed to an atheist as opposed to a Muslim as opposed to a drag queen. But science offers a standard of truth.

Wilson's other argument, that you need some third factor beyond good instincts and bad instincts to be moral, is equally lame. Necessity doesn't establish validity. As one witness to the debate in a bar points out, an argument from practicality doesn't serve to prove a position is true. What I wish Hitchens had pointed out, though, is that the third factor need not be divine authority. It can just as easily be empathy. And what about culture? What about human intentionality? Can't we look around, assess the state of the world, realize our dependence on other humans in an increasingly global society, and decide to be moral? I'm a moral being because I was born capable of empathy, and because I subscribe to Enlightenment principles of expanding that empathy and affording everyone on Earth a set of fundamental human rights. And, yes, I think the weight of the evidence suggests that religion, while it serves to foster in-group cooperation, also inspires tribal animosity and war. It needs to be done away with.

One last note: Hitchens tries to illustrate our natural impulse toward moral behavior by describing an assault on a pregnant woman. "Who wouldn't be appalled?" Wilson replies, "Planned Parenthood." I thought Hitchens of all people could be counted on to denounce such an outrage. Instead, he limply says, "Don't be flippant," then stands idly, mutely, by as Wilson explains how serious he is. It's a perfect demonstration of Hitchens's correctness in arguing that Christianity perverts morality that a man as intelligent as Wilson doesn't see that comparing a pregnant woman being thrown down and kicked in the stomach to abortion is akin to comparing violent rape to consensual sex. He ought to be ashamed--but won't ever be. I think Hitchens ought to be ashamed for letting him say it unchallenged (unless the challenge was edited out).

The McLaughlin Rorschach

I really like watching The McLaughlin Group Friday nights on PBS because I'm always curious about how the two sides are going to incorporate the week's events into their partisan narratives. But every once in a while I sit and think about how much political philosophies resemble literary theories.
To wit, you can apply psychoanalytic theories to any literary text. In fact, you can apply any number of psychoanalytic theories in any number of ways to any literary text. Think of the famous phallic symbol. The critic predicts it'll be there and then he scours the pages until he finds it. He then comes away convinced that he's somehow tested and proven his theory. But if you look at the infinite range of items in literature that have been called phallic symbols the so-called test looks pretty pathetic: cars, buildings, shoes, hairstyles.
To appreciate this, imagine I hold up an entire deck of cards and tell you to pick one. You pull out the three of hearts and show it to me. I say, "I knew you were going to pick that card," and to prove it, I pull another three of hearts out of my shoe, suggesting I placed it there so as to be able to prove I'd predicted which card you'd picked. It's magic! Except in reality I have all fifty-two cards stashed somewhere on my person, in my pockets, in each shoe, under my shirt, etc. So no matter what you'd picked I was prepared to "prove" I'd predicted it beforehand. Now, literary critics probably aren't aware they're playing a trick like this, adjusting their predictions and perceptions to make their theories fit the texts; in fact, they're very likely tricking themselves more than anyone else.
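The logic of the stacked-deck trick can be sketched in a few lines of Python (a toy illustration of my own, not anything from the original argument): because a duplicate of every card is hidden somewhere, the "prediction" succeeds for every possible pick, which is precisely why it proves nothing.

```python
# Build a standard 52-card deck.
SUITS = ["hearts", "diamonds", "clubs", "spades"]
RANKS = ["ace"] + [str(n) for n in range(2, 11)] + ["jack", "queen", "king"]
DECK = [f"{rank} of {suit}" for suit in SUITS for rank in RANKS]

def stacked_prediction(chosen_card):
    """The 'magician' hides a duplicate of every card, so whatever
    you choose, a matching 'prediction' can always be produced."""
    hidden_copies = set(DECK)  # one stashed duplicate of all 52 cards
    return chosen_card in hidden_copies

# The trick 'succeeds' for every possible choice: a prediction that
# can never fail is no prediction at all.
success_rate = sum(stacked_prediction(c) for c in DECK) / len(DECK)
```

A theory that can accommodate any card, like a theory that can find a phallic symbol in any text, has a success rate of exactly 1.0, and therefore zero evidential value.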

Watching the various panel members on McLaughlin--Monica Crowley is by far my favorite for obvious reasons--it's clear they're playing pretty much the same game. For the conservatives, the Deepwater Horizon disaster shows the dangers of regulation, as big businesses capture the regulatory bodies and form "cozy" relationships with them. For liberals, the oil spill shows the dangers of deregulation, as businesses are allowed to cut corners and endanger everyone.
Political philosophy is not science. There is more involved in it than simply arriving at the truth--itself never a simple endeavor, especially when thousands of people are involved. But there seems to me to be an astonishing flippancy in politics when it comes to epistemology. People get initiated into this or that tradition, and from that point on their views are decided. It's not only like literary theories; it's also a lot like religions. Now you can make the argument that religion is an entirely personal matter--but politics affects everyone. Why isn't there a greater push to inject epistemic rigor into policy discussions? Why are we content to allow the postmodern state of affairs in which every view is taken to be as worthy as another?
Social sciences are notoriously complex, and there's a limit to their practical implementation. But to go from that to throwing up your hands and saying anything goes--or its functional equivalent, throwing up your hands and hoping the free market fixes everything--is to embrace a false dichotomy. We still have to use the evidence we have. We still should be trying to get more. We can't let our politics be determined as randomly as our preferences for fast food or hairstyles.

The Biggest Guilt Trip in History

What is the point of a crucifix? It's supposed to remind us of the good news, that Jesus saved us from sin, right? But wait, aren't we all still sinners in the eyes of God? So what does it mean that Jesus saved us? Supposedly, Jesus saved us from an eternity of hellfire by acting as an intermediary between us and God, introducing the concept of forgiveness (suggesting the omniscient one didn't already know about it).
Forgiveness for what exactly though? The answer we're given is that we require forgiveness not so much for what we've done but for what we are. This is the concept of original sin. In effect, we're all guilty at birth because Adam and Eve ate the fruit of the tree of knowledge of good and evil. The original pair were disobedient, so Jesus had to be tortured and crucified, and it's all our fault.
The response of many to criticism of the Old Testament as profoundly immoral--misogyny, infanticide, genocide, etc.--is that the New Testament is in fact the basis of Christian morality. But this isn't much of a defense. The New Testament is the height of sadomasochistic hokiness. It doesn't hold up to even the flimsiest test of logic.
In practical terms, though, the message is quite effective. As little Catholic boys and girls we were told again and again the good (?) news that Jesus died for our sins. So we were to believe that as six- and seven-year-olds we had behaved so abominably that someone had to undergo torture and death to make amends. Of course, Jesus is dead, sort of, and so we can't repay our life debt to him. Instead, we owe our lives, by an accounting of simple reciprocity, to the institution that represents him, the church.
My ex-girlfriend often told me that I had ruined her life; the corollary would be that I owed her my life in return. She owned me. This is how guilt trips function. People have an innate capacity to harbor feelings of guilt and unworthiness that are ready-made for this type of exploitation. This basic manipulativeness, and not any element of truth, is most likely what accounts for the staying power of Christianity relative to other religions.

Absurdities and Atrocities in Literary Criticism

All literary theories (except formalism) share one common attraction—they speak to the universal fantasy of being able to know more about someone than that person knows about him- or herself. If you happen to be a feminist critic for instance, then you will examine some author’s work and divine his or her attitude toward women. Because feminist theory insists that all or nearly all texts exemplify patriarchy if they’re not enacting some sort of resistance to it, the author in question will invariably be exposed as either a sexist or a feminist, regardless of whether or not that author intended to make any comment about gender. The author may complain of unfair treatment; indeed, there really is no clearer instance of unchecked confirmation bias. The important point, though, is that the writer of the text supposedly knows little or nothing about how the work functions in the wider culture, what really inspired it at an unconscious level, and what readers will do with it. Substitute bourgeois hegemony for patriarchy in the above formula and you have Marxist criticism. Deconstruction exposes hidden hierarchies. New Historicism teases out dominant and subversive discourses. And none of them flinches at objections from authors that their work has been completely misunderstood.

This has led to a sad, self-righteous state of affairs in English departments. The first wrong turn was taken by Freud when he introduced the world to the unconscious and subsequently failed to come up with a method that could bring its contents to light with any reliability whatsoever. It’s hard to imagine how he could’ve been more wrong about the contents of the human mind. As Voltaire said, “He who can make you believe absurdities can make you commit atrocities.” No sooner did Freud start writing about the unconscious than he began arguing that men want to kill their fathers and have sex with their mothers. Freud and his followers were fabulists who paid lip service to the principles of scientific epistemology even as they flouted them. But then came the poststructuralists to muddy the waters even more. When Derrida assured everyone that meaning derived from the play of signifiers, which actually meant meaning is impossible, and that referents—to the uninitiated, the real-world things words refer to—must be denied any part to play, he was sounding the death knell for any possibility of a viable epistemology. And if truth is completely inaccessible, what’s the point of even trying to use sound methods? Anything goes.

Since critics like to credit themselves with having good political intentions like advocating for women and minorities, they are quite adept at justifying their relaxing of the standards of truth. But just as Voltaire warned, once those standards are relaxed, critics promptly turn around and begin making accusations of sexism and classism and racism. And, since the accusations aren’t based on any reasonable standard of evidence, the accused have no recourse to counterevidence. They have no way of defending themselves. Presumably, their defense would be just another text into which the critics could read still more evidence of whatever crime they’re primed to find.

The irony here is that the scientific method was first proposed, at least in part, as a remedy for confirmation bias, as can be seen in this quote from Francis Bacon’s 1620 treatise Novum Organum:

" The human understanding is no dry light, but receives infusion from the will and affections; whence proceed sciences which may be called “sciences as one would.” For what a man had rather were true he more readily believes. Therefore he rejects difficult things from impatience of research; sober things, because they narrow hope; the deeper things of nature, from superstition; the light of experience, from arrogance and pride; things commonly believed, out of deference to the opinion of the vulgar. Numberless in short are the ways, and sometimes imperceptible, in which the affections color and infect the understanding."

Poststructuralists believe that everything we see is determined by language, which encapsulates all of culture, so our perceptions are hopelessly distorted. What can be done then to arrive at the truth? Well, nothing—all truth is constructed. All that effort scientists put into actually testing their ideas is a waste of time. They’re only going to “discover” what they already know.
But wait: if poststructuralism posits that discovery is impossible, how do its adherents account for airplanes and nuclear power? Just random historical fluctuations, I suppose.

The upshot is that, having declared confirmation bias inescapable, critics embraced it as their chief method. You have to accept their relaxed standard of truth to accept their reasoning about why we should do away with all standards of truth. And you just have to hope like hell they never randomly decide to set their sights on you or your work. We’re lucky as hell the legal system doesn’t work like this. And we can thank those white boys of the Enlightenment for that.

Poststructuralism: Banal When It's Not Busy Being Absurd

            Reading the chapter in one of my textbooks on Poststructuralism, I keep wondering why this paradigm has taken such a strong hold of scholars' minds in the humanities. In a lot of ways, the theories that fall under its aegis are really simple--overly simple in fact. The structuralism that has since been posted was the linguistic theory of Ferdinand de Saussure, who held that words derive their meanings from their relations to other, similar words. Bat means bat because it doesn't mean cat. Simple enough, but Saussure had to gussy up his theory by creating a more general category than "words," which he called signs. And, instead of talking about words and their meanings, he asserted that every sign is made up of a signifier (word) and a signified (concept or meaning).

            What we don't see much of in Saussure's formulation of language is its relation to objects, actions, and experiences. These he labeled referents, and he didn't think they played much of a role. And this is why structuralism is radical. The common-sense theory of language is that a word's meaning derives from its correspondence to the object it labels. Saussure flipped this understanding on its head, positing a top-down view of language. What neither Saussure nor any of his acolytes seemed to notice is that structuralism can only be an incomplete description of where meaning comes from because, well, it doesn't explain where meaning comes from--unless all the concepts, the signifieds, are built into our brains. (Innate!)

            Saussure's top-down theory of language has been, unbeknownst to scholars in the humanities, thoroughly discredited by research in developmental psychology going back to Jean Piaget showing that children's language acquisition begins very concretely and only later in life enables them to deal in abstractions. According to our best evidence, the common-sense, bottom-up theory of language is correct. But along came Jacques Derrida to put the post to structuralism--and make it even more absurd. Derrida realized that if words' meanings come from their relation to similar words, then discerning any meaning at all from any given word is an endlessly complicated endeavor. Bat calls to mind not just cat, but also mat, and cad, and cot, ad infinitum. Now, it seems to me that this is a pretty effective refutation of Saussure's theory. But Derrida didn't scrap the faulty premise; instead he drew an amazing conclusion from it: that meaning is impossible.

            Now, to round out the paradigm, you have to import some Marxism. Logically speaking, such an importation is completely unjustified; in fact, it contradicts the indeterminacy of meaning, making poststructuralism fundamentally unsound. But poststructuralists believe all ideas are incoherent, so this doesn't bother them. The Marxist element is the idea that there is always a more powerful group foisting its ideology on the less powerful. Derrida spoke of binaries like man and woman--a man is a man because he's not a woman--and black and white--blacks are black because they're not white. We have to ignore the obvious objection that some things can be defined according to their own qualities without reference to something else. Derrida's argument is that in creating these binaries to structure our lives we always privilege one side over the other (men and whites of course--even though Saussure and Derrida were both white men). So literary critics inspired by Derrida "deconstruct" texts to expose the privileging they take for granted and perpetuate. This gives wonks the gratifying sense of being engaged in social activism.

            Is the fact that these ideas are esoteric what makes them so appealing to humanities scholars--the conviction that they possess an understanding that supposedly discredits what the hoi polloi, or even scientists and historians and writers of actual literature, know? Really, poststructuralism is nonsense on stilts riding a unicycle. It's banal in that it takes confirmation bias as a starting point, but it's absurd in that it insists this makes knowledge impossible. The field's linguist founders were armchair obscurantists whose theories have been disproved. But because of all the obscurantism, learning the banalities and catching out the absurdities takes a lot of patient reading. So is the effort invested in learning the ideas a factor in making them hard to discount outright? After all, discounting them would mean admitting a lot of wasted effort.

Also read:

Putting Down the Pen: How School Teaches Us the Worst Possible Way to Read Literature

From Rags to Republican

One of the dishwashers at the restaurant where I work likes to light-heartedly discuss politics with me. “How are things this week on the left?” he might ask. Not even in his twenties yet, he can impressively explain why it’s wrong to conflate communism with Stalinism. He believes the best government would be a communist one, but until we figure out how to establish it, our best option is to go Republican. He loves Rush Limbaugh. One day I was talking about disparities in school funding when he began telling me why he doesn’t think that sort of thing is important. “I did horribly in school, but I decided I wanted to learn on my own.”

He went on to tell me about a terrible period he went through growing up, after his parents got divorced and his mother was left nearly destitute. The young dishwasher had pulled himself up by his own bootstraps. The story struck me because about two weeks earlier I’d been discussing politics with a customer in the dining room who told a remarkably similar one. He was eating with his wife and their new baby. When I disagreed with him that Obama’s election was a national catastrophe he began an impromptu lecture on conservative ideology. I interrupted him, saying, “I understand top-down economics; I just don’t agree with it.” But when I started to explain the bottom-up theory, he interrupted me with a story about how his mom was on food stamps and they had nothing when he was a kid, and yet here he is, a well-to-do father (he even put a number on his prosperity). “I’m walking proof that it is possible.”

I can go on and on with more examples. It seems like the moment anyone takes up the mantle of economic conservatism for the first time he (usually males) has to put together one of these rags-to-riches stories. I guess I could do it too, with just a little exaggeration. “My first memories are of living in government subsidized apartments, and my parents argued about money almost every day of my life when I was a kid, and then they got divorced and I was devastated—I put on weight until I was morbidly obese and I went to a psychologist for depression because I missed a month of school in fourth grade.” (Actually, that’s not exaggerated at all.)

The point we’re supposed to take away is that hardship is good and that no matter how bad being poor may appear it’s nothing a good work ethic can’t fix. Invariably, the Horatio Alger hero proceeds to the non sequitur that his making it out of poverty means it’s a bad idea for us as a society to invest in programs to help the poor. Push him by asking what if the poverty he experienced wasn’t as bad as the worst poverty in the country, or where that work ethic that saved him came from, and he’ll most likely shift gears and start explaining that becoming a productive citizen is a matter of incentives.

The logic runs: if you give money to people who aren’t working, you’re taking away the main incentive they had to get off their asses and go to work. Likewise, if you take money away from the people who have earned it by taxing them, you’re giving them a disincentive to continue being productive. This is a folksy version of a Skinner Box: you get the pigeons to do whatever tricks you want by rewarding them with food pellets when they get close to performing them correctly—“successive approximations” of the behavior—and punishing them by withholding food pellets when they go astray. What’s shocking is that this is as sophisticated as the great Reagan Revolution ever got. It’s a psychological theory that was recognized as too simplistic in the 1950s writ large to explain the economy. What if people can make money in ways other than going to work, say, by selling drugs? The conservatives’ answer—more police, harsher punishments. But what if money isn’t the only reward people respond to? And what if prison doesn’t work like it’s supposed to?

The main appeal, I think, to Skinner Box Economics is that it says, in effect, don’t worry about having more than other people because you’ve earned what you have. You deserve it. What a relief to hear that we have more because we’re just better people. We needn’t work ourselves up over the wretched plight of the have-nots; if they really wanted to, they could have everything we have. To keep this line of reasoning afloat you need to buoy it up with a bit of elitism: so maybe offering everyone the same incentives won’t make everyone rich, but the smartest and most industrious people will be alright. If you’re doing alright, then you must be smart and industrious. And if you’re filthy rich, say, Wall Street banker rich, then, well, you must be one amazing S.O.B. How much money you have becomes an index of how virtuous you are as a person. And some people are so amazing in fact that the worst thing society can do is hold them back in any way, because their prosperity is so awesome it benefits everyone—it trickles down. There you have it, a rationale for letting rich people do whatever they want, and leaving poor people to their own devices to pull up their own damn bootstraps. This is the thinking that has led to even our Democratic president believing that he needs to pander to Wall Street to save the economy. This is conservatism. And it’s so silly no adult should entertain it for more than a moment.

A philosophy that further empowers the powerful, that justifies the holding of power over the masses of the less powerful, ought to be appealing to anyone who actually has power. But it’s remarkable how well these ideas trickle down to the rest of us. One way to account for the assimilation of Skinner Box Economics among the middle class is that it is the middle class; people in it still have to justify being more privileged than those in the lower classes. But the real draw probably has little to do with any recognition of one’s actual circumstances; it relies rather on a large-scale obliviousness of them. Psychologists have been documenting for years the power of two biases we all fall prey to that have bearing on our economic thinking: the first is the self-serving bias, according to which we take credit any time we succeed at something but point to forces beyond our control whenever we fail. One of the best examples of the self-serving bias is research showing that the percentage of people who believe themselves to be better-than-average drivers is in the nineties—even among those who’ve recently been at fault in a traffic accident. (Sounds like Wall Street.) The second bias, which is the flipside of the first, is the fundamental attribution error, according to which we privilege attributions of persistent character traits to other people in explaining their behavior at the expense of external, situational factors—when someone cuts us off while we’re driving we immediately conclude that person is a jerk, even though we attribute the same type of behavior in ourselves to our being late for a meeting.

Any line of thinking that leads one away from the comforting belief in his or her own infinite capacity for self-determination will inevitably fail to take hold in the minds of those who rely on intuition as a standard of truth. That’s why the conservative ideology is such an incoherent mess: on the one hand, you’re trying to create a scientific model for how the economy works (or doesn’t), but on the other you’re trying not only to leave intact people’s faith in free will but also to bolster it, to elevate it to the status of linchpin to the entire worldview. But free will and determinism don’t mix, and unless you resort to religious concepts of non-material souls there’s no place to locate free will in the natural world. The very notion of free will is self-serving to anyone at all successful in his or her life—and that’s why self-determination, in the face of extreme adversity, is fetishized by the right. That’s why every conservative has a rags-to-riches story to offer as proof of the true nature of economic forces.

The real wonder of the widespread appeal of conservatism is the enormous capacity it suggests we all have for taking our advantages for granted. Most people bristle when you even use the words advantage or privilege—as if you’re undermining their worth or authenticity as a person. But the advantages middle class people enjoy are glaring and undeniable. Sure, many of us were raised by single mothers who went through periods of hardship. I’d say most of us, though, had grandparents around who were willing to lend a helping hand here and there. And even if these grandparents didn’t provide loans or handouts they did provide the cultural capital that makes us recognizable to other middle class people as part of the tribe. What makes conservative rags-to-riches stories impossible prima facie is that the people telling them know the plot elements so well, meaning someone taught them the virtue of self-reliance, and they tell them in standard American English, with mouths full of shiny, straight teeth, in accents that belie the story’s gist. It may not seem, in hindsight, that they were comfortably ensconced in the middle class, but at the very least they were surrounded by middle class people, and benefiting from their attention.

You might be tempted to conclude that the role of contingency is left out of conservative ideology, but that’s not really the case. Contingency in the form of bad luck is incorporated into conservative thinking through the very narratives of triumph over adversity that are offered as proof of the fatherly wisdom of the free market. In this way, the ideology is inextricably bound to the storyteller’s authenticity as a person. I suffered and toiled, the storyteller reasons, and therefore my accomplishments are genuine, my character strong. The corollary to this personal investment in what is no longer merely an economic theory is that any dawning awareness of people in worse circumstances than those endured and overcome by the authentic man or woman will be resisted as a threat to that authenticity. If they were to accept that they had it better or easier than some, then their victories would be invalidated. They are thus highly motivated to discount, or simply not to notice, contingencies like generational or cultural advantages.

I’ve yet to hear a rags-to-riches story that begins with a malnourished and overstressed mother giving birth prematurely to a cognitively impaired and immuno-compromised baby, and continues with a malnourished and neglected childhood in underperforming schools where not a teacher nor a classmate can be found who places any real value on education, and ends with the hard-working, intelligent person you see in front of you, who makes a pretty decent income and is raising a proud, healthy family. Severely impoverished people live in a different world, and however bad we middle-class toilers think we’ve had it we should never be so callous and oblivious as to claim we’ve seen and mastered that world. But Skinner Box Economics doesn’t just fail because some of us are born less able to perform successive approximations of the various tricks of productivity; it fails because it’s based on an inadequate theory of human motivation. Rewards and punishments work to determine our behavior to be sure, but the only people who sit around calculating outcomes and navigating incentives and disincentives with a constant eye toward the bottom line are the rich executives who benefit most from a general acceptance of supply-side economics.

The main cultural disadvantage for people growing up in poor families in poor neighborhoods is that the individuals who are likely to serve as role models there will seldom be the beacons of middle-class virtue we stupidly expect our incentive structure to produce. When I was growing up, I looked up to my older brothers, and wanted to do whatever they were doing. And I looked up to an older neighbor kid, whose influence led me to race bikes at local parks. Later my role models were Jean-Claude Van Damme and Arnold Schwarzenegger, so I got into martial arts and physical fitness. Soon thereafter, I began to idolize novelists and scientists. Skinnerian behaviorism has been supplanted in the social sciences by theories emphasizing the importance of observational learning, as well as the undeniable role of basic drives like the one for status-seeking. Primatologist Frans de Waal, for instance, has proposed a theory of cultural transmission—in both apes and humans—called BIOL, for bonding- and identification-based observational learning. What this theory suggests is that our personalities are largely determined by a proclivity for seeking out high-status individuals whom we admire, assimilating their values and beliefs, and emulating their behavior. Absent a paragon of the Calvinist work ethic, no amount of incentives is going to turn a child into the type of person who tells conservative rags-to-riches stories.

The thing to take away from these stories is usually that there is a figure or two who perform admirably in them—the single mom, the determined dad, the charismatic teacher. And the message isn’t about economics at all but about culture and family. Conservatives tout the sanctity of family and the importance of good parenting but when they come face-to-face with the products of poor parenting they see only the products of bad decisions. Middle class parents go to agonizing lengths to ensure their children grow up in good neighborhoods and attend good schools but suggest to them that how well someone behaves is a function of how much they have—how much love and attention, how much healthy food and access to doctors, how much they can count on their struggles being worthwhile—and those same middle class parents will warn you about the dangers of making excuses.

The real proof of how well conservative policies work is not to be found in anecdotes, no matter how numerous; it’s in measures of social mobility. The story these measures tell about the effects of moving farther to the right as a country contrasts rather starkly with all the rags-to-Republican tales of personal heroism. But then numbers aren’t really stories; there’s no authenticity and self-congratulation to be gleaned from statistics; and if it’s really true that we owe our prosperity to chance, well, that’s just depressing—and discouraging. We can take some encouragement from our stories of hardship, though. We just have to take note of how often the proofs of poverty they offer—food stamps, rent-controlled housing—are in fact government programs to aid the impoverished. They must be working.