READING SUBTLY

This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you're looking for.
 

Dennis Junk

The Idiocy of Outrage: Sam Harris's Run-ins with Ben Affleck and Noam Chomsky

Too often, we’re convinced by the passion with which someone expresses an idea—and that’s why there’s so much outrage in the news and in other media. Sam Harris has had a particularly hard time combatting outrage with cogent arguments, because he’s no sooner expressed an idea than his interlocutor is aggressively misinterpreting it.

        Every time Sam Harris engages in a public exchange of ideas, be it a casual back-and-forth or a formal debate, he has to contend with an invisible third party whose obnoxious blubbering dispels, distorts, or simply drowns out nearly every word he says. You probably wouldn’t be able to infer the presence of this third party from Harris’s own remarks or demeanor. What you’ll notice, though, is that fellow participants in the discussion, be they celebrities like Ben Affleck or eminent scholars like Noam Chomsky, respond to his comments—even to his mere presence—with a level of rancor easily mistakable for blind contempt. This reaction will baffle many in the audience. But it will quickly dawn on anyone familiar with Harris’s ongoing struggle to correct pernicious mischaracterizations of his views that these people aren’t responding to Harris at all, but rather to the dimwitted and evil caricature of him promulgated by unscrupulous journalists and intellectuals.

In his books on religion and philosophy, Harris plies his unique gift for cutting through unnecessary complications to shine a direct light on the crux of the issue at hand. Topics that other writers seem to go out of their way to make abstruse he manages to explore with jolting clarity and refreshing concision. But this same quality of his writing, which so captivates his readers, often infuriates academics, who feel he’s cheating by breezily refusing to represent an issue in all its grand complexity while neglecting to acknowledge his indebtedness to past scholars. That he would proceed in such a manner to draw actual conclusions—and unorthodox ones at that—these scholars see as hubris, made doubly infuriating by the fact that his books enjoy such a wide readership outside of academia. So, whether Harris is arguing on behalf of a scientific approach to morality or insisting we recognize that violent Islamic extremism is motivated not solely by geopolitical factors but also by straightforward readings of passages in Islamic holy texts, he can count on a central thread of the campaign against him consisting of the notion that he’s a journeyman hack who has no business weighing in on such weighty matters.

Sam Harris

Philosophers and religious scholars are of course free to challenge Harris’s conclusions, and it’s even possible for them to voice their distaste for his style of argumentation without necessarily violating any principles of reasoned debate. However, whenever these critics resort to moralizing, we must recognize that by doing so they’re effectively signaling the end of any truly rational exchange. For Harris, this often means a substantive argument never even gets a chance to begin. The distinction between debating morally charged topics on the one hand, and condemning an opponent as immoral on the other, may seem subtle, or academic even. But it’s one thing to argue that a position with moral and political implications is wrong; it’s an entirely different thing to become enraged and attempt to shout down anyone expressing an opinion you deem morally objectionable. Moral reasoning, in other words, can and must be distinguished from moralizing. Since the underlying moral implications of the issue are precisely what are under debate, giving way to angry indignation amounts to a pulling of rank—an effort to silence an opponent through the exercise of one’s own moral authority, which reveals a rather embarrassing sense of one’s own superior moral standing.

Unfortunately, it’s far too rarely appreciated that a debate participant who gets angry and starts wagging a finger is thereby demonstrating an unwillingness or an inability to challenge a rival’s points on logical or evidentiary grounds. As entertaining as it is for some to root on their favorite dueling demagogue in cable news-style venues, anyone truly committed to reason and practiced in its application realizes that in a debate the one who loses her cool loses the argument. This isn’t to say we should never be outraged by an opponent’s position. Some issues have been settled long enough, their underlying moral calculus sufficiently worked through, that a signal of disgust or contempt is about the only imaginable response. For instance, if someone were to argue, as Aristotle did, that slavery is excusable because some races are naturally subservient, you could be forgiven for lacking the patience to thoughtfully scrutinize the underlying premises. The problem, however, is that prematurely declaring an end to the controversy and then moving on to blanket moral condemnation of anyone who disagrees has become a worryingly common rhetorical tactic. And in this age of increasingly segmented and polarized political factions it’s more important than ever that we check our impulse toward sanctimony—even though it’s perhaps also harder than ever to do so.

Once a proponent of some unpopular idea starts to be seen as not merely mistaken but dishonest, corrupt, or bigoted, then playing fair begins to seem less obligatory for anyone wishing to challenge that idea. You can learn from casual Twitter browsing or from reading any number of posts on Salon.com that Sam Harris advocates a nuclear first strike against radical Muslims, supported the Bush administration’s use of torture, and carries within his heart an abiding hatred of Muslim people, all billion and a half of whom he believes are virtually indistinguishable from the roughly 20,000 militants making up ISIS. You can learn these things, none of which is true, because some people dislike Harris’s ideas so much they feel it’s justifiable, even imperative, to misrepresent his views, lest the true, more reasonable-sounding versions reach a wider receptive audience. And it’s not just casual bloggers and social media mavens who feel no qualms about spreading what they know to be distortions of Harris’s views; religious scholar Reza Aslan and journalist Glenn Greenwald both saw fit to retweet the verdict that he is a “genocidal fascist maniac,” accompanied by an egregiously misleading quote as evidence—even though Harris had by then discussed his views at length with both of these men.

  It’s easy to imagine Ben Affleck doing some cursory online research to prep for his appearance on Real Time with Bill Maher and finding plenty of savory tidbits to prejudice him against Harris before either of them stepped in front of the cameras. But we might hope that a scholar of Noam Chomsky’s caliber wouldn’t be so quick to form an opinion of someone based on hearsay. Nonetheless, Chomsky responded to Harris’s recent overture to begin an email exchange to help them clear up their misconceptions about each other’s ideas by writing: “Perhaps I have some misconceptions about you. Most of what I’ve read of yours is material that has been sent to me about my alleged views, which is completely false”—this despite Harris having just quoted Chomsky calling him a “religious fanatic.” We must wonder, where might that characterization have come from if he’d read so little of Harris’s work?

  Political and scholarly discourse would benefit immensely from a more widespread recognition of our natural temptation to recast points of intellectual disagreement as moral offenses, a temptation which makes it difficult to resist the suspicion that anyone espousing rival beliefs is not merely mistaken but contemptibly venal and untrustworthy. In philosophy and science, personal or so-called ad hominem accusations and criticisms are considered irrelevant and thus deemed out of bounds—at least in principle. But plenty of scientists and academics of every stripe routinely succumb to the urge to moralize in the midst of controversy. Thus begins the lamentable process by which reasoned arguments are all but inevitably overtaken by competing campaigns of character assassination. In service to these campaigns, we have an ever growing repertoire of incendiary labels with ever lengthening lists of criteria thought to reasonably warrant their application, so if you want to discredit an opponent all that’s necessary is a little creative interpretation, and maybe some selective quoting.

The really tragic aspect of this process is that as scrupulous and fair-minded as any given interlocutor may be, it’s only ever a matter of time before an unpopular message broadcast to a wider audience is taken up by someone who feels duty-bound to kill the messenger—or at least to besmirch the messenger’s reputation. And efforts at turning thoughtful people away from troublesome ideas before they ever even have a chance to consider them all too often meet with success, to everyone’s detriment. Only a small percentage of unpopular ideas may merit acceptance, but societies can’t progress without them.

Once we appreciate that we’re all susceptible to this temptation to moralize, the next most important thing for us to be aware of is that it becomes more powerful the moment we begin to realize ours are the weaker arguments. People in individualist cultures already tend to more readily rate themselves as exceptionally moral than as exceptionally intelligent. Psychologists call this tendency the Muhammad Ali effect (because the famous boxer once responded to a journalist’s suggestion that he’d purposely failed an Army intelligence test by quipping, “I only said I was the greatest, not the smartest”). But when researchers Jens Möller and Karel Savyon had study participants rate themselves after performing poorly on an intellectual task, they found that the effect was even more pronounced. Subjects in studies of the Muhammad Ali effect report believing that moral traits like fairness and honesty are more socially desirable than intelligence. They also report believing these traits are easier for an individual to control, while at the same time being more difficult to measure. Möller and Savyon theorize that participants in their study were inflating their already inflated sense of their own moral worth to compensate for their diminished sense of intellectual worth. While researchers have yet to examine whether this amplification of the effect makes people more likely to condemn intellectual rivals on moral grounds, the idea that a heightened estimation of moral worth could make us more likely to assert our moral authority seems a plausible enough extrapolation from the findings.

That Ben Affleck felt intimidated by the prospect of having to intelligently articulate his reasons for rejecting Harris’s positions, however, seems less likely than that he was prejudiced to the point of outrage against Harris sometime before encountering him in person. At one point in the interview he says, “You’re making a career out of ISIS, ISIS, ISIS,” a charge of pandering that suggests he knows something about Harris’s work (though Harris doesn't discuss ISIS in any of his books). Unfortunately, Affleck’s passion and the sneering tone of his accusations were probably more persuasive for many in the audience than any of the substantive points made on either side. But, amid Affleck’s high dudgeon, it’s easy to sift out views that are mainstream among liberals. The argument Harris makes at the outset of the segment that first sets Affleck off—though it seemed he’d already been set off by something—is in fact a critique of those same views. He says,

When you want to talk about the treatment of women and homosexuals and freethinkers and public intellectuals in the Muslim world, I would argue that liberals have failed us. [Affleck breaks in here to say, “Thank God you’re here.”] And the crucial point of confusion is that we have been sold this meme of Islamophobia, where every criticism of the doctrine of Islam gets conflated with bigotry toward Muslims as people.

This is what Affleck says is “gross” and “racist.” The ensuing debate, such as it is, focuses on the appropriateness—and morality—of criticizing the Muslim world for crimes only a subset of Muslims are guilty of. But how large is that subset?

Harris (along with Maher) makes two important points: first, he states over and over that it’s Muslim beliefs he’s criticizing, not the Muslim people, so if a particular Muslim doesn’t hold to the belief in question he or she is exempt from the criticism. Harris is ready to cite chapter and verse of Islamic holy texts to show that the attitudes toward women and homosexuals he objects to aren’t based on the idiosyncratic characters of a few sadistic individuals but are rather exactly what’s prescribed by religious doctrine. A passage from his book The End of Faith makes the point eloquently.

It is not merely that we are at war with an otherwise peaceful religion that has been “hijacked” by extremists. We are at war with precisely the vision of life that is prescribed to all Muslims in the Koran, and further elaborated in the literature of the hadith, which recounts the sayings and actions of the Prophet. A future in which Islam and the West do not stand on the brink of mutual annihilation is a future in which most Muslims have learned to ignore most of their canon, just as most Christians have learned to do. (109-10)

But most secularists and moderate Christians in the U.S. have a hard time appreciating how seriously most Muslims take their Koran. There are of course passages in the Bible that are simply obscene, and Christians have certainly committed their share of atrocities at least in part because they believed their God commanded them to. But, whereas almost no Christians today advocate stoning their brothers, sisters, or spouses to death for coaxing them to worship other gods (Deuteronomy 13:6, 8-15), a significant number of people in Islamic populations believe apostates and “innovators” deserve to have their heads lopped off.

            The second point Harris makes is that, while Affleck is correct in stressing how few Muslims make up or support the worst of the worst groups like Al Qaeda and ISIS, the numbers who believe women are essentially the property of their fathers and husbands, that homosexuals are vile sinners, or that atheist bloggers deserve to be killed are much higher. “We have to empower the true reformers in the Muslim world to change it,” as Harris insists. The journalist Nicholas Kristof says this is a mere “caricature” of the Muslim world. But Harris’s goal has never been to promote a negative view of Muslims, and he at no point suggests his criticisms apply to all Muslims, all over the world. His point, as he stresses multiple times, is that Islamic doctrine is inspiring large numbers of people to behave in appalling ways, and this is precisely why he’s so vocal in his criticisms of those doctrines.

Part of the difficulty here is that liberals (including this one) face a dilemma anytime they’re forced to account for the crimes of non-whites in non-Western cultures. In these cases, their central mission of standing up for the disadvantaged and the downtrodden runs headlong into their core principle of multiculturalism, which makes it taboo for them to speak out against another society’s beliefs and values. Guys like Harris are permitted to criticize Christianity when it’s used to justify interference in women’s sexual decisions or discrimination against homosexuals, because a white Westerner challenging white Western culture is just the system attempting to correct itself. But when Harris speaks out against Islam and the far worse treatment of women and homosexuals—and infidels and apostates—that it prescribes, his position is denounced as “gross” and “racist” by the likes of Ben Affleck, with the encouragement of guys like Reza Aslan and Glenn Greenwald. A white American male casting his judgment on a non-Western belief system strikes them as the first step along the path to oppression that ends in armed invasion and possibly genocide. (Though, it should be noted, multiculturalists even attempt to silence female critics of Islam from the Muslim world.)

The biggest problem with this type of slippery-slope presumption isn’t just that it’s sloppy thinking—rejecting arguments because of alleged similarities to other, more loathsome ideas, or because of some imagined consequence should those ideas fall into the wrong hands. The biggest problem is that it time and again provides a rationale for opponents of an idea to silence and defame anyone advocating it. Unless someone is explicitly calling for mistreatment or aggression toward innocents who pose no threat, there’s simply no way to justify violating anyone’s rights to free inquiry and free expression—principles that should supersede multiculturalism because they’re the foundation and guarantors of so many other rights. Instead of using our own delusive moral authority in an attempt to limit discourse within the bounds we deem acceptable, we have a responsibility to allow our intellectual and political rivals the space to voice their positions, trusting in our fellow citizens’ ability to weigh the merits of competing arguments. 

But few intellectuals are willing to admit that they place multiculturalism before truth and the right to seek and express it. And, for those who are reluctant to fly publicly into a rage or to haphazardly apply any of the growing assortment of labels for the myriad varieties of bigotry, there are now a host of theories that serve to reconcile competing political values. The multicultural dilemma probably makes all of us liberals too quick to accept explanations of violence or extremism—or any other bad behavior—emphasizing the role of external forces, whether it’s external to the individual or external to the culture. Accordingly, to combat Harris’s arguments about Islam, many intellectuals insist that religion simply does not cause violence. They argue instead that the real cause is something like resource scarcity, a history of oppression, or the prolonged occupation of Muslim regions by Western powers.

If the arguments in support of the view that religion plays a negligible role in violence were as compelling as proponents insist they are, then it’s odd that they should so readily resort to mischaracterizing Harris’s positions when he challenges them. Glenn Greenwald, a journalist who believes religion is such a small factor that anyone who criticizes Islam is suspect, argues his case against Harris within an almost exclusively moral framework—the question isn’t whether Harris is right but whether he’s anti-Muslim. The religious scholar Reza Aslan quotes Harris out of context to give the appearance that he advocates preemptive strikes against Muslim groups. But Aslan’s real point of disagreement with Harris is impossible to pin down. He writes,

After all, there’s no question that a person’s religious beliefs can and often do influence his or her behavior. The mistake lies in assuming there is a necessary and distinct causal connection between belief and behavior.

Since he doesn’t explain what he means by “necessary and distinct,” we’re left with little more than the vague objection that religion’s role in motivating violence is more complex than some people seem to imagine. To make this criticism apply to Harris, however, Aslan is forced to erect a straw man—and to double down on the tactic after Harris has pointed out his error, suggesting that his misrepresentation is deliberate.

Few commenters on this debate appreciate just how radical Aslan’s and Greenwald’s (and Karen Armstrong’s) positions are. The straw men notwithstanding, Harris readily admits that religion is but one of many factors that play a role in religious violence. But this doesn’t go far enough for Aslan and Greenwald. While they acknowledge religion must fit somewhere in the mix, they insist its role is so mediated and mixed up with other factors that its influence is all but impossible to discern. Religion in their minds is a pure social construct, so intricately woven into the fabric of a culture that it could never be untangled. As evidence of this irreducible complexity, they point to the diverse interpretations of the Koran made by the wide variety of Muslim groups all over the world. There’s an undeniable kernel of truth in this line of thinking. But is religion really reconstructed from scratch in every culture?

One of the corollaries of this view is that all religions are essentially equal in their propensity to inspire violence, and therefore, if adherents of one particular faith happen to engage in disproportionate levels of violence, we must look to other cultural and political factors to explain it. That would also mean that what any given holy text actually says in its pages is completely immaterial. (This from a scholar who sticks to a literal interpretation of a truncated section of a book even though the author assures him he’s misreading it.) To highlight the absurdity of this idea, Harris likes to cite the Jains as an example. Mahavira, a Jain patriarch, gave this commandment: “Do not injure, abuse, oppress, enslave, insult, torment, or kill any creature or living being.” How plausible is the notion that adherents of this faith are no more and no less likely to commit acts of violence than those whose holy texts explicitly call for them to murder apostates? “Imagine how different our world might be if the Bible contained this as its central precept” (23), Harris writes in Letter to a Christian Nation.

Since the U.S. is in fact a Christian nation, and since it has throughout its history displaced, massacred, invaded, occupied, and enslaved people from nearly every corner of the globe, many raise the question of what grounds Harris, or any other American, has for judging other cultures. And this is where the curious email exchange Harris began with the linguist and critic of American foreign policy Noam Chomsky picks up. Harris reached out to Chomsky hoping an exchange might help clear up their differences, since he figured they had a large number of readers in common. Harris had written critically of Chomsky’s book about 9/11 in The End of Faith, his own, later book on the topic of religious extremism. Chomsky’s argument seems to have been that the U.S. routinely commits atrocities on a scale similar to that of 9/11, and that the Al Qaeda attacks were an expectable consequence of our nation’s bullying presence in global affairs. Instead of dealing with foreign threats, then, we should be concentrating our efforts on reforming our own foreign policy. But Harris points out that, while it’s true the U.S. has caused the deaths of countless innocents, the intention of our leaders wasn’t to kill as many people as possible to send a message of terror, making such actions fundamentally different from those of the Al Qaeda terrorists.

The first thing to note in the email exchange is that Harris proceeds on the assumption that any misunderstanding of his views by Chomsky is based on an honest mistake, while Chomsky immediately takes for granted that Harris’s alleged misrepresentations are deliberate (even though, since Harris sends him the excerpt from his book, that would mean he’s presenting the damning evidence of his own dishonesty). In other words, Chomsky switches into moralizing mode at the very outset of the exchange. The substance of the disagreement mainly concerns the U.S.’s 1998 bombing of the al-Shifa pharmaceutical factory in Sudan. According to Harris’s book, Chomsky argues this attack was morally equivalent to the attacks by Al Qaeda on 9/11. But Harris charges that, in focusing merely on body counts, Chomsky neglects the far more important matter of intention.

Noam Chomsky

Chomsky insists after reading the excerpt, however, that he never claimed the two attacks were morally equivalent, and that furthermore he in fact did consider, and write at length about, the intentions of the Clinton administration officials who decided to bomb al-Shifa—just not in the book cited by Harris. In this other book, which Chomsky insists Harris is irresponsible for not having referenced, he argues that the administration’s claim that it received intelligence about the factory manufacturing chemical weapons was a lie and that the bombing was actually meant as retaliation for the earlier attacks on the U.S. embassies in Kenya and Tanzania. Already at this point in the exchange Chomsky is writing to Harris as if he were guilty of dishonesty, unscholarly conduct, and collusion in covering up the crimes of the American government.

But which is it? Is Harris being dishonest when he says Chomsky is claiming moral equivalence? Or is he being dishonest when he fails to cite an earlier source arguing that in fact what the U.S. did was morally worse? The more important question, however, is why does Chomsky assume Harris is being dishonest, especially in light of how complicated his position is? Here’s what Chomsky writes in response to Harris pressing him to answer directly the question about moral equivalence:

Clinton bombed al-Shifa in reaction to the Embassy bombings, having discovered no credible evidence in the brief interim of course, and knowing full well that there would be enormous casualties. Apologists may appeal to undetectable humanitarian intentions, but the fact is that the bombing was taken in exactly the way I described in the earlier publication which dealt the question of intentions in this case, the question that you claimed falsely that I ignored: to repeat, it just didn’t matter if lots of people are killed in a poor African country, just as we don’t care if we kill ants when we walk down the street. On moral grounds, that is arguably even worse than murder, which at least recognizes that the victim is human. That is exactly the situation.

Most of the rest of the exchange consists of Harris trying to figure out Chomsky’s views on the role of intention in moral judgment, and Chomsky accusing Harris of dishonesty and evasion for not acknowledging and exploring the implications of the U.S.’s culpability in the al-Shifa atrocity. When Harris tries to explain his view on the bombing by describing a hypothetical scenario in which one group stages an attack with the intention of killing as many people as possible, comparing it to another scenario in which a second group stages an attack with the intention of preventing another, larger attack, killing as few people as possible in the process, Chomsky will have none of it. He insists Harris’s descriptions are “so ludicrous as to be embarrassing,” because they’re nothing like what actually happened. We know Chomsky is an intelligent enough man to understand perfectly well how a thought experiment works. So we’re left asking, what accounts for his mindless pounding on the drum of the U.S.’s greater culpability? And, again, why is he so convinced Harris is carrying on in bad faith?

What seems to be going on here is that Chomsky, a long-time critic of American foreign policy, actually began with the conclusion he sought to arrive at. After arguing for decades that the U.S. was the ultimate bad guy in the geopolitical sphere, his first impulse after the attacks of 9/11 was to salvage his efforts at casting the U.S. as the true villain. Toward that end, he lighted on al-Shifa as the ideal crime to offset any claim to innocent victimhood. He’s actually been making this case for quite some time, and Harris is by no means the first to insist that the intentions behind the two attacks should make us judge them very differently. Either Chomsky felt he knew enough about Harris to treat him like a villain himself, or he has simply learned to bully and level accusations against anyone pursuing a line of questions that will expose the weakness of his idea—he likens Harris’s arguments at one point to “apologetics for atrocities”—a tactic he keeps getting away with because he has a large following of liberal academics who accept his moral authority.

Harris saw clear through to the endgame of his debate with Chomsky, and it’s quite possible Chomsky in some murky way did as well. The reason he was so sneeringly dismissive of Harris’s attempts to bring the discussion around to intentions, the reason he kept harping on how evil America had been in bombing al-Shifa, is that by focusing on this one particular crime he was avoiding the larger issue of competing ideologies. Chomsky’s account of the bombing is not as certain as he makes out, to say the least. An earlier claim he made about a Human Rights Watch report on the death toll, for instance, turned out to be completely fictitious. But even if the administration really was lying about its motives, it’s noteworthy that a lie was necessary. When Bin Laden announced his goals, he did so loudly and proudly.

Chomsky’s one defense of his discounting of the attackers’ intentions (yes, he defends it, even though he accused Harris of being dishonest for pointing it out) is that everyone claims to have good intentions, so intentions simply don’t matter. This is shockingly facile coming from such a renowned intellectual—it would be shockingly facile coming from anyone. Of course Harris isn’t arguing that we should take someone’s own word for whether their intentions are good or bad. What Harris is arguing is that we should examine someone’s intentions in detail and make our own judgment about them. Al Qaeda’s plan to maximize terror by maximizing the death count of their attacks can only be seen as a good intention in the context of the group’s extreme religious ideology. That’s precisely why we should be discussing and criticizing that ideology, criticism which should extend to the more mainstream versions of Islam it grew out of.

Taking a step back from the particulars, we see that Chomsky believes the U.S. is guilty of far more and far graver acts of terror than any of the groups or nations officially designated as terrorist sponsors, and he seems unwilling to even begin a conversation with anyone who doesn’t accept this premise. Had he made some iron-clad case that the U.S. really did treat the pharmaceutical plant, and the thousands of lives that depended on its products, as pawns in some amoral game of geopolitical chess, he could have simply directed Harris to the proper source, or he could have reiterated key elements of that case. Regardless of what really happened with al-Shifa, we know full well what Al Qaeda’s intentions were, and Chomsky could have easily indulged Harris in discussing hypotheticals had he not feared that doing so would force him to undermine his own case. Is Harris an apologist for American imperialism? Here’s a quote from the section of his book discussing Chomsky's ideas:

We have surely done some terrible things in the past. Undoubtedly, we are poised to do terrible things in the future. Nothing I have written in this book should be construed as a denial of these facts, or as defense of state practices that are manifestly abhorrent. There may be much that Western powers, and the United States in particular, should pay reparations for. And our failure to acknowledge our misdeeds over the years has undermined our credibility in the international community. We can concede all of this, and even share Chomsky’s acute sense of outrage, while recognizing that his analysis of our current situation in the world is a masterpiece of moral blindness.

To be fair, lines like this last one are inflammatory, so it was understandable that Chomsky was miffed, up to a point. But Harris is right to point to his moral blindness, the same blindness that makes Aslan, Affleck, and Greenwald unable to see that the specific nature of beliefs and doctrines and governing principles actually matters. If we believe it’s evil to subjugate women, abuse homosexuals, and murder freethinkers, the fact that our country does lots of horrible things shouldn’t stop us from speaking out against these practices to people of every skin color, in every culture, on every part of the globe.

Sam Harris is no passive target in all of this. In a debate, he gives as good as he gets or better, and he has a penchant for finding the most provocative way to phrase his points—like calling Islam “the motherlode of bad ideas.” He doesn’t hesitate to call people out for misrepresenting his views and defaming him as a person, but I’ve yet to see him try to win an argument by going after the person making it. And I’ve never seen him try to sabotage an intellectual dispute with a cheap performance of moral outrage, or discredit opponents by fixing them with labels they don't deserve. Reading his writings and seeing him lecture or debate, you get the sense that he genuinely wants to test the strength of ideas against each other and see what new insight such exchanges may bring. That’s why it’s frustrating to see these discussions again and again go off the rails because his opponent feels justified in dismissing and condemning him based on inaccurate portrayals, from an overweening and unaccountable sense of self-righteousness.

Ironically, honoring the type of limits to calls for greater social justice that Aslan and Chomsky take as sacrosanct—where the West forbears to condescend to the rest—serves more than anything else to bolster the sense of division and otherness that makes many in the U.S. care so little about things like what happened at al-Shifa. As technology pushes on the transformation of our far-flung societies and diverse cultures into a global community, we ought naturally to start seeing people from Northern Africa and the Middle East—and anywhere else—not as scary and exotic ciphers, but as fellow citizens of the world, as neighbors even. This same feeling of connection that makes us all see each other as more human, more worthy of each other’s compassion and protection, simultaneously opens us up to each other’s criticisms and moral judgments. Chomsky is right that we Americans are far too complacent about our country’s many crimes. But opening the discussion up to our own crimes opens it likewise to other crimes that cannot be tolerated anywhere on the globe, regardless of the culture, regardless of any history of oppression, and regardless too of any sanction delivered from the diverse landscape of supposedly sacred realms.

Other popular posts like this:

THE SOUL OF THE SKEPTIC: WHAT PRECISELY IS SAM HARRIS WAKING UP FROM?

MEDIEVAL VS ENLIGHTENED: SORRY, MEDIEVALISTS, DAN SAVAGE WAS RIGHT

Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans De Waal

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

Dennis Junk

Medieval vs Enlightened: Sorry, Medievalists, Dan Savage Was Right

The medievalist letter writer claims that being “part of the center” is what makes living in the enlightened West preferable to living in the 12th century. But there’s simply no way whoever wrote the letter actually believes this. If you happen to be poor, female, a racial or religious minority, a homosexual, or a member of any other marginalized group, you’d be far more loath to return to the Middle Ages than those of us comfortably ensconced in this notional center, just as you’d be loath to relocate to any society not governed by Enlightenment principles today.

            A letter from an anonymous scholar of the medieval period to the sex columnist Dan Savage has been making the rounds of social media lately. Responding to a letter from a young woman asking how she should handle sex for the first time with her Muslim boyfriend, who happened to be a virgin, Savage wrote, “If he’s still struggling with the sex-negative, woman-phobic zap that his upbringing (and a medieval version of his faith) put on his head, he needs to work through that crap before he gets naked with you.” The anonymous writer bristles in bold lettering at Savage’s terminology: “I’m a medievalist, and this is one of the things about our current discourse on religion that drives me nuts. Contemporary radical Christianity, Judaism, and Islam are all terrible, but none of them are medieval, especially in terms of sexuality.” Oddly, however, the letter, published under the title, “A Medievalist Schools Dan on Medieval Attitudes toward Sex,” isn’t really as much about correcting popular misconceptions about sex in the Middle Ages as it is about promoting a currently fashionable but highly dubious way of understanding radical religion in the various manifestations we see today.

            While the medievalist’s overall argument is based far more on ideology than actual evidence, the letter does make one important and valid point. As citizens of a technologically advanced secular democracy, it’s tempting for us to judge other cultures by the standards of our own. Just as each of us expects every young person we encounter to follow a path to maturity roughly identical to the one we’ve taken ourselves, people in advanced civilizations tend to think of less developed societies as occupying one or another of the stages that brought us to our own current level of progress. This not only inspires a condescending attitude toward other cultures; it also often leads to an overly simplified understanding of our own culture’s history. The letter to Savage explains:

I’m not saying that the Middle Ages was a great period of freedom (sexual or otherwise), but the sexual culture of 12th-century France, Iraq, Jerusalem, or Minsk did not involve the degree of self-loathing brought about by modern approaches to sexuality. Modern sexual purity has become a marker of faith, which it wasn’t in the Middle Ages. (For instance, the Bishop of Winchester ran the brothels in South London—for real, it was a primary and publicly acknowledged source of his revenue—and one particularly powerful Bishop of Winchester was both the product of adultery and the father of a bastard, which didn’t stop him from being a cardinal and papal legate.) And faith, especially in modern radical religion, is a marker of social identity in a way it rarely was in the Middle Ages.

If we imagine the past as a bad dream of sexual repression from which our civilization has only recently awoken, historical tidbits about the prevalence and public acceptance of prostitution may come as a surprise. But do these revelations really undermine any characterization of the period as marked by religious suppression of sexual freedom?

            Obviously, the letter writer’s understanding of the Middle Ages is more nuanced than most of ours, but the argument reduces to pointing out a couple of random details to distract us from the bigger picture. The passage quoted above begins with an acknowledgement that the Middle Ages was not a time of sexual freedom, and isn’t it primarily that lack of freedom that Savage was referring to when he used the term medieval? The point about self-loathing is purely speculative if taken to apply to the devout generally, and simply wrong with regard to ascetics who wore hairshirts, flagellated themselves, or practiced other forms of mortification of the flesh. In addition, we must wonder how much those prostitutes enjoyed the status conferred on them by the society that was supposedly so accepting of their profession; we must also wonder if this medievalist is aware of what medieval Islamic scholars like Imam Malik (711-795) and Imam Shafi (767-820) wrote about homosexuality. The letter writer is on shaky ground yet again with regard to the claim that sexual purity wasn’t a marker of faith (though it’s hard to know precisely what the phrase even means). There were all kinds of strange prohibitions in Christendom against sex on certain days of the week, certain times of the year, and in any position outside of missionary. Anyone watching the BBC’s adaptation of Wolf Hall knows how much virginity was prized in women—as King Henry VIII could only be wed to a woman who’d never had sex with another man. And there’s obviously an Islamic tradition of favoring virgins, or else why would so many of them be promised to martyrs? Finally, of course faith wasn’t a marker of social identity—nearly everyone in every community was of the same faith. If you decided to take up another set of beliefs, chances are you’d have been burned as a heretic or beheaded as an apostate.

            The letter writer is eager to make the point that the sexual mores espoused by modern religious radicals are not strictly identical to the ones people lived according to in the Middle Ages. Of course, the varieties of religion in any one time aren’t ever identical to those in another, or even to others in the same era. Does anyone really believe otherwise? The important question is whether there’s enough similarity between modern religious beliefs on the one hand and medieval religious beliefs on the other for the use of the term to be apposite. And the answer is a definitive yes. So what is the medievalist’s goal in writing to correct Savage? The letter goes on,

The Middle Eastern boyfriend wasn’t taught a medieval version of his faith, and radical religion in the West isn’t a retreat into the past—it is a very modern way of conceiving identity. Even something like ISIS is really just interested in the medieval borders of their caliphate; their ideology developed out of 18th- and 19th-century anticolonial sentiment. The reason why this matters (beyond medievalists just being like, OMG no one gets us) is that the common response in the West to religious radicalism is to urge enlightenment, and to believe that enlightenment is a progressive narrative that is ever more inclusive. But these religions are responses to enlightenment, in fact often to The Enlightenment.

The Enlightenment, or Age of Reason, is popularly thought to have been the end of the Middle or so-called Dark Ages. The story goes that the medieval period was a time of Catholic oppression, feudal inequality, stunted innovation, and rampant violence. Then some brilliant philosophers woke the West up to the power of reason, science, and democracy, thus marking the dawn of the modern world. Historians and academics of various stripes like to sneer at this story of straightforward scientific and moral progress. It’s too simplistic. It ignores countless atrocities perpetrated by those supposedly enlightened societies. And it undergirds an ugly contemptuousness toward less advanced cultures. But is the story of the Enlightenment completely wrong?

            The medievalist letter writer makes no bones about the source of his ideas, writing in a parenthetical, “Michel Foucault does a great job of talking about these developments, and modern sexuality, including homosexual and heterosexual identity, as well—and I’m stealing and watering down his thoughts here.” Foucault, though he eschewed the label, is a leading figure in poststructuralist and postmodern schools of thought. His abiding interest throughout his career was with the underlying dynamics of social power as they manifested themselves in the construction of knowledge. He was one of those French philosophers who don’t believe in things like objective truth, human nature, or historical progress of any kind.

Foucault and the scores of scholars inspired by his work take it as their mission to expose all the hidden justifications for oppression in our culture’s various media for disseminating information. Why they would bother taking on this mission in the first place, though, is a mystery, beginning as they do from the premise that any notion of moral progress can only be yet another manifestation of one group’s power over another. If you don’t believe in social justice, why pursue it? If you don’t believe in truth, why seek it out? And what are Foucault’s ideas about the relationship between knowledge and power but theories of human nature? Despite this fundamental incoherence, many postmodern academics today zealously pounce on any opportunity to chastise scientists, artists, and other academics for alleged undercurrents in their work of sexism, racism, homophobia, Islamophobia, or some other oppressive ideology. Few sectors of academia remain untouched by this tradition, and its influence leads legions of intellectuals to unselfconsciously substitute sanctimony for real scholarship.

            So how do Foucault and the medievalist letter writer view the Enlightenment? The letter refers vaguely to “concepts of mass culture and population.” Already, it seems we’re getting far afield of how most historians and philosophers characterize the Enlightenment, not to mention how most Enlightenment figures themselves described their objectives. The letter continues,

Its narrative depends upon centralized control: It gave us the modern army, the modern prison, the mental asylum, genocide, and totalitarianism as well as modern science and democracy. Again, I’m not saying that I’d prefer to live in the 12th century (I wouldn’t), but that’s because I can imagine myself as part of that center. Educated, well-off Westerners generally assume that they are part of the center, that they can affect the government and contribute to the progress of enlightenment. This means that their identity is invested in the social form of modernity.

It’s true that the terms Enlightenment and Dark Ages were first used by Western scholars in the nineteenth century as an exercise in self-congratulation, and it’s also true that any moral progress that was made over the period occurred alongside untold atrocities. But neither of these complications to the oversimplified version of the narrative establishes in any way that the Enlightenment never really occurred—as the letter writer’s repeated assurances that it’s preferable to be alive today ought to make clear. What’s also clear is that this medievalist is deliberately conflating enlightenment with modernity, so that all the tragedies and outrages of the modern world can be laid at the feet of enlightenment thinking. How else could he describe the enlightenment as being simultaneously about both totalitarianism and democracy? But not everything that happened after the Enlightenment was necessarily caused by it, nor should every social institution that arose from the late 19th to the early 20th century be seen as representative of enlightenment thinking.

            The medievalist letter writer claims that being “part of the center” is what makes living in the enlightened West preferable to living in the 12th century. But there’s simply no way whoever wrote the letter actually believes this. If you happen to be poor, female, a racial or religious minority, a homosexual, or a member of any other marginalized group, you’d be far more loath to return to the Middle Ages than those of us comfortably ensconced in this notional center, just as you’d be loath to relocate to any society not governed by Enlightenment principles today.

The medievalist insists that groups like ISIS follow an ideology that dates to the 18th and 19th centuries and arose in response to colonialism, implying that Islamic extremism would be just another consequence of the inherently oppressive nature of the West and its supposedly enlightened ideas. “Radical religion,” from this Foucauldian perspective, offers a social identity to those excluded (or who feel excluded) from the dominant system of Western enlightenment capitalism. It is a modern response to a modern problem, and by making it seem like some medieval holdover, we cover up the way in which our own social power produces the conditions for this kind of identity, thus making violence appear to be the only response for these recalcitrant “holdouts.”

This is the position of scholars and journalists like Reza Aslan and Glenn Greenwald as well. It’s emblematic of the same postmodern ideology that forces on us the conclusion that if chimpanzees are violent to one another, it must be the result of contact with primatologists and other humans; if indigenous people in traditionalist cultures go to war with their neighbors, it must be owing to contact with missionaries and anthropologists; and if radical Islamists are killing their moderate co-religionists, kidnapping women, or throwing homosexuals from rooftops, well, it can only be the fault of Western colonialism. Never mind that these things are prescribed by holy texts dating from—you guessed it—the Middle Ages. The West, to postmodernists, is the source of all evil, because the West has all the power.

Directionality in Societal Development

But the letter writer’s fear that thinking of radical religion as a historical holdover will inevitably lead us to conclude military action is the only solution is based on an obvious non sequitur. There’s simply no reason someone who sees religious radicalism as medieval must advocate further violence to stamp it out. And that brings up another vital question: what solution do the postmodernists propose for things like religious violence in the Middle East and Africa? They seem to think that if they can only convince enough people that Western culture is inherently sexist, racist, violent, and so on—basically a gargantuan engine of oppression—then every geopolitical problem will take care of itself somehow.

If it’s absurd to believe that everything that comes from the West is good and pure and true just because it comes from the West, it’s just as absurd to believe that everything that comes from the West is evil and tainted and false for the same reason. Had the medievalist spent some time reading the webpage on the Enlightenment so helpfully hyperlinked to in the letter, whoever it is might have realized how off-the-mark Foucault’s formulation was. The letter writer gets it exactly wrong in the part about mass culture and population, since the movement is actually associated with individualism, including individual rights. But what best distinguishes enlightenment thinking from medieval thinking, in any region or era, is the conviction that knowledge, justice, and better lives for everyone in the society are achievable through the application of reason, science, and skepticism, while medieval cultures rely instead on faith, scriptural or hierarchical authority, and tradition. The two central symbols of the Enlightenment are Galileo declaring that the church was wrong to dismiss the idea of a heliocentric cosmos and the Founding Fathers appending the Bill of Rights to the U.S. Constitution. You can argue that it’s only owing to a history of colonialism that Western democracies today enjoy the highest standard of living among all the nations of the globe. But even the medievalist letter writer attests to how much better it is to live in enlightened countries today than in the same countries in the Middle Ages.

            The postmodernism of Foucault and his kindred academics is not now, and has not ever been, compelling on intellectual grounds, which leaves open the question of why so many scholars have turned against the humanist and Enlightenment ideals that once gave them their raison d’être. I can’t help suspecting that the appeal of postmodernism stems from certain religious qualities of the worldview, qualities that ironically make it resemble certain aspects of medieval thought: the bowing to the authority of celebrity scholars (mostly white males), the cloistered obsession with esoteric texts, rituals of expiation and self-abasement, and competitive finger-wagging. There’s even a core belief in something very like original sin; only in this case it consists of being born into the ranks of a privileged group whose past members were guilty of some unspeakable crime. Postmodern identity politics seems to appeal most strongly to whites with an overpowering desire for acceptance by those less fortunate, as if they were looking for some kind of forgiveness or redemption only the oppressed have the power to grant. That’s why these academics are so quick to be persuaded they should never speak up unless it’s on behalf of some marginalized group, as if good intentions were proof against absurdity. As safe and accommodating and well-intentioned as this stance sounds, though, in practice it amounts to little more than moral and intellectual cowardice.

Life really has gotten much better since the Enlightenment, and it really does continue to get better for an increasing number of formerly oppressed groups of people today. All this progress has been made, and continues being made, precisely because there are facts and ideas—scientific theories, human rights, justice, and equality—that transcend the social conditions surrounding their origins. Accepting this reality doesn’t in any way mean seeing violence as the only option for combatting religious extremism, despite many academics’ insistence to the contrary. Nor does it mean abandoning the cause of political, cultural, and religious pluralism. But, if we continue disavowing the very ideals that have driven this progress, however fitfully and haltingly it has occurred, if we continue denying that it can even be said to have occurred at all, then what hope can we possibly have of pushing it even further along in the future?   

Also read:

THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And: 

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

On ISIS's explicit avowal of adherence to medieval texts: “What ISIS Really Wants” by Graeme Wood of the Atlantic

Dennis Junk

The Soul of the Skeptic: What Precisely Is Sam Harris Waking Up from?

In my first foray into Sam Harris’s work, I struggle with some of the concepts he holds up as keys to a more contented, more spiritual life. Along the way, though, I find myself captivated by the details of Harris’s own spiritual journey, and I’m left wondering if there just may be more to this meditation stuff than I’m able to initially wrap my mind around.

Sam Harris believes that we can derive many of the benefits people cite as reasons for subscribing to one religion or another from non-religious practices and modes of thinking, ones that don’t invoke bizarre scriptures replete with supernatural absurdities. In The Moral Landscape, for instance, he attempted to show that we don’t need a divine arbiter to settle our ethical and political disputes because reason alone should suffice. Now, with Waking Up, Harris is taking on an issue that many defenders of Christianity, or religion more generally, have long insisted he is completely oblivious to. By focusing on the truth or verifiability of religious propositions, Harris’s critics charge, he misses the more important point: religion isn’t fundamentally about the beliefs themselves so much as the effects those beliefs have on a community, including the psychological impact on individuals of collective enactments of the associated rituals—feelings of connectedness, higher purpose, and loving concern for all one’s neighbors.

            Harris likes to point out that his scholarly critics simply have a hard time appreciating just how fundamentalist most religious believers really are, and so they turn a blind eye toward the myriad atrocities religion sanctions, or even calls for explicitly. There’s a view currently fashionable among the more politically correct scientists and academics that makes criticizing religious beliefs seem peevish, even misanthropic, because religion is merely something people do, like reading stories or playing games, to imbue their lives with texture and meaning, or to heighten their sense of belonging to a community. According to this view, the particular religion in question—Islam, Buddhism, Hinduism, Jainism, Christianity—isn’t as important as the people who subscribe to it, nor do any specific tenets of a given faith have any consequence. That’s why Harris so frequently comes under fire—and is even accused of bigotry—for suggesting things like that the passages in the Koran calling for violence actually matter and that Islam is much more likely to inspire violence because of them.

We can forgive Harris his impatience with this line of reasoning, which leads his critics to insist that violence is in every case politically and never religiously motivated. This argument can only be stated with varying levels of rancor, never empirically supported, and is hence dismissible as a mere article of faith in its own right, one that can’t survive any encounter with the reality of religious violence. Harris knows how important a role politics plays and that it’s often only the fundamentalist subset of the population of believers who are dangerous. But, as he points out, “Fundamentalism is only a problem when the fundamentals are a problem” (2:30:09). It’s only by the lights of postmodern identity politics that an observation this banal could strike so many as so outrageous.

            But what will undoubtedly come as a disappointment to Harris’s more ardently anti-religious readers, and as a surprise to fault-seeking religious apologists, is that from the premise that not all religions are equally destructive and equally absurd follows the conclusion that some religious ideas or practices may actually be beneficial or point the way toward valid truths. Harris has discussed his experiences with spiritual retreats and various forms of  meditation in past works, but now with Waking Up he goes so far as to advocate certain of the ancient contemplative practices he’s experimented with. Has he abandoned his scientific skepticism? Not by any means; near the end of the book, he writes, “As a general matter, I believe we should be very slow to draw conclusions about the nature of the cosmos on the basis of inner experiences—no matter how profound they seem” (192). What he’s doing here, and with the book as a whole, is underscoring the distinction between religious belief on the one hand and religious experience on the other.

Acknowledging that some practices which are nominally religious can be of real value, Harris goes on to argue that we need not accept absurd religious doctrines to fully appreciate them. And this is where the subtitle of his book, A Guide to Spirituality without Religion, comes from. As paradoxical as this concept may seem to people of faith, Harris cites a survey finding that 20% of Americans describe themselves as “spiritual but not religious” (6). And he argues that separating the two terms isn’t just acceptable; it’s logically necessary.

Spirituality must be distinguished from religion—because people of every faith, and of none, have the same sorts of spiritual experiences. While these states of mind are usually interpreted through the lens of one or another religious doctrine, we know this is a mistake. Nothing that a Christian, a Muslim, and a Hindu can experience—self-transcending love, ecstasy, bliss, inner light—constitutes evidence in support of their traditional beliefs, because their beliefs are logically incompatible with one another. A deeper principle must be at work. (9)

People of faith frequently respond to the criticism that their beliefs fly in the face of logic and evidence by claiming they simply know God is real because they have experiences that can only be attributed to a divine presence. Any failure on the part of skeptics to acknowledge the lived reality of such experiences makes their arguments all the more easily dismissible as overly literal or pedantic, and it makes the skeptics themselves come across as closed-minded and out-of-touch.

On the other hand, Harris’s suggestion of a “deeper principle” underlying religious experiences smacks of New Age thinking at its most wooly. For one thing, church authorities have historically condemned, excommunicated, or even executed congregants with mystical leanings for their heresy. (Harris cites a few examples.) But the deeper principle Harris is referring to isn’t an otherworldly one. And he’s perfectly aware of the unfortunate connotations the words he uses often carry:

I share the concern, expressed by many atheists, that the terms spiritual and mystical are often used to make claims not merely about the quality of certain experiences but about reality at large. Far too often, these words are invoked in support of religious beliefs that are morally and intellectually grotesque. Consequently, many of my fellow atheists consider all talk of spirituality to be a sign of mental illness, conscious imposture, or self-deception. This is a problem, because millions of people have had experiences for which spiritual and mystical seem the only terms available. (11)

You can’t expect people to be convinced their religious beliefs are invalid when your case rests on a denial of something as perfectly real to them as their own experiences. And it’s difficult to make the case that these experiences must be separated from the religious claims they’re usually tied to while refusing to apply the most familiar labels to them, because that comes awfully close to denying their legitimacy.

*****

            Throughout Waking Up, Harris focuses on one spiritual practice in particular, a variety of meditation that seeks to separate consciousness from any sense of self, and he argues that the insights one can glean from experiencing this rift are both personally beneficial and neuroscientifically sound. Certain Hindu and Buddhist traditions hold that the self is an illusion, a trick of the mind, and our modern scientific understanding of the mind, Harris argues, corroborates this view. By default, most of us think of the connection between our minds and our bodies dualistically; we believe we have a spirit, a soul, or some other immaterial essence that occupies and commands our physical bodies. Even those of us who profess not to believe in any such thing as a soul have a hard time avoiding a conception of the self as a unified center of consciousness, a homunculus sitting at the controls. Accordingly, we attach ourselves to our own thoughts and perceptions—we identify with them. Since it seems we’re programmed to agonize over past mistakes and worry about impending catastrophes, we can’t help feeling the full brunt of a constant barrage of negative thoughts. Most of us recognize the sentiment Harris expresses in writing that “It seems to me that I spend much of my life in a neurotic trance” (11). And this is precisely the trance we need to wake up from.

To end the spiraling chain reaction of negative thoughts and foul feelings, we must detach ourselves from our thinking, and to do this, Harris suggests, we must recognize that there is no us doing the thinking. The “I” in the conventional phrasing “I think” or “I feel” is nowhere to be found. Is it in our brains? Which part? Harris describes the work of the Nobel laureate neuroscientist Roger Sperry, who in the 1950s and ’60s did a series of fascinating experiments with split-brain patients, so called because the corpus callosum, the bundle of fibers connecting the two hemispheres of their brains, had been surgically severed to reduce the severity of epileptic seizures. Sperry found that he could present instructions to the patients’ left visual fields—which would only be perceived by the right hemisphere—and induce responses that the patients themselves couldn’t explain, because language resides predominantly in the left hemisphere. When asked to justify their behavior, though, the split-brain patients never admitted to having no idea why they were doing what they’d been instructed to do. Instead, they confabulated answers. For instance, if the right hemisphere is instructed to pick up an egg from among an assortment of objects on a table, the left hemisphere may explain the choice by saying something like, “Oh, I picked it because I had eggs for breakfast yesterday.”

As weird as this type of confabulation may seem, it has still weirder implications. At any given moment, it’s easy enough for us to form intentions and execute plans for behavior. But where do those intentions really come from? And how can we be sure our behaviors reflect the intentions we believe they reflect? We are only ever aware of a tiny fraction of our minds’ operations, so it would be all too easy for us to conclude we are the ones in charge of everything we do even though it’s really someone else, or something else, behind the scenes pulling the strings. The reason split-brain patients so naturally confabulate about their motives is that the language centers of our brains probably do it all the time, even when our corpus callosa are intact. We are only ever dimly aware of our true motivations, and likely completely in the dark about them as often as not. Whenever we attempt to explain ourselves, we’re really just trying to make up a plausible story that incorporates all the given details, one that makes sense both to us and to anyone listening.

If you’re still not convinced that the self is an illusion, try to come up with a valid justification for locating the self in either the left or the right hemisphere of split-brain patients. You may be tempted to attribute consciousness, and hence selfhood, to the hemisphere with the capacity for language. But you can see for yourself how easy it is to direct your attention away from words and fill your consciousness solely with images or wordless sounds. Some people actually rely on their right hemispheres for much of their linguistic processing, and after split-brain surgery these patients’ right hemispheres can communicate on their own, for instance by arranging cards printed with words. We’re forced to conclude that both sides of the split brain are conscious. And, since the corpus callosum channels only a limited amount of information between the hemispheres, we probably all have at least two independent centers of consciousness in our minds, even those of us whose hemispheres still communicate.

What this means is that even if your actions and intentions seem to align, you still can’t be sure there isn’t another conscious mind housed in your brain that is likewise assured its own actions and intentions are aligned. There have even been cases in which the two sides of a split-brain patient’s mind have expressed conflicting beliefs and desires. For some, phenomena like these sound the death knell for any dualistic religious belief. Harris writes,

Consider what this says about the dogma—widely held under Christianity and Islam—that a person’s salvation depends upon her believing the right doctrine about God. If a split-brain patient’s left hemisphere accepts the divinity of Jesus, but the right doesn’t, are we to imagine that she now harbors two immortal souls, one destined for the company of angels and the other for an eternity of hellfire? (67-8)

Indeed, the soul, the immaterial inhabitant of the body, can apparently be not only divided but duplicated. Harris makes this point using a thought experiment originally devised by philosopher Derek Parfit. Imagine you are teleported Star Trek-style to Mars. The teleporter creates a replica of your body, including your brain and its contents, faithful all the way down to the orientation of the atoms. So everything goes black here on Earth, and then you wake up on Mars exactly as you were when you left. But now imagine something went wrong on Earth and the original you wasn’t destroyed before the replica was created. In that case, there would be two of you left whole and alive. Which one is the real you? There’s no good basis for settling the question one way or the other.

            Harris uses the split-brain experiments and Parfit’s thought experiment to establish the main insight that lies at the core of the spiritual practices he goes on to describe: that the self, as we are programmed to think of and experience it, doesn’t really exist. Of course, this is only true in a limited sense. In many contexts, it’s still perfectly legitimate to speak of the self. As Harris explains,

The self that does not survive scrutiny is the subject of experience in each present moment—the feeling of being a thinker of thoughts inside one’s head, the sense of being an owner or inhabitant of a physical body, which this false self seems to appropriate as a kind of vehicle. Even if you don’t believe such a homunculus exists—perhaps because you believe, on the basis of science, that you are identical to your body and brain rather than a ghostly resident therein—you almost certainly feel like an internal self in almost every waking moment. And yet, however one looks for it, this self is nowhere to be found. It cannot be seen amid the particulars of experience, and it cannot be seen when experience itself is viewed as a totality. However, its absence can be found—and when it is, the feeling of being a self disappears. (92)

The implication is that even if you come to believe as a matter of fact that the self is an illusion you nevertheless continue to experience that illusion. It’s only under certain circumstances, or as a result of engaging in certain practices, that you’ll be able to experience consciousness in the absence of self.

****

Harris briefly discusses avenues apart from meditation that move us toward what he calls “self-transcendence”: we often lose ourselves in our work, or in a good book or movie; we may feel a diminishing of self before the immensities of nature and the universe, or during a drug-induced hallucination; it could happen at a musical performance, where you’re just one tiny part of a vast, pulsing crowd of exuberant fans, or during intense sex. And you may of course also experience some fading away of your individuality through participation in religious ceremonies. But Harris’s sights are set on one specific method for achieving self-transcendence. As he writes in his introduction,

This book is by turns a seeker’s memoir, an introduction to the brain, a manual of contemplative instruction, and a philosophical unraveling of what most people consider to be the center of their inner lives: the feeling of self we call “I.” I have not set out to describe all the traditional approaches to spirituality and to weigh their strengths and weaknesses. Rather, my goal is to pluck the diamond from the dunghill of esoteric religion. There is a diamond there, and I have devoted a fair amount of my life to contemplating it, but getting it in hand requires that we remain true to the deepest principles of scientific skepticism and make no obeisance to tradition. (10)

This is music to the ears of many skeptics who have long suspected that there may actually be something to meditative techniques but are overcome with fits of eye-rolling every time they try to investigate the topic. If someone with skeptical bona fides as impressive as Harris’s has taken the time to wade through all the nonsense to see if there are any worthwhile takeaways, then I imagine I’m far from alone in being eager to find out what he’s discovered.

            So how does one achieve a state of consciousness divorced from any sense of self? And how does this experience help us escape the neurotic trance most of us are locked in? Harris describes some of the basic principles of Advaita, a Hindu practice, and Dzogchen, a Tibetan Buddhist one. According to Advaita, one can achieve “cessation”—an end to thinking, and hence to the self—at any stage of practice. But Dzogchen practitioners insist it comes only after much intense practice. In one of several inset passages with direct instructions to readers, Harris invites us to experiment with the Dzogchen technique of imagining a moment in our lives when we felt positive emotions, like the last time we accomplished something we’re proud of. After concentrating on the thoughts and feelings for some time, we are then encouraged to think of a time when we felt something negative, like embarrassment or fear. The goal here is to be aware of the ideas and feelings as they come into being. “In the teachings of Dzogchen,” Harris writes, “it is often said that thoughts and emotions arise in consciousness the way that images appear on the surface of the mirror.” Most of the time, though, we are tricked into mistaking the mirror for what’s reflected in it.

In subjective terms, you are consciousness itself—you are not the next, evanescent image or string of words that appears in your mind. Not seeing it arise, however, the next thought will seem to become what you are. (139)

This is what Harris means when he speaks of separating your consciousness from your thoughts. And he believes it’s a state of mind you can achieve with sufficient practice calling forth and observing different thoughts and emotions, until eventually you experience—for moments at a time—a feeling of transcending the self, which entails a ceasing of thought, a type of formless and empty awareness that has us sensing a pleasant unburdening of the weight of our identities.

Harris also describes a more expeditious route to selflessness, one discovered by a British architect named Douglas Harding, who went on to be renowned among New Agers for his insight. His technique, which was first inspired by a drawing made by physicist Ernst Mach that was a literal rendition of his first-person viewpoint, including the side of his nose and the ridge of his eyebrow, consists simply of trying to imagine you have no head. Harris quotes at length from Harding’s description of what happened when he originally succeeded:

What actually happened was something absurdly simple and unspectacular: I stopped thinking. A peculiar quiet, an odd kind of alert limpness or numbness, came over me. Reason and imagination and all mental chatter died down. For once, words really failed me. Past and future dropped away. I forgot who and what I was, my name, manhood, animal-hood, all that could be called mine. It was as if I had been born that instant, brand new, mindless, innocent of all memories. There existed only the Now, the present moment and what was clearly given it. (143)

Harris recommends a slight twist to this approach—one that involves looking out at the world and simply trying to reverse your perspective to look for your head. One way to do this is to imagine you’re talking to another person and then “let your attention travel in the direction of the other person’s gaze” (145). It’s not about trying to picture what you look like to another person; it’s about recognizing that your face is absent from the encounter—because obviously you can’t see it. “But looking for yourself in this way can precipitate a sudden change in perspective, of the sort Harding describes” (146). It’s a sort of out-of-body experience.

If you pull off the feat of seeing through the illusion of self, either through disciplined practice at observing the contents of your own consciousness or through shortcuts like imagining you have no head, you will experience a pronounced transformation. Even if for only a few moments, you will have reached enlightenment. As a reward for your efforts, you will enjoy a temporary cessation of the omnipresent hum of anxiety-inducing thoughts that you hardly even notice drowning out so many of the other elements of your consciousness. “There arose no questions,” Harding writes of his experiments in headlessness, “no reference beyond the experience itself, but only peace and a quiet joy, and the sensation of having dropped an intolerable burden” (143). Skeptics reading these descriptions will have to overcome the temptation to joke about practitioners without a thought in their head.

            Christianity, Judaism, and Islam are all based on dualistic conceptions of the self, and the devout are enjoined to engage in ritual practices in service to God, an entirely separate being. The more non-dualistic philosophies of the East are much more amenable to attempts to reconcile them with science. Practices like meditation aren’t directed at any supernatural entity but are engaged in for their own sake, because they are somehow inherently rewarding. Unfortunately, this leads to a catch-22. Harris explains,

As we have seen, there are good reasons to believe that adopting a practice like meditation can lead to positive changes in one’s life. But the deepest goal of spirituality is freedom from the illusion of the self—and to seek such freedom, as though it were a future state to be attained through effort, is to reinforce the chains of one’s apparent bondage in each moment. (123)

This paradox seems at first like a good recommendation for the quicker routes to self-transcendence like Harding’s. But, according to Harris, “Harding confessed that many of his students recognized the state of ‘headlessness’ only to say, ‘So what?’” To Harris, the problem here is that the transformation was so easily achieved that its true value couldn’t be appreciated:

Unless a person has spent some time seeking self-transcendence dualistically, she is unlikely to recognize that the brief glimpse of selflessness is actually the answer to her search. Having then said, ‘So what?’ in the face of the highest teachings, there is nothing for her to do but persist in her confusion. (148)

We have to wonder, though, if maybe Harding’s underwhelmed students aren’t the ones who are confused. It’s entirely possible that Harris, who has devoted so much time and effort to his quest for enlightenment, is overvaluing the experience to assuage his own cognitive dissonance.

****

             The penultimate chapter of Waking Up gives Harris’s more skeptical fans plenty to sink their teeth into, including a thorough takedown of neurosurgeon Eben Alexander’s so-called Proof of Heaven and several cases of supposedly enlightened gurus taking advantage of their followers by, among other exploits, sleeping with their wives. But Harris claims his own experiences with gurus have been almost entirely positive, and he goes as far as recommending that anyone hoping to achieve self-transcendence seek out the services of one. 

            This is where I began to have issues with the larger project behind Harris’s book. If meditation were a set of skills like those required to play tennis, it would seem more reasonable to claim that the guidance of an expert coach is necessary to develop them. But what is a meditation guru supposed to do if he (I presume they’re mostly male) has no way to measure, or even see, your performance? Harris suggests they can answer questions that arise during practice, but apart from basic instructions like the ones Harris himself provides it seems unlikely an expert could be of much help. If a guru has a useful technique, he shouldn’t need to be present in the room to share it. Harding passed his technique on to Harris through writing for instance. And if self-transcendence is as dramatic a transformation as it’s made out to be, you shouldn’t have any trouble recognizing it when you experience it.

Harris’s valuation of the teachings he’s received from his own gurus really can’t be sifted from his impression of how rewarding his overall efforts at exploring spirituality have been, nor can it be separated from his personal feelings toward those gurus. This is a problem that plagues much of the research on the effectiveness of various forms of psychotherapy; essentially, a patient’s report that the therapeutic treatment was successful means little else but that the patient had a positive relationship with the therapist administering it. Similarly, it may be that Harris’s sense of how worthwhile those moments of self-transcendence are owes more than he himself realizes to his retrospective assessment of how fulfilling his own journey to reach them has been. The view from Everest must be far more sublime to those who’ve made the climb than to those who were airlifted to the top.

            More troublingly, there’s an unmistakable resemblance between, on the one hand, Harris’s efforts to locate convergences between science and contemplative religious practices and, on the other, the tendency of New Age philosophers to draw specious comparisons between ancient Eastern doctrines and modern theories in physics. Zen koans are paradoxical and counterintuitive, this line of reasoning goes, and so are the results of the double-slit experiment in quantum mechanics—the Buddhists must have intuited something about the quantum world centuries ago. Dzogchen Buddhists have believed the self is an illusion and have been seeking a cessation of thinking for centuries, and modern neuroscience demonstrates that the self is something quite different from what most of us think it is. Therefore, the Buddhists must have long ago discovered something essential about the mind. In both of these examples, it seems like you have to do a lot of fudging to make the ancient doctrines line up with the modern scientific findings.

It’s not nearly as evident as Harris makes out that what the Buddhists mean by the doctrine that the self is an illusion is the same thing neuroscientists mean when they point out that consciousness is divisible, or that we’re often unaware of our own motivations. (Douglas Hofstadter refers to the self as an epiphenomenon, which he does characterize as a type of illusion, but only because the overall experience bears so little resemblance to any of the individual processes that go into producing it.) I’ve never heard a cognitive scientist discuss the fallacy of identifying with your own thoughts or recommend that we try to stop thinking. Indeed, I don’t think most people really do identify with their thoughts. I for one don’t believe I am my thoughts; I definitely feel like I have my thoughts, or that I do my thinking. To point out that thoughts sometimes arise in my mind independent of my volition does nothing to undermine this belief. And Harris never explains exactly why seeing through the illusion of the self should bring about relief from all the anxiety produced by negative thinking. Cessation sounds a little like simply rendering yourself insensate.

The problem that brings about the neurotic trance so many of us find ourselves trapped in doesn’t seem to be that people fall for the trick of selfhood; it’s that they mistake their most neurotic thinking at any given moment for unquestionable and unchangeable reality. Clinical techniques like cognitive behavioral therapy involve challenging your own thinking, and there’s relief to be had in that—but it has nothing to do with disowning your thoughts or seeing your self as illusory. From this modern cognitive perspective, Dzogchen practices that have us focusing our attention on the effects of different lines of thinking are probably still hugely beneficial. But what’s that got to do with self-transcendence?

            For that matter, is the self really an illusion? Insofar as we think of it as a single object or as something that can be frozen in time and examined, it is indeed illusory. But calling the self an illusion is a bit like calling music an illusion. It’s impossible to point to music as existing in any specific location. You can separate a song into constituent elements that all on their own still constitute music. And of course you can create exact replicas of songs and play them on other planets. But it’s pretty silly to conclude from all these observations that music isn’t real. Rather, music, like the self, is a confluence of many diverse processes that can only be experienced in real time. In claiming that neuroscience corroborates the doctrine that the self is an illusion, Harris may be failing at the central task he set for himself by making too much obeisance to tradition. 

            What about all those reports from people like Harding who have had life-changing experiences while meditating or imagining they have no head? I can attest that I immediately recognized what Harding was describing in the sections Harris quotes. For me, it happened about twenty minutes into a walk I’d gone on through my neighborhood to help me come up with an idea for a short story. I tried to imagine myself as an unformed character at the outset of an as-yet-undeveloped plot. After only a few moments of this, I had a profound sense of stepping away from my own identity, and the attendant feeling of liberation from the disappointments and heartbreaks of my past, from the stresses of the present, and from my habitual forebodings about the future was both revelatory and exhilarating. Since reading Waking Up, I’ve tried both Harding’s and Harris’s approaches to reaching this state again quite a few times. But, though the results have been more impactful than the “So what?” response of Harding’s least impressed students, I haven’t experienced anything as seemingly life-altering as I did on that walk, forcing me to suspect it had as much to do with my state of mind prior to the experiment as it did with the technique itself.

For me, the experience was more one of stepping away from my identity—or of seeing the details of that identity from a much broader perspective—than of seeing through some illusion of self. I became something like a stem cell version of myself, drastically more pluripotent, more free. It felt much more like disconnecting from my own biography than like disconnecting from the center of my consciousness. This may seem like a finicky distinction. But it goes to the core of Harris’s project—the notion that there’s a convergence between ancient meditative practices and our modern scientific understanding of consciousness. And it bears on just how much of that ancient philosophy we really need to get into if we want to have these kinds of spiritual experiences.

            Personally, I’m not at all convinced by Harris’s case on behalf of pared down Buddhist philosophy and the efficacy of guru guidance—though I probably will continue to experiment with the meditation techniques he lays out. Waking Up, it must be noted, is really less of a guide to spirituality without religion than it is a guide to one particular, particularly esoteric, spiritual practice. But, despite these quibbles, I give the book my highest recommendation, and that’s because its greatest failure is also its greatest success. Harris didn’t even come close to helping me stop thinking—or even persuading me that I should try—because I haven’t been able to stop thinking about his book ever since I started reading it. Perhaps what I appreciate most about Waking Up, though, is that it puts the lie to so many idiotic ideas people tend to have about skeptics and atheists. Just as recognizing that to do what’s right we must sometimes resist the urgings of our hearts in no way makes us heartless, neither does understanding that to be steadfast in pursuit of truth we must admit there’s no such thing as an immortal soul in any way make us soulless. And, while many associate skepticism with closed-mindedness, most of the skeptics I know of are true seekers, just like Harris. The crucial difference, which Harris calls “the sine qua non of the scientific attitude,” is “between demanding good reasons for what one believes and being satisfied with bad ones” (199).  

Also read: 

Capuchin-22: A Review of “The Bonobo and the Atheist: In Search of Humanism among the Primates” by Frans De Waal

And: 

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education

And:

The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini

Dennis Junk

Science’s Difference Problem: Nicholas Wade’s Troublesome Inheritance and the Missing Moral Framework for Discussing the Biology of Behavior

Nicholas Wade went there. In his book “A Troublesome Inheritance,” he argues that race is not only a real, biological phenomenon but one with potentially important implications for our understanding of the fates of different peoples. Is it possible to even discuss such things without being justifiably labeled a racist? More importantly, if biological differences do show up in the research, how can we discuss them without being grossly immoral?

            No sooner had Nicholas Wade’s new book become available for free two-day shipping than a contest began to see who could pen the most devastating critical review of it, the one that best satisfies our desperate urge to dismiss Wade’s arguments and reinforce our faith in the futility of studying biological differences between human races, a faith backed up by a cherished official consensus ever so conveniently in line with our moral convictions. That Charles Murray, one of the authors of the evil tome The Bell Curve, wrote an early highly favorable review for the Wall Street Journal only upped the stakes for all would-be champions of liberal science. Even as the victor awaits crowning, many scholars are posting links to their favorite contender’s critiques all over social media to advertise their principled rejection of this book they either haven’t read yet or have no intention of ever reading.

You don’t have to go beyond the title, A Troublesome Inheritance: Genes, Race and Human History, to understand what all these conscientious intellectuals are so eager to distance themselves from—and so eager to condemn. History has undeniably treated some races much more poorly than others, so if their fates are in any small way influenced by genes the implication of inferiority is unavoidable. Regardless of what he actually says in the book, Wade’s very program strikes many as racist from its inception.

The going theories for the dawn of the European Enlightenment and the rise of Western culture—and Western people—to global ascendancy attribute the phenomenon to a combination of geographic advantages and historical happenstance. Wade, along with many other scholars, finds such explanations unsatisfying. Geography can explain why some societies never reached sufficient population densities to make the transition into states. “Much harder to understand,” Wade writes, “is how Europe and Asia, lying on much the same lines of latitude, were driven in the different directions that led to the West’s dominance” (223). Wade’s theory incorporates elements of geography—like the relatively uniform expanse of undivided territory between the Yangtze and Yellow rivers that facilitated the establishment of autocratic rule, and the diversity of fragmented regions in Europe preventing such consolidation—but he goes on to suggest that these different environments would have led to the development of different types of institutions. Individuals more disposed toward behaviors favored by these institutions, Wade speculates, would be rewarded with greater wealth, which would in turn allow them to have more children with behavioral dispositions similar to their own.

After hundreds of years and multiple generations, Wade argues, the populations of diverse regions would respond to these diverse institutions by evolving subtly different temperaments. In China, for instance, favorable, and hence selected-for, traits may have included intelligence, conformity, and obedience. These behavioral propensities would subsequently play a role in determining the future direction of the institutions that fostered their evolution. Average differences in personality would, according to Wade, also make it more or less likely that certain new types of institution would arise within a given society, or that they could be successfully transplanted into it. And it’s a society’s institutions that ultimately determine its fate relative to other societies. To the objection that geography can, at least in principle, explain the vastly different historical outcomes among peoples of specific regions, Wade responds, “Geographic determinism, however, is as absurd a position as genetic determinism, given that evolution is about the interaction between the two” (222).

            East Asians score higher on average on IQ tests than people with European ancestry, but there’s no evidence that any advantage they enjoy in intelligence, or any proclivity they may display toward obedience and conformity—traits supposedly manifest in their long history of autocratic governance—is attributable to genetic differences as opposed to traditional attitudes toward schoolwork, authority, and group membership inculcated through common socialization practices. So we can rest assured that Wade’s just-so story about evolved differences between the races in social behavior is eminently dismissible. Wade himself at several points throughout A Troublesome Inheritance admits that his case is wholly speculative. So why, given the abominable history of racist abuses of evolutionary science, would Wade publish such a book?

It’s not because he’s unaware of the past abuses. Indeed, in his second chapter, titled “Perversions of Science,” which none of the critical reviewers deigns to mention, Wade chronicles the rise of eugenics and its culmination in the Holocaust. He concludes,

After the Second World War, scientists resolved for the best of reasons that genetics research would never again be allowed to fuel the racial fantasies of murderous despots. Now that new information about human races has been developed, the lessons of the past should not be forgotten and indeed are all the more relevant. (38)

The convention among Wade’s critics is to divide his book into two parts, acknowledge that the first is accurate and compelling enough, and then unload the full academic arsenal of both scientific and moral objections to the second. This approach necessarily scants a few important links in his chain of reasoning in an effort to reduce his overall point to its most objectionable elements. And for all their moralizing, the critics, almost to a one, fail to consider Wade’s expressed motivation for taking on such a fraught issue.

Even acknowledging that Wade’s case for the role of biological evolution in historical developments like the Industrial Revolution is weak, we may still examine his reasoning up to that point in the book, which may strike many as more firmly grounded. You can also start to get a sense of what was motivating Wade when you realize that the first half of A Troublesome Inheritance recapitulates his two previous books on human evolution. The first, Before the Dawn, chronicled the evolution and history of our ancestors from a species that resembled a chimpanzee through millennia as tribal hunter-gatherers to the first permanent settlements and the emergence of agriculture. Thus, we see that all along his scholarly interest has been focused on major transitions in human prehistory.

While critics of Wade’s latest book focus almost exclusively on his attempts at connecting genomics to geopolitical history, he begins his exploration of differences between human populations by emphasizing the critical differences between humans and chimpanzees, which we can all agree came about through biological evolution. Citing a number of studies comparing human infants to chimps, Wade writes in A Troublesome Inheritance,

Besides shared intentions, another striking social behavior is that of following norms, or rules generally agreed on within the “we” group. Allied with the rule following are two other basic principles of human social behavior. One is a tendency to criticize, and if necessary punish, those who do not follow the agreed-upon norms. Another is to bolster one’s own reputation, presenting oneself as an unselfish and valuable follower of the group’s norms, an exercise that may involve finding fault with others. (49)

What separates us from chimpanzees and other apes—including our ancestors—is our much greater sociality and our much greater capacity for cooperation. (Though primatologist Frans de Waal would object to leaving the much more altruistic bonobos out of the story.) The basis for these changes was the evolution of a suite of social emotions—emotions that predispose us toward certain types of social behaviors, like punishing those who fail to adhere to group norms (keeping mum about genes and race for instance). If there’s any doubt that the human readiness to punish wrongdoers and rule violators is instinctual, ongoing studies demonstrating this trait in children too young to speak make the claim that the behavior must be taught ever more untenable. The conclusion most psychologists derive from such studies is that, for all their myriad manifestations in various contexts and diverse cultures, the social emotions of humans emerge from a biological substrate common to us all.  

After Before the Dawn, Wade came out with The Faith Instinct, which explores theories developed by biologist David Sloan Wilson and evolutionary psychologist Jesse Bering about the adaptive role of religion in human societies. In light of cooperation’s status as one of the most essential behavioral differences between humans and chimps, other behaviors that facilitate or regulate coordinated activity suggest themselves as candidates for having pushed our ancestors along the path toward several key transitions. Language for instance must have been an important development. Religion may have been another. As Wade argues in A Troublesome Inheritance:

The fact that every known society has a religion suggests that each inherited a propensity for religion from the ancestral human population. The alternative explanation, that each society independently invented and maintained this distinctive human behavior, seems less likely. The propensity for religion seems instinctual, rather than purely cultural, because it is so deeply set in the human mind, touching emotional centers and appearing with such spontaneity. There is a strong evolutionary reason, moreover, that explains why religion may have become wired in the neural circuitry. A major function of religion is to provide social cohesion, a matter of particular importance among early societies. If the more cohesive societies regularly prevailed over the less cohesive, as would be likely in any military dispute, an instinct for religious behavior would have been strongly favored by natural selection. This would explain why the religious instinct is universal. But the particular form that religion takes in each society depends on culture, just as with language. (125-6)

As is evident in this passage, Wade never suggests any one-to-one correspondence between genes and behaviors. Genes function in the context of other genes, which function in the context of individual bodies, which in turn function in the context of many other individual bodies. But natural selection is only about outcomes with regard to survival and reproduction. The evolution of social behavior must thus be understood as taking place through the competition, not just of individuals, but also of institutions we normally think of as purely cultural.

            The evolutionary sequence Wade envisions begins with increasing sociability enforced by a tendency to punish individuals who fail to cooperate, and moves on to tribal religions which involve synchronized behaviors, unifying beliefs, and omnipresent but invisible witnesses who discourage would-be rule violators. Once humans began living in more cohesive groups, behaviors that influenced the overall functioning of those groups became the targets of selection. Religion may have been among the first institutions that emerged to foster cohesion, but others relying on the same substrate of instincts and emotions would follow. Tracing the trajectory of our prehistory from the origin of our species in Africa, to the peopling of the world’s continents, to the first permanent settlements and the adoption of agriculture, Wade writes,

The common theme of all these developments is that when circumstances change, when a new resource can be exploited or a new enemy appears on the border, a society will change its institutions in response. Thus it’s easy to see the dynamics of how human social change takes place and why such a variety of human social structures exists. As soon as the mode of subsistence changes, a society will develop new institutions to exploit its environment more effectively. The individuals whose social behavior is better attuned to such institutions will prosper and leave more children, and the genetic variations that underlie such a behavior will become more common. (63-4)

First a society responds to shifting pressures culturally, but a new culture amounts to a new environment for individuals to adapt to. Wade understands that much of this adaptation occurs through learning. Some of the challenges posed by an evolving culture will, however, be easier for some individuals to address than others. Evolutionary anthropologists tend to think of culture as a buffer between environments and genes. Many consider it more of a wall. To Wade, though, culture is merely another aspect of the environment individuals and their genes compete to thrive in.

If you’re a cultural anthropologist and you want to study how cultures change over time, the most convenient assumption you can make is that any behavioral differences you observe between societies or over periods of time are due solely to the forces you’re hoping to isolate. Biological changes would complicate your analysis. If, on the other hand, you’re interested in studying the biological evolution of social behaviors, you will likely be inclined to assume that differences between cultures, if not based completely on genetic variance, at least rest on a substrate of inherited traits. Wade has quite obviously been interested in social evolution since his first book on anthropology, so it’s understandable that he would be excited about genome studies suggesting that human evolution has been operating recently enough to affect humans in distantly separated regions of the globe. And it’s understandable that he’d be frustrated by sanctions against investigating possible behavioral differences tied to these regional genetic differences. But this doesn’t stop his critics from insinuating that his true agenda is something other than solely scientific.

            On the technology and pop culture website io9, blogger and former policy analyst Annalee Newitz calls Wade’s book an “argument for white supremacy,” which goes a half-step farther than the critical review by Eric Johnson the post links to, titled "On the Origin of White Power." Johnson sarcastically states that Wade isn’t a racist and acknowledges that the author is correct in pointing out that considering race as a possible explanatory factor isn’t necessarily racist. But, according to Johnson’s characterization,

He then explains why white people are better because of their genes. In fairness, Wade does not say Caucasians are better per se, merely better adapted (because of their genes) to the modern economic institutions that Western society has created, and which now dominate the world’s economy and culture.

The clear implication here is that Wade’s mission is to prove that the white race is superior but that he also wants to cloak this agenda in the garb of honest scientific inquiry. Why else would Wade publish his problematic musings? Johnson believes that scientists and journalists should self-censor speculations or as-yet unproven theories that could exacerbate societal injustices. He writes, “False scientific conclusions, often those that justify certain well-entrenched beliefs, can impact peoples’ lives for decades to come, especially when policy decisions are based on their findings.” The question this position raises is how certain we can be that any scientific “conclusion”—Wade would likely characterize it as an exploration—is indeed false before it’s been made public and become the topic of further discussion and research.

Johnson’s is the leading contender for the title of most devastating critique of A Troublesome Inheritance, and he makes several excellent points that severely undermine parts of Wade’s case for natural selection playing a critical role in recent historical developments. But, like H. Allen Orr’s critique in The New York Review, the first runner-up in the contest, Johnson’s essay is oozing with condescension and startlingly unselfconscious sanctimony. These reviewers profess to be standing up for science even as they ply their readers with egregious ad hominem rhetoric (Wade is just a science writer, not a scientist) and arguments from adverse consequences (racist groups are citing Wade’s book in support of their agendas), thereby underscoring another of Wade’s arguments—that the case against racial differences in social behavior is at least as ideological as it is scientific. Might the principle that researchers should go public with politically sensitive ideas or findings only after they’ve reached some threshold of wider acceptance end up stifling free inquiry? And, if Wade’s theories really are as unlikely to bear empirical or conceptual fruit as his critics insist, shouldn’t the scientific case against them be enough? Isn’t all the innuendo and moral condemnation superfluous—maybe even a little suspicious?

            White supremacists may get some comfort from parts of Wade’s book, but if they read from cover to cover they’ll come across plenty of passages to get upset about. In addition to the suggestion that Asians are more intelligent than Caucasians, there’s the matter of the entire eighth chapter, which describes a scenario for how Ashkenazi Jews became even more intelligent than Asians and even more creative and better suited to urban institutions than Caucasians of Northern European ancestry. Wade also points out more than once that the genetic differences between the races are based, not on the presence or absence of single genes, but on clusters of alleles occurring with varying frequencies. He insists that

the significant differences are those between societies, not their individual members. But minor variations in social behavior, though barely perceptible, if at all, in an individual, combine to create societies of very different character. (244)

In other words, none of Wade’s speculations, nor any of the findings he reports, justifies discriminating against any individual because of his or her race. At best, there would only ever be a slightly larger probability that an individual will manifest any trait associated with people of the same ancestry. You’re still much better off reading the details of the résumé. Critics may dismiss as mere lip service Wade’s disclaimers about how “Racism and discrimination are wrong as a matter of principle, not of science” (7), and how the possibility of genetic advantages in certain traits “does not of course mean that Europeans are superior to others—a meaningless term in any case from an evolutionary perspective” (238).  But if Wade is secretly taking delight in the success of one race over another, it’s odd how casually he observes that “the forces of differentiation seem now to have reversed course due to increased migration, travel and intermarriage” (71).
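To get a rough sense of what “a slightly larger probability” amounts to, consider a purely hypothetical back-of-the-envelope calculation (the 0.2-standard-deviation gap below is an illustrative assumption, not a figure Wade reports). If a trait is normally distributed in two groups with a common standard deviation σ and means that differ by 0.2σ, the chance that a randomly chosen member of the higher-mean group scores above a randomly chosen member of the other is

$$P(X > Y) \;=\; \Phi\!\left(\frac{0.2\,\sigma}{\sigma\sqrt{2}}\right) \;=\; \Phi(0.14) \;\approx\; 0.56,$$

barely better than a coin flip. Group membership, in other words, tells you almost nothing about the individual in front of you, which is why the résumé remains the far better guide.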

Wade does of course have to cite some evidence, indirect though it may be, in support of his speculations. First, he covers several genomic studies showing that, contrary to much earlier scholarship, populations of various regions of the globe are genetically distinguishable. Race, in other words, is not merely a social construct, as many have insisted. He then moves on to research suggesting that a significant portion of the human genome reveals evidence of positive selection recently enough to have affected regional populations differently. Joshua Akey’s 2009 review of multiple studies on markers of recent evolution is central to his argument. Wade interprets Akey’s report as suggesting that as much as 14 percent of the human genome shows signs of recent selection. In his review, Orr insists this is a mistake, putting the number at 8 percent.

Steven Pinker, who discusses Akey’s paper in his 2011 book The Better Angels of Our Nature, likewise takes the number to be 8 percent rather than 14. But even that lower proportion is significant. Pinker, an evolutionary psychologist, stresses just how revolutionary this finding might be.

Some journalists have uncomprehendingly lauded these results as a refutation of evolutionary psychology and what they see as its politically dangerous implication of a human nature shaped by adaptation to a hunter-gatherer lifestyle. In fact the evidence for recent selection, if it applies to genes with effects on cognition and emotion, would license a far more radical form of evolutionary psychology, one in which minds have been biologically shaped by recent environments in addition to ancient ones. And it could have the incendiary implication that aboriginal and immigrant populations are less biologically adapted to the demands of modern life than populations that have lived in literate societies for millennia. (614)

Contra critics who paint him as a crypto-supremacist, it’s quite clearly that “far more radical form of evolutionary psychology” Wade is excited about. That’s why he’s exasperated by what he sees as Pinker’s refusal, out of fear of its political ramifications, to admit that the case for that form is strong enough to warrant pursuing it further. Pinker does consider much of the same evidence as Wade, but where Wade sees only clear support Pinker sees several intractable complications. Indeed, the section of Better Angels where Pinker discusses recent evolution is an important addendum to Wade’s book, and it must be noted that Pinker doesn’t rule out the possibility of regional selection for social behaviors. He simply says that “for the time being, we have no need for that hypothesis” (622).

Wade is also able to point to one gene that has already been identified whose alleles correspond to varying frequencies of violent behavior. The MAO-A gene comes in high- and low-activity varieties, and the low-activity version is more common among certain ethnic groups, like sub-Saharan Africans and Maoris. But, as Pinker points out, a majority of Chinese men also have the low-activity version of the gene, and they aren’t known for being particularly prone to violence. So the picture isn’t straightforward. Aside from the Ashkenazim, Wade cites another well-documented case in which selection for behavioral traits could have played an important role. In his book A Farewell to Alms, Gregory Clark presents an impressive collection of historical data suggesting that in the lead-up to the Industrial Revolution in England, people with personality traits that would likely have contributed to the rapid change were rewarded with more money, and people with more money had more children. The children of the wealthy would quickly overpopulate the ranks of the upper classes, and thus large numbers of them would inevitably descend into the lower ranks. The effect of this “ratchet of wealth” (180), as Wade calls it, after multiple generations would be genes for behaviors like impulse control, patience, and thrift cascading throughout the population, priming it for the emergence of historically unprecedented institutions.

Wade acknowledges that Clark’s theory awaits direct confirmation through the discovery of actual alleles associated with the behavioral traits he describes. But he points to experiments with artificial selection that suggest the time-scale Clark considers, about 24 generations, would have been sufficient to effect measurable changes. In his critical review, though, Johnson counters that natural selection is much slower than artificial selection, and he shows that Clark’s own numbers demonstrate a rapid attenuation of the effects of selection. Pinker points to other shortcomings in the argument, like the number of cases in which institutions changed and populations exploded in periods too short to have seen any significant change in allele frequencies. Wade isn’t swayed by any of these objections, which he takes on one by one, contrary to Orr’s characterization of the disagreement. As of now, the debate is ongoing. It may not be settled conclusively until scientists have a much better understanding of how genes work to influence behavior, which Wade estimates could take decades.
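For readers wondering how roughly two dozen generations could, even in principle, shift a heritable behavioral trait, the standard breeder’s equation from quantitative genetics offers a sketch of the arithmetic (the heritability and selection-differential values below are illustrative assumptions, not numbers taken from Clark or Wade). The per-generation response R to selection is the trait’s narrow-sense heritability h² times the selection differential S:

$$R = h^{2} S, \qquad \text{e.g.}\;\; R \approx 0.3 \times 0.1\,\sigma = 0.03\,\sigma \;\text{per generation}, \qquad 24 \times 0.03\,\sigma \approx 0.7\,\sigma.$$

A cumulative shift of that size would be noticeable at the population level, but the calculation assumes the selection differential and heritability hold steady across all twenty-four generations, which is precisely what Johnson’s attenuation argument and Pinker’s counterexamples call into question.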

Pinker is not known for being politically correct, but Wade may have a point when he accuses him of not following the evidence to the most likely conclusions. “The fact that a hypothesis is politically uncomfortable,” Pinker writes, “does not mean that it is false, but it does mean that we should consider the evidence very carefully before concluding that it is true” (614). This sentiment echoes the position taken by Johnson: Hold off going public with sensitive ideas until you’re sure they’re right. But how can we ever be sure whether an idea has any validity if we’re not willing to investigate it? Wade’s case for natural selection operating through changing institutions during recorded history isn’t entirely convincing, but neither is it completely implausible. The evidence that would settle the issue simply hasn’t been discovered yet. But neither is there any evidence in Wade’s book to support the conclusion that his interest in the topic is political as opposed to purely scientific. “Each gene under selection,” he writes, “will eventually tell a fascinating story about some historical stress to which the population was exposed and then adapted” (105). Fascinating indeed, however troubling those stories may prove.

            Is the best way to handle troublesome issues like the possible role of genes in behavioral variations between races to declare them off-limits to scientists until the evidence is incontrovertible? Might this policy come with the risk that avoiding the topic now will make it all too easy to deny any evidence that does emerge later? If genes really do play a role in violence and impulse-control, then we may need to take that into account when we’re devising solutions to societal inequities.

Genes are not gods whose desires must be bowed to. But neither are they imaginary forces that will go away if we just ignore them. The challenge of dealing with possible biological differences also arises in the context of gender. Because women continue to earn smaller incomes on average than men and are underrepresented in science and technology fields, and because the discrepancy is thought to be the product of discrimination and sexism, many scholars argue that any research into biological factors that may explain these outcomes is merely an effort at rationalizing injustice. The problem is that the evidence for biological differences in behavior between the genders is much stronger than it is for differences between populations from various regions. We can ignore these findings—and perhaps even condemn the scientists who conduct the studies—because they don’t jibe with our preferred explanations. But solutions based on willful ignorance have little chance of being effective.

            The sad fact is that scientists and academics have nothing even resembling a viable moral framework for discussing biological behavioral differences. Their only recourse is to deny and inveigh. The quite reasonable fear is that warnings like Wade’s about how the variations are subtle and may not exist at all in any given individual will go unheeded as the news of the findings is disseminated, and dumbed-down versions of the theories will be coopted in the service of reactionary agendas. A study reveals that women respond more readily to a baby’s vocalizations, and the headlines read “Genes Make Women Better Parents.” An allele associated with violent behavior is found to be more common in African Americans, and some politician cites it as evidence that the astronomical incarceration rate for black males is justifiable. But is censorship the answer? Average differences between genders in career preferences are directly relevant to any discussion of uneven representation in various fields. And it’s possible that people with a certain allele will respond differently to different types of behavioral intervention. As Carl Sagan explained, in a much different context, in his book The Demon-Haunted World, “we cannot have science in bits and pieces, applying it where we feel safe and ignoring it where we feel threatened—again, because we are not wise enough to do so” (297).

            Part of the reason the public has trouble understanding what differences between varying types of people may mean is that scientists are at odds with each other about how to talk about them. And with all the righteous declamations they can start to sound a lot like the talking heads on cable news shows. Conscientious and well-intentioned scholars have so thoroughly poisoned the well when it comes to biological behavioral differences that their possible existence is treated as a moral catastrophe. How should we discuss the topic? Working to convey the importance of the distinction between average and absolute differences may be a good start. Efforts to encourage people to celebrate diversity and to challenge the equating of genes with destiny are already popularly embraced. In the realm of policy, we might shift our focus from equality of outcome to equality of opportunity. It’s all too easy to find clear examples of racial disadvantages—in housing, in schooling, in the job market—that go well beyond simple head counting at top schools and in executive boardrooms. Slight differences in behavioral propensities can’t justify such blatant instances of unfairness. Granted, that type of unfairness is much more difficult to find when it comes to gender disparities, but the lesson there is that policies and agendas based on old assumptions might need to give way to a new understanding, not that we should pretend the evidence doesn’t exist or has no meaning.

            Wade believes it was safe for him to write about race because “opposition to racism is now well entrenched” in the Western world (7). In one sense, he’s right about that. Very few people openly profess a belief in racial hierarchies. In another sense, though, it’s just as accurate to say that racism is itself well entrenched in our society. Will A Troublesome Inheritance put the brakes on efforts to bring about greater social justice? This seems unlikely if only because the publication of every Bell Curve occasions the writing of another Mismeasure of Man.

  The unfortunate result is that where you stand on the issue will become yet another badge of political identity as we form ranks on either side. Most academics will continue to consider speculation irresponsible, apply a far higher degree of scrutiny to the research, and direct the purest moral outrage they can muster, while still appearing rational and sane, at anyone who dares violate the taboo. This represents the triumph of politics over science. And it ensures the further entrenchment of views on either side of the divide.

Despite the few superficial similarities between Wade’s arguments and those of racists and eugenicists of centuries past, we have to realize that our moral condemnation of what we suppose are his invidious extra-scientific intentions is itself born of extra-scientific ideology. Whether race plays a role in behavior is a scientific question. Our attitude toward that question, and toward the parts of the answer that trickle in despite our best efforts to keep it taboo, may well emerge from assumptions that no longer apply. So we must recognize that succumbing to the temptation to moralize when faced with scientific disagreement automatically makes hypocrites of us all. And we should bear in mind as well that insofar as racial and gender differences really do exist, it will only be through coming to a better understanding of them that we can hope to usher in a more just society for children of any and all genders and races.

Also read: 

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And: 

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

Dennis Junk

Are 1 in 5 Women Really Sexually Assaulted on College Campuses?

If you wanted to know how many young women are sexually assaulted on college campuses, you could easily devise a survey to ask a sample of them directly. But that’s not what advocates of stricter measures to prevent assault tend to do. Instead, they ask ambiguous questions they go on to interpret as suggesting an assault occurred. This almost guarantees wildly inflated numbers.

            If you were a university administrator and you wanted to know how prevalent a particular experience was for students on campus, you would probably conduct a survey that asked a few direct questions about that experience—foremost among them the question of whether the student had at some point had the experience you’re interested in. Obvious, right? Recently, we’ve been hearing from many news media sources, and even from President Obama himself, that one in five college women experience sexual assault at some time during their tenure as students. It would be reasonable to assume that the surveys used to arrive at this ratio actually asked the participants directly whether or not they had been assaulted. 

            But it turns out the web survey that produced the one-in-five figure did no such thing. Instead, it asked students whether they had had any of several categories of experience the study authors later classified as sexual assault, or attempted sexual assault, in their analysis. This raises the important question of how we should define sexual assault when we’re discussing the issue—along with the related question of why we’re not talking about a crime that’s more clearly defined, like rape. 

Of course, whatever you call it, sexual violence is such a horrible crime that most of us are willing to forgive anyone who exaggerates the numbers or paints an overly frightening picture of reality in an attempt to prevent future cases. (The issue is so serious that PolitiFact refrained from applying their trademark Truth-O-Meter to the one-in-five figure.) 

            But there are four problems with this attitude. The first is that for every supposed assault there is an alleged perpetrator. Dramatically overestimating the prevalence of the crime comes with the attendant risk of turning public perception against the accused, making it more difficult for the innocent to convince anyone of their innocence. 

            The second problem is that by exaggerating the danger in an effort to protect college students we’re sabotaging any opportunity these young adults may have to make informed decisions about the risks they take on. No one wants students to die in car accidents either, but we don’t manipulate the statistics to persuade them one in five drivers will die in a crash before they graduate from college. 

            The third problem is that going to college and experimenting with sex are for many people a wonderful set of experiences they remember fondly for the rest of their lives. Do we really want young women to barricade themselves in their dorms? Do we want young men to feel like they have to get signed and notarized documentation of consent before they try to kiss anyone? The fourth problem I’ll get to in a bit.

            We need to strike some appropriate balance in our efforts to raise awareness without causing paranoia or inspiring unwarranted suspicion. And that balance should be represented by the results of our best good-faith effort to arrive at as precise an understanding of the risk as our most reliable methods allow. For this purpose, the Department of Justice’s Campus Sexual Assault Study, the source of the oft-cited statistic, is all but completely worthless. It has limitations, to begin with, when it comes to representativeness, since it surveyed students on just two university campuses. And, while the overall sample was chosen randomly, the 42% response rate implies a great deal of self-selection on the part of the participants. The researchers did compare late responders to early ones to see if there was a systematic difference in their responses. But this doesn’t by any means rule out the possibility that many students chose categorically not to respond because they had nothing to say, and therefore had no interest in the study. (Some may have even found it offensive.) These are difficulties common to this sort of simple web-based survey, and they make interpreting the results problematic enough to recommend against their use in informing policy decisions.

            The biggest problems with the study, however, are not with the sample but with the methods. The survey questions appear to have been deliberately designed to generate inflated incidence rates. The basic strategy of avoiding direct questions about whether the students had been the victims of sexual assault is often justified with the assumption that many young people can’t be counted on to know what actions constitute rape and assault. But attempting to describe scenarios in survey items to get around this challenge opens the way for multiple interpretations and discounts the role of countless contextual factors. The CSA researchers write, “A surprisingly large number of respondents reported that they were at a party when the incident happened.” Cathy Young, a contributing editor at Reason magazine who analyzed the study all the way back in 2011, wrote that

the vast majority of the incidents it uncovered involved what the study termed “incapacitation” by alcohol (or, rarely, drugs): 14 percent of female respondents reported such an experience while in college, compared to six percent who reported sexual assault by physical force. Yet the question measuring incapacitation was framed ambiguously enough that it could have netted many “gray area” cases: “Has someone had sexual contact with you when you were unable to provide consent or stop what was happening because you were passed out, drugged, drunk, incapacitated, or asleep?” Does “unable to provide consent or stop” refer to actual incapacitation – given as only one option in the question – or impaired judgment?  An alleged assailant would be unlikely to get a break by claiming he was unable to stop because he was drunk.

This type of confusion is why it’s important to design survey questions carefully. That the items in the CSA study failed to make the kind of fine distinctions that would allow for more conclusive interpretations suggests the researchers had other goals in mind.

            The researchers’ use of the blanket term “sexual assault,” and their grouping of attempted with completed assaults, are equally suspicious. Any survey designer cognizant of all the difficulties of web surveys would likely try to narrow the focus of the study as much as possible, and they would also try to eliminate as many sources of confusion with regard to definitions or descriptions as possible. But, as Young points out,

The CSA Study’s estimate of sexual assault by physical force is somewhat problematic as well – particularly for attempted sexual assaults, which account for nearly two-thirds of the total. Women were asked if anyone had ever had or attempted to have sexual contact with them by using force or threat, defined as “someone holding you down with his or her body weight, pinning your arms, hitting or kicking you, or using or threatening to use a weapon.” Suppose that, during a make-out session, the man tries to initiate sex by rolling on top of the woman, with his weight keeping her from moving away – but once she tells him to stop, he complies. Would this count as attempted sexual assault?

The simplest way to get around many of these difficulties would have been to ask the survey participants directly whether they had experienced the category of crime the researchers were interested in. If the researchers were concerned that the students might not understand that being raped while drunk still counts as rape, why didn’t they just ask the participants a question to that effect? It’s a simple enough question to devise.

            The study did pose a follow-up question to participants it classified as victims of forcible assault, the responses to which hint at the students’ actual thoughts about the incidents. It turns out 37 percent of so-called forcible assault victims explained that they hadn’t contacted law enforcement because they didn’t think the incident constituted a crime. That bears repeating: more than a third of the students the study says were forcibly assaulted didn’t think any crime had occurred. With regard to another category of victims, those of incapacitated assault, Young writes, “Not surprisingly, three-quarters of the female students in this category did not label their experience as rape.” Of those the study classified as actually having been raped while intoxicated, only 37 percent believed they had in fact been raped. Nearly two-thirds of the women the study labels as incapacitated rape victims didn’t believe they had been raped. Why so much disagreement on such a serious issue? Of the entire incapacitated sexual assault victim category, Young writes,

Two-thirds said they did not report the incident to the authorities because they didn’t think it was serious enough. Interestingly, only two percent reported having suffered emotional or psychological injury – a figure so low that the authors felt compelled to include a footnote asserting that the actual incidence of such trauma was undoubtedly far higher.

So the largest category making up the total one-in-five statistic is predominantly composed of individuals who didn’t think what happened to them was serious enough to report. And nearly all of them came away unscathed, both physically and psychologically.

            The impetus behind the CSA study was a common narrative about a so-called “rape culture” in which sexual violence is accepted as normal and young women fail to report incidents because they’re convinced you’re just supposed to tolerate it. That was the researchers’ rationale for using their own classification scheme for the survey participants’ experiences even when it was at odds with the students’ beliefs. But researchers have been doing this same dance for thirty years. As Young writes,

When the first campus rape studies in the 1980s found that many women labeled as victims by researchers did not believe they had been raped, the standard explanation was that cultural attitudes prevent women from recognizing forced sex as rape if the perpetrator is a close acquaintance. This may have been true twenty-five years ago, but it seems far less likely in our era of mandatory date rape and sexual assault workshops and prevention programs on college campuses.

The CSA also surveyed a large number of men, almost none of whom admitted to assaulting women. The researchers hypothesize that the men may have feared the survey wasn’t really anonymous, but that would mean they knew the behaviors in question were wrong. Again, if the researchers are really worried about mistaken beliefs regarding the definition of rape, they could investigate the issue with a few added survey items.

            The huge discrepancies between incidences of sexual violence as measured by researchers and as reported by survey participants become even more suspicious in light of the history of similar studies. Those campus rape studies Young refers to from the 1980s produced a ratio of one in four. Their credibility was likewise undermined by later surveys that found that most of the supposed victims didn’t believe they’d been raped, and around forty percent of them went on to have sex with their alleged assailants again. A more recent study by the CDC used similar methods—a phone survey with a low response rate—and concluded that one in five women has been raped at some time in her life. Looking closer at this study, feminist critic and critic of feminism Christina Hoff Sommers attributes this finding as well to “a non-representative sample and vaguely worded questions.” It turns out activists have been conducting different versions of this same survey, and getting similarly inflated results, for decades.

            Sommers challenges the CDC findings in a video everyone concerned with the issue of sexual violence should watch. We all need to understand that well-intentioned and intelligent people can, and often do, get carried away with activism that seems to have laudable goals but ends up doing more harm than good. Some people even build entire careers on this type of crusading. And PR has become so sophisticated that we never need to let a shortage, or utter lack, of evidence keep us from advocating for our favorite causes. But there’s still a fourth problem with crazily exaggerated risk assessments—they obfuscate issues of real importance, making it more difficult to come up with real solutions. As Sommers explains,

To prevent rape and sexual assault we need state-of-the-art research. We need sober estimates. False and sensationalist statistics are going to get in the way of effective policies. And unfortunately, when it comes to research on sexual violence, exaggeration and sensation are not the exception; they are the rule. If you hear about a study that shows epidemic levels of sexual violence against American women, or college students, or women in the military, I can almost guarantee the researchers used some version of the defective CDC methodology. Now by this method, known as advocacy research, you can easily manufacture a women’s crisis. But here’s the bottom line: this is madness. First of all it trivializes the horrific pain and suffering of survivors. And it sends scarce resources in the wrong direction. Sexual violence is too serious a matter for antics, for politically motivated posturing. And right now the media, politicians, rape culture activists—they are deeply invested in these exaggerated numbers.

So while more and more normal, healthy, and consensual sexual practices are considered crimes, actual acts of exploitation and violence are becoming all the more easily overlooked in the atmosphere of paranoia. And college students face the dilemma of either risking assault or accusation by going out to enjoy themselves or succumbing to the hysteria and staying home, missing out on some of the richest experiences college life has to offer.

            One in five is a truly horrifying ratio. As conservative crime researcher Heather Mac Donald points out, “Such an assault rate would represent a crime wave unprecedented in civilized history. By comparison, the 2012 rape rate in New Orleans and its immediately surrounding parishes was .0234 percent; the rate for all violent crimes in New Orleans in 2012 was .48 percent.” I don’t know how a woman can pass a man on a sidewalk after hearing such numbers and not look at him with suspicion. Most of the reforms rape culture activists are pushing for now chip away at due process and strip away the rights of the accused. No one wants to make coming forward any more difficult for actual victims, but our first response to anyone making such a grave accusation—making any accusation—should be skepticism. Victims suffer severe psychological trauma, but then so do the falsely accused. The strongest evidence of an honest accusation is often the fact that the accuser must incur some cost in making it. That’s why we say victims who come forward are heroic. That’s the difference between a victim and a survivor.

            Trumpeting crazy numbers creates the illusion that a large percentage of men are monsters, and this fosters an us-versus-them mentality that obliterates any appreciation for the difficulty of establishing guilt. That would be a truly scary world to live in. Fortunately, we in the US don’t really live in such a world. Sex doesn’t have to be that scary. It’s usually pretty damn fun. And the vast majority of men you meet—the vast majority of women as well—are good people. In fact, I’d wager most men would step in if they were around when some psychopath was trying to rape someone.

Also read:  

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

And: 

VIOLENCE IN HUMAN EVOLUTION AND POSTMODERNISM'S CAPTURE OF ANTHROPOLOGY

Dennis Junk

Why I Won't Be Attending the Gender-Flipped Shakespeare Play

Gender-flipped performances are often billed as experiments to help audiences reconsider and look more deeply into what we consider the essential characteristics of males and females. But not too far beneath the surface, you find that they tend to be more like ideological cudgels for those who would deny biological differences.

The Guardian’s “Women’s Blog” reports that “Gender-flips used to challenge sexist stereotypes are having a moment,” and this is largely owing, author Kira Cochrane suggests, to the fact that “Sometimes the best way to make a point about sexism is also the simplest.” This simple approach to making a point consists of taking a work of art or piece of advertising and swapping the genders featured in them. Cochrane goes on to point out that “the gender-flip certainly isn’t a new way to make a political point,” and notes that “it’s with the recent rise of feminist campaigning and online debate that this approach has gone mainstream.”

What is the political point gender-flips are making? As a dancer in a Jennifer Lopez video that reverses the conventional gender roles asks, “Why do men always objectify the women in every single video?” Australian comedian Christiaan Van Vuuren explains that he posed for a reproduction of a GQ cover originally featuring a sexy woman to call attention to the “over-sexualization of the female body in the high-fashion world.” The original cover photo of Miranda Kerr is undeniably beautiful. The gender-flipped version is funny. The obvious takeaway is that we look at women and men differently (gasp!). When women strike an alluring pose, or don revealing clothes, it’s sexy. When men try to do the same thing, it’s ridiculous. Feminists insist that this objectification or over-sexualization of women is a means of oppression. But is it? And are gender-flips simple ways of making a point, or just cheap gimmicks? 

Tonight, my alma mater IPFW is hosting a production called “Juliet and Romeo,” a gender-flipped version of Shakespeare’s most recognizable play. The lead on the Facebook page for the event asks us to imagine that “Juliet is instead a bold Montague who courts a young, sheltered Capulet by the name of Romeo.” Lest you fear the production is just a stunt to make a political point about gender, the hosts have planned a “panel discussion focusing on Shakespeare, gender, and language.” Many former classmates and teachers, most of whom I consider friends and a couple of whom I consider good friends, are either attending or participating in the event. But I won’t be going.

I don’t believe the production is being put on in the spirit of open-minded experimentation. Like the other gender-flip examples, the purpose of staging “Juliet and Romeo” is to make a point about stereotypes. And I believe this proclivity toward using literature as fodder to fuel ideological agendas is precisely what’s most wrong with English lit programs in today’s universities. There have to be better ways to foster interest in great works than by letting activists posing as educators use them as anvils against which to hammer agendas into students’ heads.

You may take the position that my objections would carry more weight were I to attend the event before rendering judgment on it. But I believe the way to approach literature is as an experience, not as a static set of principles or stand-alone abstractions. And I don’t want thoughts about gender politics to intrude on my experience of Shakespeare—especially when those thoughts are of such dubious merit. I want to avoid the experience of a gender-flipped production of Shakespeare because I believe scholarship should push us farther into literature—enhance our experience of it, make it more immediate and real—not cast us out of it by importing elements of political agendas and making us cogitate about some supposed implications for society of what’s going on before our eyes.

Regarding that political point, I see no contradiction in accepting, even celebrating, our culture’s gender roles while at the same time supporting equal rights for both genders. Sexism is a belief that one gender is inferior to the other. Demonstrating that people of different genders tend to play different roles in no way proves that either is being treated as inferior. As for objectification and over-sexualization, a moment’s reflection ought to make clear that the feminists are getting this issue perfectly backward. Physical attractiveness is one of the avenues through which women exercise power over men. Miranda Kerr got paid handsomely for that GQ cover. And what could be more arrantly hypocritical than Jennifer Lopez complaining about objectification in music videos? She owes her celebrity in large part to her willingness to allow herself to be objectified. The very concept of objectification is something we accept only out of long familiarity: people are sexually aroused by other people, not objects.

I’m not opposed to having a discussion about gender roles and power relations, but if you have something to say, then say it. I’m not even completely opposed to discussing gender in the context of Shakespeare’s plays. What I am opposed to is people hijacking our experience of Shakespeare to get some message across, people toeing the line by teaching that literature is properly understood by “looking at it through the lens” of one or another well-intentioned but completely unsupported ideology, and people misguidedly making sex fraught and uncomfortable for everyone. I doubt I’m alone in turning to literature, at least in part, to get away from the sort of puritanism one expects to find in church. Guilt-tripping guys and encouraging women to walk around with a chip on their shoulders must be one of the least effective ways we’ve ever come up with to get people to respect each other more.

But, when you guys do a performance of the original Shakespeare, you can count on me being there to experience it. 

Update:

The link to this post on Facebook generated some heated commentary. Some comments were denials of ideological intentions on the part of those putting on the event. Some were mischaracterizations based on presumed “traditionalist” associations with my position. Some made the point that Shakespeare himself played around with gender, so it should be okay for others to do the same with his work. In the end, I did feel compelled to attend the event because I had taken such a strong position.

Having flip-flopped and attended the event, I have to admit I enjoyed it. All the people involved were witty, charming, intellectually stimulating, and pretty much all-around delightful.

But, in keeping with my original complaint, it was quite clear—and at two points explicitly stated—that the "experiment" entailed using the play as a springboard for a discussion of current issues like marriage rights. Everyone, from the cast to audience members, was quick to insist after the play that they felt it was completely natural and convincing. But gradually more examples of "awkward," "uncomfortable," or "weird" lines or scenes came up. Shannon Bischoff, a linguist one commenter characterized as the least politically correct guy I’d ever meet, did in fact bring up a couple of aspects of the adaptation that he found troubling. But even he paused after saying something felt weird, as if to say, "Is that alright?" (Being weirded out by a 15-year-old Romeo being pursued by a Juliet in her late teens was okay because it was about age, not gender.)

The adapter himself, Jack Cant, said at one point that, though he was tempted to rewrite some of the parts that seemed really strange, he decided to leave them in because he wanted to let people be uncomfortable. The underlying assumption of the entire discussion was that gender is a "social construct" and that our expectations are owing solely to "stereotypes." And the purpose of the exercise was for everyone to be brought face-to-face with their assumptions about gender so that they could expiate them. I don't think any fair-minded attendee could deny the agreed-upon message was that this is a way to help us do away with gender roles—and that doing so would be a good thing. (If there was any doubt, Jack’s wife eliminated it when she stood up from her seat in the audience to say she wondered if Jack had learned enough from the exercise to avoid applying gender stereotypes to his nieces.) And this is exactly what I mean by ideology. Sure, Shakespeare played around with gender in As You Like It and Twelfth Night. But he did it for dramatic or comedic effect primarily, and to send a message secondarily—or more likely not at all.

For the record, I think biology plays a large (but of course not exclusive) part in gender roles, I enjoy and celebrate gender roles (love being a man; love women who love being women), but I also support marriage rights for homosexuals and try to be as accepting as I can of people who don't fit the conventional roles.

To make one further clarification: whether you support an anti-gender agenda and whether you think Shakespeare should be used as a tool for this or any other ideological agenda are two separate issues. I happen not to support anti-genderism. My main point in this post, however, is that ideology—good, bad, valid, invalid—should not play a part in literature education. Because, for instance, while students are being made to feel uncomfortable about their unexamined gender assumptions, they're not feeling uncomfortable about, say, whether Romeo might be rushing into marriage too hastily, or whether Juliet will wake up in time to keep him from drinking the poison—you know, the actual play.

Whether Shakespeare was sending a message or not, I'm sure he wanted first and foremost for his audiences to respond to the characters he actually created. And we shouldn't be using "lenses" to look at plays; we should be experiencing them. They're not treatises. They're not coded allegories. And, as old as they may be to us, every generation of students gets to discover them anew.

We can discuss politics and gender or whatever you want. There's a time and a place for that and it's not in a lit classroom. Sure, let's encourage students to have open minds about gender and other issues, and let's help them to explore their culture and their own habits of thought. There are good ways to do that—ideologically adulterated Shakespeare is not one of them.

Also read:

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

And: 

HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

And: 

THE ISSUE WITH JURASSIC WORLD NO ONE HAS THE BALLS TO TALK ABOUT

Dennis Junk

“The World until Yesterday” and the Great Anthropology Divide: Wade Davis’s and James C. Scott’s Bizarre and Dishonest Reviews of Jared Diamond’s Work

The field of anthropology is divided into two rival factions, the postmodernists and the scientists—though the postmodernists like to insist they’re being scientific as well. The divide can be seen in critiques of Jared Diamond’s “The World until Yesterday.”

Cultural anthropology has for some time been divided into two groups. The first attempts to understand cultural variation empirically by incorporating it into theories of human evolution and ecological adaptation. The second merely celebrates cultural diversity, and its members are quick to attack any findings or arguments by those in the first group that can in any way be construed as unflattering to the cultures being studied. (This dichotomy is intended to serve as a useful, and only slight, oversimplification.)

Jared Diamond’s scholarship in anthropology places him squarely in the first group. Yet he manages to thwart many of the assumptions held by those in the second group because he studiously avoids the sins of racism and biological determinism they insist every last member of the first group is guilty of. Rather than seeing his work as an exemplar or as evidence that the field is amenable to scientific investigation, however, members of the second group invent crimes and victims so they can continue insisting there’s something immoral about scientific anthropology (though the second group, oddly enough, claims that designation as well).

            Diamond is not an anthropologist by training, but his Pulitzer Prize-winning book Guns, Germs, and Steel, in which he sets out to explain why some societies became technologically advanced conquerors over the past 10,000 years while others maintained their hunter-gatherer lifestyles, became a classic in the field almost as soon as it was published in 1997. His interest in cultural variation arose in large part out of his experiences traveling through New Guinea, the most culturally diverse region of the planet, to conduct ornithological research. By the time he published his first book about human evolution, The Third Chimpanzee, at age 54, he’d spent more time among people from a more diverse set of cultures than many anthropologists do over their entire careers.

In his latest book, The World until Yesterday: What Can We Learn from Traditional Societies?, Diamond compares the lifestyles of people living in modern industrialized societies with those of people who rely on hunting and gathering or horticultural subsistence strategies. His first aim is simply to highlight the differences, since the way most us live today is, evolutionarily speaking, a very recent development; his second is to show that certain traditional practices may actually lead to greater well-being, and may thus be advantageous if adopted by those of us living in advanced civilizations.

            Obviously, Diamond’s approach has certain limitations, chief among them that it affords him little space for in-depth explorations of individual cultures. Instead, he attempts to identify general patterns that apply to traditional societies all over the world. What this means in the context of the great divide in anthropology is that no sooner had Diamond set pen to paper than he’d fallen afoul of the most passionately held convictions of the second group, who bristle at any discussion of universal trends in human societies. The anthropologist Wade Davis’s review of The World until Yesterday in The Guardian is extremely helpful for anyone hoping to appreciate the differences between the two camps because it exemplifies nearly all of the features of this type of historical particularism, with one exception: it’s clearly, even gracefully, written. But this isn’t to say Davis is at all straightforward about his own positions, which you have to read between the lines to glean. Situating the commitment to avoid general theories and focus instead on celebrating the details in a historical context, Davis writes,

This ethnographic orientation, distilled in the concept of cultural relativism, was a radical departure, as unique in its way as was Einstein’s theory of relativity in the field of physics. It became the central revelation of modern anthropology. Cultures do not exist in some absolute sense; each is but a model of reality, the consequence of one particular set of intellectual and spiritual choices made, however successfully, many generations before. The goal of the anthropologist is not just to decipher the exotic other, but also to embrace the wonder of distinct and novel cultural possibilities, that we might enrich our understanding of human nature and just possibly liberate ourselves from cultural myopia, the parochial tyranny that has haunted humanity since the birth of memory.

This stance with regard to other cultures sounds viable enough—it even seems admirable. But Davis is saying something more radical than you may think at first glance. He’s claiming that cultural differences can have no explanations because they arise out of “intellectual and spiritual choices.” It must be pointed out as well that he’s profoundly confused about how relativity in physics relates to—or doesn’t relate to—cultural relativity in anthropology. Einstein discovered that time is relative to an observer’s velocity, while the speed of light remains constant for every observer, so the faster one travels, the more slowly time advances relative to a stationary observer. Since this rule applies equally everywhere in the universe, the theory actually works much better as an analogy for the types of generalization Diamond tries to discover than it does for the idea that no such generalizations can be discovered. Cultural relativism is not a “revelation” about whether cultures can be said to exist in any absolute sense; it’s a principle that enjoins us to try to understand other cultures on their own terms, not as deviations from our own. Diamond appreciates this principle—he just doesn’t take it to as great an extreme as Davis and the other anthropologists in his camp.

            The idea that cultures don’t exist in any absolute sense implies that comparing one culture to another won’t result in any meaningful or valid insights. But this isn’t a finding or a discovery, as Davis suggests; it’s an a priori conviction. For anthropologists in Davis’s camp, as soon as you start looking outside of a particular culture for an explanation of how it became what it is, you’re no longer looking to understand that culture on its own terms; you’re instead imposing outside ideas and outside values on it. So the simple act of trying to think about variation in a scientific way automatically makes you guilty of a subtle form of colonization. Davis writes,

The very premise of Guns, Germs, and Steel is that a hierarchy of progress exists in the realm of culture, with measures of success that are exclusively material and technological; the fascinating intellectual challenge is to determine just why the west ended up on top. In the posing of this question, Diamond evokes 19th-century thinking that modern anthropology fundamentally rejects. The triumph of secular materialism may be the conceit of modernity, but it does very little to unveil the essence of culture or to account for its diversity and complexity.

For Davis, comparison automatically implies assignment of relative values. But, if we agree that two things can be different without one being superior, we must conclude that Davis is simply being dishonest, because you don’t have to read beyond the Prologue to Guns, Germs, and Steel to find Diamond’s explicit disavowal of this premise that supposedly underlies the entire book:

…don’t words such as “civilization,” and phrases such as “rise of civilization,” convey the false impression that civilization is good, tribal hunter-gatherers are miserable, and history for the past 13,000 years has involved progress toward greater human happiness? In fact, I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in happiness. My own impression, from having divided my life between United States cities and New Guinea villages, is that the so-called blessings of civilization are mixed. For example, compared with hunter-gatherers, citizens of modern industrialized states enjoy better medical care, lower risk of death by homicide, and a longer life span, but receive much less social support from friendships and extended families. My motive for investigating these geographic differences in human societies is not to celebrate one type of society over another but simply to understand what happened in history. (18)

            For Davis and those sharing his postmodern ideology, this type of dishonesty is acceptable because they believe the political end of protecting indigenous peoples from exploitation justifies their deceitful means. In other words, they’re placing their political goals before their scholarly or scientific ones. Davis argues that the only viable course is to let people from various cultures speak for themselves, since facts and theories in the wrong hands will inevitably lubricate the already slippery slope to colonialism and exploitation. Even Diamond’s theories about environmental influences, in this light, can be dangerous. Davis writes,

In accounting for their simple material culture, their failure to develop writing or agriculture, he laudably rejects notions of race, noting that there is no correlation between intelligence and technological prowess. Yet in seeking ecological and climatic explanations for the development of their way of life, he is as certain of their essential primitiveness as were the early European settlers who remained unconvinced that Aborigines were human beings. The thought that the hundreds of distinct tribes of Australia might simply represent different ways of being, embodying the consequences of unique sets of intellectual and spiritual choices, does not seem to have occurred to him.

Davis is rather deviously suggesting a kinship between Diamond and the evil colonialists of yore, but the connection rests on a non sequitur, that positing environmental explanations of cultural differences necessarily implies primitiveness on the part of the “lesser” culture.

Davis doesn’t explicitly say anywhere in his review that all scientific explanations are colonialist, but once you rule out biological, cognitive, environmental, and climatic theories, well, there’s not much left. Davis’s rival explanation, such as it is, posits a series of collective choices made over the course of history, which in a sense must be true. But it merely raises the question of what precisely led the people to make those choices, and this question inevitably brings us back to all those factors Diamond weighs as potential explanations. Davis could have made the point that not every aspect of every culture can be explained by ecological factors—but Diamond never suggests otherwise. Citing the example of Kaulong widow strangling in The World until Yesterday, Diamond writes that there’s no reason to believe the practice is in any way adaptive and admits that it can only be “an independent historical cultural trait that arose for some unknown reason in that particular area of New Britain” (21).

I hope we can all agree that harming or exploiting indigenous peoples in any part of the world is wrong and that we should support the implementation of policies that protect them and their ways of life (as long as those ways don’t involve violations of anyone’s rights as a human—yes, that moral imperative supersedes cultural relativism, fears of colonialism be damned). But the idea that trying to understand cultural variation scientifically always and everywhere undermines the dignity of people living in non-Western cultures is the logical equivalent of insisting that trying to understand variations in peoples’ personalities through empirical methods is an affront to their agency and freedom to make choices as individuals. If the position of these political-activist anthropologists had any validity, it would undermine the entire field of psychology, and for that matter the social sciences in general. It’s safe to assume that the opacity that typifies these anthropologists’ writing is meant to protect their ideas from obvious objections like this one. 

As well as Davis writes, it’s nonetheless difficult to figure out what his specific problems with Diamond’s book are. At one point he complains, “Traditional societies do not exist to help us tweak our lives as we emulate a few of their cultural practices. They remind us that our way is not the only way.” Fair enough—but then he concludes with a passage that seems startlingly close to a summation of Diamond’s own thesis.

The voices of traditional societies ultimately matter because they can still remind us that there are indeed alternatives, other ways of orienting human beings in social, spiritual and ecological space… By their very existence the diverse cultures of the world bear witness to the folly of those who say that we cannot change, as we all know we must, the fundamental manner in which we inhabit this planet. This is a sentiment that Jared Diamond, a deeply humane and committed conservationist, would surely endorse.

On the surface, it seems like Davis isn’t even disagreeing with Diamond. What he’s not saying explicitly, however, but hopes nonetheless that we understand is that sampling or experiencing other cultures is great—but explaining them is evil.

            Davis’s review was published in January of 2013, and its main points have been echoed by several other anti-scientific anthropologists—but perhaps none so eminent as the Yale Professor of Anthropology and Political Science, James C. Scott, whose review, “Crops, Towns, Government,” appeared in the London Review of Books in November. After praising Diamond’s plea for the preservation of vanishing languages, Scott begins complaining about the idea that modern traditional societies offer us any evidence at all about how our ancestors lived. He writes of Diamond,

He imagines he can triangulate his way to the deep past by assuming that contemporary hunter-gatherer societies are ‘our living ancestors’, that they show what we were like before we discovered crops, towns and government. This assumption rests on the indefensible premise that contemporary hunter-gatherer societies are survivals, museum exhibits of the way life was lived for the entirety of human history ‘until yesterday’–preserved in amber for our examination.

Don’t be fooled by those lonely English quotation marks—Diamond never makes this mistake, nor does his argument rest on any such premise. Scott is simply being dishonest. In the first chapter of The World until Yesterday, Diamond explains why he wanted to write about the types of changes that took place in New Guinea between the first contact with Westerners in 1931 and today. “New Guinea is in some respects,” he writes, “a window onto the human world as it was until a mere yesterday, measured against a time scale of the 6,000,000 years of human evolution.” He follows this line with a parenthetical, “(I emphasize ‘in some respects’—of course the New Guinea Highlands of 1931 were not an unchanged world of yesterday)” (5-6). It’s clear he added this line because he was anticipating criticisms like Davis’s and Scott’s.

The confusion arises from Scott’s conflation of the cultures and lifestyles Diamond describes with the individuals representing them. Diamond assumes that factors like population size, social stratification, and level of technological advancement have a profound influence on culture. So, if we want to know about our ancestors, we need to look to societies living in conditions similar to the ones they must’ve lived in with regard to just these types of factors. In another bid to ward off the types of criticism he knew to expect from anthropologists like Scott and Davis, he includes a footnote in his introduction which explains precisely what he’s interested in.

By the terms “traditional” and “small-scale” societies, which I shall use throughout this book, I mean past and present societies living at low population densities in small groups ranging from a few dozen to a few thousand people, subsisting by hunting-gathering or by farming or herding, and transformed to a limited degree by contact with large, Westernized, industrial societies. In reality, all such traditional societies still existing today have been at least partly modified by contact, and could alternatively be described as “transitional” rather than “traditional” societies, but they often still retain many features and social processes of the small societies of the past. I contrast traditional small-scale societies with “Westernized” societies, by which I mean the large modern industrial societies run by state governments, familiar to readers of this book as the societies in which most of my readers now live. They are termed “Westernized” because important features of those societies (such as the Industrial Revolution and public health) arose first in Western Europe in the 1700s and 1800s, and spread from there overseas to many other countries. (6)

Scott goes on to take Diamond to task for suggesting that traditional societies are more violent than modern industrialized societies. This is perhaps the most incendiary point of disagreement between the factions on either side of the anthropology divide. The political activists worry that if anthropologists claim indigenous peoples are more violent, outsiders will take it as justification to pacify them, which has historically meant armed invasion and displacement. Since the stakes are so high, Scott has no compunctions about misrepresenting Diamond’s arguments. “There is, contra Diamond,” he writes, “a strong case that might be made for the relative non-violence and physical well-being of contemporary hunters and gatherers when compared with the early agrarian states.”

Well, no, not contra Diamond, who only compares traditional societies to modern Westernized states, like the ones his readers live in, not early agrarian ones. Scott is referring to Diamond's theories about the initial transition to states, claiming that interstate violence negates the benefits of any pacifying central authority. But it may still be better to live under the threat of infrequent state warfare than of much more frequent ambushes or retaliatory attacks by nearby tribes. Scott also suggests that records of high rates of enslavement in early states somehow undermine the case for more homicide in traditional societies, but again Diamond doesn’t discuss early states. Diamond would probably agree that slavery, in the context of his theories, is an interesting topic, but it's hardly the fatal flaw in his ideas Scott makes it out to be.

The misrepresentations extend beyond Diamond’s arguments to encompass the evidence he builds them on. Scott insists it’s all anecdotal, pseudoscientific, and extremely limited in scope. His biggest mistake here is to pull Steven Pinker into the argument, a psychologist whose name alone may tar Diamond’s book in the eyes of anthropologists who share Scott’s ideology. For anyone else, though, especially anyone who has actually read Pinker’s work, that name lends further credence to Diamond’s case. (Pinker has actually done the math on whether your chances of dying a violent death are better or worse in different types of society.) Scott writes,

Having chosen some rather bellicose societies (the Dani, the Yanomamo) as illustrations, and larded his account with anecdotal evidence from informants, he reaches the same conclusion as Steven Pinker in The Better Angels of Our Nature: we know, on the basis of certain contemporary hunter-gatherers, that our ancestors were violent and homicidal and that they have only recently (very recently in Pinker’s account) been pacified and civilised by the state. Life without the state is nasty, brutish and short.

In reality, both Diamond and Pinker rely on evidence from a herculean variety of sources going well beyond contemporary ethnographies. To cite just one example Scott neglects to mention, an article by Samuel Bowles published in the journal Science in 2009 examines the rates of death by violence at several prehistoric sites and shows that they’re startlingly similar to those found among modern hunter-gatherers. Insofar as Scott even mentions archeological evidence, it's merely to insist on its worthlessness. Anyone who reads The World until Yesterday after reading Scott’s review will be astonished by how nuanced Diamond’s section on violence actually is. Taking up almost a hundred pages, it is far more insightful and better supported than the essay that purports to undermine it. The section also shows, contra Scott, that Diamond is well aware of all the difficulties and dangers of trying to arrive at conclusions based on any one line of evidence—which is precisely why he follows as many lines as are available to him.

However, even if we accept that traditional societies really are more violent, it could still be the case that tribal conflicts are caused, or at least intensified, through contact with large-scale societies. In order to make this argument, though, political-activist anthropologists must shift their position from claiming that no evidence of violence exists to claiming that the evidence is meaningless or misleading. Scott writes,

No matter how one defines violence and warfare in existing hunter-gatherer societies, the greater part of it by far can be shown to be an effect of the perils and opportunities represented by a world of states. A great deal of the warfare among the Yanomamo was, in this sense, initiated to monopolise key commodities on the trade routes to commercial outlets (see, for example, R. Brian Ferguson’s Yanomami Warfare: A Political History, a strong antidote to the pseudo-scientific account of Napoleon Chagnon on which Diamond relies heavily).

It’s true that Ferguson puts forth a rival theory for warfare among the Yanomamö—and the political-activist anthropologists hold him up as a hero for doing so. (At least one Yanomamö man insisted, in response to Chagnon’s badgering questions about why they fought so much, that it had nothing to do with commodities—they raided other villages for women.) But Ferguson’s work hardly settles the debate. Why, for instance, do the patterns of violence appear in traditional societies all over the world, regardless of which state societies they’re supposedly in contact with? And state governments don’t just influence violence in an upward direction. As Diamond points out, “State governments routinely adopt a conscious policy of ending traditional warfare: for example, the first goal of 20th-Century Australian patrol officers in the Territory of Papua New Guinea, on entering a new area, was to stop warfare and cannibalism” (133-4).

What is the proper moral stance anthropologists should take with regard to people living in traditional societies? Should they make it their priority to report the findings of their inquiries honestly? Or should they prioritize their role as advocates for indigenous people’s rights? These are fair questions—and they take on a great deal of added gravity when you consider the history, not to mention the ongoing examples, of how indigenous peoples have suffered at the hands of peoples from Western societies. The answers hinge on how much influence anthropologists currently have on policies that impact traditional societies and on whether science, or Western culture in general, is by its very nature somehow harmful to indigenous peoples. Scott’s and Davis’s positions on both of these issues are clear. Scott writes,

Contemporary hunter-gatherer life can tell us a great deal about the world of states and empires but it can tell us nothing at all about our prehistory. We have virtually no credible evidence about the world until yesterday and, until we do, the only defensible intellectual position is to shut up.

Scott’s argument raises two further questions: when and from where can we count on the “credible evidence” to start rolling in? His “only defensible intellectual position” isn’t that we should reserve judgment or hold off trying to arrive at explanations; it’s that we shouldn’t bother trying to judge the merits of the evidence and that any attempts at explanation are hopeless. This isn’t an intellectual position at all—it’s an obvious endorsement of anti-intellectualism. What Scott really means is that he believes making questions about our hunter-gatherer ancestors off-limits is the only morally defensible position.

            It’s easy to conjure up mental images of the horrors inflicted on native peoples by western explorers and colonial institutions. But framing the history of encounters between peoples with varying levels of technological advancement as one long Manichean tragedy of evil imperialists having their rapacious and murderous way with perfectly innocent noble savages risks trivializing important elements of both types of culture. Traditional societies aren’t peaceful utopias. Western societies and Western governments aren’t mere engines of oppression. Most importantly, while it may be true that science can be—and sometimes is—coopted to serve oppressive or exploitative ends, there’s nothing inherently harmful or immoral about science, which can just as well be used to counter arguments for the mistreatment of one group of people by another. To anthropologists like Davis and Scott, human behavior is something to stand in spiritual awe of, indigenous societies something to experience religious guilt about, in any case not anything to profane with dirty, mechanistic explanations. But, for all their declamations about the evils of thinking that any particular culture can in any sense be said to be inferior to another, they have a pretty dim view of our own.

            It may be simple pride that makes it hard for Scott to accept that gold miners in Brazil weren’t sitting around waiting for some prominent anthropologist at the University of Michigan, or UCLA, or Yale, to publish an article in Science about Yanomamö violence to give them the justification they needed to use their superior weapons to displace the people living in prime locations. The sad fact is, if the motivation to exploit indigenous peoples is strong enough, and if the moral and political opposition isn’t sufficient, justifications will be found regardless of which anthropologist decides to publish on which topics. But the crucial point Scott misses is that our moral and political opposition cannot be founded on dishonest representations or willful blindness regarding the behaviors, good or bad, of the people we would protect. To understand why this is so, it’s only fitting that we turn to a passage in The World until Yesterday that Scott should have paid more attention to, not least because, in attempting to shout down an honest and humane scholar he disagrees with, he embarrassed himself with his childishness, embarrassed the London Review, which failed to properly fact-check his article, and did a disservice to the discipline of anthropology. “I sympathize with scholars outraged by the mistreatment of indigenous peoples,” Diamond writes,

But denying the reality of traditional warfare because of political misuse of its reality is a bad strategy, for the same reason that denying any other reality for any other laudable political goal is a bad strategy. The reason not to mistreat indigenous people is not that they are falsely accused of being warlike, but that it’s unjust to mistreat them. The facts about traditional warfare, just like the facts about any other controversial phenomenon that can be observed and studied, are likely eventually to come out. When they do come out, if scholars have been denying traditional warfare’s reality for laudable political reasons, the discovery of the facts will undermine the laudable political goals. The rights of indigenous people should be asserted on moral grounds, not by making untrue claims susceptible to refutation. (153-4)

Also read:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY


Nice Guys with Nothing to Say: Brett Martin’s Difficulty with “Difficult Men” and the Failure of Arts Scholarship

Brett Martin’s book “Difficult Men” contains fascinating sections about the history and politics behind some of our favorite shows. But whenever he reaches for deeper insights about the shows’ appeal, the results range from utterly banal to unwittingly comical. The reason for his failure is his reliance on politically motivated theorizing, which is all too fashionable in academia.

With his book Difficult Men: Behind the Scenes of a Creative Revolution: From “The Sopranos” and “The Wire” to “Mad Men” and “Breaking Bad”, Brett Martin shows that you can apply the whole repertoire of analytic tools furnished by contemporary scholarship in the arts to a cultural phenomenon without arriving at anything even remotely approaching an insight. Which isn’t to say the book isn’t worth reading: if you’re interested in the backstories of how cable TV series underwent their transformation to higher production quality, film-grade acting and directing, greater realism, and multiple, intricately interlocking plotlines, along with all the gossip surrounding the creators and stars, then you’ll be delighted to discover how good Martin is at delivering the dish. 

He had excellent access to some of the showrunners, seems to know everything about the ones he didn’t have access to anyway, and has a keen sense for the watershed moments in shows—as when Tony Soprano snuck away from scouting out a college with his daughter Meadow to murder a man, unceremoniously, with a smile on his face, despite the fears of HBO executives that audiences would turn against the lead character for doing so. And Difficult Men is in no way a difficult read. Martin’s prose is clever without calling too much attention to itself. His knowledge of history and pop culture rivals that of anyone in the current cohort of hipster sophisticates. And his enthusiasm for the topic radiates off the pages while not marring his objectivity with fanboyism. But if you’re more interested in the broader phenomenon of unforgivable male characters audiences can’t help loving, you’ll have to look elsewhere for any substantive discussion of it.

Difficult Men would have benefited from Martin being a more difficult man himself. Instead, he seems at several points to be apologizing on behalf of the show creators and their creations, simultaneously ecstatic at the unfettering of artistic freedom and skittish whenever bumping up against questions about what the resulting shows are reflecting about artists and audiences alike. He celebrates the shows’ shucking off of political correctness even as he goes out of his way to brandish his own PC bona fides. With regard to his book’s focus on men, for instance, he writes,

Though a handful of women play hugely influential roles in this narrative—as writers, actors, producers, and executives—there aren’t enough of them. Not only were the most important shows of the era run by men, they were also largely about manhood—in particular the contours of male power and the infinite varieties of male combat. Why that was had something to do with a cultural landscape still awash in postfeminist dislocation and confusion about exactly what being a man meant. (13)

Martin throws multiple explanations at the centrality of “male combat” in high-end series, but the basic fact that he suggests accounts for the prevalence of this theme across so many shows in TV’s Third Golden Age is that most of the artists working on the shows are afflicted with the same preoccupations.

In other words, middle-aged men predominated because middle-aged men had the power to create them. And certainly the autocratic power of the showrunner-auteur scratches a peculiarly masculine itch. (13)

Never mind that women make up a substantial portion of the viewership. If it ever occurred to Martin that this alleged “masculine itch” may have something to do with why men outnumber women in high-stakes competitive fields like TV scriptwriting, he knew better than to put the suspicion in writing.

            The centrality of dominant and volatile male characters in America’s latest creative efflorescence is in many ways a repudiation of the premises underlying the scholarship of the decades leading up to it. With women moving into the workplace after the Second World War, and with the rise of feminism in the 1970s, the stage was set for an experiment in how malleable human culture really was with regard to gender roles. How much change did society’s tastes undergo in the latter half of the twentieth century? Despite his emphasis on “postfeminist dislocation” as a factor in the appeal of TV’s latest crop of bad boys, Martin is savvy enough to appreciate these characters’ long pedigree, up to a point. He writes of Tony Soprano, for instance,

In his self-absorption, his horniness, his alternating cruelty and regret, his gnawing unease, Tony was, give or take Prozac and one or two murders, a direct descendant of Updike’s Rabbit Angstrom. In other words, the American Everyman. (84)

According to the rules of modern criticism, it’s okay to trace creative influences along their historical lineages. And Martin is quite good at situating the Third Golden Age in its historical and technological context:

The ambition and achievement of these shows went beyond the simple notion of “television getting good.” The open-ended, twelve- or thirteen-episode serialized drama was maturing into its own, distinct art form. What’s more, it had become the signature American art form of the first decade of the twenty-first century, the equivalent of what the films of Scorsese, Altman, Coppola, and others had been to the 1970s or the novels of Updike, Roth, and Mailer had been to the 1960s. (11)

What you’re not allowed to do, however—and what Martin knows better than to try to get away with—is notice that all those male filmmakers and novelists of the 60s and 70s were dealing with the same themes as the male showrunners Martin is covering. Is this pre-feminist dislocation? Mad Men could’ve featured Don Draper reading Rabbit, Run right after it was published in 1960. In fact, Don bears nearly as much resemblance to the main character of what was arguably the first novel ever written, The Tale of Genji, by the eleventh-century Japanese noblewoman, Murasaki Shikibu, as Tony Soprano bears to Rabbit Angstrom.

            Missed connections, tautologies, and non sequiturs abound whenever Martin attempts to account for the resonance of a particular theme or show, and at points his groping after insight is downright embarrassing. Difficult Men, as good as it is on history and the politicking of TV executives, can serve as a case study in the utter banality and logical bankruptcy of scholarly approaches to discussing the arts. These politically and academically sanctioned approaches can be summed up concisely, without scanting any important nuances, in the space of a paragraph. While any proposed theory about average gender differences with biological bases must be strenuously and vociferously criticized and dismissed (and its proponents demonized without concern for fairness), any posited connection between a popular theme and contemporary social or political issues is seen not just as acceptable but as automatically plausible, to the point where, after drawing the connection, the writer need provide no further evidence whatsoever.

One of several explanations Martin throws out for the appeal of characters like Tony Soprano and Don Draper, for instance, is that they helped liberal HBO and AMC subscribers cope with having a president like George W. Bush in office. “This was the ascendant Right being presented to the disempowered Left—as if to reassure it that those in charge were still recognizably human” (87). But most of Mad Men’s run, and Breaking Bad’s too, has taken place under President Obama. This doesn’t present a problem for Martin’s analysis, though, because there’s always something going on in the world that can be said to resonate with a show’s central themes. Of Breaking Bad, he writes,

Like The Sopranos, too, it uncannily anticipated a national mood soon to be intensified by current events—in this case the great economic unsettlement of the late aughts, which would leave many previously secure middle-class Americans suddenly feeling like desperate outlaws in their own suburbs. (272)

If this strikes you as comically facile, I can assure you that were the discussion taking place in the context of an explanation proposed by a social scientist, writers like Martin would be falling all over themselves trying to be the first to explain the danger of conflating correlation with causation, whether the scientist actually made that mistake or not.

            But arts scholarship isn’t limited to this type of socio-historical loose association because at some point you simply can’t avoid bringing individual artists, characters, and behind-the-scenes players into the discussion. Even when it comes to a specific person or character’s motivation, though, it’s important to focus on upbringing in a given family and sociopolitical climate as opposed to any general trend in human psychology. This willful blindness becomes most problematic when Martin tries to identify commonalities shared by all the leading men in the shows he’s discussing. He writes, for example,

All of them strove, awkwardly at times, for connection, occasionally finding it in glimpses and fragments, but as often getting blocked by their own vanities, their fears, and their accumulated past crimes. (189-90)

This is the closest Martin comes to a valid insight into difficult men in the entire book. The problem is that the rule against recognizing trends in human nature has made him blind to the applicability of this observation to pretty much everyone in the world. You could use this passage as a cold read and convince people you’re a psychic.

            So far, our summation of contemporary arts scholarship includes a rule against referring to human nature and an injunction to focus instead on sociopolitical factors, no matter how implausible their putative influence. But the allowance for social forces playing a role in upbringing provides something of a backdoor for a certain understanding of human nature to enter the discussion. Although the academic versions of this minimalist psychology are byzantine to the point of incomprehensibility, most of the main precepts will be familiar to you from movie and book reviews and criticism: parents, whom we both love and hate, affect nearly every aspect of our adult personalities; every category of desire, interest, or relationship is a manifestation of the sex drive; and we all have subconscious desires—all sexual in one way or another—based largely on forgotten family dramas that we enjoy seeing played out and given expression in art. That’s it. 

            So, if we’re discussing Breaking Bad for instance, a critic might refer to Walt and Jesse’s relationship as either oedipal, meaning they’re playing the roles of father and son who love but want to kill each other, or homoerotic, meaning their partnership substitutes for the homosexual relationship they’d both really prefer. The special attention the show gives to the blue meth and all the machines and gadgets used to make it constitutes a fetish. And the appeal of the show is that all of us in the audience wish we could do everything Walt does. Since we must repress those desires, we come to the show because watching it effects a type of release.

            Not a single element of this theory has any scientific validity. If we were such horny devils, we could just as easily watch internet pornography as tune into Mad Men. Psychoanalysis is to modern scientific psychology what alchemy is to chemistry and what astrology is to astronomy. But the biggest weakness of Freud’s pseudo-theories from a scientific perspective is probably what has made them so attractive to scholars in the humanities over the past century: they don’t lend themselves to testable predictions, so they can easily be applied to a variety of outcomes. As explanations, they can never fail or be definitively refuted—but that’s because they don’t really explain anything. Quoting Craig Wright, a writer for Six Feet Under, Martin writes that

…the left always articulates a critique through the arts.  “But the funny part is that masked by, or nested within, that critique is a kind of helpless eroticization of the power of the Right. They’re still in love with Big Daddy, even though they hate him.”

That was certainly true for the women who made Tony Soprano an unlikely sex symbol—and for the men who found him no less seductive. Wish fulfillment has always been at the queasy heart of the mobster genre, the longing for a life outside the bounds of convention, mingled with the conflicted desire to see the perpetrator punished for the same transgression… Likewise for viewers, for whom a life of taking, killing, and sleeping with whomever and whatever one wants had an undeniable, if conflict-laden, appeal. (88)

So Tony reminds us of W. because they’re both powerful figures, and we’re interested in powerful figures because they remind us of our dads and because we eroticize power. Even if this were true, would it contribute anything to our understanding or enjoyment of the show? Are any of these characters really that much like your own dad? Tony smashes some poor guy’s head because he got in his way, and sometimes we wish we could do that. Don Draper sleeps with lots of attractive women, and all the men watching the show would like to do that too. Startling revelations, those.

What a scholar in search of substantive insights might focus on instead is the universality of the struggle to reconcile selfish desires—sex, status, money, comfort—with the needs and well-being of the groups to which we belong. Don Draper wants to sleep around, but he also genuinely wants Betty and their children to be happy. Tony Soprano wants to be feared and respected, but he doesn’t want his daughter to think he’s a murderous thug. Walter White wants to prove he can provide for his family, but he also wants Skyler and Walter Junior to be safe. These tradeoffs and dilemmas—not the difficult men themselves—are what most distinguish these shows from conventional TV dramas. In most movies and shows, the protagonist may have some selfish desires that compete with his or her more altruistic or communal instincts, but which side ultimately wins out is a foregone conclusion. “Heroes are much better suited for the movies,” Martin quotes Alan Ball saying. “I’m more interested in real people. And real people are fucked up” (106).

Ball is the showrunner behind the HBO series Six Feet Under and True Blood, and though Martin gives him quite a bit of space in Difficult Men he doesn’t seem to notice that Ball’s “feminine style” (102) of showrunning undermines his theory about domineering characters being direct reflections of their domineering creators. The handful of interesting observations about what makes for a good series in Martin’s book is pretty evenly divvied up between Ball and David Simon, the creator of The Wire. Recalling his response to the episode of The Sopranos in which Tony strangles a rat while visiting a college campus with Meadow, Ball says,

I felt like I was watching a movie from the seventies. Where it was like, “You know those cartoon ideas of good and evil? Well, forget them. We’re going to address something that’s really real.” The performances were electric. The writing was spectacular. But it was the moral complexity, the complexity of the characters and their dilemmas, that made it incredibly exciting. (94-5)

The connection between us and the characters isn’t just that we have some of the same impulses and desires; it’s that we have to do similar balancing acts as we face similar dilemmas. No, we don’t have to figure out how to whack a guy without our daughters finding out, but a lot of us probably do want to shield our kids from some of the ugliness of our jobs. And most of us have to weigh career advancement against family obligations in one way or another. What makes for compelling drama isn’t our rooting for a character who knows what’s right and does it—that’s not drama at all. What pulls us into these shows is the process the characters go through of deciding which of their competing desires or obligations they should act on. If we see them do the wrong thing once in a while, well, that just ups the ante for the scenes when doing the right thing really counts.

            On the one hand, parents and sponsors want a show that has a good message, a guy with the right ideas and virtuous motives confronted with people with bad ideas and villainous motives. The good guy wins and the lesson is conveyed to the comfortable audiences. On the other hand, writers, for the most part, want to dispense with this idea of lessons and focus on characters with murderous, adulterous, or self-aggrandizing impulses, allowing for the possibility that they’ll sometimes succumb to them. But sometimes writers face the dilemma of having something they really want to say with their stories. Martin describes David Simon’s struggle to square this circle.

 As late as 2012, he would complain in a New York Times interview that fans were still talking about their favorite characters rather than concentrating on the show’s political message… The real miracle of The Wire is that, with only a few late exceptions, it overcame the proud pedantry of its creators to become one of the greatest literary accomplishments of the early twenty-first century. (135)

But then it’s Simon himself whom Martin quotes to explain how having a message to convey can get in the way of a good story.

Everybody, if they’re trying to say something, if they have a point to make, they can be a little dangerous if they’re left alone. Somebody has to be standing behind them saying, dramatically, “Can we do it this way?” When the guy is making the argument about what he’s trying to say, you need somebody else saying, “Yeah, but…” (207)

The exploration of this tension makes up the most substantive and compelling section of Difficult Men.

            Unfortunately, Martin fails to contribute anything to this discussion of drama and dilemmas beyond these short passages and quotes. And at several points he forgets his own observation about drama not being reducible to any underlying message. The most disappointing part of Difficult Men is the chapter devoted to Vince Gilligan and his show Breaking Bad. Gilligan is another counterexample to the theory that domineering and volatile men in the writer’s seat account for domineering and volatile characters in the shows; the writing room he runs gives the chapter its name, “The Happiest Room in Hollywood.” Martin writes that Breaking Bad is “arguably the best show on TV, in many ways the culmination of everything the Third Golden Age had made possible” (264). In trying to explain why the show is so good, he claims that

…whereas the antiheroes of those earlier series were at least arguably the victims of their circumstances—family, society, addiction, and so on—Walter White was insistently, unambiguously, an agent with free will. His journey became a grotesque magnification of the American ethos of self-actualization, Oprah Winfrey’s exhortation that all must find and “live your best life.” What if, Breaking Bad asked, one’s best life happened to be as a ruthless drug lord? (268)

This is Martin making the very mistake he warns against earlier in the book by finding some fundamental message at the core of the show. (Though he could simply believe that even though it’s a bad idea for writers to try to convey messages it’s okay for critics to read them into the shows.) But he’s doing the best he can with the tools of scholarship he’s allowed to marshal. This assessment is an extension of his point about post-feminist dislocation, turning the entire series into a slap in the face to Oprah, that great fount of male angst.

            To point out that Martin is perfectly wrong about Walter White isn’t merely to offer a rival interpretation. Until the end of season four, as any reasonable viewer who’s paid a modicum of attention to the development of his character will attest, Walter is far more at the mercy of circumstances than any of the other antiheroes in the Third Golden Age lineup. Here’s Walter explaining why he doesn’t want to undergo an expensive experimental cancer treatment in season one:

What I want—what I need—is a choice. Sometimes I feel like I never actually make any of my own. Choices, I mean. My entire life, it just seems I never, you know, had a real say about any of it. With this last one—cancer—all I have left is how I choose to approach this.

He’s already secretly cooking meth to make money for his family at this point, but that’s more a matter of making the most of a bad situation than of being the captain of his own fate. Can you imagine Tony or Don saying anything like this? Even when Walt delivers his famous “I am the danger” speech in season four—which gets my vote for the best moment in TV history (or film history too for that matter)—the statement is purely aspirational; he’s still in all kinds of danger at that point. Did Martin neglect the first four seasons and pick up watching only after Walt finally killed Gus? Either way, it’s a big, embarrassing mistake.

           The dilemmas Walt faces are what make his story so compelling. He’s far more powerless than other bad boy characters at the start of the series, and he’s also far more altruistic in his motives. That’s precisely why it’s so disturbing—and riveting—to see those motives corrupted by his gradually accumulating power. It’s hard not to think of the cartel drug lords we always hear about in Mexico in terms of those “cartoon ideas of good and evil” Alan Ball was so delighted to see smashed by Tony Soprano. But Breaking Bad goes a long way toward bridging the divide between such villains and a type of life we have no trouble imagining. The show isn’t about free will or self-actualization at all; it’s about how even the nicest guy can be turned into one of the scariest villains by being placed in a not all that far-fetched set of circumstances. In much the same way, Martin, clearly a smart guy and a talented writer, can be made to look like a bit of an idiot by being forced to rely on a bunch of really bad ideas as he explores the inner workings of some really great shows.

            If men’s selfish desires—sex, status, money, freedom—aren’t any more powerful than women’s, their approaches to satisfying them still tend to be more direct, less subtle. But what makes it harder for a woman’s struggles with her own desires to take on the same urgency as a man’s is probably not that far removed from the reasons women are seldom as physically imposing as men. Volatility in a large man can be really frightening. Even today, men are more likely to have high-status careers like Don’s, but they’re also far more likely to end up in prison. These are pretty high stakes. And Don’s actions have ramifications not just for his own family’s well-being but for that of everyone at Sterling Cooper and their families, which is a consequence of that high status. So status works as a proxy for size. Carmela Soprano’s volatility could be frightening too, but she isn’t the time bomb Tony is. Speaking of bombs, Skyler White is an expert at bullying men, but going head-to-head with Walter she’s way overmatched. Men will always be scarier than women on average, so their struggles to rein in their scarier impulses will seem more urgent. Still, the Third Golden Age is a teenager now, and as anxious as I am to see what happens to Walter White and all his friends and family, I think the bad boy thing is getting a little stale. Anyone seen Damages?

Also read:

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

and:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

And:

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION


The Self-Righteousness Instinct: Steven Pinker on the Better Angels of Modernity and the Evils of Morality

Is violence really declining? How can that be true? What could be causing it? Why are so many of us convinced the world is going to hell in a hand basket? Steven Pinker attempts to answer these questions in his magnificent and mind-blowing book.


Steven Pinker is one of the few scientists who can write a really long book and still expect a significant number of people to read it. But I have a feeling many who might be vaguely intrigued by the buzz surrounding his 2011 book The Better Angels of Our Nature: Why Violence Has Declined wonder why he had to make it nearly seven hundred outsized pages long. Many curious folk likely also wonder why a linguist who proselytizes for psychological theories derived from evolutionary or Darwinian accounts of human nature would write a doorstop drawing on historical and cultural data to describe the downward trajectories of rates of the worst societal woes. The message that violence of pretty much every variety is at unprecedentedly low rates comes as quite a shock, as it runs counter to our intuitive, news-fueled sense of being on a crash course for Armageddon. So part of the reason behind the book’s heft is that Pinker has to bolster his case with lots of evidence to get us to rethink our views. But flipping through the book you find that somewhere between a third and half of its mass is devoted, not to evidence of the decline, but to answering the questions of why the trend has occurred and why it gives every indication of continuing into the foreseeable future. So is this a book about how evolution has made us violent or about how culture is making us peaceful?

The first thing that needs to be said about Better Angels is that you should read it. Despite its girth, it’s at no point the least bit cumbersome to read, and at many points it’s so fascinating that, weighty as it is, you’ll have a hard time putting it down. Pinker has mastered a prose style that’s simple and direct to the point of feeling casual without ever wanting for sophistication. You can also rest assured that what you’re reading is timely and important because it explores aspects of history and social evolution that impact pretty much everyone in the world but that have gone ignored—if not censoriously denied—by most of the eminences contributing to the zeitgeist since the decades following the last world war.

            Still, I suspect many people who take the plunge into the first hundred or so pages are going to feel a bit disoriented as they try to figure out what the real purpose of the book is, and this may cause them to falter in their resolve to finish reading. The problem is that the resistance Better Angels anticipates and responds to over its prodigious page count doesn’t come from news media or from the blinkered celebrities in the carnivals of sanctimonious imbecility that are political talk shows. It comes from Pinker’s fellow academics. The overall point of Better Angels remains obscure owing to some deliberate caginess on the author’s part when it comes to identifying the true targets of his arguments.

            This evasiveness doesn’t make the book difficult to read, but a quality of diffuseness to the theoretical sections, a multitude of strands left dangling, does at points make you doubt whether Pinker had a clear purpose in writing, which makes you doubt your own purpose in reading. With just a little tying together of those strands, however, you start to see that while on the surface he’s merely righting the misperception that over the course of history our species has been either consistently or increasingly violent, what he’s really after is something different, something bigger. He’s trying to instigate, or at least play a part in instigating, a revolution—or more precisely a renaissance—in the way scholars and intellectuals think not just about human nature but about the most promising ways to improve the lot of human societies.

The longstanding complaint about evolutionary explanations of human behavior is that by focusing on our biology as opposed to our supposedly limitless capacity for learning they imply a certain level of fixity to our nature, and this fixedness is thought to further imply a limit to what political reforms can accomplish. The reasoning goes, if the explanation for the way things are is to be found in our biology, then, unless our biology changes, the way things are is the way they’re going to remain. Since biological change occurs at the glacial pace of natural selection, we’re pretty much stuck with the nature we have. 

            Historically, many scholars have made matters worse for evolutionary scientists today by applying ostensibly Darwinian reasoning to what seemed at the time obvious biological differences between human races in intelligence and capacity for acquiring the more civilized graces, making no secret of their conviction that the differences justified colonial expansion and other forms of oppressive rule. As a result, evolutionary psychologists of the past couple of decades have routinely had to defend themselves against charges that they’re secretly trying to advance some reactionary (or even genocidal) agenda. Considering Pinker’s choice of topic in Better Angels in light of this type of criticism, we can start to get a sense of what he’s up to—and why his efforts are discombobulating.

If you’ve spent any time on a university campus in the past forty years, particularly if it was in a department of the humanities, then you have been inculcated with an ideology that was once labeled postmodernism but that eventually became so entrenched in academia, and in intellectual culture more broadly, that it no longer requires a label. (If you took a class with the word "studies" in the title, then you got a direct shot to the brain.) Many younger scholars actually deny any espousal of it—“I’m not a pomo!”—with reference to a passé version marked by nonsensical tangles of meaningless jargon and the conviction that knowledge of the real world is impossible because “the real world” is merely a collective delusion or social construction put in place to perpetuate societal power structures. The disavowals notwithstanding, the essence of the ideology persists in an inescapable but unremarked obsession with those same power structures—the binaries of men and women, whites and blacks, rich and poor, the West and the rest—and the abiding assumption that texts and other forms of media must be assessed not just according to their truth content, aesthetic virtue, or entertainment value, but also with regard to what we imagine to be their political implications. Indeed, those imagined political implications are often taken as clear indicators of the author’s true purpose in writing, which we must sniff out—through a process called “deconstruction,” or its anemic offspring “rhetorical analysis”—lest we complacently succumb to the subtle persuasion.

In the late nineteenth and early twentieth centuries, faith in what we now call modernism inspired intellectuals to assume that the civilizations of Western Europe and the United States were on a steady march of progress toward improved lives for all their own inhabitants as well as the world beyond their borders. Democracy had brought about a new age of government in which rulers respected the rights and freedom of citizens. Medicine was helping ever more people live ever longer lives. And machines were transforming everything from how people labored to how they communicated with friends and loved ones. Everyone recognized that the driving force behind this progress was the juggernaut of scientific discovery. But jump ahead a hundred years to the early twenty-first century and you see a quite different attitude toward modernity. As Pinker explains in the closing chapter of Better Angels,

A loathing of modernity is one of the great constants of contemporary social criticism. Whether the nostalgia is for small-town intimacy, ecological sustainability, communitarian solidarity, family values, religious faith, primitive communism, or harmony with the rhythms of nature, everyone longs to turn back the clock. What has technology given us, they say, but alienation, despoliation, social pathology, the loss of meaning, and a consumer culture that is destroying the planet to give us McMansions, SUVs, and reality television? (692)

The social pathology here consists of all the inequities and injustices suffered by the people on the losing side of those binaries all us closet pomos go about obsessing over. Then of course there’s industrial-scale war and all the other types of modern violence. With terrorism, the War on Terror, the civil war in Syria, the Israel-Palestine conflict, genocides in the Sudan, Kosovo, and Rwanda, and the marauding bands of drugged-out gang rapists in the Congo, it seems safe to assume that science and democracy and capitalism have contributed to the construction of an unsafe global system with some fatal, even catastrophic design flaws. And that’s before we consider the two world wars and the Holocaust. So where the hell is this decline Pinker refers to in his title?

            One way to think about the strain of postmodernism or anti-modernism with the most currency today (and if you’re reading this essay you can just assume your views have been influenced by it) is that it places morality and politics—identity politics in particular—atop a hierarchy of guiding standards above science and individual rights. So, for instance, concerns over the possibility that a negative image of Amazonian tribespeople might encourage their further exploitation trump objective reporting on their culture by anthropologists, even though there’s no evidence to support those concerns. And evidence that the disproportionate number of men in STEM fields reflects average differences between men and women in lifestyle preferences and career interests is ignored out of deference to a political ideal of perfect parity. The urge to grant moral and political ideals veto power over science is justified in part by all the oppression and injustice that abounds in modern civilizations—sexism, racism, economic exploitation—but most of all it’s rationalized with reference to the violence thought to follow in the wake of any movement toward modernity. Pinker writes,

“The twentieth century was the bloodiest in history” is a cliché that has been used to indict a vast range of demons, including atheism, Darwin, government, science, capitalism, communism, the ideal of progress, and the male gender. But is it true? The claim is rarely backed up by numbers from any century other than the 20th, or by any mention of the hemoclysms of centuries past. (193)

He gives the question even more gravity when he reports that all those other areas in which modernity is alleged to be such a colossal failure tend to improve in the absence of violence. “Across time and space,” he writes in the preface, “the more peaceable societies also tend to be richer, healthier, better educated, better governed, more respectful of their women, and more likely to engage in trade” (xxiii). So the question isn’t just about what the story with violence is; it’s about whether science, liberal democracy, and capitalism are the disastrous blunders we’ve learned to think of them as or whether they still just might hold some promise for a better world.

*******

            It’s in about the third chapter of Better Angels that you start to get the sense that Pinker’s style of thinking is, well, way out of style. He seems to be marching to the beat not of his own drummer but of some drummer from the nineteenth century. In the previous chapter, he drew a line connecting the violence of chimpanzees to that in what he calls non-state societies, and the images he’s left you with are savage indeed. Now he’s bringing in the philosopher Thomas Hobbes’s idea of a government Leviathan that, once established, immediately works to curb the violence that characterizes us humans in states of nature and anarchy. According to sociologist Norbert Elias’s 1969 book, The Civilizing Process, a work whose thesis plays a starring role throughout Better Angels, the consolidation of a Leviathan in England set in motion a trend toward pacification, beginning with the aristocracy no less, before spreading down to the lower ranks and radiating out to the countries of continental Europe and onward thence to other parts of the world. You can measure your feelings of unease in response to Pinker’s civilizing scenario as a proxy for how thoroughly steeped you are in postmodernism.

            The two factors missing from his account of the civilizing pacification of Europe that distinguish it from the self-congratulatory and self-exculpatory sagas of centuries past are the innate superiority of the paler stock and the special mission of conquest and conversion commissioned by a Christian god. In a later chapter, Pinker violates the contemporary taboo against discussing—or even thinking about—the potential role of average group (racial) differences in a propensity toward violence, but he concludes the case for any such differences is unconvincing: “while recent biological evolution may, in theory, have tweaked our inclinations toward violence and nonviolence, we have no good evidence that it actually has” (621). The conclusion that the Civilizing Process can’t be contingent on congenital characteristics follows from the observation of how readily individuals from far-flung regions acquire local habits of self-restraint and fellow-feeling when they’re raised in modernized societies. As for religion, Pinker includes it in a category of factors that are “Important but Inconsistent” with regard to the trend toward peace, dismissing the idea that atheism leads to genocide by pointing out that “Fascism happily coexisted with Catholicism in Spain, Italy, Portugal, and Croatia, and though Hitler had little use for Christianity, he was by no means an atheist, and professed that he was carrying out a divine plan.” Though he cites several examples of atrocities incited by religious fervor, he does credit “particular religious movements at particular times in history” with successfully working against violence (677).

            Despite his penchant for blithely trampling on the taboos of the liberal intelligentsia, Pinker refuses to cooperate with our reflex to pigeonhole him with imperialists or far-right traditionalists past or present. He continually holds up to ridicule the idea that violence has any redeeming effects. In a section on the connection between increasing peacefulness and rising intelligence, he suggests that our violence-tolerant “recent ancestors” can rightly be considered “morally retarded” (658). He singles out George W. Bush as an unfortunate and contemptible counterexample in a trend toward more complex political rhetoric among our leaders. And if one gender comes out looking less virtuous in Better Angels, it ain’t the distaff one. Pinker is difficult to categorize politically because he’s a scientist through and through. What he’s after are reasoned arguments supported by properly weighed evidence.

But there is something going on in Better Angels beyond a mere accounting for the ongoing decline in violence, a decline most of us don’t even realize we’re the beneficiaries of. For one, there’s a challenge to the taboo status of topics like genetic differences between groups, or differences between individuals in IQ, or differences between genders. And there’s an implicit challenge as well to the complementary premises he took on more directly in his earlier book The Blank Slate: that biological theories of human nature always lead to oppressive politics and that theories of the infinite malleability of human behavior always lead to progress (communism relies on a blank slate theory, and it inspired guys like Stalin, Mao, and Pol Pot to murder untold millions). But the most interesting and important task Pinker has set for himself with Better Angels is a restoration of the Enlightenment, with its twin pillars of science and individual rights, to its rightful place atop the hierarchy of our most cherished guiding principles, the position we as a society misguidedly allowed to be usurped by postmodernism, with its own dual pillars of relativism and identity politics.

  But, while the book succeeds handily in undermining the moral case against modernism, it does so largely by stealth, with only a few explicit references to the ideologies whose advocates have dogged Pinker and his fellow evolutionary psychologists for decades. Instead, he explores how our moral intuitions and political ideals often inspire us to make profoundly irrational arguments for positions that rational scrutiny reveals to be quite immoral, even murderous. As one illustration of how good causes can be taken to silly, but as yet harmless, extremes, he gives the example of how “violence against children has been defined down to dodgeball” (415) in gym classes all over the US, writing that

The prohibition against dodgeball represents the overshooting of yet another successful campaign against violence, the century-long movement to prevent the abuse and neglect of children. It reminds us of how a civilizing offensive can leave a culture with a legacy of puzzling customs, peccadilloes, and taboos. The code of etiquette bequeathed to us by this and other Rights Revolutions is pervasive enough to have acquired a name. We call it political correctness. (381)

Such “civilizing offensives” are deliberately undertaken counterparts to the fortuitously occurring Civilizing Process Elias proposed to explain the jagged downward slope in graphs of relative rates of violence beginning in the Middle Ages in Europe. The original change Elias describes came about as a result of rulers consolidating their territories and acquiring greater authority. As Pinker explains,

Once Leviathan was in charge, the rules of the game changed. A man’s ticket to fortune was no longer being the baddest knight in the area but making a pilgrimage to the king’s court and currying favor with him and his entourage. The court, basically a government bureaucracy, had no use for hotheads and loose cannons, but sought responsible custodians to run its provinces. The nobles had to change their marketing. They had to cultivate their manners, so as not to offend the king’s minions, and their empathy, to understand what they wanted. The manners appropriate for the court came to be called “courtly” manners or “courtesy.” (75)

And this higher premium on manners and self-presentation among the nobles would lead to a cascade of societal changes.

Elias first lighted on his theory of the Civilizing Process as he was reading some of the etiquette guides which survived from that era. It’s striking to us moderns to see that knights of yore had to be told not to dispose of their snot by shooting it into their host’s table cloth, but that simply shows how thoroughly people today internalize these rules. As Elias explains, they’ve become second nature to us. Of course, we still have to learn them as children. Pinker prefaces his discussion of Elias’s theory with a recollection of his bafflement at why it was so important for him as a child to abstain from using his knife as a backstop to help him scoop food off his plate with a fork. Table manners, he concludes, reside on the far end of a continuum of self-restraint at the opposite end of which are once-common practices like cutting off the nose of a dining partner who insults you. Likewise, protecting children from the perils of flying rubber balls is the product of a campaign against the once-common custom of brutalizing them. The centrality of self-control is the common underlying theme: we control our urge to misuse utensils, including their use in attacking our fellow diners, and we control our urge to throw things at our classmates, even if it’s just in sport. The effect of the Civilizing Process in the Middle Ages, Pinker explains, was that “A culture of honor—the readiness to take revenge—gave way to a culture of dignity—the readiness to control one’s emotions” (72). In other words, diplomacy became more important than deterrence.

            What we’re learning here is that even an evolved mind can adjust to changing incentive schemes. Chimpanzees have to control their impulses toward aggression, sexual indulgence, and food consumption in order to survive in hierarchical bands with other chimps, many of whom are bigger, stronger, and better-connected. Much of the violence in chimp populations takes the form of adult males vying for positions in the hierarchy so they can enjoy the perquisites males of lower status must forgo to avoid being brutalized. Lower-ranking males meanwhile bide their time, deferring gratification until they grow stronger or the alpha grows weaker. In humans, the capacity for impulse-control and the habit of delaying gratification are even more important because we live in even more complex societies. Those capacities can either lie dormant or they can be developed to their full potential depending on exactly how complex the society is in which we come of age. Elias noticed a connection between the move toward more structured bureaucracies, less violence, and an increasing focus on etiquette, and he concluded that self-restraint in the form of adhering to strict codes of comportment was both an advertisement of, and a type of training for, the impulse-control that would make someone a successful bureaucrat.

            Aside from children who can’t fathom why we’d futz with our forks trying to capture recalcitrant peas, we normally take our society’s rules of etiquette for granted, no matter how inconvenient or illogical they are, seldom thinking twice before drawing unflattering conclusions about people who don’t bother adhering to them, the ones for whom they aren’t second nature. And the importance we place on etiquette goes beyond table manners. We judge people according to the discretion with which they dispose of any and all varieties of bodily effluent, as well as the delicacy with which they discuss topics sexual or otherwise basely instinctual. 

            Elias and Pinker’s theory is that, while the particular rules are largely arbitrary, the underlying principle of transcending our animal nature through the application of will, motivated by an appreciation of social convention and the sensibilities of fellow community members, is what marked the transition of certain constituencies of our species from a violent non-state existence to a relatively peaceful, civilized lifestyle. To Pinker, the uptick in violence that ensued once the counterculture of the 1960s came into full blossom was no coincidence. The squares may not have been as exciting as the rock stars who sang their anthems to hedonism and the liberating thrill of sticking it to the man. But a society of squares has certain advantages—a lower probability for each of its citizens of getting beaten or killed foremost among them.

            The Civilizing Process as Elias and Pinker, along with Immanuel Kant, understand it picks up momentum as levels of peace conducive to increasingly complex forms of trade are achieved. To understand why the move toward markets or “gentle commerce” would lead to decreasing violence, us pomos have to swallow—at least momentarily—our animus for Wall Street and all the corporate fat cats in the top one percent of the wealth distribution. The basic dynamic underlying trade is that one person has access to more of something than they need, but less of something else, while another person has the opposite balance, so a trade benefits them both. It’s a win-win, or a positive-sum game. The hard part for educated liberals is to appreciate that economies work to increase the total wealth; there isn’t a set quantity everyone has to divvy up in a zero-sum game, an exchange in which every gain for one is a loss for another. And Pinker points to another benefit:

Positive-sum games also change the incentives for violence. If you’re trading favors or surpluses with someone, your trading partner suddenly becomes more valuable to you alive than dead. You have an incentive, moreover, to anticipate what he wants, the better to supply it to him in exchange for what you want. Though many intellectuals, following in the footsteps of Saints Augustine and Jerome, hold businesspeople in contempt for their selfishness and greed, in fact a free market puts a premium on empathy. (77)

The Occupy Wall Street crowd will want to jump in here with a lengthy list of examples of businesspeople being unempathetic in the extreme. But Pinker isn’t saying commerce always forces people to be altruistic; it merely encourages them to exercise their capacity for perspective-taking. Discussing the emergence of markets, he writes,

The advances encouraged the division of labor, increased surpluses, and lubricated the machinery of exchange. Life presented people with more positive-sum games and reduced the attractiveness of zero-sum plunder. To take advantage of the opportunities, people had to plan for the future, control their impulses, take other people’s perspectives, and exercise the other social and cognitive skills needed to prosper in social networks. (77)

And these changes, the theory suggests, will tend to make merchants less likely on average to harm anyone. As bad as bankers can be, they’re not out sacking villages.

            Once you have commerce, you also have a need to start keeping records. And once you start dealing with distant partners it helps to have a mode of communication that travels. As writing moved out of the monasteries, and as technological advances in transportation brought more of the world within reach, ideas and innovations collided to inspire sequential breakthroughs and discoveries. Every advance could be preserved, dispersed, and ratcheted up. Pinker focuses on two relatively brief historical periods that witnessed revolutions in the way we think about violence, and both came in the wake of major advances in the technologies involved in transportation and communication. The first is the Humanitarian Revolution that occurred in the second half of the eighteenth century, and the second covers the Rights Revolutions in the second half of the twentieth. The Civilizing Process and gentle commerce weren’t sufficient to end age-old institutions like slavery and the torture of heretics. But then came the rise of the novel as a form of mass entertainment, and with all the training in perspective-taking readers were undergoing, the hitherto unimagined suffering of slaves, criminals, and swarthy foreigners became intolerably imaginable. People began to agitate and change ensued.

            The Humanitarian Revolution occurred at the tail end of the Age of Reason and is recognized today as part of the period known as the Enlightenment. According to some scholarly scenarios, the Enlightenment, for all its successes like the American Constitution and the abolition of slavery, paved the way for all those allegedly unprecedented horrors in the first half of the twentieth century. Notwithstanding all this ivory tower traducing, the Enlightenment emerged from dormancy after the Second World War and gradually gained momentum, delivering us into a period Pinker calls the New Peace. Just as the original Enlightenment was preceded by increasing cosmopolitanism, improving transportation, and an explosion of literacy, the transformations that brought about the New Peace followed a burst of technological innovation. For Pinker, this is no coincidence. He writes,

If I were to put my money on the single most important exogenous cause of the Rights Revolutions, it would be the technologies that made ideas and people increasingly mobile. The decades of the Rights Revolutions were the decades of the electronics revolutions: television, transistor radios, cable, satellite, long-distance telephones, photocopiers, fax machines, the Internet, cell phones, text messaging, Web video. They were the decades of the interstate highway, high-speed rail, and the jet airplane. They were the decades of the unprecedented growth in higher education and in the endless frontier of scientific research. Less well known is that they were also the decades of an explosion in book publishing. From 1960 to 2000, the annual number of books published in the United States increased almost fivefold. (477)

Violence got slightly worse in the 60s. But the Civil Rights Movement was underway, Women’s Rights were being extended into new territories, and people even began to acknowledge that animals could suffer, prompting arguments that we shouldn’t needlessly cause them to do so. Today the push for Gay Rights continues. By 1990, the uptick in violence was over, and so far the move toward peace is looking like an ever greater success. Ironically, though, all the new types of media bringing images from all over the globe into our living rooms and pockets contribute to the sense that violence is worse than ever.

*******

            Three factors, then, brought about a reduction in violence over the course of history: strong government, trade, and communications technology. These factors had the impact they did because they interacted with two of our innate propensities, impulse-control and perspective-taking, giving individuals both the motivation and the wherewithal to develop them to ever greater degrees. It’s difficult to draw a clear line between developments driven by chance or coincidence and those driven by deliberate efforts to transform societies. But Pinker does credit political movements based on moral principles with having played key roles:

Insofar as violence is immoral, the Rights Revolutions show that a moral way of life often requires a decisive rejection of instinct, culture, religion, and standard practice. In their place is an ethics that is inspired by empathy and reason and stated in the language of rights. We force ourselves into the shoes (or paws) of other sentient beings and consider their interests, starting with their interest in not being hurt or killed, and we ignore superficialities that may catch our eye such as race, ethnicity, gender, age, sexual orientation, and to some extent, species. (475)

Some of the instincts we must reject in order to bring about peace, however, are actually moral instincts.

Pinker is setting up a distinction here between different kinds of morality. The kind based on perspective-taking—which, according to evidence he presents later, inspires sympathy—and “stated in the language of rights” is the one he credits with transforming the world for the better. Of the idea that superficial differences shouldn’t distract us from our common humanity, he writes,

This conclusion, of course, is the moral vision of the Enlightenment and the strands of humanism and liberalism that have grown out of it. The Rights Revolutions are liberal revolutions. Each has been associated with liberal movements, and each is currently distributed along a gradient that runs, more or less, from Western Europe to the blue American states to the red American states to the democracies of Latin America and Asia and then to the more authoritarian countries, with Africa and most of the Islamic world pulling up the rear. In every case, the movements have left Western cultures with excesses of propriety and taboo that are deservedly ridiculed as political correctness. But the numbers show that the movements have reduced many causes of death and suffering and have made the culture increasingly intolerant of violence in any form. (475-6)

So you’re not allowed to play dodgeball at school or tell off-color jokes at work, but that’s a small price to pay. The most remarkable part of this passage, though, is the gradient he describes; it suggests the most violent regions of the globe are also the ones where people are the most obsessed with morality, with things like Sharia and so-called family values. It also suggests that academic complaints about the evils of Western culture are unfounded and startlingly misguided. As Pinker casually points out in his section on Women’s Rights, “Though the United States and other Western nations are often accused of being misogynistic patriarchies, the rest of the world is immensely worse” (413).

The Better Angels of Our Nature came out about a year before Jonathan Haidt’s The Righteous Mind, but Pinker’s book beats Haidt’s to the punch by identifying a serious flaw in his reasoning. The Righteous Mind explores how liberals and conservatives conceive of morality differently, and Haidt argues that each conception is equally valid so we should simply work to understand and appreciate opposing political views. It’s not like you’re going to change anyone’s mind anyway, right? But the liberal ideal of resisting certain moral intuitions tends to bring about a rather important change wherever it’s allowed to be realized. Pinker writes that

right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence. And that retraction is precisely the agenda of classical liberalism: a freedom of individuals from tribal and authoritarian force, and a tolerance of personal choices as long as they do not infringe on the autonomy and well-being of others. (637)

Classical liberalism—which Pinker distinguishes from contemporary political liberalism—can even be viewed as an effort to move morality away from the realm of instincts and intuitions into the more abstract domains of law and reason. The perspective-taking at the heart of Enlightenment morality can be said to consist of abstracting yourself from your identifying characteristics and immediate circumstances to imagine being someone else in unfamiliar straits. A man with a job imagines being a woman who can’t get one. A white man on good terms with law enforcement imagines being a black man who gets harassed. This practice of abstracting experiences and distilling individual concerns down to universal principles is the common thread connecting Enlightenment morality to science.

            So it’s probably no coincidence, Pinker argues, that as we’ve gotten more peaceful, people in Europe and the US have been getting better at abstract reasoning as well, a trend that has been going on for as long as researchers have had tests to measure it. Over the course of the twentieth century, psychologists have had to re-norm IQ tests (the average is always set to 100) by a few points every generation, because raw scores on certain subsets of questions have kept going up. This regular rise in scores is known as the Flynn Effect, after psychologist James Flynn, one of the first researchers to realize the trend was more than methodological noise. Having posited a possible connection between scientific and moral reasoning, Pinker asks, “Could there be a moral Flynn Effect?” He explains,

We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence. And we have just seen that over the course of the 20th century, people’s reasoning abilities—particularly their ability to set aside immediate experience, detach themselves from a parochial vantage point, and think in abstract terms—were steadily enhanced. (656)

Pinker cites evidence from an array of studies showing that high-IQ people tend to have high moral IQs as well. One of them, an infamous study by psychologist Satoshi Kanazawa based on data from over twenty thousand young adults in the US, demonstrates that exceptionally intelligent people tend to hold a particular set of political views. And just as Pinker finds it necessary to distinguish between two different types of morality, he suggests we also need to distinguish between two different types of liberalism:

Intelligence is expected to correlate with classical liberalism because classical liberalism is itself a consequence of the interchangeability of perspectives that is inherent to reason itself. Intelligence need not correlate with other ideologies that get lumped into contemporary left-of-center political coalitions, such as populism, socialism, political correctness, identity politics, and the Green movement. Indeed, classical liberalism is sometimes congenial to the libertarian and anti-political-correctness factions in today’s right-of-center coalitions. (662)

And Kanazawa’s findings bear this out. It’s not liberalism in general that increases steadily with intelligence, but a particular kind of liberalism, the type focusing more on fairness than on ideology.

*******

Following the chapters devoted to historical change, from the early Middle Ages to the ongoing Rights Revolutions, Pinker includes two chapters on psychology, the first on our “Inner Demons” and the second on our “Better Angels.” Ideology gets some prime real estate in the Demons chapter, because, he writes, “the really big body counts in history pile up” when people believe they’re serving some greater good. “Yet for all that idealism,” he explains, “it’s ideology that drove many of the worst things that people have ever done to each other.” Christianity, Nazism, communism—they all “render opponents of the ideology infinitely evil and hence deserving of infinite punishment” (556). Pinker’s discussion of morality, on the other hand, is more complicated. It begins, oddly enough, in the Demons chapter, but stretches into the Angels one as well. This is how the section on morality in the Angels chapter begins:

The world has far too much morality. If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest. The human moral sense can excuse any atrocity in the minds of those who commit it, and it furnishes them with motives for acts of violence that bring them no tangible benefit. The torture of heretics and conversos, the burning of witches, the imprisonment of homosexuals, and the honor killing of unchaste sisters and daughters are just a few examples. (622)

The postmodern push to give precedence to moral and political considerations over science, reason, and fairness may seem like a good idea at first. But political ideologies can’t be defended on the grounds of their good intentions—they all have those. And morality has historically caused more harm than good. It’s only the minimalist, liberal morality that has any redemptive promise:

Though the net contribution of the human moral sense to human well-being may well be negative, on those occasions when it is suitably deployed it can claim some monumental advances, including the humanitarian reforms of the Enlightenment and the Rights Revolutions of recent decades. (622)

            One of the problems with ideologies Pinker explores is that they lend themselves too readily to for-us-or-against-us divisions, which piggyback on all our tribal instincts, leading to the dehumanization of opponents as a step along the path to unrestrained violence. But, we may ask, isn’t the Enlightenment just another ideology? If not, is there some reliable way to distinguish an ideological movement from a “civilizing offensive” or a “Rights Revolution”? Pinker doesn’t answer these questions directly, but it’s in his discussion of the demonic side of morality that Better Angels offers its most profound insights—and it’s also where we start to be able to piece together the larger purpose of the book. He writes,

In The Blank Slate I argued that the modern denial of the dark side of human nature—the doctrine of the Noble Savage—was a reaction against the romantic militarism, hydraulic theories of aggression, and glorification of struggle and strife that had been popular in the late 19th and early 20th centuries. Scientists and scholars who question the modern doctrine have been accused of justifying violence and have been subjected to vilification, blood libel, and physical assault. The Noble Savage myth appears to be another instance of an antiviolence movement leaving a cultural legacy of propriety and taboo. (488)

Pinker figured that what he and his fellow evolutionary psychologists kept running up against was akin to the repulsion people feel toward poor table manners or kids winging balls at each other in gym class, so he reasoned that he ought to be able simply to explain to the critics that evolutionary psychologists have no intention of justifying, or even encouraging complacency toward, the dark side of human nature. “But I am now convinced,” he writes after more than a decade of trying to explain himself, “that a denial of the human capacity for evil runs even deeper, and may itself be a feature of human nature” (488). That feature, he goes on to explain, makes us feel compelled to label as evil anyone who tries to explain evil scientifically—because evil as a cosmic force beyond the reach of human understanding plays an indispensable role in group identity.

            Pinker began to fully appreciate the nature of the resistance to letting biology into discussions of human harm-doing when he read about the work of psychologist Roy Baumeister exploring the wide discrepancies in accounts of anger-inducing incidents between perpetrators and victims. The first studies looked at responses to minor offenses, but Baumeister went on to present evidence that the pattern, which Pinker labels the “Moralization Gap,” can be scaled up to describe societal attitudes toward historical atrocities. Pinker explains,

The Moralization Gap consists of complementary bargaining tactics in the negotiation for recompense between a victim and a perpetrator. Like opposing counsel in a lawsuit over a tort, the social plaintiff will emphasize the deliberateness, or at least the depraved indifference, of the defendant’s action, together with the pain and suffering the plaintiff endures. The social defendant will emphasize the reasonableness or unavoidability of the action, and will minimize the plaintiff’s pain and suffering. The competing framings shape the negotiations over amends, and also play to the gallery in a competition for their sympathy and for a reputation as a responsible reciprocator. (491)

Another of the Inner Demons Pinker suggests plays a key role in human violence is the drive for dominance, which he explains operates not just at the level of the individual but at that of the group to which he or she belongs. We want our group, however we understand it in the immediate context, to rest comfortably atop a hierarchy of other groups. What happens is that the Moralization Gap gets mingled with this drive to establish individual and group superiority. You see this dynamic playing out even in national conflicts. Pinker points out,

The victims of a conflict are assiduous historians and cultivators of memory. The perpetrators are pragmatists, firmly planted in the present. Ordinarily we tend to think of historical memory as a good thing, but when the events being remembered are lingering wounds that call for redress, it can be a call to violence. (493)

Name a conflict and with little effort you’ll likely also be able to recall contentions over historical records associated with it.

            The outcome of the Moralization Gap being scaled up to the level of group history is what Pinker and Baumeister call the “Myth of Pure Evil.” Harm-doing narratives start to take on religious overtones as what began as a conflict between regular humans pursuing or defending their interests, in ways they probably reasoned were just, transforms into an eternal struggle against inhuman and sadistic agents of chaos. And Pinker has come to realize that it is this Myth of Pure Evil that behavioral scientists ineluctably end up blaspheming:

Baumeister notes that in the attempt to understand harm-doing, the viewpoint of the scientist or scholar overlaps with the viewpoint of the perpetrator. Both take a detached, amoral stance toward the harmful act. Both are contextualizers, always attentive to the complexities of the situation and how they contributed to the causation of the harm. And both believe that the harm is ultimately explicable. (495)

This is why evolutionary psychologists who study violence inspire what Pinker in The Blank Slate called “political paranoia and moral exhibitionism” (106) on the part of us naïve pomos, ravenously eager to showcase our valor by charging once more into the breach against the mythical malevolence. All the while, our impregnable assurance of our own righteousness is born of the conviction that we’re standing up for the oppressed. Pinker writes,

The viewpoint of the moralist, in contrast, is the viewpoint of the victim. The harm is treated with reverence and awe. It continues to evoke sadness and anger long after it was perpetrated. And for all the feeble ratiocination we mortals throw at it, it remains a cosmic mystery, a manifestation of the irreducible and inexplicable existence of evil in the universe. Many chroniclers of the Holocaust consider it immoral even to try to explain it. (495-6)

We simply can’t help inflating the magnitude of the crime in our attempt to convince our ideological opponents of their folly—though what we’re really inflating is our own, and our group’s, glorification. And so we can’t abide anyone puncturing our overblown conception, because doing so lends credence to the opposition, making us look a bit foolish in the process for all our exaggerations.

            Reading Better Angels, you get the sense that Pinker experienced some genuine surprise and some real delight in discovering more and more corroboration for the idea that rates of violence have been trending downward in nearly every domain he explored. But things get tricky as you proceed through the pages because many of his arguments take on opposing positions he avoids naming. He seems to have seen the trove of evidence for declining violence as an opportunity to outflank the critics of evolutionary psychology in leftist, postmodern academia (to use a martial metaphor). Instead of calling them out directly, he circles around to chip away at the moral case for their political mission. We see this, for example, in his discussion of rape, which psychologists get into all kinds of trouble for trying to explain. After examining how scientists seem to be taking the perspective of perpetrators, Pinker goes on to write,

The accusation of relativizing evil is particularly likely when the motive the analyst imputes to the perpetrator appears to be venial, like jealousy, status, or retaliation, rather than grandiose, like the persistence of suffering in the world or the perpetuation of race, class, or gender oppression. It is also likely when the analyst ascribes the motive to every human being rather than to a few psychopaths or to the agents of a malignant political system (hence the popularity of the doctrine of the Noble Savage). (496)

In his earlier section on Women’s Rights and the decline of rape, he attributed the difficulty in finding good data on the incidence of the crime, as well as some of the “preposterous” ideas about what motivates it, to the same kind of overextension of anti-violence campaigns that leads to arbitrary rules about the use of silverware and proscriptions against dodgeball:

Common sense never gets in the way of a sacred custom that has accompanied a decline in violence, and today rape centers unanimously insist that “rape or sexual assault is not an act of sex or lust—it’s about aggression, power, and humiliation, using sex as the weapon. The rapist’s goal is domination.” (To which the journalist Heather MacDonald replies: “The guys who push themselves on women at keggers are after one thing only, and it’s not a reinstatement of the patriarchy.”) (406)

Jumping ahead to Pinker’s discussion of the Moralization Gap, we see that the theory that rape is about power, as opposed to the much more obvious theory that it’s about sex, is an outgrowth of the Myth of Pure Evil, an inflation of the mundane drives that lead some pathetic individuals to commit horrible crimes into eternal cosmic forces, inscrutable and infinitely punishable.

            When feminists impute political motives to rapists, they’re crossing the boundary from Enlightenment morality to the type of moral ideology that inspires dehumanization and violence. The good news is that it’s not difficult to distinguish between the two. From the Enlightenment perspective, rape is indefensibly wrong because it violates the autonomy of the victim—it’s an act of violence perpetrated by one individual against another. From the ideological perspective, every rape must be understood in the context of the historical oppression of women by men; it transcends the individuals involved as a representation of a greater evil. The rape-as-a-political-act theory also comes dangerously close to implying a type of collective guilt, which is a clear violation of individual rights.

Scholars already make the distinction between three different waves of feminism. The first two fall within Pinker’s definition of Rights Revolutions; they encompassed pushes for suffrage, marriage rights, and property rights, and then the rights to equal pay and equal opportunity in the workplace. The third wave is avowedly postmodern, its advocates committed to the ideas that gender is a pure social construct and that suggesting otherwise is an act of oppression. What you come away from Better Angels realizing, even though Pinker doesn’t say it explicitly, is that somewhere between the second and third waves feminists effectively turned against the very ideas and institutions that had been most instrumental in bringing about the historical improvements in women’s lives from the Middle Ages to the turn of the twenty-first century. And so it is with all the other ideologies on the postmodern roster.

Another misguided propaganda tactic that dogged Pinker’s efforts to identify historical trends in violence can likewise be understood as an instance of inflating the severity of crimes on behalf of a moral ideology—and of the taboo against puncturing the bubble, or vitiating the purity of evil, with evidence and theories of venial motives. As he explains in the preface, “No one has ever recruited activists to a cause by announcing that things are getting better, and bearers of good news are often advised to keep their mouths shut lest they lull people into complacency” (xxii). Here again the objective researcher can’t escape the appearance of trying to minimize the evil, and therefore risks being accused of looking the other way, or even of complicity. But in an earlier section on genocide Pinker provides the quintessential Enlightenment rationale for the clear-eyed scientific approach to studying even the worst atrocities. He writes,

The effort to whittle down the numbers that quantify the misery can seem heartless, especially when the numbers serve as propaganda for raising money and attention. But there is a moral imperative in getting the facts right, and not just to maintain credibility. The discovery that fewer people are dying in wars all over the world can thwart cynicism among compassion-fatigued news readers who might otherwise think that poor countries are irredeemable hellholes. And a better understanding of what drove the numbers down can steer us toward doing things that make people better off rather than congratulating ourselves on how altruistic we are. (320)

This passage can be taken as the underlying argument of the whole book. And it gestures toward some far-reaching ramifications to the idea that exaggerated numbers are a product of the same impulse that causes us to inflate crimes to the status of pure evil.

Could it be that the nearly universal misperception that violence is getting worse all over the world, that we’re doomed to global annihilation, and that everywhere you look is evidence of the breakdown in human decency—could it be that the false impression Pinker set out to correct with Better Angels is itself a manifestation of a natural urge in all of us to seek out evil and aggrandize ourselves by unconsciously overestimating it? Pinker himself never goes so far as to suggest that the mass ignorance of waning violence is a byproduct of an instinct toward self-righteousness. Instead, he writes of the “gloom” about the fate of humanity,

I think it comes from the innumeracy of our journalistic and intellectual culture. The journalist Michael Kinsley recently wrote, “It is a crushing disappointment that Boomers entered adulthood with Americans killing and dying halfway around the world, and now, as Boomers reach retirement and beyond, our country is doing the same damned thing.” This assumes that 5,000 Americans dying is the same damned thing as 58,000 Americans dying, and that a hundred thousand Iraqis being killed is the same damned thing as several million Vietnamese being killed. If we don’t keep an eye on the numbers, the programming policy “If it bleeds it leads” will feed the cognitive shortcut “The more memorable, the more frequent,” and we will end up with what has been called a false sense of insecurity. (296)

Pinker probably has a point, but the self-righteous undertone of Kinsley’s “same damned thing” is unmistakable. He’s effectively saying: I’m such an outstanding moral being that the outrageous evil of the invasion of Iraq is blatantly obvious to me—why isn’t it to everyone else? And that same message seems to underlie most of the statements people make expressing similar sentiments about how the world is going to hell.

            Though Pinker neglects to tie all the strands together, he still manages to suggest that the drive to dominance, ideology, tribal morality, and the Myth of Pure Evil are all facets of the same disastrous flaw in human nature—an instinct for self-righteousness. Progress on the moral front—real progress like fewer deaths, less suffering, and more freedom—comes from something much closer to utilitarian pragmatism than activist idealism. Yet the activist tradition is so thoroughly enmeshed in our university culture that we’re taught to exercise our powers of political righteousness even while engaging in tasks as mundane as reading books and articles. 

            If the decline in violence and the improvement of the general weal in various other areas are attributable to the Enlightenment, then many of the assumptions underlying postmodernism are turned on their heads. If social ills like warfare, racism, sexism, and child abuse exist in cultures untouched by modernism—and they in fact not only exist but tend to be much worse—then science can’t be responsible for creating them. Indeed, if they’ve all trended downward with the historical development of the factors associated with male-dominated Western culture, including strong government, market economies, runaway technology, and scientific progress, then postmodernism not only has everything wrong but threatens the progress achieved by the very institutions it depends on, emerged from, and squanders innumerable scholarly careers maligning.

Of course some Enlightenment figures and some scientists do evil things. Of course living even in the most Enlightened of civilizations is no guarantee of safety. But postmodernism is an ideology based on the premise that we ought to discard a solution to our societal woes for not working perfectly and immediately, substituting instead remedies that have historically caused more problems than they solved by orders of magnitude. The argument that there’s a core to the Enlightenment that some of its representatives have been faithless to when they committed atrocities may seem reminiscent of apologies for Christianity based on the fact that Crusaders and Inquisitors weren’t loving their neighbors as Christ enjoined. The difference is that the Enlightenment works—in just a few centuries it’s transformed the world and brought about a reduction in violence no religion has been able to match in millennia. If anything, the big monotheistic religions brought about more violence.

Embracing Enlightenment morality or classical liberalism doesn’t mean we should give up our efforts to make the world a better place. As Pinker describes the transformation he hopes to encourage with Better Angels,

As one becomes aware of the decline of violence, the world begins to look different. The past seems less innocent; the present less sinister. One starts to appreciate the small gifts of coexistence that would have seemed utopian to our ancestors: the interracial family playing in the park, the comedian who lands a zinger on the commander in chief, the countries that quietly back away from a crisis instead of escalating to war. The shift is not toward complacency: we enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to reduce it, and so we should work to reduce the violence that remains in our time. Indeed, it is a recognition of the decline of violence that best affirms that such efforts are worthwhile. (xxvi)

Our task for the remainder of this century is to extend the reach of science, literacy, and the recognition of universal human rights farther and farther along the Enlightenment gradient, until they’re able to grant the same increasing likelihood of a long, peaceful life to every citizen of every nation of the globe. The key to accomplishing this task lies in fomenting future Rights Revolutions while at the same time recognizing, so as to be better equipped to rein in, our drive for dominance as manifested in our more deadly moral instincts. So I for one am glad Steven Pinker has the courage to violate so many of the outrageously counterproductive postmodern taboos, while having the grace to resist succumbing himself, for the most part, to the temptation of self-righteousness.

Also read:

THE FAKE NEWS CAMPAIGN AGAINST STEVEN PINKER AND ENLIGHTENMENT NOW

And:

THE ENLIGHTENED HYPOCRISY OF JONATHAN HAIDT'S RIGHTEOUS MIND

And:

NAPOLEON CHAGNON'S CRUCIBLE AND THE ONGOING EPIDEMIC OF MORALIZING HYSTERIA IN ACADEMIA

Dennis Junk

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think Like Sherlock Holmes”—with Some Thoughts on Science Education

Maria Konnikova’s book “Mastermind: How to Think Like Sherlock Holmes” got me really excited, because when the science of psychology is brought up in discussions of literature at all, it’s usually the pseudoscience of Sigmund Freud. Konnikova, whose blog went a long way toward remedying that tragedy, wanted to offer up an alternative approach. However, though the book shows great promise, it’s ultimately disappointing.

Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.

            In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knows the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.

            What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.

There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,

the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.

The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s Razor rather than Freud’s at once insanely elaborate and absurdly reductionist ideology, we might guess that what prompted Tarantino to let the camera linger discomfortingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”

The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.

I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible.

Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.

The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether or not a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.

Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.

Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.

In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,

learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)

This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.

It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. One of the most widely known demonstrations of the same principle is that chess experts have much better recall for the positions of pieces on a board than novices do. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?

The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement for curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.

If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes,

How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)

So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980s. People who read a description of a man named Bill suggesting he lacks imagination tended to judge it more likely that Bill was an accountant who plays jazz for a hobby than that he simply plays jazz for a hobby—even though the extra detail in the first option makes it inherently less likely than the second. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People mistook the two-item choice as more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
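The arithmetic behind the error is simple and worth spelling out before turning to how Konnikova handles it; the specific numbers below are invented purely for illustration. Adding a detail to a description can only ever shrink its probability:

\[
P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)
\]

\[
\text{For instance: } P(\text{teller}) = 0.05,\ P(\text{feminist} \mid \text{teller}) = 0.2 \ \Rightarrow\ P(\text{teller} \wedge \text{feminist}) = 0.05 \times 0.2 = 0.01 < 0.05.
\]

However well the fuller description fits our stereotype of Linda, the conjunction can never be more probable than either of its components.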

The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (referring to Tom instead of Bill) can seem more likely, despite the actual probabilities, than shorter ones. He writes,

The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)

So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:

Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)

But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.

As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged based on the likelihood that readers will come away with a greater ability to recognize and avoid cognitive biases, as well as the ability to sustain the conviction to stay motivated and remain alert. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.

Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings in that domain, and these same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly, organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,

How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?

The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates to creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.

Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,

Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)

The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).

That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character casts a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:

Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.

But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:

You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)

Hard to disagree, isn’t it?

To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says he got tricked because he didn’t do what he knew how to do, that it can happen to any of us, and that we should be careful not to let it happen to ourselves. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.

Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. Her tragic blunder was in eschewing nuanced examinations of how stories work, how people relate to characters, or how authors create them, in favor of a shallow and one-dimensional attempt at suggesting a hundred-year-old fictional character somehow divined groundbreaking research findings from the end of the twentieth and beginning of the twenty-first centuries. It calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.

You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.

Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.

A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”

I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.

Also read

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

And

THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS

And

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

Also apropos is

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Read More
Dennis Junk Dennis Junk

Projecting Power, Competing for Life, & Supply Side Math

If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.

Some issues I feel are being skirted in the debates:

1. How the Toughest Guy Projects his Power

The Republican position on national security is that the best way to achieve peace is by “projecting power,” and they are fond of saying that Democrats invite aggression by “projecting weakness.” The idea is that no one will start a fight he knows he won’t win, nor will he threaten to start a fight with someone he knows will call his bluff. This is why Republican presidents often suffer from Cowboy Syndrome.

In certain individual relationships, this type of dynamic actually does establish itself—or rather the dominant individual establishes this type of dynamic. But in the realm of national security we aren’t dealing with individuals. With national security, we’re basically broadcasting the level of respect we have for the rest of the world’s nations. If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.

The idea that shouting out to the world that the US is the toughest country around and we’re ready to prove it is somehow going to deter Al Qaeda militants and others like them is dangerously naïve. We can’t hope for all the nations of the world to fall into some analog of Battered Wife Syndrome. Think about it this way: everyone knows that the heavyweight champion MMA guy is the toughest fighter in the world. If you want to project power, there’s no better way to do it than by winning that belt. Now we have to ask ourselves: Do fewer people want to fight the champion? We might also ask: Does a country like Canada get attacked more because of its lame military?

The very reason organizations like Al Qaeda ever came into existence was that America was projecting its power too much. The strategy of projecting power may as well have been devised by teenage boys—and it continues to appeal to people with that mindset.

2. Supplying more Health and Demanding not to Die

Paul Ryan knows that his voucher system for Medicare is going to run into the problem that increasing healthcare costs will quickly surpass whatever amount is allotted to individuals in the vouchers—that’s the source of the savings the program achieves. But it’s not that he wants to shortchange seniors. Rather, he’s applying a principle from his economic ideology, the one that says the best way to control costs is to make providers compete. If people can shop around, the reasoning goes, they’ll flock toward the provider with the lowest prices—the same way we all do with consumer products. Over time, all the providers have to find ways to become more efficient so they can cut costs and stay in business.

Sounds good, right? But the problem is that healthcare services aren’t anything like consumer goods. Supply and demand doesn’t work in the realm of life and death. Maybe, before deciding which insurance company should get our voucher, we’ll do some research. But how do you know what types of services you’re going to need before you sign up? You’re not going to find out that your plan doesn’t cover the service you need until you need the service. And at that point the last thing you’re going to want to do is start shopping around again. Think about it: people shop around for private insurance now. Are insurance companies paragons of efficiency?

Another problem is that you can’t shop around to find better services once industry standards have set in. For example—if you don’t like how impersonal your cell phone service is, can you just drop your current provider and go to another? If you do, you’re just going to run into the same problem again. What’s the lowest price you can pay for cable or internet services? The reason Comcast and Dish Network keep going back and forth with their commercials about whose service is better is that there is fundamentally very little difference.

Finally, insurance is so complicated that only people who can afford accountants or financial advisors, only people who are educated and have the time to research their options, basically only people with resources are going to be able to make prudent decisions. This is why the voucher system, over time, is just going to lead to further disadvantages for the poor and uneducated, bring about increased inequality, and exacerbate all the side effects of inequality, like increased violent crime.

3. Demand Side Never Shows up for the Debate

The reason Romney and Ryan aren’t specifying how they’re going to pay for their tax cuts, while at the same time increasing the budget for the military, while at the same time decreasing the deficit, is that they believe, again based on their economic ideology, that the tax cuts will automatically lead to economic growth. The reasoning is that if people have more money after taxes, they’ll be more likely to spend it. This includes business owners who will put the money toward expanding their businesses, which of course entails hiring new workers. All this cycles around to more money for everyone, more people paying that smaller percentage but on larger incomes, so more revenue comes in, and now we can sit back and watch the deficit go down. This is classic supply side economics.

Sounds good, right? The problem is that businesses only invest in expansion when there’s increasing demand for their products or services, and the tax cuts for lower earners won’t be enough to significantly increase that demand. If there's no demand, rich people don't invest and hire; they buy bigger houses and such. The supply side theory has been around for a long time—and it simply doesn’t work. The only reliable outcome of supply side policies is increasing wealth inequality.

What works is increasing demand—that’s demand side economics. You do this by investing in education, public resources, and infrastructure. Those construction workers building roads and bridges and maintaining parks and monuments get jobs when their companies are hired by the government—meaning they get paid with our tax money. Of course, they get taxed on it, thus helping to support more projects. Meanwhile, unemployment goes down by however many people are hired. These people have more income, and thus create more demand. The business owners expand their businesses—hire more people. As the economy grows, the government can scale back its investment.
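
To make the arithmetic of that cycle concrete, here is a toy sketch of the standard spending-multiplier logic in Python. The $100 of initial spending and the 0.8 re-spending rate are my own illustrative assumptions, not figures from this post; the point is only to show how rounds of re-spending add up.

initial_spending = 100.0   # the government hires a road crew for $100
respend_rate = 0.8         # assume each recipient re-spends 80 cents of every new dollar

total_demand = 0.0
round_spending = initial_spending
for _ in range(50):                 # 50 rounds is plenty for the series to converge
    total_demand += round_spending  # add this round's spending to total demand
    round_spending *= respend_rate  # the next round re-spends 80% of it

print(round(total_demand))                    # ~500: the original $100 supports roughly $500 of demand
print(initial_spending / (1 - respend_rate))  # the same answer from the closed-form multiplier, 1/(1 - 0.8)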

Demand side economics can also focus on human capital, including healthcare, because it’s hard to work when you’re sick or dying, and you’re not going to be creating any demand when you’re bankrupt from hospital and insurance payments. Government can also help the economy by investing in people’s education, because educated people tend to get better jobs, make more money, and, wait for it, create more demand. (Not to mention innovation.) Job training can work the same way.

Supply side versus demand side is at the heart of most policy debates. The supply side ideology has all kinds of popular advocates, from Ayn Rand to Rush Limbaugh. The demand siders seem more mum, but that might just be because I live in Indiana. In any case, the demand siders have much better evidence supporting their ideas, even though they lose in terms of rhetoric, since the knee-jerk response to their ideas is to (stupidly, inaccurately) label them socialist. As Bill Clinton pointed out and the fact checkers corroborated, Democrats do a better job creating jobs.

4. Climate Change?

Also read:
TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY

THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY

WHAT'S WRONG WITH THE DARWIN ECONOMY?

FROM RAGS TO REPUBLICAN

Read More
Dennis Junk Dennis Junk

Freud: The Falsified Cipher

Upon entering a graduate program in literature, I was appalled to find that Freud’s influence was alive and well in the department. Didn’t they know that nearly all of Freud’s theories have been disproven? Didn’t they know psychoanalysis is pseudoscience?

[As I'm hard at work on a story, I thought I'd post an essay from my first course as a graduate student on literary criticism. It was in the fall of 2009, and I was shocked and appalled that not only were Freud's ideas still being taught but there was no awareness whatsoever that psychology had moved beyond them. This is my attempt at righting the record while keeping my tone in check.]

The matter of epistemology in literary criticism is closely tied to the question of what end the discipline is supposed to serve. How critics decide what standard of truth to adhere to is determined by the role they see their work playing, both in academia and beyond. Freud stands apart as a literary theorist, professing in his works a commitment to scientific rigor in a field that generally holds belief in even the possibility of objectivity as at best naïve and at worst bourgeois or fascist. For the postmodernists, both science and literature are suspiciously shot through with the ideological underpinnings of capitalist European male hegemony, which they take as their duty to undermine. Their standard of truth, therefore, seems to be whether a theory or application effectively exposes one or another element of that ideology to “interrogation.” Admirable as the values underlying this patently political reading of texts are, the science-minded critic might worry lest such an approach merely lead straight back to the a priori assumptions from which it set forth. Now, a century after Freud revealed the theory and practice of psychoanalysis, his attempt to interpret literature scientifically seems like one possible route of escape from the circularity (and obscurantism) of postmodernism. Unfortunately, Freud’s theories have suffered multiple devastating empirical failures, and Freud himself has been shown to be less a committed scientist than an ingenious fabulist, but it may be possible to salvage from the failures of psychoanalysis some key to a viable epistemology of criticism.

A text dating from early in the development of psychoanalysis shows both the nature of Freud’s methods and some of the most important substance of his supposed discoveries. Describing his theory of the Oedipus complex in The Interpretation of Dreams, Freud refers vaguely to “observations on normal children,” to which he compares his experiences with “psychoneurotics” to arrive at his idea that both display, to varying degrees, “feelings of love and hatred to their parents” (920). There is little to object to in this rather mundane observation, but Freud feels compelled to write that his discovery is confirmed by a legend,

…a legend whose profound and universal power to move can only be understood if the hypothesis I have put forward in regard to the psychology of children has an equally universal validity. (920)

He proceeds to relate the Sophocles drama from which his theory gets its name. In the story, Oedipus is tricked by fate into killing his father and marrying his mother. Freud takes this as evidence that the love and hatred he has observed in children are of a particular kind. According to his theory, any male child is fated to “direct his first sexual impulse towards his mother” and his “first murderous wish against his father” (921). But Freud originally poses this idea as purely hypothetical. What settles the issue is evidence he gleans from dream interpretations. “Our dreams,” he writes, “convince us that this is so” (921). Many men, it seems, confided to him that they dreamt of having sex with their mothers and killing their fathers.

Freud’s method, then, was to seek a thematic confluence between men’s dreams, the stories they find moving, and the behaviors they display as children, which he knew mostly through self-reporting years after the fact. Indeed, the entire edifice of psychoanalysis is purported to have been erected on this epistemic foundation. In a later essay on “The Uncanny,” Freud makes the sources of his ideas even more explicit. “We know from psychoanalytic experience,” he writes, “that the fear of damaging or losing one’s eyes is a terrible one in children” (35). A few lines down, he claims that, “A study of dreams, phantasies and myths has taught us that anxiety about one’s eyes…is a substitute for the dread of being castrated” (36). Here he’s referring to another facet of the Oedipus complex which theorizes that the child keeps his sexual desire for his mother in check because of the threat of castration posed by his jealous father. It is through this fear of his father, which transforms into grudging respect, and then into emulation, that the boy learns his role as a male in society. And it is through the act of repressing his sexual desire for his mother that he first develops his unconscious, which will grow into a general repository of unwanted desires and memories (Eagleton 134).

But what led Freud to this theory of repression, which suggests that we have the ability to willfully forget troubling incidents and drive urges to some portion of our minds to which we have no conscious access? He must have arrived at an understanding of this process in the same stroke that led to his conclusions about the Oedipus complex, because, in order to put forth the idea that as children we all hated one parent and wanted to have sex with the other, he had to contend with the fact that most people find the idea repulsive. What accounts for the dramatic shift between childhood desires and those of adults? What accounts for our failure to remember the earlier stage? The concept of repression had to be firmly established before Freud could make such claims. Of course, he could have simply imported the idea from another scientific field, but there is no evidence he did so. So it seems that he relied on the same methods—psychoanalysis, dream interpretation, and the study of myths and legends—to arrive at his theories as he did to test them. Inspiration and confirmation were one and the same.

Notwithstanding Freud’s claim that the emotional power of the Oedipus legend “can only be understood” if his hypothesis about young boys wanting to have sex with their mothers and kill their fathers has “universal validity,” there is at least one alternative hypothesis which has the advantage of not being bizarre. It could be that the point of Sophocles’s drama was that fate is so powerful it can bring about exactly the eventualities we most desire to avoid. What moves audiences and readers is not any sense of recognition of repressed desires, but rather compassion for the man who fell into this most horrible of traps despite, even because of, his heroic efforts. (Should we assume that the enduring popularity of W.W. Jacobs’s story, “The Monkey’s Paw,” which tells a similarly fate-driven story about a couple who inadvertently wish their son dead, proves that all parents want to kill their children?) The story could be moving because it deals with events we would never want to happen. It is true, however, that this hypothesis fails to account for why people enjoy watching such a tragedy being enacted—but then so does Freud’s. If we have spent our conscious lives burying the memory of our childhood desires because they are so unpleasant to contemplate, it makes little sense that we should find pleasure in seeing those desires acted out on stage. And assuming this alternative hypothesis is at least as plausible as Freud’s, we are left with no evidence from the legend to support his theory of repressed childhood desires.

To be fair, Freud did look beyond the dreams and myths of men of European descent to test the applicability of his theories. In his book Totem and Taboo he inventories “savage” cultures and adduces the universality among them of a taboo against incest as further proof of the Oedipus complex. He even goes so far as to cite a rival theory put forth by a contemporary:

Westermarck has explained the horror of incest on the ground that “there is an innate aversion to sexual intercourse between persons living very closely together from early youth, and that, as such persons are in most cases related by blood, this feeling would naturally display itself in custom and law as a horror of intercourse between near kin.” (152)

To dismiss Westermarck’s theory, Freud cites J. G. Frazer, who argues that laws exist only to prevent us from doing things we would otherwise do or prod us into doing what we otherwise would not. That there is a taboo against incest must therefore signal that there is no innate aversion to it, but rather a proclivity for it. Here it must be noted that the incest Freud had in mind includes not just lust for the mother but for sisters as well. “Psychoanalysis has taught us,” he writes, again vaguely referencing his clinical method, “that a boy’s earliest choice of objects for his love is incestuous and that those objects are forbidden ones—his mother and sister” (22). Frazer’s argument is compelling, but Freud’s test of the applicability of his theories is not the same as a test of their validity (though it seems customary in literary criticism to conflate the two).

As linguist and cognitive neuroscientist Steven Pinker explains in How the Mind Works, in tests of validity Westermarck beats Freud hands down. Citing the research of Arthur Wolf, he explains that without setting out to do so, several cultures have conducted experiments on the nature of incest aversion. Israeli kibbutzim, in which children grew up in close proximity to several unrelated agemates, and the Chinese and Taiwanese practice of adopting future brides for sons and raising them together as siblings are just two of the arrangements Wolf examined. When children from the kibbutzim reached sexual maturity, even though there was no discouragement from adults for them to date or marry, they showed a marked distaste for each other as romantic partners. And compared to more traditional marriages, those in which the bride and groom grew up in conditions mimicking siblinghood were overwhelmingly “unhappy, unfaithful, unfecund, and short” (459). The effect of proximity in early childhood seems to apply to parents as well, at least when it comes to fathers’ sexual feelings for their daughters. Pinker cites research showing that the fathers who sexually abuse their daughters tend to be the ones who have spent the least time with them as infants, while the stepdads who actually do spend a lot of time with their stepdaughters are no more likely to abuse them than biological fathers are. These studies not only favor Westermarck’s theory; they also provide a counter to Frazer’s objection to it. Human societies are so complex that we often grow up in close proximity with people who are unrelated, or don’t grow up with people who are, and therefore it is necessary for there to be a cultural proscription—a taboo—against incest in addition to the natural mechanism of aversion.

Among biologists and anthropologists, what is now called the Westermarck effect has displaced Freud’s Oedipus complex as the best explanation for incest avoidance. Since Freud’s theory of childhood sexual desires has been shown to be false, the question arises of where this leaves his concept of repression. According to literary critic—and critic of literary criticism—Frederick Crews, repression came to serve in the 1980’s and 90’s a role equivalent to the “spectral evidence” used in the Salem witch trials. Several psychotherapists latched on to the idea that children can store reliable information in their memories, especially when that information is too terrible for them to consciously handle. And the testimony of these therapists has led to many convictions and prison sentences. But the evidence for this notion of repression is solely clinical—modern therapists base their conclusions on interactions with patients, just as Freud did. Unfortunately, researchers outside the clinical setting are unable to find any phenomenon answering to the description of repressed but retrievable memories. Crews points out that there are plenty of people who are known to have survived traumatic experiences: “Holocaust survivors make up the most famous class of such subjects, but whatever group or trauma is chosen, the upshot of well-conducted research is always the same” (158). That upshot:

Unless a victim received a physical shock to the brain or was so starved or sleep deprived as to be thoroughly disoriented at the time, those experiences are typically better remembered than ordinary ones. (159, emphasis in original)

It seems here, as with incest aversion, Freud got the matter exactly wrong—and with devastating fallout for countless families and communities. But Freud was vague when it came to whether it was memories of actual events that were repressed or just fantasies. The crux of his argument was that we repress unacceptable and inappropriate drives and desires.

And the concept of repressed desires is integral to the use of psychoanalysis in literary criticism. In The Interpretation of Dreams, Freud distinguishes between the manifest content of dreams and their latent content. Having been exiled from consciousness, troublesome desires press against the bounds of the ego, Freud’s notional agent in charge of tamping down uncivilized urges. In sleep, the ego relaxes, allowing the desires of the id, from whence all animal drives emerge, an opportunity for free play. Even in dreams, though, full transparency of the id would be too disconcerting for the conscious mind to accept, so the ego disguises all the elements which surface with a kind of code. Breaking this code is the work of psychoanalytic dream interpretation. It is also the basis for Freud’s analysis of myths and the underlying principle of Freudian literary criticism. (In fact, the distinction between manifest and latent content is fundamental to many schools of literary criticism, though they each have their own version of the true nature of the latent content.) Science writer Steven Johnson compares Freud’s conception of repressed impulses to compressed gas seeping through the cracks of the ego’s defenses, emerging as slips of the tongue or baroque dream imagery. “Build up enough pressure in the chamber, though, and the whole thing explodes—into uncontrolled hysteria, anxiety, madness” (191). The release of pressure, as it were, through dreams and through various artistic media, is sanity-saving.

Johnson’s book, Mind Wide Open: Your Brain and the Neuroscience of Everyday Life, takes the popular currency of Freud’s ideas as a starting point for his exploration of modern science. The subtitle is a homage to Freud’s influential work The Psychopathology of Everyday Life. Perhaps because he is not a working scientist, Johnson is able to look past the shaky methodological foundations of psychoanalysis and examine how accurately its tenets map onto the modern findings of neuroscience. Though he sees areas of convergence, like the idea of psychic conflict and that of the unconscious in general, he has to admit in his conclusion that “the actual unconscious doesn’t quite look like the one Freud imagined” (194). Rather than a repository of repressed fantasies, the unconscious is more of a store of implicit, or procedural, knowledge. Johnson explains, “Another word for unconscious is ‘automated’—the things you do so well you don’t even notice doing them” (195). And what happens to all the pressurized psychic energy resulting from our repression of urges? “This is one of those places,” Johnson writes, “where Freud’s metaphoric scaffolding ended up misleading him” (198). Instead of a steam engine, neuroscientists view the brain as a type of ecosystem, with each module competing for resources; if a module goes unused—its neurons failing to fire—then the strength of their connections diminishes.

What are the implications of this new conception of how the mind works for the interpretation of dreams and works of art? Without the concept of repressed desires, is it still possible to maintain a distinction between the manifest and latent content of mental productions? Johnson suggests that there are indeed meaningful connections that can be discovered in dreams and slips of the tongue. To explain them, he points again to the neuronal ecosystem, and to the theory that “Neurons that fire together wire together.” He writes:

These connections are not your unconscious speaking in code. They’re much closer to free-associating. These revelations aren’t the work of some brilliant cryptographer trying to get a message to the frontlines without enemy detection. They’re more like echoes, reverberations. One neuronal group fires, and a host of others join in the chorus. (200-201)

Mind Wide Open represents Johnson’s attempt to be charitable to the century-old, and now popularly recognized, ideas of psychoanalysis. But in this description of the shortcomings of Freud’s understanding of the unconscious and how it reveals itself, he effectively discredits the epistemological underpinnings of any application of psychoanalysis to art. It’s not only the content of the unconscious that Freud got outrageously wrong, but the very nature of its operations. And if Freud could so confidently look into dreams and myths and legends and find in them material that simply wasn’t there, it is cause for us to marvel at the power of his preconceptions to distort his perceptions.

Ultimately, psychoanalysis failed to move from the realm of proto-science to that of methodologically well-founded science, and was instead relegated to the backwater of pseudoscience by the hubris of its founder. And yet, if Freud had relied on good science, his program of interpreting literature in terms of the basic themes of human nature, and even his willingness to let literature inform his understanding of those themes, might have matured into a critical repertoire free of the obscurantist excesses and reality-denying absurdities of postmodernism. (Anthropologist Clifford Geertz once answered a postmodernist critic of his work by acknowledging that perfect objectivity is indeed impossible, but then so is a perfectly germ-free operating room; that shouldn’t stop us from trying to be as objective and as sanitary as our best methods allow.)

            Critics could feasibly study the production of novels by not just one or a few authors, but a large enough sample—possibly extending across cultural divides—to analyze statistically. They could pose questions systematically to even larger samples of readers. And they could identify the themes in any poem or novel which demonstrate the essential (in the statistical sense) concerns of humanity that have been studied by behavioral scientists, themes like status-seeking, pair-bonding, jealousy, and even the overwhelming strength of the mother-infant bond. “The human race has produced only one successfully validated epistemology,” writes Frederick Crews (362). That epistemology encompasses a great variety of specific research practices, but they all hold as inviolable the common injunction “to make a sharp separation between hypothesis and evidence” (363). Despite his claims to scientific legitimacy, Freud failed to distinguish himself from other critical theorists because he relied too much on his own intuitive powers, a reliance that all but guarantees succumbing to the natural human tendency to discover in complex fields precisely what you’ve come to them seeking.

Also read:

WHY SHAKESPEARE NAUSEATED DARWIN: A REVIEW OF KEITH OATLEY'S "SUCH STUFF AS DREAMS"

NICE GUYS WITH NOTHING TO SAY: BRETT MARTIN’S DIFFICULTY WITH “DIFFICULT MEN” AND THE FAILURE OF ARTS SCHOLARSHIP

GETTING GABRIEL WRONG: PART 1 OF 3

Read More
Dennis Junk Dennis Junk

The Mental Illness Zodiac: Why the DSM 5 Won't Be Anything But More Pseudoscience

That the diagnostic categories are necessarily ambiguous and can’t be tied to any objective criteria like biological markers has been much discussed, as have the corruptions of the mental health industry, including clinical researchers who make their livings treating the same disorders they lobby to have included in the list of official diagnoses. What’s not being discussed, however, is the propensity in humans to take on roles, to play parts, even tragic ones, even horrific ones, without being able to recognize they’re doing so.

            Thinking you can diagnose psychiatric disorders using checklists of symptoms means taking for granted a naïve model of the human mind and human behavior. How discouraging to those in emotional distress, or to those doubting their own sanity, that the guides they turn to for help and put their faith in to know what’s best for them embrace this model. The DSM has taken it for granted since its inception, and the latest version, the DSM 5, due out next year, despite all the impediments to practical usage it does away with, despite all the streamlining, and despite all the efforts to adhere to common sense, only perpetuates the mistake. That the diagnostic categories are necessarily ambiguous and can’t be tied to any objective criteria like biological markers has been much discussed, as have the corruptions of the mental health industry, including pharmaceutical companies’ reluctance to publish failed trials for their blockbuster drugs, and clinical researchers who make their livings treating the same disorders they lobby to have included in the list of official diagnoses. Indeed, there’s good evidence that prognoses for mental disorders have actually gotten worse over the past century. What’s not being discussed, however, is the propensity in humans to take on roles, to play parts, even tragic ones, even horrific ones, without being able to recognize they’re doing so.

            In his lighthearted, mildly satirical, but seriously important book on self-improvement, 59 Seconds: Change Your Life in Under a Minute, psychologist Richard Wiseman describes an experiment he conducted for the British TV show The People Watchers. A group of students spending an evening in a bar with their friends was given a series of tests, and then they were given access to an open bar. The tests included memorizing a list of numbers, walking along a line on the floor, and catching a ruler dropped by experimenters as quickly as possible. Memory, balance, and reaction time—all areas in which our performance predictably diminishes as we drink. The outcomes of the tests, which were repeated over the course of the evening, were well in keeping with expectations. All the students did progressively worse the more they drank. And the effects of the alcohol were consistent throughout the entire group of students. It turns out, however, that only half of them were drinking alcohol.

At the start of the study, Wiseman had given half the participants a blue badge and the other half a red badge. The bartenders poured regular drinks for everyone with red badges, but for those with blue ones they made drinks which looked, smelled, and tasted like their alcoholic counterparts but were actually non-alcoholic. Now, were the students with the blue badges faking their drunkenness? They may have been hamming it up for the cameras, but that would be true of the ones who were actually drinking too. What they were doing instead was taking on the role—you might even say taking on the symptoms—of being drunk. As Wiseman explains,

Our participants believed that they were drunk, and so they thought and acted in a way that was consistent with their beliefs. Exactly the same type of effect has emerged in medical experiments when people exposed to fake poison ivy developed genuine rashes, those given caffeine-free coffee became more alert, and patients who underwent a fake knee operation reported reduced pain from their “healed” tendons. (204)

After being told they hadn’t actually consumed any alcohol, the students in the blue group “laughed, instantly sobered up, and left the bar in an orderly and amused fashion.” But not all the natural role-playing humans engage in is this innocuous and short-lived.

            In placebo studies like the one Wiseman conducted, participants are deceived. You could argue that actually drinking a convincing replica of alcohol or taking a realistic-looking pill is the important factor behind the effects. People who seek treatment for psychiatric disorders aren’t tricked in this way, so what would cause them to take on the role associated with, say, depression or bipolar? But plenty of research shows that pills or potions aren’t necessary. We take on different roles in different settings and circumstances all the time. We act much differently at football games and rock concerts than we do at work or school. These shifts are deliberate, though, and we’re aware of them, at least to some degree, when they occur. But many cues are more subtle. It turns out that just being made aware of the symptoms of a disease can make you suspect that you have it. What’s called Medical Student Syndrome afflicts those studying both medical and psychiatric diagnoses. For the most part, you either have a biological disease or you don’t, so the belief that you have one is contingent on the heightened awareness that comes from studying the symptoms. But is there a significant difference between believing you’re depressed and having depression? The answer, according to checklist diagnosis, is no.

            In America, we all know the symptoms of depression because we’re bombarded with commercials, like the one that uses squiggly circle faces to explain that it’s caused by a deficit of the neurotransmitter serotonin—a theory that had already been ruled out by the time that commercial began to air. More insidious, though, are the portrayals of psychiatric disorders in movies, TV series, or talk shows—more insidious because they embed the role-playing instructions in compelling stories. These shows profess to be trying to raise awareness so more people will get help to end their suffering. They profess to be trying to remove the stigma so people can talk about their problems openly. They profess to be trying to help people cope. But, from a perspective of human behavior that acknowledges the centrality of role-playing to our nature, what all these shows are actually doing is shilling for the mental health industry, and they are probably helping to cause much of the suffering they claim to be trying to assuage.

            Multiple Personality Disorder, or Dissociative Identity Disorder as it’s now called, was an exceedingly rare diagnosis until the late 1970s and early 1980s when its incidence spiked drastically. Before the spike, there were only ever around a hundred cases. Between 1985 and 1995, there were around 40,000 new cases. What happened? There was a book and a miniseries called Sybil starring Sally Field that aired in 1977. Much of the real-life story on which Sybil was based has been cast into doubt through further investigation (or has been shown to be completely fabricated). But if you’re one to give credence to the validity of the DID diagnosis (and you shouldn’t), then we can look at another strange behavioral phenomenon whose incidence spiked after a certain movie hit the box offices in the 1970’s. Prior to the release of The Exorcist, the Catholic church had pretty much consigned the eponymous ritual to the dustbins of history. Lately, though, they’ve had to dust it off.

The Skeptic’s Dictionary says of a TV series on the Sci-Fi Channel devoted to the exorcism ritual, or rather the play-acting of it,

The exorcists' only prop is a Bible, which is held in one hand while they talk down the devil in very dramatic episodes worthy of Jerry Springer or Jenny Jones. The “possessed” could have been mentally ill, actors, mentally ill actors, drug addicts, mentally ill drug addicts, or they may have been possessed, as the exorcists claimed. All the participants shown being exorcized seem to have seen the movie “The Exorcist” or one of the sequels. They all fell into the role of husky-voiced Satan speaking from the depths, who was featured in the film. The similarities in speech and behavior among the “possessed” has led some psychologists such as Nicholas Spanos to conclude that both “exorcist” and “possessed” are engaged in learned role-playing.

If people can somehow inadvertently fall into the role of having multiple personalities or being possessed by demons, it’s not hard to imagine them hearing about, say, bipolar, briefly worrying that they may have some of the symptoms, and then subsequently taking on the role, even the identity of someone battling bipolar disorder.

            Psychologist Dan McAdams theorizes that everyone creates his or her own “personal myth,” which serves to give life meaning and trajectory. The character we play in our own myth is what we recognize as our identity, what we think of when we try to answer the question “Who am I?” in all its profundity. But, as McAdams explains in The Stories We Live By: Personal Myths and the Making of the Self,

Stories are less about facts and more about meanings. In the subjective and embellished telling of the past, the past is constructed—history is made. History is judged to be true or false not solely with respect to its adherence to empirical fact. Rather, it is judged with respect to such narrative criteria as “believability” and “coherence.” There is a narrative truth in life that seems quite removed from logic, science, and empirical demonstration. It is the truth of a “good story.” (28-9)

The problem when it comes to diagnosing psychiatric disorders is that the checklist approach tries to use objective, scientific criteria, when the only answers it will ever get will be in terms of narrative criteria. But why, if people are prone to taking on roles, wouldn’t they take on something pleasant, like being kings or princesses?

            Since our identities are made up of the stories we tell about ourselves—even to ourselves—it’s important that those stories be compelling. And if nothing ever goes wrong in the stories we tell, well, they’d be pretty boring. As Jonathan Gottschall writes in The Storytelling Animal: How Stories Make Us Human,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television. (171)

Listen to the way a talk show host like Oprah talks about mental disorders, and count how many times in an episode she congratulates the afflicted guests for their bravery in keeping up the struggle. Sometimes, the word hero is even bandied about. Troublingly, the people who cast themselves as heroes spreading awareness, countering stigmas, and helping people cope even like to do really counterproductive things like publishing lists of celebrities who supposedly suffer from the disorder in question. Think you might have bipolar? Kay Redfield Jamison thinks you’re in good company. In her book Touched with Fire, she suggests everyone from rocker Kurt Cobain to fascist Mel Gibson is in that same boat full of heroes.

            The reason medical researchers insist a drug must not only be shown to make people feel better but must also be shown to work better than a placebo is that even a sham treatment will make people report feeling better between 60 and 90% of the time, depending on several well-documented factors. What psychiatrists fail to acknowledge is that the placebo dynamic can be turned on its head—you can give people illnesses, especially mental illnesses, merely by suggesting they have the symptoms—or even by increasing their awareness of and attention to those symptoms past a certain threshold. If you tell someone a fact about themselves, they’ll usually believe it, especially if you claim a test or an official diagnostic manual allowed you to determine it. This is how frauds convince people they’re psychics. An experiment you can do yourself involves giving horoscopes to a group of people and asking how true they ring. After most of them endorse their reading, reveal that you switched the labels and that they all in fact read the wrong sign’s description.

            Psychiatric diagnoses, to be considered at all valid, would need to be double-blind, just like drug trials: the patient shouldn’t know the diagnosis being considered; the rater shouldn’t know the diagnosis being considered; only a final scorer, who has no contact with the patient, should determine the diagnosis. The categories themselves are, however, equally problematic. In order to be properly established as valid, they need to have predictive power. Trials would have to be conducted in which subjects assigned to the prospective categories using double-blind protocols were monitored for long periods of time to see if their behavior adheres to what’s expected of the disorder. For instance, bipolar is supposedly marked by cyclical mood swings. Where are the mood diary studies? (The last time I looked for them was six months ago, so if you know of any, please send a link.) Smart phones offer all kinds of possibilities for monitoring and recording behaviors. Why aren’t they being used to do actual science on mental disorders?
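
For what it’s worth, here is a minimal sketch in Python of the kind of mood-diary check this paragraph is asking for. The sixty days of ratings and the ten-day cycle are invented for illustration, not data from any study; the point is only that a phone-logged mood series can be tested against what a diagnosis predicts.

from statistics import mean

def autocorrelation(series, lag):
    # How strongly does the series resemble itself shifted by `lag` days?
    m = mean(series)
    overlap = sum((series[i] - m) * (series[i + lag] - m) for i in range(len(series) - lag))
    total = sum((x - m) ** 2 for x in series)
    return overlap / total

# Hypothetical phone-logged mood ratings (1-10), one per day for sixty days
mood_log = [5, 6, 7, 8, 7, 6, 5, 4, 3, 4] * 6

# A clearly cyclical record scores high at the hypothesized cycle length;
# a flat or random record would not.
print(autocorrelation(mood_log, lag=10))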

            To research the role-playing dimension of mental illness, one (completely unethical) approach would be to design from scratch a really bizarre disorder, publicize its symptoms, maybe make a movie starring Mel Gibson, and monitor incidence rates. Let’s call it Puppy Pregnancy Disorder. We all know dog saliva is chock-full of gametes, right? So, let’s say the disorder is caused when a canine, in a state of sexual arousal of course, bites the victim, thus impregnating her—or even him. Let’s say it affects men too. Wouldn’t that be funny? The symptoms would be abdominal pain, and something just totally out there, like, say, small pieces of puppy feces showing up in your urine. Now, this might be too outlandish, don’t you think? There’s no way we could get anyone to believe this. Unfortunately, I didn’t really make this up. And there are real people in India who believe they have Puppy Pregnancy Disorder.

Also read:


THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS 

And:

THE SELF-TRANSCENDENCE PRICE TAG: A REVIEW OF ALEX STONE'S FOOLING HOUDINI

Read More
Dennis Junk Dennis Junk

Percy Fawcett’s 2 Lost Cities

David Grann’s “The Lost City of Z,” about Percy Fawcett’s expeditions to find a legendary pre-Columbian city, is an absolute joy to read. But it raises questions about what it is we hope our favorite explorers find in the regrown jungles.

In his surprisingly profound, insanely fun book The Lost City of Z: A Tale of Deadly Obsession in the Amazon, David Grann writes about his visit to a store catering to outdoorspeople in preparation for his trip to research, and to some degree retrace, the last expedition of renowned explorer Percy Fawcett. Grann, a consummate New Yorker, confesses he’s not at all the outdoors type, but once he’s on the trail of a story he does manifest a certain few traits in common with adventurers like Fawcett. Wandering around the store after having been immersed in the storied history of the Royal Geographical Society, Grann observes,

Racks held magazines like Hooked on the Outdoors and Backpacker: The Outdoors at Your Doorstep, which had articles titled “Survive a Bear Attack!” and “America’s Last Wild Places: 31 Ways to Find Solitude, Adventure—and Yourself.” Wherever I turned, there were customers, or “gear heads.” It was as if the fewer opportunities for genuine exploration, the greater the means were for anyone to attempt it, and the more baroque the ways—bungee cording, snowboarding—that people found to replicate the sensation. Exploration, however, no longer seemed aimed at some outward discovery; rather, it was directed inward, to what guidebooks and brochures called “camping and wilderness therapy” and “personal growth through adventure.” (76)

Why do people feel such a powerful attraction to wilderness? And has there really been a shift from outward to inward discovery at the heart of our longings to step away from the paved roads and noisy bustle of civilization? As the element of the extreme makes clear, part of the pull comes from the thrill of facing dangers of one sort or another. But can people really be wired in such a way that many of them are willing to risk dying for the sake of a brief moment of accelerated heart-rate and a story they can lovingly exaggerate into their old age?

The catalogue of dangers Fawcett and his companions routinely encountered in the Amazon is difficult to read about without experiencing a viscerally unsettling glimmer of the sensations associated with each affliction. The biologist James Murray, who had accompanied Ernest Shackleton on his mission to Antarctica in 1907, joined Fawcett’s team for one of its journeys into the South American jungle four years later. This much different type of exploration didn’t turn out nearly as well for him. One of Fawcett’s sturdiest companions, Henry Costin, contracted malaria on that particular expedition and became delirious with the fever. “Murray, meanwhile,” Grann writes,

seemed to be literally coming apart. One of his fingers grew inflamed after brushing against a poisonous plant. Then the nail slid off, as if someone had removed it with pliers. Then his right hand developed, as he put it, a “very sick, deep suppurating wound,” which made it “agony” even to pitch his hammock. Then he was stricken with diarrhea. Then he woke up to find what looked like worms in his knee and arm. He peered closer. They were maggots growing inside him. He counted fifty around his elbow alone. “Very painful now and again when they move,” Murray wrote. (135)

The thick clouds of mosquitoes leave every traveler pocked and swollen and nearly all of them get sick sooner or later. On these journeys, according to Fawcett, “the healthy person was regarded as a freak, an exception, extraordinary” (100). This observation was somewhat boastful; Fawcett himself remained blessedly immune to contagion throughout most of his career as an explorer.

Hammocks are required at night to avoid poisonous or pestilence-carrying ants. Pit vipers abound. The men had to sleep with nets draped over them to ward off the incessantly swarming insects. Fawcett and his team even fell prey to vampire bats. “We awoke to find our hammocks saturated with blood,” he wrote, “for any part of our persons touching the mosquito-nets or protruding beyond them were attacked by the loathsome animals” (127). Such wounds, they knew, could spell their doom the next time they waded into the water of the Amazon. “When bathing,” Grann writes, “Fawcett nervously checked his body for boils and cuts. The first time he swam across the river, he said, ‘there was an unpleasant sinking feeling in the pit of my stomach.’ In addition to piranhas, he dreaded candirus and electric eels, or puraques” (91).

Candirus are tiny catfish notorious for squirming their way up human orifices like the urethra, where they remain lodged to parasitize the bloodstream (although this tendency of theirs turns out to be a myth). But piranhas and eels aren’t even the most menacing monsters in the Amazon. As Grann writes,

One day Fawcett spied something along the edge of the sluggish river. At first it looked like a fallen tree, but it began undulating toward the canoes. It was bigger than an electric eel, and when Fawcett’s companions saw it they screamed. Fawcett lifted his rifle and fired at the object until smoke filled the air. When the creature ceased to move, the men pulled a canoe alongside it. It was an anaconda. In his reports to the Royal Geographical Society, Fawcett insisted that it was longer than sixty feet. (92)

This was likely an exaggeration, since the longest documented anaconda measured just under 27 feet, and yet the men considered their mission a scientific one and so would’ve striven for objectivity. Fawcett even unsheathed his knife to slice off a piece of the snake’s flesh for a specimen jar, but as he broke the skin it jolted back to life and made a lunge at the men in the canoe, who panicked and pulled desperately at the oars. Fawcett couldn’t convince his men to return for another attempt.

Though Fawcett had always been fascinated by stories of hidden treasures and forgotten civilizations, the ostensible purpose of his first trip into the Amazon Basin was a surveying mission. As an impartial member of the British Royal Geographical Society, he’d been commissioned by the Bolivian and Brazilian governments to map out their borders so they could avoid a land dispute. But over time another purpose began to consume Fawcett. “Inexplicably,” he wrote, “amazingly—I knew I loved that hell. Its fiendish grasp had captured me, and I wanted to see it again” (116). In 1911, the archeologist Hiram Bingham, with the help of local guides, discovered the colossal ruins of Machu Picchu high in the Peruvian Andes. News of the discovery “fired Fawcett’s imagination” (168), according to Grann, and he began cobbling together evidence he’d come across in the form of pottery shards and local folk histories into a theory about a lost civilization deep in the Amazon, in what many believed to be a “counterfeit paradise,” a lush forest that seemed abundantly capable of sustaining intense agriculture but in reality could only support humans who lived in sparsely scattered tribes.

Percy Harrison Fawcett’s character was in many ways an embodiment of some of the most paradoxical currents of his age. A white explorer determined to conquer unmapped regions, he was nonetheless appalled by his fellow Englishmen’s treatment of indigenous peoples in South America. At the time, rubber was for the Amazon what ivory was for the Belgian Congo, what oil is today in the Middle East, and what diamonds are in many parts of central and western Africa. When the Peruvian Amazon Company, a rubber outfit whose shares were sold on the London Stock Exchange, attempted to enslave Indians for cheap labor, it led to violent resistance, which culminated in widespread torture and massacre.

Sir Roger Casement, a British consul general who conducted an investigation of the PAC’s practices, determined that this one rubber company alone was responsible for the deaths of thirty thousand Indians. Grann writes,

Long before the Casement report became public, in 1912, Fawcett denounced the atrocities in British newspaper editorials and in meetings with government officials. He once called the slave traders “savages” and “scum.” Moreover, he knew that the rubber boom had made his own mission exceedingly more difficult and dangerous. Even previously friendly tribes were now hostile to foreigners. Fawcett was told of one party of eighty men in which “so many of them were killed with poisoned arrows that the rest abandoned the trip and retired”; other travelers were found buried up to their waists and left to be eaten alive by fire ants, maggots, and bees. (90)

Fawcett, despite the ever looming threat of attack, was equally appalled by many of his fellow explorers’ readiness to resort to shooting at Indians who approached them in a threatening manner. He had much more sympathy for the Indian Protection Service, whose motto was, “Die if you must, but never kill” (163), but he prided himself on being able to come up with clever ways to entice tribesmen to let his teams pass through their territories without violence. Once, when arrows started raining down on his team’s canoes from the banks, he ordered his men not to flee and instead had one of them start playing his accordion while the rest of them sang to the tune—and it actually worked (148).

But Fawcett was no softy. He was notorious for pushing ahead at a breakneck pace and showing nothing but contempt for members of his own team who couldn’t keep up owing to a lack of conditioning or fell behind owing to sickness. James Murray, the veteran of Shackleton’s Antarctic expedition whose flesh had become infested with maggots, experienced Fawcett’s monomania for maintaining progress firsthand. “This calm admission of the willingness to abandon me,” Murray wrote, “was a queer thing to hear from an Englishman, though it did not surprise me, as I had gauged his character long before” (137). Eventually, Fawcett did put his journey on hold to search out a settlement where they might find help for the dying man. When they came across a frontiersman with a mule, they got him to agree to carry Murray out of the jungle, allowing the rest of the team to continue with their expedition. To everyone’s surprise, Murray, after disappearing for a while, turned up alive—and furious. “Murray accused Fawcett of all but trying to murder him,” Grann writes, “and was incensed that Fawcett had insinuated that he was a coward” (139).

The theory of a lost civilization crystallized in the explorer’s mind when, while rummaging through old records at the National Library of Brazil, he found a document written by a Portuguese bandeirante—soldier of fortune—describing “a large, hidden, and very ancient city… discovered in the year 1753” (180). As Grann explains,

Fawcett narrowed down the location. He was sure that he had found proof of archaeological remains, including causeways and pottery, scattered throughout the Amazon. He even believed that there was more than a single ancient city—the one the bandeirante described was most likely, given the terrain, near the eastern Brazilian state of Bahia. But Fawcett, consulting archival records and interviewing tribesmen, had calculated that a monumental city, along with possibly even remnants of its population, was in the jungle surrounding the Xingu River in the Brazilian Mato Grosso. In keeping with his secretive nature, he gave the city a cryptic and alluring name, one that, in all his writings and interviews, he never explained. He called it simply Z. (182)

Fawcett was planning a mission for the specific purpose of finding Z when he was called by the Royal Geographical Society to serve in the First World War. The case for Z had, up till that point, been based mostly on scientific curiosity, though there was naturally a bit of the Indiana Jones dyad—“fortune and glory”—sharpening his already keen interest. Ever since Hernan Cortes marched into the Aztec city of Tenochtitlan in 1519, and Francisco Pizarro conquered Cuzco, the capital of the Inca Empire, fourteen years later, there had been rumors of a city overflowing with gold called El Dorado, literally “the gilded man,” after an account by the sixteenth-century chronicler Gonzalo Fernandez de Oviedo of a king who covered his body every day in gold dust only to wash it away again at night (169-170). It’s impossible to tell how many thousands of men died while searching for that particular lost city.

Fawcett, however, when faced with the atrocities of industrial-scale war, began to imbue Z with an altogether different sort of meaning. As a young man, he and his older brother Edmund had been introduced to Buddhism and the occult by a controversial figure named Helena Petrovna Blavatsky. To her followers, she was simply Madame Blavatsky. “For a moment during the late nineteenth century,” Grann writes, “Blavatsky, who claimed to be psychic, seemed on the threshold of founding a lasting religious movement” (46). It was called theosophy—“wisdom of the gods.” “In the past, Fawcett’s interest in the occult had been largely an expression of his youthful rebellion and scientific curiosity,” Grann explains, “and had contributed to his willingness to defy the prevailing orthodoxies of his own society and to respect tribal legends and religions.” In the wake of horrors like the Battle of the Somme, though, he started taking otherworldly concerns much more seriously. According to Grann, at this point,

his approach was untethered from his rigorous RGS training and acute powers of observation. He imbibed Madame Blavatsky’s most outlandish teachings about Hyperboreans and astral bodies and Lords of the Dark Face and keys to unlocking the universe—the Other World seemingly more tantalizing than the present one. (190)

It was even rumored that Fawcett was basing some of his battlefield tactics on his use of a Ouija board.

Brian Fawcett, Percy’s son and compiler of his diaries and letters into the popular volume Exploration Fawcett, began considering the implications of his father’s shift away from science years after his father and his older brother Jack had failed to return from Fawcett’s last mission in search of Z. Grann writes,

Brian started questioning some of the strange papers that he had found among his father’s collection, and never divulged. Originally, Fawcett had described Z in strictly scientific terms and with caution: “I do not assume that ‘The City’ is either large or rich.” But by 1924 Fawcett had filled his papers with reams of delirious writings about the end of the world and about a mystical Atlantean kingdom, which resembled the Garden of Eden. Z was transformed into “the cradle of all civilizations” and the center of one of Blavatsky’s “White Lodges,” where a group of higher spiritual beings help to direct the fate of the universe. Fawcett hoped to discover a White Lodge that had been there since “the time of Atlantis,” and to attain transcendence. Brian wrote in his diary, “Was Daddy’s whole conception of ‘Z,’ a spiritual objective, and the manner of reaching it a religious allegory?” (299)

Grann suggests that the success of Blavatsky and others like her was a response to the growing influence of science and industrialization. “The rise of science in the nineteenth century had had a paradoxical effect,” he writes:

while it undermined faith in Christianity and the literal word of the Bible, it also created an enormous void for someone to explain the mysteries of the universe that lay beyond microbes and evolution and capitalist greed… The new powers of science to harness invisible forces often made these beliefs seem more credible, not less. If phonographs could capture human voices, and if telegraphs could send messages from one continent to the other, then couldn’t science eventually peel back the Other World? (47)

Even Arthur Conan Doyle, who was a close friend of Fawcett and whose book The Lost World was inspired by Fawcett’s accounts of his expeditions in the Amazon, was an ardent supporter of investigations into the occult. Grann quotes him as saying, “I suppose I am Sherlock Holmes, if anybody is, and I say that the case for spiritualism is absolutely proved” (48).

But pseudoscience—equal parts fraud and self-delusion—was at least a century old by the time H.P. Blavatsky began peddling it, and, tragically, ominously, it’s alive and well today. In the 1780s, electromagnetism was the invisible force whose nature was being brought to light by science. The German physician Franz Anton Mesmer, from whom we get the term “mesmerize,” took advantage of these discoveries by positing a force called “animal magnetism” that runs through the bodies of all living things. Mesmer spent most of the decade in Paris, and in 1784 King Louis XVI was persuaded to appoint a committee to investigate Mesmer’s claims. One of the committee members, Benjamin Franklin, you’ll recall, knew something about electricity. Mesmer in fact liked to use one of Franklin’s own inventions, the glass harmonica (not that type of harmonica), as a prop for his dramatic demonstrations. The pioneering chemist Antoine Lavoisier, though, was the lead investigator. (Ten years after serving on the committee, Lavoisier would fall victim to the device named for yet another member, Dr. Guillotin.)

Mesmer claimed that illnesses were caused by blockages in the flow of animal magnetism through the body, and he carried around a stack of printed testimonials on the effectiveness of his cures. If the idea of energy blockage as the cause of sickness sounds familiar to you, so too will Mesmer’s methods for clearing it. He, or one of his “adepts,” would establish some kind of physical contact so he could locate the body’s magnetic poles. The treatment usually involved prolonged eye contact and would eventually lead to a “crisis,” meaning the subject would fall back and begin to shake all over until she (the subjects were predominantly women) lost consciousness. If you’ve seen scenes of faith healers in action, you have the general idea. After the patient underwent several exposures to this magnetic treatment, each culminating in crisis, her suffering would supposedly abate and the mesmerist would chalk up another cure. Tellingly, when Mesmer caught wind of some of the experimental methods the committee planned to use, he refused to participate. But then a man named Charles Deslon, one of Mesmer’s chief disciples, stepped up.

The list of ways Lavoisier devised to test the effectiveness of Deslon’s ministrations is long and amusing. At one point, he blindfolded a woman Deslon had treated before, telling her she was being magnetized right then and there, even though Deslon wasn’t even in the room. The suggestion alone was nonetheless sufficient to induce a classic crisis. In another experiment, the men replaced a door in Franklin’s house with a paper partition and had a seamstress who was supposed to be especially sensitive to magnetic effects sit in a chair with its back against the paper. For half an hour, an adept on the other side of the partition attempted to magnetize her through the paper, but all the while she just kept chatting amiably with the gentlemen in the room. When the adept finally revealed himself, though, he was able to induce a crisis in her immediately. The ideas of animal magnetism and magnetic cures were declared a total sham.

Lafayette, who brought French reinforcements to the Americans in the early 1780s, hadn’t heard about the debunking and tried to introduce the practice of mesmerism to the newly born country. But another prominent student of the Enlightenment, Thomas Jefferson, would have none of it.

Madame Blavatsky was cagey enough never to allow the supernatural abilities she claimed to possess to be put to the test. But around the same time Fawcett was exploring the Amazon, another of Conan Doyle’s close friends, the magician and escape artist Harry Houdini, was busy conducting explorations of his own into the realm of spirits. They began in 1913 when Houdini’s mother died and, grief-stricken, he turned to mediums in an effort to reconnect with her. What happened instead was that, one after another, he caught out every medium in some type of trickery and found he was able to explain the deceptions behind all the supposedly supernatural occurrences of the séances he attended. Seeing the spiritualists as fraudsters exploiting the pain of their marks, Houdini became enraged. He ended up attending hundreds of séances, usually disguised as an old lady, and as soon as he caught the medium performing some type of trickery he would stand up, remove the disguise, and proclaim, “I am Houdini, and you are a fraud.”

Houdini went on to write an exposé, A Magician among the Spirits, and he liked to incorporate common elements of séances into his stage shows to demonstrate how easy they were for a good magician to recreate. In 1922, a few years before Fawcett disappeared with his son Jack while searching for Z, Scientific American magazine asked Houdini to serve on a committee to further investigate the claims of spiritualists. The magazine even offered a cash prize to anyone who could meet some basic standards of evidence to establish the validity of their claims. The prize went unclaimed. After Houdini declared one of Conan Doyle's favorite mediums a fraud, the two men had a bitter falling out, the latter declaring the former an enemy of his cause. (Conan Doyle was convinced Houdini himself must've had supernatural powers and was inadvertently using them to sabotage the mediums.) The James Randi Educational Foundation, whose founder also began as a magician but then became an investigator of paranormal claims, currently offers a considerably larger cash prize (a million dollars) to anyone who can pass some well-designed test and prove they have psychic powers. To date, a thousand applicants have tried to win the prize, but none have made it through preliminary testing.

So Percy Fawcett was searching, it seems, for two very different cities; one was based on evidence of a pre-Columbian society and the other was a product of his spiritual longing. Grann writes about a businessman who insists Fawcett disappeared because he actually reached this second version of Z, where he transformed into some kind of pure energy, just as James Redfield suggests happened to the entire Mayan civilization in his New Age novel The Celestine Prophecy. Apparently, you can take pilgrimages to a cave where Fawcett found this portal to the Other World. The website titled “The Great Web of Percy Harrison Fawcett” enjoins visitors: “Follow your destiny to Ibez where Colonel Fawcett lives an everlasting life.”

           Today’s spiritualists and pseudoscientists rely more heavily on deliberately distorted and woefully dishonest references to quantum physics than they do on magnetism. But the differences are only superficial. The fundamental shift that occurred with the advent of science was that ideas could now be divided—some with more certainty than others—into two categories: those supported by sound methods and a steadfast devotion to following the evidence wherever it leads and those that emerge more from vague intuitions and wishful thinking. No sooner had science begun to resemble what it is today than people started trying to smuggle their favorite superstitions across the divide.

Not much separates New Age thinking from spiritualism—or either of them from long-established religion. They all speak to universal and timeless human desires. Following the evidence wherever it leads often means having to reconcile yourself to hard truths. As Carl Sagan writes in his indispensable paean to scientific thinking, The Demon-Haunted World,

Pseudoscience speaks to powerful emotional needs that science often leaves unfulfilled. It caters to fantasies about personal powers we lack and long for… In some of its manifestations, it offers satisfaction for spiritual hungers, cures for disease, promises that death is not the end. It reassures us of our cosmic centrality and importance… At the heart of some pseudoscience (and some religion also, New Age and Old) is the idea that wishing makes it so. How satisfying it would be, as in folklore and children’s stories, to fulfill our heart’s desire just by wishing. How seductive this notion is, especially when compared with the hard work and good luck usually required to achieve our hopes. (14)

As the website for one of the most recent New Age sensations, The Secret, explains, “The Secret teaches us that we create our lives with every thought every minute of every day.” (It might be fun to compare The Secret to Madame Blavatsky’s magnum opus The Secret Doctrine—but not my kind of fun.)

That spiritualism and pseudoscience satisfy emotional longings raises the question: what’s the harm in entertaining them? Isn’t it a little cruel for skeptics like Lavoisier, Houdini, and Randi to go around taking the wrecking ball to people’s beliefs, which they presumably depend on for consolation, meaning, and hope? Indeed, even the wildfire spread of credulity, charlatanry, and consumerist epistemology—whereby you’re encouraged to believe whatever makes you look and feel the best—is no justification for hostility toward believers. The hucksters, self-deluded or otherwise, who profit from promulgating nonsense do, however, deserve, in my opinion, to be very publicly humiliated. Sagan points out too that when we simply keep quiet in response to other people making proclamations we know to be absurd, “we abet a general climate in which skepticism is considered impolite, science tiresome, and rigorous thinking somehow stuffy and inappropriate” (298). In such a climate,

Spurious accounts that snare the gullible are readily available. Skeptical treatments are much harder to find. Skepticism does not sell well. A bright and curious person who relies entirely on popular culture to be informed about something like Atlantis is hundreds or thousands of times more likely to come upon a fable treated uncritically than a sober and balanced assessment. (5)

Consumerist epistemology is also the reason why creationism and climate change denialism are immune from refutation—and is likely responsible for the difficulty we face in trying to bridge the political divide. No one can decide what should constitute evidence when everyone is following some inner intuitive light to the truth. On a more personal scale, you forfeit any chance you have at genuine discovery—either outward or inward—when you drastically lower the bar for acceptable truths to make sure all the things you really want to be true can easily clear it.

On the other hand, there are also plenty of people out there given to rolling their eyes anytime they’re informed of strangers’ astrological signs moments after meeting them (the last woman I met is a Libra). It’s not just skeptics and trained scientists who sense something flimsy and immature in the characters of New Agers and the trippy hippies. That’s probably why people are so eager to take on burdens and experience hardship in the name of their beliefs. That’s probably at least part of the reason too why people risk their lives exploring jungles and wildernesses. If a dude in a tie-dye shirt says he discovered some secret, sacred truth while tripping on acid, you’re not going to take him anywhere near as seriously as you do people like Joseph Conrad, who journeyed into the heart of darkness, or Percy Fawcett, who braved the deadly Amazon in search of ancient wisdom.

The story of the Fawcett mission undertaken in the name of exploration and scientific progress actually has a happy ending—one you don’t have to be a crackpot or a dupe to appreciate. Fawcett himself may not have had the benefit of modern imaging and surveying tools, but he was also probably too distracted by fantasies of White Lodges to see much of the evidence at his feet. David Grann made a final stop on his own Amazon journey to seek out the Kuikuro Indians and the archeologist who was staying with them, Michael Heckenberger. Grann writes,

Altogether, he had uncovered twenty pre-Columbian settlements in the Xingu, which had been occupied roughly between A.D. 800 and A.D. 1600. The settlements were about two to three miles apart and were connected by roads. More astounding, the plazas were laid out along cardinal points, from east to west, and the roads were positioned at the same geometric angles. (Fawcett said that Indians had told him legends that described “many streets set at right angles to one another.”) (313)

These were the types of settlements Fawcett had discovered real evidence for. They probably wouldn’t have been of much interest to spiritualists, but their importance to the fields of archeology and anthropology is immense. Grann records from his interview:

“Anthropologists,” Heckenberger said, “made the mistake of coming into the Amazon in the twentieth century and seeing only small tribes and saying, ‘Well, that’s all there is.’ The problem is that, by then, many Indian populations had already been wiped out by what was essentially a holocaust from European contact. That’s why the first Europeans in the Amazon described such massive settlements that, later, no one could ever find.” (317)

Carl Sagan describes a “soaring sense of wonder” as a key ingredient in both good science and bad. Pseudoscience triggers our wonder switches with heedless abandon. But every once in a while, findings backed by solid evidence are just as satisfying. “For a thousand years,” Heckenberger explains to Grann,

the Xinguanos had maintained artistic and cultural traditions from this highly advanced, highly structured civilization. He said, for instance, that the present-day Kuikuro village was still organized along the east and west cardinal points and its paths were aligned at right angles, though its residents no longer knew why this was the preferred pattern. Heckenberger added that he had taken a piece of pottery from the ruins and shown it to a local maker of ceramics. It was so similar to present-day pottery, with its painted exterior and reddish clay, that the potter insisted it had been made recently…. “To tell you the honest-to-God truth, I don’t think there is anywhere in the world where there isn’t written history where the continuity is so clear as right here,” Heckenberger said. (318)

[The PBS series "Secrets of the Dead" devoted a show to Fawcett.]

Also read

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

And:

The Self-Transcendence Price Tag: A Review of Alex Stone's Fooling Houdini

Read More
Dennis Junk Dennis Junk

What's the Point of Difficult Reading?

For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them.

You sit reading the first dozen or so pages of some celebrated classic and gradually realize that having to sort out how the ends of the long sentences fix to their beginnings is taking just enough effort to distract you entirely from the setting or character you’re supposed to be getting to know. After a handful of words you swear are made up and a few tangled metaphors you find yourself riddling over with nary a resolution, the dread sinks in. Is the whole book going to be like this? Is it going to be one of those deals where you get to what’s clearly meant to be a crucial turning point in the plot but for you is just another riddle without a solution, sending you paging back through the forest of verbiage in search of some key succession of paragraphs you spaced out while reading the first time through? Then you wonder if you’re missing some other kind of key, like maybe the story’s an allegory, a reference to some historical event like World War II or some Revolution you once had to learn about but have since lost all recollection of. Maybe the insoluble similes are allusions to some other work you haven’t read or can’t recall. In any case, you’re not getting anything out of this celebrated classic but frustration leading to the dual suspicion that you’re too ignorant or stupid to enjoy great literature and that the whole “great literature” thing is just a conspiracy to trick us into feeling dumb so we’ll defer to the pseudo-wisdom of Ivory Tower elites.

If enough people of sufficient status get together and agree to extol a work of fiction, they can get almost everyone else to agree. The readers who get nothing out of it but frustration and boredom assume that since their professors or some critic in a fancy-pants magazine or the judges of some literary award committee think it’s great they must simply be missing something. They dutifully continue reading it, parrot a few points from a review that sound clever, and afterward toe the line by agreeing that it is indeed a great work of literature, clearly, even if it doesn’t speak to them personally. For instance, James Joyce’s Ulysses, utterly nonsensical to anyone without at least a master’s degree, tops the Modern Library’s list of 100 best novels in the English language. Responding to the urging of his friends to write out an explanation of the novel, Joyce scoffed, boasting,

I’ve put in so many enigmas and puzzles that it will keep the professors busy for centuries arguing over what I meant, and that’s the only way of ensuring one’s immortality.

He was right. To this day, professors continue to love him even as Ulysses and the even greater monstrosity Finnegans Wake do nothing but bore and befuddle everyone else—or else, more fittingly, sit inert and unchecked-out on the shelf, gathering well-deserved dust.

Joyce’s later novels are not literature; they are lengthy collections of loosely connected literary puzzles. But at least his puzzles have actual solutions—or so I’m told. Ulysses represents the apotheosis of the tradition in literature called modernism. What came next, postmodernism, is even more disconnected from the universal human passion for narrative. Even professors aren’t sure what to do with it, so they simply throw their hands up, say it’s great, and explain that the source of its greatness is its very resistance to explanation. Jonathan Franzen, whose 2001 novel The Corrections represented a major departure from the postmodernism he began his career experimenting with, explained the following year in The New Yorker how he’d turned away from the tradition. He’d been reading the work of William Gaddis “as a kind of penance” (101) and not getting any meaning out of it. Of the final piece in the celebrated author’s oeuvre, Franzen writes,

The novel is an example of the particular corrosiveness of literary postmodernism. Gaddis began his career with a Modernist epic about the forgery of masterpieces. He ended it with a pomo romp that superficially resembles a masterpiece but punishes the reader who tries to stay with it and follow its logic. When the reader finally says, Hey, wait a minute, this is a mess, not a masterpiece, the book instantly morphs into a performance-art prop: its fraudulence is the whole point! And the reader is out twenty hours of good-faith effort. (111)

In other words, reading postmodern fiction means not only forgoing the rewards of narrative in exchange for the more taxing endeavor of solving multiple riddles in succession; it means discovering that those riddles don’t even have answers. What’s the point of reading this crap? Exactly. Get it?

You can dig deeper into the meaningless meanderings of pomos and discover there is in fact an ideology inspiring all the infuriating inanity. The super smart people who write and read this stuff point to the willing, eager complicity of the common reader in the propagation of all the lies that sustain our atrociously unjust society (but atrociously unjust compared to what?). Franzen refers to this as the Fallacy of the Stupid Reader,

wherein difficulty is a “strategy” to protect art from cooptation and the purpose of art is to “upset” or “compel” or “challenge” or “subvert” or “scar” the unsuspecting reader; as if the writer’s audience somehow consisted, again and again, of Charlie Browns running to kick Lucy’s football; as if it were a virtue in a novelist to be the kind of boor who propagandizes at friendly social gatherings. (109)

But if the author is worried about art becoming a commodity, does making the art shitty really amount to a solution? And if the goal is to make readers rethink something they take for granted, why not bring the matter up directly, or have a character wrestle with it, or have a character argue with another character about it? The sad fact is that these authors probably just suck—that, as Franzen suspects, “literary difficulty can operate as a smoke screen for an author who has nothing interesting, wise, or entertaining to say” (111).

Not all difficulty in fiction is a smoke screen though. Not all the literary emperors are naked. Franzen writes that “there is no headache like the headache you get from working harder on deciphering a text than the author, by all appearances, has worked on assembling it.” But the essay, titled “Mr. Difficult,” begins with a reader complaint sent not to Gaddis but to Franzen himself. And the reader, a Mrs. M. from Maryland, really gives him the business:

Who is it that you are writing for? It surely could not be the average person who enjoys a good read… The elite of New York, the elite who are beautiful, thin, anorexic, neurotic, sophisticated, don’t smoke, have abortions tri-yearly, are antiseptic, live in penthouses, this superior species of humanity who read Harper’s and The New Yorker. (100)

In this first part of the essay, Franzen introduces a dilemma that sets up his explanation of why he turned away from postmodernism—he’s an adherent of the “Contract model” of literature, whereby the author agrees to share, on equal footing, an entertaining or in some other way gratifying experience, as opposed to the “Status model,” whereby the author demonstrates his or her genius and if you don’t get it, tough. But his coming to a supposed agreement with Mrs. M. about writers like Gaddis doesn’t really resolve Mrs. M.’s conflict with him.

The Corrections, after all, the novel she was responding to, represents his turning away from the tradition Gaddis wrote in. (It must be said, though, that Freedom, Franzen’s next novel, is written in a still more accessible style.)

The first thing we must do to respond properly to Mrs. M. is break down each of Franzen’s models into two categories. The status model includes writers like Gaddis whose difficulty serves no purpose but to frustrate and alienate readers. But Franzen’s own type specimen for this model is Flaubert, much of whose writing, though difficult at first, rewards any effort to re-read and further comprehend with a more profound connection. So it is for countless other writers—Fitzgerald, for instance, whose Gatsby sits at number two on the Modern Library’s ranking. As for the contract model, Franzen admits,

Taken to its free-market extreme, Contract stipulates that if a product is disagreeable to you the fault must be the product’s. If you crack a tooth on a hard word in a novel, you sue the author. If your professor puts Dreiser on your reading list, you write a harsh student evaluation… You’re the consumer; you rule. (100)

Franzen, in declaring himself a “Contract kind of person,” assumes that the free-market extreme can be dismissed for its extremity. But Mrs. M. would probably challenge him on that. For many, particularly right-leaning readers, the market not only can but should be relied on to determine which books are good and which ones belong in some tiny niche. When the Modern Library conducted a readers' poll to create a popular ranking to balance the one made by experts, the ballot was stuffed by Ayn Rand acolytes and Scientologists. Mrs. M. herself leaves little doubt as to her political sympathies. For her and her fellow travelers, things like literature departments, National Book Awards—like the one The Corrections won—Nobels, and Pulitzers are all an evil form of intervention into the sacred workings of the divine free market: un-American, sacrilegious, communist. According to this line of thinking, authors aren’t much different from whores—except of course literal whoring is condemned in the Bible (except when it isn’t).

A contract with readers who score high on the personality dimension of openness to new ideas and experiences (who tend to be liberal), those who have spent a lot of time in the past reading books like The Great Gatsby or Heart of Darkness or Lolita (the horror!), those who read enough to have developed finely honed comprehension skills—that contract is going to look quite a bit different from one with readers who attend Beck University, those for whom Atlas Shrugged is the height of literary excellence. At the same time, though, the cult of self-esteem is poisoning schools and homes with the idea that suggesting that a student or son or daughter is anything other than a budding genius is a form of abuse. Heaven forbid a young person feel judged or criticized while speaking or writing. And if an author makes you feel the least bit dumb or ignorant, well, it’s an outrage—heroes like Mrs. M. to the rescue.

One of the problems with the cult of self-esteem is that anticipating criticism tends to make people more, not less, creative. And the link between low self-esteem and mental disorders is almost purely mythical. High self-esteem is correlated with school performance, but as far as researchers can tell, it’s the performance causing the esteem, not the other way around. More invidious, though, is the tendency to view anything that takes a great deal of education or intelligence to accomplish as an affront to everyone less educated or intelligent. Conservatives complain endlessly about class warfare and envy of the rich—the financially elite—but they have no qualms about decrying intellectual elites and condemning them for flaunting their superior literary achievements. They see the elitist mote in the eye of Nobel laureates without noticing the beam in their own.

What’s the point of difficult reading? Well, what’s the point of running five or ten miles? What’s the point of eating vegetables as opposed to ice cream or Doritos? Difficulty need not preclude enjoyment. And discipline in the present is often rewarded in the future. It very well may be that the complexity of the ideas you’re capable of understanding is influenced by how many complex ideas you attempt to understand. No matter how vehemently true believers in the magic of markets insist otherwise, markets don’t have minds. And though an individual’s intelligence need not be fixed, a good way to ensure children never get any smarter than they already are is to make them feel fantastically wonderful about their mediocrity. We just have to hope that, despite these ideological traps, there are enough people out there determined to wrap their minds around complex situations depicted in complex narratives about complex people told in complex language, people who will in the process develop the types of minds and intelligence necessary to lead the rest of our lazy asses into a future that’s livable and enjoyable. For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them. How do we get the rewards without wasting our time?

Read More
Dennis Junk Dennis Junk

Can’t Win for Losing: Why There Are So Many Losers in Literature and Why It Has to Change

David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike.

Ironically, the author of The Golden Notebook, celebrating its 50th anniversary this year, and considered by many a “feminist bible,” happens to be an outspoken critic of feminism. When asked in a 1982 interview with Lesley Hazelton about her response to readers who felt some of her later works were betrayals of the women whose cause she once championed, Doris Lessing replied,

What the feminists want of me is something they haven't examined because it comes from religion. They want me to bear witness. What they would really like me to say is, ‘Ha, sisters, I stand with you side by side in your struggle toward the golden dawn where all those beastly men are no more.’ Do they really want people to make oversimplified statements about men and women? In fact, they do. I've come with great regret to this conclusion.

Lessing has also been accused of being overly harsh—“castrating”—to men, too many of whom she believes roll over a bit too easily when challenged by women aspiring to empowerment. As a famous novelist, however, one who would go on to win the Nobel Prize in Literature in 2007, she got to visit a lot of schools, and it gradually dawned on her that it wasn’t so much that men were rolling over as that they were being trained from childhood to be ashamed of their maleness. In a lecture she gave at the Edinburgh book festival in 2001, she said,

Great things have been achieved through feminism. We now have pretty much equality at least on the pay and opportunities front, though almost nothing has been done on child care, the real liberation. We have many wonderful, clever, powerful women everywhere, but what is happening to men? Why did this have to be at the cost of men? I was in a class of nine- and 10-year-olds, girls and boys, and this young woman was telling these kids that the reason for wars was the innately violent nature of men. You could see the little girls, fat with complacency and conceit while the little boys sat there crumpled, apologising for their existence, thinking this was going to be the pattern of their lives.

Lessing describes how the teacher kept casting glances expectant of her approval as she excoriated these impressionable children. 

Elaine Blair, in “Great American Losers,” an essay that’s equal parts trenchant and infuriatingly obtuse, describes a dynamic in contemporary fiction that’s similar to the one Lessing saw playing out in the classroom.

The man who feels himself unloved and unlovable—this is a character that we know well from the latest generation or two of American novels. His trials are often played for sympathetic laughs. His loserdom is total: it extends to his stunted career, his squalid living quarters, his deep unease in the world.

At the heart of this loserdom is his self-fulfilling conviction that women don’t like him. Whereas men of earlier generations felt entitled to a woman’s respect and admiration, Blair sees this modern male character as “the opposite of entitled: he approaches women cringingly, bracing for a slap.” This desperation on the part of male characters to avoid offending women, to prove themselves capable of sublimating their own masculinity so they can be worthy of women, finds its source in the authors themselves. Blair writes,

Our American male novelists, I suspect, are worried about being unloved as writers—specifically by the female reader. This is the larger humiliation looming behind the many smaller fictional humiliations of their heroes, and we can see it in the way the characters’ rituals of self-loathing are tacitly performed for the benefit of an imagined female audience.

Blair quotes a review David Foster Wallace wrote of a John Updike novel to illustrate how conscious males writing literature today are of their female readers’ hostility toward men who write about sex and women without apologizing for liking sex and women—sometimes even outside the bounds of caring, committed relationships. Labeling Updike as a “Great Male Narcissist,” a distinction he shares with writers like Philip Roth and Norman Mailer, Wallace writes,

Most of the literary readers I know personally are under forty, and a fair number are female, and none of them are big admirers of the postwar GMNs. But it’s John Updike in particular that a lot of them seem to hate. And not merely his books, for some reason—mention the poor man himself and you have to jump back:

“Just a penis with a thesaurus.”

“Has the son of a bitch ever had one unpublished thought?”

“Makes misogyny seem literary the same way Rush [Limbaugh] makes fascism seem funny.”

And trust me: these are actual quotations, and I’ve heard even worse ones, and they’re all usually accompanied by the sort of facial expressions where you can tell there’s not going to be any profit in appealing to the intentional fallacy or talking about the sheer aesthetic pleasure of Updike’s prose.

Since Wallace is ready to “jump back” at the mere mention of Updike’s name, it’s no wonder he’s given to writing about characters who approach women “cringingly, bracing for a slap.”

Blair goes on to quote from Jonathan Franzen’s novel The Corrections, painting a plausible picture of male writers who fear not only that their books will be condemned if too misogynistic—a relative term which has come to mean “not as radically feminist as me”—but that they themselves will be rejected. In Franzen’s novel, Chip Lambert has written a screenplay and asked his girlfriend Julia to give him her opinion. She holds off doing so, however, until after she breaks up with him and is on her way out the door. “For a woman reading it,” she says, “it’s sort of like the poultry department. Breast, breast, breast, thigh, leg” (26). Franzen describes his character’s response to the critique:

It seemed to Chip that Julia was leaving him because “The Academy Purple” had too many breast references and a draggy opening, and that if he could correct these few obvious problems, both on Julia’s copy of the script and, more important, on the copy he’d specially laser-printed on 24-pound ivory bond paper for [the film producer] Eden Procuro, there might be hope not only for his finances but also for his chances of ever again unfettering and fondling Julia’s own guileless, milk-white breasts. Which by this point in the day, as by late morning of almost every day in recent months, was one of the last activities on earth in which he could still reasonably expect to take solace for his failures. (28)

If you’re reading a literary work like The Corrections, chances are you’ve at some point sat in a literature class—or even a sociology or cultural studies class—and been instructed that the proper way to fulfill your function as a reader is to critically assess the work in terms of how women (or minorities) are portrayed. Both Chip and Julia have sat through such classes. And you’re encouraged to express disapproval, even outrage, if something like a traditional role is enacted—or, gasp, objectification occurs. Blair explains how this affects male novelists:

When you see the loser-figure in a novel, what you are seeing is a complicated bargain that goes something like this: yes, it is kind of immature and boorish to be thinking about sex all the time and ogling and objectifying women, but this is what we men sometimes do and we have to write about it. We fervently promise, however, to avoid the mistake of the late Updike novels: we will always, always, call our characters out when they’re being self-absorbed jerks and louts. We will make them comically pathetic, and punish them for their infractions a priori by making them undesirable to women, thus anticipating what we imagine will be your judgments, female reader. Then you and I, female reader, can share a laugh at the characters’ expense, and this will bring us closer together and forestall the dreaded possibility of your leaving me.

In other words, these male authors are the grownup versions of those poor schoolboys Lessing saw forced to apologize for their own existence. Indeed, you can feel this dynamic, this bargain, playing out when you’re reading these guys’ books. Blair’s description of the problem is spot on. Her theory of what caused it, however, is laughable.

Because of the GMNs, these two tendencies—heroic virility and sexist condescension—have lingered in our minds as somehow yoked together, and the succeeding generations of American male novelists have to some degree accepted the dyad as truth. Behind their skittishness is a fearful suspicion that if a man gets what he wants, sexually speaking, he is probably exploiting someone.

The dread of slipping down the slope from attraction to exploitation has nothing to do with John Updike. Rather, it is embedded in terms at the very core of feminist ideology. Misogyny, for instance, is frequently deemed an appropriate label for men who indulge in lustful gazing, even in private. And the term objectification implies that the female whose subjectivity isn’t being properly revered is the victim of oppression. The main problem with this idea—and there are several—is that the term objectification is synonymous with attraction. The deluge of details about the female body in fiction by male authors can just as easily be seen as a type of confession, an unburdening of guilt by the offering up of sins. The female readers respond by assigning the writers some form of penance—never again writing, or even thinking, like that without flagellating themselves.

The conflict between healthy male desire and disapproving feminist prudery doesn’t just play out in the tortured psyches of geeky American male novelists. A.S. Byatt, in her Booker Prize-winning novel Possession, satirizes scholars steeped in literary theory, whose lives have become “papery” and sterile. But the novel ends with a male scholar named Roland overcoming his theory-induced self-consciousness to initiate sex with another scholar named Maud. Byatt describes the encounter:

And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)

The literary critic Monica Flegel cites this passage as an example of how Byatt’s old-fashioned novel features “such negative qualities of the form as its misogyny and its omission of the lower class.” Flegel is particularly appalled by how “stereotypical gender roles are reaffirmed” in the sex scene. “Maud is reduced in the end,” Flegel alleges, “to being taken possession of by her lover…and assured that Roland will ‘take care of her.’” How, we may wonder, did a man assuring a woman he would take care of her become an act of misogyny?

Perhaps critics like Flegel occupy some radical fringe; Byatt’s book was after all a huge success with audiences and critics alike, and it did win Byatt the Booker. The novelist Martin Amis, however, isn’t one to describe his assaults as indirect. He routinely dares to feature men who actually do treat women poorly in his novels—without any authorial condemnation.

Martin Goff, the non-intervening director of the Booker Prize committee, tells the story of the 1989 controversy over whether or not Amis’s London Fields should be on the shortlist. Maggie Gee, a novelist, and Helen McNeil, a professor, simply couldn’t abide Amis’s treatment of his women characters. “It was an incredible row,” says Goff.

Maggie and Helen felt that Amis treated women appallingly in the book. That is not to say they thought books which treated women badly couldn't be good, they simply felt that the author should make it clear he didn't favour or bless that sort of treatment. Really, there was only two of them and they should have been outnumbered as the other three were in agreement, but such was the sheer force of their argument and passion that they won. David [Lodge] has told me he regrets it to this day, he feels he failed somehow by not saying, “It's two against three, Martin's on the list”.

In 2010, Amis explained his career-spanning failure to win a major literary award, despite enjoying robust book sales, thus:

There was a great fashion in the last century, and it's still with us, of the unenjoyable novel. And these are the novels which win prizes, because the committee thinks, “Well it's not at all enjoyable, and it isn't funny, therefore it must be very serious.”

Brits like Hilary Mantel and especially Ian McEwan are working to turn this dreadful trend around. But when McEwan dared to write a novel about a neurosurgeon who prevails in the end over an afflicted, less privileged tormenter, he was condemned by critic Jennifer Szalai in the pages of Harper’s Magazine for his “blithe, bourgeois sentiments.” If you’ve read Saturday, you know the sentiments are anything but blithe, and if you read Szalai’s review, you’ll be taken aback by her articulate blindness.

Amis is probably right in suggesting that critics and award committees have a tendency to mistake misery for profundity. But his own case, along with several others like it, hint at something even more disturbing, a shift in the very idea of what role fictional narratives play in our lives.

The sad new reality is that, owing to the growing influence of ideologically extreme and idiotically self-righteous activist professors, literature is no longer read for pleasure and enrichment—it’s no longer even read as a challenging exercise in outgroup empathy. Instead, reading literature is supposed by many to be a ritual of male western penance. Prior to taking an interest in literary fiction, you must first be converted to the proper ideologies, made to feel sufficiently undeserving yet privileged, the beneficiary of a long history of theft and population displacement, the scion and gene-carrier of rapists and genocidaires—the horror, the horror. And you must be taught to systematically overlook and remain woefully oblivious of all the evidence that the Enlightenment was the best fucking thing that ever happened to the human species. Once you’re brainwashed into believing that so-called western culture is evil and that you’ve committed the original sin of having been born into it, you’re ready to perform your acts of contrition by reading horrendously boring fiction that forces you to acknowledge and reflect upon your own fallen state.

Fittingly, the apotheosis of this new literary tradition won the Booker in 1999, and its author, like Lessing, is a Nobel laureate. J.M. Coetzee’s Disgrace chronicles in exquisite free indirect discourse the degradation of David Lurie, a white professor in Cape Town, South Africa, beginning with his somewhat pathetic seduction of a black student, a crime for which he pays with the loss of his job, his pension, and his reputation, and moving on to the aftermath of his daughter’s rape at the hands of three black men who proceed to rob her, steal his car, douse him with spirits, and light him on fire. What’s unsettling about the novel—and it is a profoundly unsettling novel—is that its structure implies that everything that David and Lucy suffer flows from his original offense of lusting after a young black woman. This woman, Melanie, is twenty years old, and though she is clearly reluctant at first to have sex with her teacher, there’s never any force involved. At one point, she shows up at David’s house and asks to stay with him. It turns out she has a boyfriend who is refusing to let her leave him without a fight. It’s only after David unheroically tries to wash his hands of the affair to avoid further harassment from this boyfriend—while stooping so low as to insist that Melanie make up a test she missed in his class—that she files a complaint against him.

David immediately comes clean to university officials and admits to taking advantage of his position of authority. But he stalwartly refuses to apologize for his lust, or even for his seduction of the young woman. This refusal makes him complicit, the novel suggests, in all the atrocities of colonialism. As he’s awaiting a hearing to address Melanie’s complaint, David gets a message:

On campus it is Rape Awareness Week. Women Against Rape, WAR, announces a twenty-four-hour vigil in solidarity with “recent victims”. A pamphlet is slipped under his door: ‘WOMEN SPEAK OUT.’ Scrawled in pencil at the bottom is a message: ‘YOUR DAYS ARE OVER, CASANOVA.’ (43)

During the hearing, David confesses to doctoring the attendance ledgers and entering a false grade for Melanie. As the attendees become increasingly frustrated with what they take to be evasions, he goes on to confess to becoming “a servant of Eros” (52). But this confession only enrages the social sciences professor Farodia Rassool:

Yes, he says, he is guilty; but when we try to get specificity, all of a sudden it is not abuse of a young woman he is confessing to, just an impulse he could not resist, with no mention of the pain he has caused, no mention of the long history of exploitation of which this is part. (53)

There’s also no mention, of course, of the fact that David has already gone through more suffering than Melanie has, or that her boyfriend deserves a great deal of the blame, or that David is an individual, not a representative of his entire race who should be made to answer for the sins of his forefathers.

After resigning from his position in disgrace, David moves out to the country to live with his daughter on a small plot of land. The attack occurs only days after he’s arrived. David wants Lucy to pursue some sort of justice, but she refuses. He wants her to move away because she’s clearly not safe, but she refuses. She even goes so far as to accuse him of being in the wrong for believing he has any right to pronounce what happened an injustice—and for thinking it is his place to protect his daughter. And if there’s any doubt about the implication of David’s complicity she clears it up. As he’s pleading with her to move away, they begin talking about the rapists’ motivation. Lucy says to her father,

When it comes to men and sex, David, nothing surprises me anymore. Maybe, for men, hating the woman makes sex more exciting. You are a man, you ought to know. When you have sex with someone strange—when you trap her, hold her down, get her under you, put all your weight on her—isn’t it a bit like killing? Pushing the knife in; exiting afterwards, leaving the body behind covered in blood—doesn’t it feel like murder, like getting away with murder? (158)

The novel is so engrossing and so disturbing that it’s difficult to tell what the author’s position is vis-à-vis his protagonist’s degradation or complicity. You can’t help sympathizing with him and feeling his treatment at the hands of Melanie, Farodia, and Lucy is an injustice. But are you supposed to question that feeling in light of the violence Melanie is threatened with and Lucy is subjected to? Are you supposed to reappraise altogether your thinking about the very concept of justice in light of the atrocities of history? Are we to see David Lurie as an individual or as a representative of western male colonialism, deserving of whatever he’s made to suffer and more?

Personally, I think David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike. It’s really no surprise at all that the most significant developments in the realm of narratives lately haven’t occurred in novels at all. Insofar as the cable series contributing to the new golden age of television can be said to adhere to a formula, it’s this: begin with a bad ass male lead who doesn’t apologize for his own existence and has no qualms about expressing his feelings toward women. As far as I know, these shows are just as popular with women viewers as they are with the guys.

When David first arrives at Lucy’s house, they take a walk and he tells her a story about a dog he remembers from a time when they lived in a neighborhood called Kenilworth.

It was a male. Whenever there was a bitch in the vicinity it would get excited and unmanageable, and with Pavlovian regularity the owners would beat it. This went on until the poor dog didn’t know what to do. At the smell of a bitch it would chase around the garden with its ears flat and its tail between its legs, whining, trying to hide…There was something so ignoble in the spectacle that I despaired. One can punish a dog, it seems to me, for an offence like chewing a slipper. A dog will accept the justice of that: a beating for a chewing. But desire is another story. No animal will accept the justice of being punished for following its instincts.

Lucy breaks in, “So males must be allowed to follow their instincts unchecked? Is that the moral?” David answers,

No, that is not the moral. What was ignoble about the Kenilworth spectacle was that the poor dog had begun to hate its own nature. It no longer needed to be beaten. It was ready to punish itself. At that point it would be better to shoot it.

“Or have it fixed,” Lucy offers. (90)

Also Read:

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

And:
FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

Read More
Dennis Junk Dennis Junk

Madness and Bliss: Critical vs Primitive Readings in A.S. Byatt's Possession: A Romance 2

Critics responded to A.S. Byatt’s challenge to their theories in her book Possession by insisting that the work fails to achieve high-brow status and fails to do anything but bolster old-fashioned notions about stories and language—not to mention the roles of men and women. What they don’t realize is how silly their theories come across to the non-indoctrinated.

Read part one.

The critical responses to the challenges posed by Byatt in Possession fit neatly within the novel’s satire. Louise Yelin, for instance, unselfconsciously divides the audience for the novel into “middlebrow readers” and “the culturally literate” (38), placing herself in the latter category. She overlooks Byatt’s challenge to her methods of criticism and the ideologies underpinning them, for the most part, and suggests that several of the themes, like ventriloquism, actually support poststructuralist philosophy. Still, Yelin worries about the novel’s “homophobic implications” (39). (A lesbian, formerly straight character takes up with a man in the end, and Christabel LaMotte’s female lover commits suicide after the dissolution of their relationship, but no one actually expresses any fear or hatred of homosexuals.) Yelin then takes it upon herself to “suggest directions that our work might take” while avoiding the “critical wilderness” Byatt identifies. She proposes a critical approach to a novel that “exposes its dependencies on the bourgeois, patriarchal, and colonial economies that underwrite” it (40). And since all fiction fails to give voice to one or another oppressed minority, it is the critic’s responsibility to “expose the complicity of those effacements in the larger order that they simultaneously distort and reproduce” (41). This is not in fact a response to Byatt’s undermining of critical theories; it is instead an uncritical reassertion of their importance.

Yelin and several other critics respond to Possession as if Byatt had suggested that “culturally literate” readers should momentarily push to the back of their minds what they know about how literature is complicit in various forms of political oppression so they can get more enjoyment from their reading. This response is symptomatic of an astonishing inability to even imagine what the novel is really inviting literary scholars to imagine—that the theories implicating literature are flat-out wrong. Monica Flegel for instance writes that “What must be privileged and what must be sacrificed in order for Byatt’s Edenic reading (and living) state to be achieved may give some indication of Byatt’s own conventionalizing agenda, and the negative enchantment that her particular fairy tale offers” (414). Flegel goes on to show that she does in fact appreciate the satire on academic critics; she even sympathizes with the nostalgia for simpler times, before political readings became mandatory. But she ends her critical essay with another reassertion of the political, accusing Byatt of “replicating” through her old-fashioned novel “such negative qualities of the form as its misogyny and its omission of the lower class.” Flegel is particularly appalled by Maud’s treatment in the final scene, since, she claims, “stereotypical gender roles are reaffirmed” (428). “Maud is reduced in the end,” Flegel alleges, “to being taken possession of by her lover…and assured that Roland will ‘take care of her’” (429). This interpretation places Flegel in the company of the feminists in the novel who hiss at Maud for trying to please men, forcing her to bind her head.

Flegel believes that her analysis proves Byatt is guilty of misogyny and mistreatment of the poor. “Byatt urges us to leave behind critical readings and embrace reading for enjoyment,” she warns her fellow critics, “but the narrative she offers shows just how much is at stake when we leave criticism behind” (429). Flegel quotes Yelin to the effect that Possession is “seductive,” and goes on to declaim that

it is naïve, and unethical, to see the kind of reading that Byatt offers as happy. To return to an Edenic state of reading, we must first believe that such a state truly existed and that it was always open to all readers of every class, gender, and race. Obviously, such a belief cannot be held, not because we have lost the ability to believe, but because such a space never really did exist. (430)

In her preening self-righteous zealotry, Flegel represents a current in modern criticism that’s only slightly more extreme than that represented by Byatt’s misguided but generally harmless scholars. The step from using dubious theories to decode alleged justifications for political oppression in literature to Flegel’s frightening brand of absurd condemnatory moralizing leveled at authors and readers alike is a short one.

Another way critics have attempted to respond to Byatt’s challenge is by denying that she is in fact making any such challenge. Christien Franken suggests that Byatt’s problems with theories like poststructuralism stem from her dual identity as a critic and an author. In a lecture Byatt once gave titled “Identity and the Writer,” which was later published as an essay, Franken finds what she believes is evidence of poststructuralist thinking, even though Byatt denies taking the theory seriously. Franken believes that in the essay, “the affinity between post-structuralism and her own thought on authorship is affirmed and then again denied” (18). Her explanation is that

the critic in A.S. Byatt begins her lecture “Identity and the Writer” with a recognition of her own intellectual affinity with post-structuralist theories which criticize the paramount importance of “the author.” The writer in Byatt feels threatened by the same post-structuralist criticism. (17)

Franken claims that this ambivalence runs throughout all of Byatt’s fiction and criticism. But Ann Marie Adams disagrees, writing that “When Byatt does delve into poststructuralist theory in this essay, she does so only to articulate what ‘threatens’ and ‘beleaguers’ her as a writer, not to productively help her identify the true ‘identity’ of the writer” (349). In Adams’s view, Byatt belongs in the humanist tradition of criticism going back to Matthew Arnold and the romantics. In her own response to Byatt, Adams manages to come closer than any of her fellow critics to being able to imagine that the ascendant literary theories are simply wrong. But her obvious admiration for Byatt doesn’t prevent her from suggesting that “Yelin and Flegel are right to note that the conclusion of Possession, with its focus on closure and seeming transcendence of critical anxiety, affords a particularly ‘seductive’ and ideologically laden pleasure to academic readers” (120). And, while she seems to find some value in Arnoldian approaches, she fails to engage in any serious reassessment of the theories Byatt targets.

Frederick Holmes, in his attempt to explain Byatt’s attitude toward history as evidenced by the novel, agrees with the critics who see in Possession clear signs of the author’s embrace of postmodernism in spite of the parody and explicit disavowals. “It is important to acknowledge,” he writes,

that the liberation provided by Roland’s imagination from the previously discussed sterility of his intellectual sophistication is never satisfactorily accounted for in rational terms. It is not clear how he overcomes the post-structuralist positions on language, authorship, and identity. His claim that some signifiers are concretely attached to signifieds is simply asserted, not argued for. (330)

While Holmes is probably mistaken in taking this absence of rational justification as a tacit endorsement of the abandoned theory, the observation is the nearest any of the critics comes to a rebuttal of Byatt’s challenge. What Holmes is forgetting, though, is that structuralist and poststructuralist theorists themselves, from Saussure through Derrida, have been insisting on the inadequacy of language to describe the real world, a radical idea that flies in the face of every human’s lived experience, without ever providing any rational, empirical, or even coherent support for the departure.

The stark irony to which Holmes is completely oblivious is that he’s asking for a rational justification to abandon a theory that proclaims such justification impossible. The burden remains on poststructuralists to prove that people shouldn’t trust their own experiences with language. And it is precisely this disconnect with experience that Byatt shows to be problematic.

Holmes, like the other critics, simply can’t imagine that critical theories have absolutely no validity, so he’s forced to read into the novel the same chimerical ambivalence Franken tries so desperately to prove:

Roland’s dramatic alteration is validated by the very sort of emotional or existential experience that critical theory has conditioned him to dismiss as insubstantial. We might account for Roland’s shift by positing, not a rejection of his earlier thinking, but a recognition that his psychological well-being depends on his living as if such powerful emotional experiences had an unquestioned reality. (330)

Adams quotes from an interview in which Byatt discusses her inspiration for the characters in Possession, saying, “poor moderns are always asking themselves so many questions about whether their actions are real and whether what they say can be thought to be true […] that they become rather papery and are miserably aware of this” (111-2). Byatt believes this type of self-doubt is unnecessary. Indeed, Maud’s notion that “there isn’t a unitary ego” (290) and Roland’s thinking of the “idea of his ‘self’ as an illusion” (459)—not to mention Holmes’s conviction that emotional experiences are somehow unreal—are simple examples of the reductionist fallacy. While it is true that an individual’s consciousness and sense of self rest on a substrate of unconscious mental processes and mechanics that can be traced all the way down to the firing of neurons, to suggest this mechanical accounting somehow delegitimizes selfhood is akin to saying that water being made up of hydrogen and oxygen atoms means the feeling of wetness can only be an illusion.

       Just as silly are the ideas that romantic love is a “suspect ideological construct” (290), as Maud calls it, and that “the expectations of Romance control almost everyone in the Western world” (460), as Roland suggests. Anthropologist Helen Fisher writes in her book Anatomy of Love, “some Westerners have come to believe that romantic love is an invention of the troubadours… I find this preposterous. Romantic love is far more widespread” (49). After a long list of examples of love-strickenness from all over the world, from west to east and everywhere in between, Fisher concludes that it “must be a universal human trait” (50). Scientists have found empirical support as well for Roland’s discovery that words can in fact refer to real things. Psychologist Nicole Speer and her colleagues used fMRI to scan people’s brains as they read stories. The actions and descriptions on the page activated the same parts of the brain as witnessing or perceiving their counterparts in reality. The researchers report, “Different brain regions track different aspects of a story, such as a character’s physical location or current goals. Some of these regions mirror those involved when people perform, imagine, or observe similar real-world activities” (989).

       Critics like Flegel insist on joyless reading because happy endings necessarily overlook the injustices of the world. But this is like saying anyone who savors a meal is complicit in world hunger (or for that matter anyone who enjoys reading about a character savoring a meal). If feminist poststructuralists were right about how language functions as a vehicle for oppressive ideologies, then the most literate societies would be the most oppressive, instead of the other way around. Jacques Lacan is the theorist Byatt has the most fun with in Possession—and he is also the main target of the book Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science by the scientists Alan Sokal and Jean Bricmont. “According to his disciples,” they write, Lacan “revolutionized the theory and practice of psychoanalysis; according to his critics, he is a charlatan and his writings are pure verbiage” (18). After assessing Lacan’s use of concepts in topological mathematics, like the Möbius strip, which he sets up as analogies for various aspects of the human psyche, Sokal and Bricmont conclude that Lacan’s ideas are complete nonsense. They write,

The most striking aspect of Lacan and his disciples is probably their attitude toward science, and the extreme privilege they accord to “theory”… at the expense of observations and experiments… Before launching into vast theoretical generalizations, it might be prudent to check the empirical adequacy of at least some of its propositions. But, in Lacan’s writings, one finds mainly quotations and analyses of texts and concepts. (37)

Sokal and Bricmont wonder if the abuses of theorists like Lacan “arise from conscious fraud, self-deception, or perhaps a combination of the two” (6). The question resonates with the poem Randolph Henry Ash wrote about his experience exposing a supposed spiritualist as a fraud, in which he has a mentor assure her protégée, a fledgling spiritualist with qualms about engaging in deception, “All Mages have been tricksters” (444).

There’s even some evidence that Byatt is right about postmodern thinking making academics into “papery” people. In a 2006 lecture titled “The Inhumanity of the Humanities,” William van Peer reports on research he conducted with a former student comparing the emotional intelligence of students in the humanities to students in the natural sciences.

Although the psychologists Raymond Mar and Keith Oatley (407) have demonstrated that reading fiction increases empathy and emotional intelligence, van Peer found that humanities students had no advantage in emotional intelligence over students of natural science. In fact, there was a weak trend in the opposite direction—this despite the fact that the humanities students were reading more fiction. Van Peer attributes the deficit to common academic approaches to literature:

Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)

Whether it derives from her early reading of Arnold and his successors, as Adams suggests, or simply from her own artistic and readerly sensibilities, Byatt has an intense desire to revive that very humanity so many academics sacrifice on the altar of postmodern theory. Critical theory urges students to assume that any discussion of humanity, or universal traits, or human nature can only be exclusionary, oppressive, and, in Flegel’s words, “naïve” and “unethical.” The cognitive linguist Steven Pinker devotes his book The Blank Slate: The Modern Denial of Human Nature to debunking the radical cultural and linguistic determinism that is the foundation of modern literary theory. In a section on the arts, Pinker credits Byatt for playing a leading role in what he characterizes as a “revolt”:

Museum-goers have become bored with the umpteenth exhibit on the female body featuring dismembered torsos or hundreds of pounds of lard chewed up and spat out by the artist. Graduate students in the humanities are grumbling in emails and conference hallways about being locked out of the job market unless they write in gibberish while randomly dropping the names of authorities like Foucault and Butler. Maverick scholars are doffing the blinders that prevented them from looking at exciting developments in the sciences of human nature. And younger artists are wondering how the art world got itself into the bizarre place in which beauty is a dirty word. (Pinker 416)

Pinker’s characterization of modern art resonates with Roland Mitchell’s complaints about how psychoanalysis transforms perceptions of landscapes. And the idea that beauty has become a dirty word is underscored by the critical condemnations of Byatt’s “fairy tale ending.” Pinker goes on to quote Byatt’s response in the New York Times Magazine to the question of what was the best story ever told.

Her answer—The Arabian Nights:

The stories in “The Thousand and One Nights”… are about storytelling without ever ceasing to be stories about love and life and death and money and food and other human necessities. Narration is as much a part of human nature as breath and the circulation of the blood. Modernist literature tried to do away with storytelling, which it thought vulgar, replacing it with flashbacks, epiphanies, streams of consciousness. But storytelling is intrinsic to biological time, which we cannot escape. Life, Pascal said, is like living in a prison from which every day fellow prisoners are taken away to be executed. We are all, like Scheherazade, under sentences of death, and we all think of our lives as narratives, with beginnings, middles, and ends. (quoted in Pinker 419)

Byatt’s satire of papery scholars and her portrayals of her characters’ transcendence of nonsensical theories are but the simplest and most direct ways she celebrates the power of language to transport readers—and the power of stories to possess them. Though she incorporates an array of diverse genres, from letters to poems to diaries, and though some of the excerpts’ meanings subtly change in light of discoveries about their authors’ histories, all these disparate parts nonetheless “hook together,” collaborating in the telling of this magnificent tale. This cooperation would be impossible if the postmodern truism about the medium being the message were actually true. Meanwhile, the novel’s intimate engagement with the mythologies of wide-ranging cultures thoroughly undermines the paradigm according to which myths are deterministic “repeating patterns” imposed on individuals, showing instead that these stories simultaneously emerge from and lend meaning to our common human experiences. As the critical responses to Possession make abundantly clear, current literary theories are completely inadequate in any attempt to arrive at an understanding of Byatt’s work. While new theories may be better suited to the task, it is incumbent on us to put forth a good faith effort to imagine the possibility that true appreciation of this and other works of literature will come only after we’ve done away with theory altogether.

Dennis Junk

Madness and Bliss: Critical vs. Primitive Readings in A.S. Byatt’s Possession: A Romance

Possession can be read as the novelist’s narrative challenge to the ascendant critical theories, an “undermining of facile illusions” about language and culture and politics—a literary refutation of current approaches to literary criticism.

Part 1 of 2

“You have one of the gifts of the novelist at least,” Christabel LaMotte says to her cousin Sabine de Kercoz in A.S. Byatt’s Possession: a Romance, “you persist in undermining facile illusions” (377). LaMotte is staying with her uncle and cousin, Sabine later learns, because she is carrying the child of the renowned, and married, poet Randolph Henry Ash. The affair began when the two met at a breakfast party where they struck up an impassioned conversation that later prompted Ash to instigate a correspondence. LaMotte too was a poet, so each turned out to be an ideal reader for the other’s work. Just over a hundred years after this initial meeting, in the present day of Byatt’s narrative, the literary scholar Roland Mitchell finds two drafts of Ash’s first letter to LaMotte tucked away in the pages of a book he’s examining for evidence about the great poet’s life, and the detective work begins.

Roland, an unpaid research assistant financially dependent on the girlfriend he’s in a mutually unfulfilling relationship with, is overtaken with curiosity and embarks on a quest to piece together the story of what happened between LaMotte and Ash. Knowing next to nothing about LaMotte, Mitchell partners with the feminist scholar Maud Bailey, who one character describes as “a chilly mortal” (159), and a stilted romance develops between them as they seek out the clues to the earlier, doomed relationship. Through her juxtaposition of the romance between the intensely passionate, intensely curious nineteenth century couple and the subdued, hyper-analytic, and sterile modern one, the novelist Byatt does some undermining of facile illusions of her own.

       Both of the modern characters are steeped in literary theory, but Byatt’s narrative suggests that their education and training is more a hindrance than an aid to true engagement with literature, and with life. It is only by breaking with professional protocol—by stealing the drafts of the letter from Ash to LaMotte—and breaking away from his mentor and fellow researchers that Roland has a chance to read, and experience, the story that transforms him. “He had been taught that language was essentially inadequate, that it could never speak what was there, that it only spoke itself” (513). But over the course of the story Roland comes to believe that this central tenet of poststructuralism is itself inadequate, along with the main tenets of other leading critical theories, including psychoanalysis. Byatt, in a later book of criticism, counts herself among the writers of fiction who “feel that powerful figures in the modern critical movements feel almost a gladiatorial antagonism to the author and the authority the author claims” (6).

Indeed, Possession can be read as the novelist’s narrative challenge to the ascendant critical theories, an “undermining of facile illusions” about language and culture and politics—a literary refutation of current approaches to literary criticism. In the two decades since the novel’s publication, critics working in these traditions have been unable to adequately respond to Byatt’s challenge because they’ve been unable to imagine that their ideas are not simply impediments to pleasurable reading but that they’re both wrong and harmful to the creation and appreciation of literature.

       The possession of the title refers initially to how the story of LaMotte and Ash’s romance takes over Maud and Roland—in defiance of the supposed inadequacy of language. If words only speak themselves, then true communication would be impossible. But, as Roland says to Maud after they’ve discovered some uncanny correspondences between each of the two great poets’ works and the physical setting the modern scholars deduce they must’ve visited together, “People’s minds do hook together” (257). This hooking-together is precisely what inspires them to embark on their mission of discovery in the first place. “I want to—to—follow the path,” Maud says to Roland after they’ve read the poets’ correspondence together.

I feel taken over by this. I want to know what happened, and I want it to be me that finds out. I thought you were mad when you came to Lincoln with your piece of stolen letter.

Now I feel the same. It isn’t professional greed. It’s something more primitive. (239)

Roland interrupts to propose the label “Narrative curiosity” for her feeling of being taken over, to which she responds, “Partly” (239). Later in the story, after several more crucial discoveries, Maud proposes revealing all they’ve learned to their academic colleagues and returning to their homes and their lives. Roland worries doing so would mean going back “Unenchanted.” “Are we enchanted?” Maud replies. “I suppose we must start thinking again, sometime” (454). But it’s the primitive, enchanted, supposedly unthinking reading of the biographical clues about the poets that has brought the two scholars to where they are, and their journey ends up resulting in a transformation that allows Maud and Roland to experience the happy ending LaMotte and Ash were tragically deprived of.

Before discovering and being possessed by the romance of the nineteenth century poets, both Maud and Roland were living isolated and sterile lives. Maud, for instance, always has her hair covered in a kind of “head-binding” and twisted in tightly regimented braids that cause Roland “a kind of sympathetic pain on his own skull-skin” (282). She later reveals that she has to cover it because her fellow feminists always assume she’s “dyeing it to please men.” “It’s exhausting,” Roland has just said. “When everything’s a deliberate political stance. Even if it’s interesting” (295). Maud’s bound head thus serves as a symbol (if read in precisely the type of way Byatt’s story implicitly admonishes her audience to avoid) of the burdensome and even oppressive nature of an ideology that supposedly works for the liberation and wider consciousness of women.

Meanwhile, Roland is troubling himself about the implications of his budding romantic feelings for Maud. He has what he calls a “superstitious dread” of “repeating patterns,” a phrase he repeats over and over again throughout the novel. Thinking of his relations with Maud, he muses,

“Falling in love,” characteristically, combs the appearances of the world, and of the particular lover’s history, out of a random tangle and into a coherent plot. Roland was troubled that the opposite might be true. Finding themselves in a plot, they might suppose it appropriate to behave as though it was a sort of plot. And that would be to compromise some kind of integrity they had set out with. (456)

He later wrestles with the idea that “a Romance was one of the systems that controlled him, as the expectations of Romance control almost everyone in the Western world” (460). Because of his education, he cannot help doubting his own feelings, suspecting that giving in to their promptings would have political implications, and worrying that doing so would result in a compromising of his integrity (which he must likewise doubt) and his free will. Roland’s self-conscious lucubration forms a stark contrast to what Randolph Henry Ash wrote in an early letter to his wife Ellen: “I cannot get out of my mind—as indeed, how should I wish to, whose most ardent desire is to be possessed entirely by the pure thought of you—I cannot get out of my mind the entire picture of you” (500). It is only by reading letters like this, and by becoming more like Ash, turning away in the process from his modern learning, that Roland can come to an understanding of himself and accept his feelings for Maud as genuine and innocent.

Identity for modern literary scholars, Byatt suggests, is a fraught and complicated issue. At different points in the novel, both Maud and Roland engage in baroque, abortive efforts to arrive at a sense of who they are. Maud, reflecting on how another scholar’s writing about Ash says more about the author than about the subject, meditates,

Narcissism, the unstable self, the fractured ego, Maud thought, who am I? A matrix for a susurration of texts and codes? It was both a pleasant and an unpleasant idea, this requirement that she think of herself as intermittent and partial. There was the question of the awkward body. The skin, the breath, the eyes, the hair, their history, which did seem to exist. (273)

Roland later echoes this head-binding poststructuralist notion of the self as he continues to dither over whether or not he should act on his feelings for Maud.

Roland had learned to see himself, theoretically, as a crossing-place for a number of systems, all loosely connected. He had been trained to see his idea of his “self” as an illusion, to be replaced by a discontinuous machinery and electrical message-network of various desires, ideological beliefs and responses, language forms and hormones and pheromones. Mostly he liked this. He had no desire for any strenuous Romantic self-assertion. (459)

But he mistakenly takes that lack of desire for self-assertion as genuine, when in fact it is born of his theory-induced self-doubt. He will have to discover in himself that very desire to assert or express himself if he wants to escape his lifeless, menial occupation and end his sexless isolation. He and Maud both have to learn how to integrate their bodies and their desires into their conceptions of themselves.

Unfortunately, thinking about sex is even more fraught with exhausting political implications for Byatt’s scholars than thinking about the self. While on a trek to retrace the steps they believe LaMotte and Ash took in the hills of Yorkshire, Roland considers the writing of a psychoanalytic theorist. Disturbed, he asks Maud, “Do you never have the sense that our metaphors eat up our world?” (275). He goes on to explain that, no matter what they tried to discuss,

It all reduced like boiling jam to—human sexuality… And then, really, what is it, what is this arcane power we have, when we see everything is human sexuality? It’s really powerlessness… We are so knowing… Everything relates to us and so we’re imprisoned in ourselves—we can’t see things. (276)

The couple is coming to realize that they can in fact see things, the same things that the couple whose story they're tracking down saw over a century ago. This budding realization inspires in Roland an awareness of how limiting, even incapacitating, the dubious ideas of critical theorizing can be. Through the distorting prism of psychoanalysis, “Sexuality was like thick smoked glass; everything took on the same blurred tint through it. He could not imagine a pool with stones and water” (278).

The irony is that, for all its faux sophistication, psychoanalytic sexual terminology engenders in both Roland and Maud nothing but bafflement and an aversion to actual sex. Roland highlights this paradox later, thinking,

They were children of a time and culture that mistrusted love, “in love,” romantic love, romance in toto, and which nevertheless in revenge proliferated sexual language, linguistic sexuality, analysis, dissection, deconstruction, exposure. (458)

Maud sums up the central problem when she says to Roland, “And desire, that we look into so carefully—I think all the looking-into has some very odd effects on the desire” (290). In that same scene, while still in Yorkshire trying to find evidence of LaMotte’s having accompanied Ash on his trip, the two modern scholars discover they share a fantasy, not a sexual fantasy, but one involving “An empty clean bed,” “An empty bed in an empty room,” and they wonder if “they’re symptomatic of whole flocks of exhausted scholars and theorists” (290-1).

Guided by their intense desire to be possessed by the two poets of the previous century, Maud and Roland try to imagine how they would have seen the world, and in so doing they try to imagine what it would be like not to believe in the poststructuralist and psychoanalytic theories they’ve been inculcated with. At first Maud tells Roland, “We live in the truth of what Freud discovered. Whether or not we like it. However we’ve modified it. We aren’t really free to suppose—to imagine—he could possibly have been wrong about human nature” (276). But after they’ve discovered a cave with a pool whose reflected light looks like white fire, a metaphor that both LaMotte and Ash used in poems written around the time they would’ve come to that very place, prompting Maud to proclaim, “She saw this. I’m sure she saw this” (289), the two begin trying in earnest to imagine what it would be like to live without their theories. Maud explains to Roland,

We know all sorts of things, too—about how there isn’t a unitary ego—how we’re made up of conflicting, interacting systems of things—and I suppose we believe that? We know we’re driven by desire, but we can’t see it as they did, can we? We never say the word Love, do we—we know it’s a suspect ideological construct—especially Romantic Love—so we have to make a real effort of imagination to know what it felt like to be them, here, believing in these things—Love—themselves—that what they did mattered—(290)

       Though many critics have pointed out how the affair between LaMotte and Ash parallels the one between Maud and Roland, in some way the trajectories of the two relationships run in opposite directions. For instance, LaMotte leaves Ash as even more of a “chilly mortal” (310) than she was when she first met him. It turns out the term derives from a Mrs. Cammish, who lodged LaMotte and Ash while they were on their trip, and was handed down to the Lady Bailey, Maud’s relative, who applies it to her in a conversation with Roland. And whereas the ultimate falling out between LaMotte and Ash comes in the wake of Ash exposing a spiritualist, whose ideas and abilities LaMotte had invested a great deal of faith in, as a fraud, Roland’s counterpart disillusionment, his epiphany that literary theory as he has learned it is a fraud, is what finally makes the consummation of his relationship with Maud possible. Maud too has to overcome, to a degree, her feminist compunctions to be with Roland. Noting how this chilly mortal is warming over the course of their quest, Roland thinks how, “It was odd to hear Maud Bailey talking wildly of madness and bliss” (360). But at last she lets her hair down.

Sabine’s journal of the time her cousin Christabel stayed with her and her father on the Brittany coast, where she’d sought refuge after discovering she was pregnant, offers Roland and Maud a glimpse at how wrongheaded it can be to give precedence to their brand of critical reading over what they would consider a more primitive approach. Ironically, it is the young aspiring writer who gives them this glimpse as she chastises her high-minded poet cousin for her attempts to analyze and explain the meanings of the myths and stories she’s grown up with. “The stories come before the meanings,” Sabine insists to Christabel. “I do not believe all these explanations. They diminish. The idea of Woman is less than brilliant Vivien, and the idea of Merlin will not allegorise into male wisdom. He is Merlin” (384). These words come from the same young woman who LaMotte earlier credited for her persistence “in undermining facile illusions” (377).

Readers of Byatt’s novel, though not Maud and Roland, both of whom likely already know of the episode, learn about how Ash attended a séance and, reaching up to grab a supposedly levitating wreath, revealed it to be attached to a set of strings connected to the spiritualist. In a letter to Ruskin read for Byatt’s readers by another modern scholar, Ash expresses his outrage that someone would exploit the credulity and longing of the bereaved, especially mothers who’ve lost children. “If this is fraud, playing on a mother’s harrowed feelings, it is wickedness indeed” (423). He also wonders what the ultimate benefit would be if spiritualist studies into other realms proved to be valid. “But if it were so, if the departed spirits were called back—what good does it do? Were we meant to spend our days sitting and peering into the edge of the shadows?” (422). LaMotte and Ash part ways for good after his exposure of the spiritualist as a charlatan because she is so disturbed by the revelation. And, for the reader, the interlude serves as a reminder of past follies that today are widely acknowledged to have depended on trickery and impassioned credulity. So it might be for the ideas of Freud and Derrida and Lacan.

Roland arrives at the conclusion that this is indeed the case. Having been taught that language is inadequate and only speaks itself, he gradually comes to realize that this idea is nonsense. Reflecting on how he was taught that language couldn’t speak about what really existed in the world, he suddenly realizes that he’s been disabused of the idea. “What happened to him was that the ways in which it could be said had become more interesting than the idea that it could not” (513). He has learned through his quest to discover what had occurred between LaMotte and Ash that “It is possible for a writer to make, or remake at least, for a reader, the primary pleasures of eating, or drinking, or looking on, or sex.” People’s minds do in fact “hook together,” as he’d observed earlier, and they do it through language. The novel’s narrator intrudes to explain here near the end of the book what Roland is coming to understand.

Now and then there are readings that make the hairs on the neck, the non-existent pelt, stand on end and tremble, when every word burns and shines hard and clear and infinite and exact, like stones of fire, like points of stars in the dark—readings when the knowledge that we shall know the writing differently or better or satisfactorily, runs ahead of any capacity to say what we know, or how. In these readings, a sense of that text has appeared to be wholly new, never before seen, is followed, almost immediately, by the sense that it was always there, that we the readers, knew it was always there, and have always known it was as it was, though we have now for the first time recognised, become fully cognisant of, our knowledge. (512) (Neuroscientists agree.)

The recognition the narrator refers to—which Roland is presumably experiencing in the scene—is of a shared human nature, and shared human experience, the notions of which are considered by most literary critics to be politically reactionary.

Though he earlier claimed to have no desire to assert himself, Roland discovers he has a desire to write poetry. He decides to turn away from literary scholarship altogether and become a poet. He also asserts himself by finally taking charge and initiating sex with Maud.

And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)

This is in fact, except for a postscript focusing on Ash, the final scene of the novel, and it represents Roland’s total, and Maud’s partial, transcendence of the theories and habits that hitherto made their lives so barren and lonely.

Read part 2

Related posts:

Read

POSTSTRUCTURALISM: BANAL WHEN IT'S NOT BUSY BEING ABSURD

Read

CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE

Or

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

Dennis Junk

Who Needs Complex Narratives?: Tim Parks' Enlightened Cynicism

Identities can be burdensome, as Parks intimates his is when he reveals his story has brought him to a place where he’s making a living engaging in an activity that serves no need—and yet he can’t bring himself to seek out other employment. In an earlier, equally fascinating post titled "The Writer's Job," Parks interprets T.S. Eliot's writing about writers as suggesting that "only those who had real personality, special people like himself, would appreciate what a burden personality was and wish to shed it."

One of my professors asked our class last week how many of us were interested in writing fiction of our own. She was trying to get us to consider the implications of using one strategy for telling a story based on your own life over another. But I was left thinking instead about the implications of nearly everyone in the room raising a hand. Is the audience for any aspiring author’s work composed exclusively of other aspiring authors? If so, does that mean literature is no more than an exclusive society of the published and consumed forever screening would-be initiates, forever dangling the prize of admission to their ranks, allowing only the elite to enter, and effectively sealed off from the world of the non-literary?

Most of our civilization has advanced beyond big books. People still love their stories, but everyone’s time is constrained, and the choices of entertainment are infinite. Reading The Marriage Plot is an extravagance. Reading Of Human Bondage, the book we’re discussing in my class, is only for students of college English and the middle-class white guys trying to impress them. Nevertheless, Jonathan Franzen, who’s written two lengthy, too lengthy works of fiction that enjoy a wide readership, presumably made up primarily of literary aspirants like me (I read and enjoyed both), told an Italian interviewer that “There is an enormous need for long, elaborate, complex stories, such as can only be written by an author concentrating alone, free from the deafening chatter of Twitter.”

British author Tim Parks quotes Franzen in a provocative post at The New York Review of Books titled “Do We Need Stories?” Parks notes that “as a novelist it is convenient to think that by the nature of the job one is on the side of the good, supplying an urgent and general need.” Though he’s written some novels of his own, and translated several others from Italian to English, Parks suspects that Franzen is wrong, that as much as we literary folk may enjoy them, we don’t really need complex narratives. We should note that just as Franzen is arguing on behalf of his own vocation Parks is arguing against his, thus effecting a type of enlightened cynicism toward his own work and that of others in the same field. “Personally,” he says, “I fear I’m too enmired in narrative and self narrative to bail out now. I love an engaging novel, I love a complex novel; but I am quite sure I don’t need it.”

         Parks’ argument is fascinating for what it reveals about what many fiction writers and aficionados believe they’re doing when they’re telling stories. It’s also fascinating for what it represents about authors and their attitudes toward writing. Parks rubs up against some profound insights, but then succumbs to some old-fashioned humanities nonsense. Recalling a time when he served as a judge for a literary award, Parks quotes the case made by a colleague on behalf of his or her favored work, which is excellent, it was insisted, “because it offers complex moral situations that help us get a sense of how to live and behave.” As life becomes increasingly complex, then, fraught with distractions like those incessant tweets, we need fictional accounts of complex moral dilemmas to help us train our minds to be equal to the task of living in the modern world. Parks points out two problems with this view: fiction isn’t the only source of stories, and behind all that complexity is the author’s take on the moral implications of the story’s events which readers must decide whether to accept or reject. We can’t escape complex moral dilemmas, so we may not really need any simulated training. And we have to pay attention lest we discover our coach has trained us improperly. The power of stories can, as Parks suggests, be “pernicious.” “In this view of things, rather than needing stories we need to learn how to smell out their drift and resist them.” (Yeah, but does anyone read Ayn Rand who isn't already convinced?)

But Parks doesn’t believe the true goal of either authors or readers is moral development or practical training. Instead, complex narratives give pleasure because they bolster our belief in complex selves. Words like God, angel, devil, and ghost, Parks contends, have to come with stories attached to them to be meaningful because they don’t refer to anything we can perceive. From this premise of one-word stories, he proceeds,

Arguably the most important word in the invented-referents category is “self.” We would like the self to exist perhaps, but does it really? What is it? The need to surround it with a lexical cluster of reinforcing terms—identity, character, personality, soul—all with equally dubious referents suggests our anxiety. The more words we invent, the more we feel reassured that there really is something there to refer to.

When my classmates and I raised our hands and acknowledged our shared desire to engage in the creative act of storytelling, what we were really doing, according to Parks, was expressing our belief in that fictional character we refer to reverentially as ourselves. 

One of the accomplishments of the novel, which as we know blossomed with the consolidation of Western individualism, has been to reinforce this ingenious invention, to have us believe more and more strongly in this sovereign self whose essential identity remains unchanged by all vicissitudes. Telling the stories of various characters in relation to each other, how something started, how it developed, how it ended, novels are intimately involved with the way we make up ourselves. They reinforce a process we are engaged in every moment of the day, self creation. They sustain the idea of a self projected through time, a self eager to be a real something (even at the cost of great suffering) and not an illusion.

Parks is just as much a product of that “Western individualism” as the readers he’s trying to enlighten as to the fictional nature of their essential being. As with his attempt at undermining the ultimate need for his own profession, there’s a quality of self-immolation in this argument—except of course there’s nothing, really, to immolate.

What exactly, we may wonder, is doing the reading, is so desperate to believe in its own reality? And why is that belief in its own reality so powerful that this thing, whatever it may be, is willing to experience great suffering to reinforce it? Parks suggests the key to the self is some type of unchanging and original coherence. So we like stories because we like characters who are themselves coherent and clearly delineated from other coherent characters.

The more complex and historically dense the stories are, the stronger the impression they give of unique and protracted individual identity beneath surface transformations, conversions, dilemmas, aberrations. In this sense, even pessimistic novels—say, J.M. Coetzee’s Disgrace—can be encouraging: however hard circumstances may be, you do have a self, a personal story to shape and live. You are a unique something that can fight back against all the confusion around. You have pathos.

In this author’s argument for the superfluity of authors, the centrality of pain and suffering to the story of the self is important to note. He makes the point even more explicit, albeit inadvertently, when he says, “If we asked the question of, for example, a Buddhist priest, he or she would probably tell us that it is precisely this illusion of selfhood that makes so many in the West unhappy.”

I don’t pretend to have all the questions surrounding our human fascination with narrative—complex and otherwise—worked out, but I do know Parks’ premise is faulty.

Unlike many professional scholars in the Humanities, Parks acknowledges that at least some words can refer to things in the world. But he goes wrong when he assumes that if there exists no physical object to refer to the word must have a fictional story attached to it. There is good evidence, for instance, that our notions of God and devils and spirits are not in fact based on stories, though stories clearly color their meanings. Our interactions with invisible beings are based on the same cognitive mechanisms that help us interact with completely visible fellow humans. What psychologists call theory of mind, our reading of intentions and mental states into others, likely extends into realms where no mind exists to have intentions and states. That’s where our dualistic philosophy comes from.

While Parks is right in pointing out that the words God and self don’t have physical referents—though most of us, I assume, think of our bodies as ourselves to some degree—he’s completely wrong in inferring these words only work as fictional narratives. People assume, wrongly, that God is a real being because they have experiences with him. In the same way, the self isn’t an object but an experience—and a very real experience. (Does the word fun have to come with a story attached?) The consistency across time and circumstance, the sense of unified awareness, these are certainly exaggerated at times. So too is our sense of transformation though, as anyone knows who’s discovered old writings from an earlier stage of life and thought, “Wow, I was thinking about the same stuff back then as I am now—even my writing style is similar!”

Parks is wrong too about so-called Western society, as pretty much everyone who uses that term is. It’s true that some Asian societies have a more collectivist orientation, but I’ve heard rumors that a few Japanese people actually enjoy reading novels. (The professor of the British Lit course I'm taking is Chinese.) Those Buddhist monks are deluded too. Ruut Veenhoven surveyed 43 nations in the early 1990s and discovered that as individualism increases, so too does happiness. “There is no pattern of diminishing returns,” Veenhoven writes. “This indicates that individualization has not yet passed its optimum.” What this means is that, assuming Parks is right in positing that novel-reading increases individualism, reading novels could make you happier. Unfortunately, a lot of high-brow, literary authors would bristle at this idea because it makes of their work less a heroic surveying of the abyss and more of a commodity.

Parks doesn’t see any meaningful distinction between self and identity, but psychologists would use the latter term to label his idea of a coherent self-story. Dan McAdams is the leading proponent of the idea that in addition to a unified and stable experience of ourselves we each carry with us a story whose central theme is our own uniqueness and how it developed. He writes in his book The Stories We Live By: Personal Myths and the Making of the Self that identity is “an inner story of the self that integrates the reconstructed past, perceived present, and anticipated future to provide a life with unity, purpose, and meaning.” But we don’t just tell these stories to ourselves, nor are we solely interested in our own story. One of the functions of identity is to make us seem compelling and attractive to other people. Parks, for instance, tells us the story of how he provides a service, writing and translating, he understands isn’t necessary to anyone. And, if you’re like me, at least for a moment, you’re impressed with his ability to shoulder the burden of this enlightened cynicism. He’s a bit like those Buddhist monks who go to such great lengths to eradicate their egos.

The insight that Parks never quite manages to arrive at is that suffering is integral to stories of the self. If my story of myself, my identity, doesn’t feature any loss or conflict, then it’s not going to be very compelling to anyone. But what’s really compelling are the identities that somehow manage to cause pain to the very selves whose stories they are. Identities can be burdensome, as Parks intimates his is when he reveals his story has brought him to a place where he’s making a living engaging in an activity that serves no need—and yet he can’t bring himself to seek out other employment. In an earlier, equally fascinating post titled "The Writer's Job," Parks interprets T.S. Eliot's writing about writers as suggesting that "only those who had real personality, special people like himself, would appreciate what a burden personality was and wish to shed it."

If we don’t suffer for our identities, then we haven’t earned them. Without the pain of initiation, we don’t really belong. We’re not genuinely who we claim to be. We’re tourists. We’re poseurs. Mitt Romney, for instance, is thought to be an inauthentic conservative because he hasn’t shown sufficient willingness to lose votes—and possibly elections—for the sake of his convictions. We can’t help but assume equivalence between cost and value. If your identity doesn’t entail some kind of cost, well, then it’s going to come off as cheap. So a lot of people play up, or even fabricate, the suffering in their lives.

What about Parks’ question? Are complex narratives necessary? Maybe, like identities, the narratives we tell, as well as the narratives we enjoy, work as costly signals, so that the complexity of the stories you like serves as a reliable indication of the complexity of your identity. If you can truly appreciate a complex novel, you can truly appreciate a complex individual. Maybe our complicated modern civilization, even with its tweets and Kindles, is more a boon than a hindrance to complexity and happiness. What this would mean is that if two people on the subway realize they’re both reading the same complex narrative they can be pretty sure they’re compatible as friends or lovers. Either that, or they’re both English professors and they have no idea what’s going on, in which case they’re still compatible but they’ll probably hate each other regardless.

At least, that's the impression I get from David Lodge's Small World, the latest complex narrative assigned in my English Lit course taught by a professor from an Eastern society.

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

WHAT'S THE POINT OF DIFFICULT READING?

Dennis Junk

Intuition vs. Science: What's Wrong with Your Thinking, Fast and Slow

Kahneman has no faith in our ability to clean up our thinking. He’s an expert on all the ways thinking goes awry, and even he catches himself making all the common mistakes time and again. So he proposes a way around the impenetrable wall of cognitive illusion and self-justification. If all the people gossiping around the water cooler are well-versed in the language of biases and heuristics and errors of intuition, we may all benefit because anticipating gossip can have a profound effect on behavior. No one wants to be spoken of as the fool.

From Completely Useless to Moderately Useful

            In 1955, a twenty-one-year-old Daniel Kahneman was assigned the formidable task of creating an interview procedure to assess the fitness of recruits for the Israeli army. Kahneman’s only qualification was his bachelor’s degree in psychology, but the state of Israel had only been around for seven years at the time so the Defense Forces were forced to satisfice. In the course of his undergraduate studies, Kahneman had discovered the writings of a psychoanalyst named Paul Meehl, whose essays he would go on to “almost memorize” as a graduate student. Meehl’s work gave Kahneman a clear sense of how he should go about developing his interview technique.

If you polled psychologists today to get their predictions for how successful a young lieutenant inspired by a book written by a psychoanalyst would be in designing a personality assessment protocol—assuming you left out the names—you would probably get some dire forecasts. But Paul Meehl wasn’t just any psychoanalyst, and Daniel Kahneman has gone on to become one of the most influential psychologists in the world. The book whose findings Kahneman applied to his interview procedure was Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, which Meehl lovingly referred to as “my disturbing little book.” Kahneman explains,

Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule. In a typical study, trained counselors predicted the grades of freshmen at the end of the school year. The counselors interviewed each student for forty-five minutes. They also had access to high school grades, several aptitude tests, and a four-page personal statement. The statistical algorithm used only a fraction of this information: high school grades and one aptitude test. (222)

The findings for this prototypical study are consistent with those arrived at by researchers over the decades since Meehl released his book:

The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented. (223)       
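To see the logic for yourself, here is a minimal sketch in Python of the kind of comparison Meehl was describing. It is a toy simulation, not his data: the cue weights, the noise levels, and the worthless "interview impression" are all assumptions chosen only to illustrate why a fixed mechanical rule tends to beat a judge who weighs the same cues inconsistently while attending to extra but uninformative information.

```python
# Toy simulation of Meehl-style clinical vs. statistical prediction.
# All weights and noise levels below are illustrative assumptions.
import random
import statistics

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

n = 1000
hs_grades = [random.gauss(0, 1) for _ in range(n)]   # standardized high school grades
aptitude = [random.gauss(0, 1) for _ in range(n)]    # one aptitude test score
# Hypothetical freshman grades: mostly the two cues, plus a large dose of luck.
outcome = [0.5 * g + 0.3 * a + random.gauss(0, 0.8) for g, a in zip(hs_grades, aptitude)]

# The statistical rule: a fixed, mechanical combination of the two cues.
rule = [0.5 * g + 0.3 * a for g, a in zip(hs_grades, aptitude)]

# The "clinical" judge: same cues, but weighted inconsistently from case to case,
# plus a vivid interview impression that carries no information about the outcome.
clinical = []
for g, a in zip(hs_grades, aptitude):
    w_g = 0.5 + random.gauss(0, 0.3)
    w_a = 0.3 + random.gauss(0, 0.3)
    interview = random.gauss(0, 1)
    clinical.append(w_g * g + w_a * a + 0.4 * interview)

print("rule vs. outcome:     ", round(correlation(rule, outcome), 2))
print("clinician vs. outcome:", round(correlation(clinical, outcome), 2))
```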

            Kahneman designed the interview process by coming up with six traits he thought would have direct bearing on a soldier’s success or failure, and he instructed the interviewers to assess the recruits on each dimension in sequence. His goal was to make the process as systematic as possible, thus reducing the role of intuition. The response of the recruitment team will come as no surprise to anyone: “The interviewers came close to mutiny” (231). They complained that their knowledge and experience were being given short shrift, that they were being turned into robots. Eventually, Kahneman was forced to compromise, creating a final dimension that was holistic and subjective. The scores on this additional scale, however, seemed to be highly influenced by scores on the previous scales.

When commanding officers evaluated the new recruits a few months later, the team compared the evaluations with their predictions based on Kahneman’s six scales. “As Meehl’s book had suggested,” he writes, “the new interview procedure was a substantial improvement over the old one… We had progressed from ‘completely useless’ to ‘moderately useful’” (231).   
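What the structured procedure amounts to in practice can be sketched in a few lines. The trait labels and the five-point scale below are illustrative stand-ins, not Kahneman's actual list; the point is only that the trait ratings are combined mechanically and the holistic impression is recorded separately, so intuition gets a vote without contaminating the scores.

```python
# Sketch of a structured interview score: rate six traits in a fixed order,
# combine them mechanically, and record the holistic impression separately.
# The trait names and the 1-5 scale are hypothetical stand-ins.
TRAITS = ["responsibility", "sociability", "punctuality",
          "independence", "energy", "practical thinking"]

def structured_score(ratings, holistic):
    """ratings: dict mapping every trait to a 1-5 score; holistic: 1-5 overall impression."""
    if set(ratings) != set(TRAITS):
        raise ValueError("rate every trait before recording the overall impression")
    trait_total = sum(ratings[t] for t in TRAITS)  # the mechanical combination
    return trait_total, holistic                   # keep the intuition, but keep it separate

example = {t: 3 for t in TRAITS}
example["energy"] = 5
print(structured_score(example, holistic=4))       # -> (20, 4)
```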

            Kahneman recalls this story at about the midpoint of his magnificent, encyclopedic book Thinking, Fast and Slow. This is just one in a long series of run-ins with people who don’t understand or can’t accept the research findings he presents to them, and it is neatly woven into his discussions of those findings. Each topic and each chapter feature a short test that allows you to see where you fall in relation to the experimental subjects. The remaining thread in the tapestry is the one most readers familiar with Kahneman’s work most anxiously anticipated—his friendship with Amos Tversky, his collaborator in the work that earned Kahneman the Nobel Prize in economics in 2002.

Most of the ideas that led to experiments that led to theories which made the two famous and contributed to the founding of an entire new field, behavioral economics, were borne of casual but thrilling conversations both found intrinsically rewarding in their own right. Reading this book, as intimidating as it appears at a glance, you get glimmers of Kahneman’s wonder at the bizarre intricacies of his own and others’ minds, flashes of frustration at how obstinately or casually people avoid the implications of psychology and statistics, and intimations of the deep fondness and admiration he felt toward Tversky, who died in 1996 at the age of 59.

Pointless Punishments and Invisible Statistics

            When Kahneman begins a chapter by saying, “I had one of the most satisfying eureka experiences of my career while teaching flight instructors in the Israeli Air Force about the psychology of effective training” (175), it’s hard to avoid imagining how he might have relayed the incident to Amos years later. It’s also hard to avoid speculating about what the book might’ve looked like, or if it ever would have been written, if he were still alive. The eureka experience Kahneman had in this chapter came about, as many of them apparently did, when one of the instructors objected to his assertion, in this case that “rewards for improved performance work better than punishment of mistakes.” The instructor insisted that over the long course of his career he’d routinely witnessed pilots perform worse after praise and better after being screamed at. “So please,” the instructor said with evident contempt, “don’t tell us that reward works and punishment does not, because the opposite is the case.” Kahneman, characteristically charming and disarming, calls this “a joyous moment of insight” (175).

            The epiphany came from connecting a familiar statistical observation with the perceptions of an observer, in this case the flight instructor. The problem is that we all have a tendency to discount the role of chance in success or failure. Kahneman explains that the instructor’s observations were correct, but his interpretation couldn’t have been more wrong.

What he observed is known as regression to the mean, which in that case was due to random fluctuations in the quality of performance. Naturally, he only praised a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and therefore likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into the cadet’s earphones only when the cadet’s performance was unusually bad and therefore likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process. (175-6)

The roster of domains in which we fail to account for regression to the mean is disturbingly long. Even after you’ve learned about the phenomenon, it’s still difficult to recognize the situations that call for it. Kahneman quotes the statistician David Freedman to the effect that whenever regression becomes pertinent in a civil or criminal trial, the side that has to explain it will pretty much always lose the case. Not understanding regression, and not appreciating how it distorts our impressions, has implications for even the minutest details of our daily experiences. “Because we tend to be nice to other people when they please us,” Kahneman writes, “and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty” (176). Probability is a bitch.
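If it helps to see the arithmetic, here is a minimal simulation sketch, not from the book; the cadets, skill levels, and noise figures are all invented. Each attempt is scored as a stable skill plus random luck, and the cadets singled out as best and worst on one attempt drift back toward their own averages on the next, with no praise or screaming anywhere in the model.

```python
import random

random.seed(42)

N_CADETS = 1000

# Hypothetical setup: each cadet has a stable skill level, and every landing
# attempt adds independent luck on top of it.
skills = [random.gauss(50, 5) for _ in range(N_CADETS)]

def attempt(skill):
    """One landing attempt: skill plus random noise (pure luck)."""
    return skill + random.gauss(0, 10)

first = [attempt(s) for s in skills]
second = [attempt(s) for s in skills]

# "Praised" cadets: the top 10% on the first attempt. "Screamed at": the bottom 10%.
ranked = sorted(range(N_CADETS), key=lambda i: first[i])
worst, best = ranked[:N_CADETS // 10], ranked[-(N_CADETS // 10):]

def avg(xs):
    return sum(xs) / len(xs)

print("best 10%:  attempt 1 =", round(avg([first[i] for i in best]), 1),
      " attempt 2 =", round(avg([second[i] for i in best]), 1))
print("worst 10%: attempt 1 =", round(avg([first[i] for i in worst]), 1),
      " attempt 2 =", round(avg([second[i] for i in worst]), 1))
# The best group slips back and the worst group improves on the second attempt,
# even though no praise or punishment exists anywhere in the simulation.
```

The apparent effect of feedback the instructor swore by falls right out of pure chance.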

The Illusion of Skill in Stock-Picking

            Probability can be expensive too. Kahneman recalls being invited to give a lecture to advisers at an investment firm. To prepare, he asked for some data on the advisers’ performances and was given a spreadsheet of investment outcomes covering eight years. When he analyzed the numbers, he found that none of the advisers was consistently more successful than the others; the correlation between outcomes from year to year was essentially nil. When he attended a dinner the night before the lecture “with some of the top executives of the firm, the people who decide on the size of bonuses,” he knew from experience how tough a time he was going to have convincing them that “at least when it came to building portfolios, the firm was rewarding luck as if it were a skill.” Still, he was amazed by the execs’ lack of shock:

We all went on calmly with our dinner, and I have no doubt that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions—and thereby threaten people’s livelihood and self-esteem—are simply not absorbed. (216)
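The analysis Kahneman describes is easy enough to sketch. Assuming a table of yearly results for each adviser (the numbers below are simulated, not the firm’s actual data), you correlate every pair of years and average the coefficients; a value near zero means this year’s ranking tells you essentially nothing about next year’s.

```python
import random
from itertools import combinations
from statistics import correlation, mean  # statistics.correlation requires Python 3.10+

random.seed(1)

N_ADVISERS, N_YEARS = 25, 8

# Simulated stand-in for the firm's spreadsheet: if outcomes are pure luck,
# every adviser's yearly result is just independent noise around the same mean.
results = [[random.gauss(0.07, 0.12) for _ in range(N_YEARS)]
           for _ in range(N_ADVISERS)]

def year_column(year):
    """All advisers' outcomes for a given year."""
    return [results[adviser][year] for adviser in range(N_ADVISERS)]

# Correlate every pair of years (28 pairs for 8 years) and average the coefficients.
pair_corrs = [correlation(year_column(y1), year_column(y2))
              for y1, y2 in combinations(range(N_YEARS), 2)]

print(f"average year-to-year correlation: {mean(pair_corrs):+.3f}")
# Hovering near zero: this year's star adviser is no likelier than anyone
# else to be next year's.
```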

The scene that follows echoes the first chapter of Carl Sagan’s classic paean to skepticism, The Demon-Haunted World, where Sagan recounts being bombarded with questions about science by a driver taking him from the airport to an auditorium where he was to lecture. He found himself explaining to the driver again and again that what the man thought was science—Atlantis, aliens, crystals—was, in fact, not. “As we drove through the rain,” Sagan writes, “I could see him getting glummer and glummer. I was dismissing not just some errant doctrine, but a precious facet of his inner life” (4). In Kahneman’s recollection of his own drive back to the airport after his lecture, he writes of a conversation with his driver, one of the execs he’d dined with the night before.

He told me, with a trace of defensiveness, “I have done very well for the firm and no one can take that away from me.” I smiled and said nothing. But I thought, “Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?” (216)

Blinking at the Power of Intuitive Thinking

            It wouldn’t surprise Kahneman at all to discover how much stories like these resonate. Indeed, he must’ve considered it a daunting challenge to conceive of a sensible, cognitively easy way to get all of his vast knowledge of biases and heuristics and unconscious, automatic thinking into a book worthy of the science—and worthy too of his own reputation—while at the same time tying it all together with some intuitive overarching theme, something that would make it read more like a novel than an encyclopedia.

Malcolm Gladwell faced a similar challenge in writing Blink: The Power of Thinking Without Thinking, but he had the advantages of a less scholarly readership, no obligation to be comprehensive, and the freedom afforded to someone writing about a field he isn’t one of the acknowledged leaders and creators of. Ultimately, Gladwell’s book painted a pleasing if somewhat incoherent picture of intuitive thinking. The power he refers to in the title is the power intuition holds over the thoughts and actions of the thinker, not, as many must have presumed, the power to arrive at accurate conclusions.

It’s entirely possible that Gladwell’s misleading title was deliberate, since there’s a considerable market for the message that intuition reigns supreme over science and critical thinking. But there are points in his book where it seems like Gladwell himself is confused. Robert Cialdini, Steve Martin, and Noah Goldstein cover some of the same research Kahneman and Gladwell do, but their book Yes!: 50 Scientifically Proven Ways to Be Persuasive is arranged in a list format, with each chapter serving as its own independent mini-essay.

Early in Thinking, Fast and Slow, Kahneman introduces us to two characters, System 1 and System 2, who pass the controls of our minds back and forth between themselves according to the expertise and competence demanded by the current exigency or enterprise. System 1 is the more intuitive, easygoing guy, the one who does what Gladwell refers to as “thin-slicing,” the fast thinking of the title. System 2 works deliberately and takes effort on the part of the thinker. Most people find having to engage their System 2—multiply 17 by 24—unpleasant to one degree or another.

The middle part of the book introduces readers to two other characters, ones whose very names serve as a challenge to the field of economics. Econs are the beings market models and forecasts are based on. They are rational, selfish, and difficult to trick. Humans, the other category, show inconsistent preferences, changing their minds depending on how choices are worded or presented; are much more sensitive to the threat of loss than to the promise of gain; are sometimes selfless; and not only can be tricked with ease but routinely trick themselves. Finally, Kahneman introduces us to our “Two Selves,” the two ways we have of thinking about our lives: either moment-to-moment—the sort of experience he, along with Mihaly Csikszentmihalyi (author of Flow), pioneered the study of—or in abstract hindsight. It’s not surprising at this point that there are important ways in which the two selves tend to disagree.
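The loss-aversion point in particular lends itself to a concrete sketch. The value function below uses the round-number parameters Kahneman and Tversky estimated in their later work on prospect theory (a loss-aversion coefficient of about 2.25 and a curvature exponent of about 0.88); the specific figures are illustrative, not something this part of the book insists on.

```python
# A prospect-theory-style value function: outcomes are evaluated as gains or
# losses relative to a reference point, with losses weighted more heavily.
def subjective_value(x, alpha=0.88, loss_aversion=2.25):
    """Felt value of gaining (x > 0) or losing (x < 0) an amount x."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

# A Human, unlike an Econ, feels a $100 loss far more keenly than a $100 gain.
print(round(subjective_value(100), 1))   # about 57.5
print(round(subjective_value(-100), 1))  # about -129.4
```

An Econ would treat the two outcomes as symmetric; a Human does not.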

Intuition and Cerebration

  The Econs versus Humans distinction, with its rhetorical purpose embedded in the terms, is plenty intuitive. The two selves idea, despite being a little too redolent of psychoanalysis, also works well. But the discussions about System 1 and System 2 are never anything but ethereal and abstruse. Kahneman’s stated goal was to discuss each of the systems as if they were characters in a plot, but he’s far too concerned with scientifically precise definitions to run with the metaphor. The term system is too bloodless and too suggestive of computer components; it’s too much of the realm of System 2 to be at all satisfying to System 1. The collection of characteristics Thinking links to the first system (see a list below) is lengthy and fascinating and not easily summed up or captured in any neat metaphor. But we all know what Kahneman is talking about. We could use mythological figures, perhaps Achilles or Orpheus for System 1 and Odysseus or Hephaestus for System 2, but each of those characters comes with his own narrative baggage. Not everyone’s System 1 is full of rage like Achilles, or musical like Orpheus. Maybe we could assign our System 1s idiosyncratic totem animals.

But I think the most familiar and most versatile term we have for System 1 is intuition. It is a hairy and unpredictable beast, but we all recognize it. System 2 is actually the harder of the two to name, because people so often mistake their intuitions for logical thought. Kahneman explains why this is the case—because our cognitive resources are limited, our intuition often offers up simple questions as substitutes for more complicated ones—but we still need a term that doesn’t suggest complete independence from intuition and that doesn’t imply deliberate thinking operates flawlessly, like a calculator. I propose cerebration. The cerebral cortex rests on a substrate of other complex neurological structures. It’s more developed in humans than in any other animal. And the way it rolls trippingly off the tongue is as eminently appropriate as the swish of intuition. Both terms work well as verbs too. You can intuit, or you can cerebrate. And when your intuition is working in integrated harmony with your cerebration, you are likely in the state of flow Csikszentmihalyi pioneered the study of.

While Kahneman’s division of thought into two systems never really resolves into an intuitively manageable dynamic, something he does throughout the book, which I initially thought was silly, now seems quite clever. Kahneman has no faith in our ability to clean up our thinking. He’s an expert on all the ways thinking goes awry, and even he catches himself making all the common mistakes time and again. In the introduction, he proposes a way around the impenetrable wall of cognitive illusion and self-justification: if all the people gossiping around the water cooler are well versed in the language of biases, heuristics, and errors of intuition, we may all benefit, because anticipating gossip can have a profound effect on behavior. No one wants to be spoken of as the fool.

Kahneman writes, “it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own.” It’s not easy to tell from his straightforward prose, but I imagine him writing lines like that with a wry grin on his face. He goes on,

Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and at home. (3)

So we encourage the education of others in order to trick ourselves into trying to look smarter in their eyes. Toward that end, Kahneman ends each chapter with a list of sentences in quotation marks—lines you might overhear in passing at that water cooler if everyone where you work read his book. I think he’s overly ambitious. At some point in the future, you may hear lines like “They’re counting on denominator neglect” (333) in a boardroom—where people are trying to impress colleagues and superiors—but I seriously doubt you’ll hear it in the break room. Really, what he’s hoping is that people will start talking more like behavioral economists. Though some undoubtedly will, Thinking, Fast and Slow probably won’t ever be as widely read as, say, Freud’s lurid, pseudoscientific The Interpretation of Dreams. That’s a tragedy.

Still, it’s pleasant to think about a group of friends and colleagues talking about something other than football and American Idol.

Characteristics of System 1 (105). Try to come up with a good metaphor. System 1:

· generates impressions, feelings, and inclinations; when endorsed by System 2 these become beliefs, attitudes, and intentions

· operates automatically and quickly, with little or no effort, and no sense of voluntary control

· can be programmed by System 2 to mobilize attention when particular patterns are detected (search)

· executes skilled responses and generates skilled intuitions, after adequate training

· creates a coherent pattern of activated ideas in associative memory

· links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance

· distinguishes the surprising from the normal

· infers and invents causes and intentions

· neglects ambiguity and suppresses doubt

· is biased to believe and confirm

· exaggerates emotional consistency (halo effect)

· focuses on existing evidence and ignores absent evidence (WYSIATI)

· generates a limited set of basic assessments

· represents sets by norms and prototypes, does not integrate

· matches intensities across scales (e.g., size and loudness)

· computes more than intended (mental shotgun)

· sometimes substitutes an easier question for a difficult one (heuristics)

· is more sensitive to changes than to states (prospect theory)

· overweights low probabilities

· shows diminishing sensitivity to quantity (psychophysics)

· responds more strongly to losses than to gains (loss aversion)

· frames decision problems narrowly, in isolation from one another

Also read:

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK
