Politics

The Idiocy of Outrage: Sam Harris's Run-ins with Ben Affleck and Noam Chomsky

Affleck, Harris, Maher
(5,608 words.)

        Every time Sam Harris engages in a public exchange of ideas, be it a casual back-and-forth or a formal debate, he has to contend with an invisible third party whose obnoxious blubbering dispels, distorts, or simply drowns out nearly every word he says. You probably wouldn’t be able to infer the presence of this third party from Harris’s own remarks or demeanor. What you’ll notice, though, is that fellow participants in the discussion, be they celebrities like Ben Affleck or eminent scholars like Noam Chomsky, respond to his comments—even to his mere presence—with a level of rancor easily mistakable for blind contempt. This reaction will baffle many in the audience. But it will quickly dawn on anyone familiar with Harris’s ongoing struggle to correct pernicious mischaracterizations of his views that these people aren’t responding to Harris at all, but rather to the dimwitted and evil caricature of him promulgated by unscrupulous journalists and intellectuals.

In his books on religion and philosophy, Harris plies his unique gift for cutting through unnecessary complications to shine a direct light on the crux of the issue at hand. Topics that other writers seem to go out of their way to make abstruse he manages to explore with jolting clarity and refreshing concision. But this same quality of his writing, which so captivates his readers, often infuriates academics, who feel he’s cheating by breezily refusing to represent an issue in all its grand complexity while neglecting to acknowledge his indebtedness to past scholars. That he would proceed in such a manner to draw actual conclusions—and unorthodox ones at that—these scholars see as hubris, made doubly infuriating by the fact that his books enjoy such a wide readership outside of academia. So, whether Harris is arguing on behalf of a scientific approach to morality or insisting we recognize that violent Islamic extremism is motivated not solely by geopolitical factors but also by straightforward readings of passages in Islamic holy texts, he can count on a central thread of the campaign against him consisting of the notion that he’s a journeyman hack who has no business weighing in on such weighty matters.
Sam Harris

Philosophers and religious scholars are of course free to challenge Harris’s conclusions, and it’s even possible for them to voice their distaste for his style of argumentation without necessarily violating any principles of reasoned debate. However, whenever these critics resort to moralizing, we must recognize that by doing so they’re effectively signaling the end of any truly rational exchange. For Harris, this often means a substantive argument never even gets a chance to begin. The distinction between debating morally charged topics on the one hand, and condemning an opponent as immoral on the other, may seem subtle, or academic even. But it’s one thing to argue that a position with moral and political implications is wrong; it’s an entirely different thing to become enraged and attempt to shout down anyone expressing an opinion you deem morally objectionable. Moral reasoning, in other words, can and must be distinguished from moralizing. Since the underlying moral implications of the issue are precisely what are under debate, giving way to angry indignation amounts to a pulling of rank—an effort to silence an opponent through the exercise of one’s own moral authority, which reveals a rather embarrassing sense of one’s own superior moral standing.

Unfortunately, it’s far too rarely appreciated that a debate participant who gets angry and starts wagging a finger is thereby demonstrating an unwillingness or an inability to challenge a rival’s points on logical or evidentiary grounds. As entertaining as it is for some to root on their favorite dueling demagogue in cable news-style venues, anyone truly committed to reason and practiced in its application realizes that in a debate the one who loses her cool loses the argument. This isn’t to say we should never be outraged by an opponent’s position. Some issues have been settled long enough, their underlying moral calculus sufficiently worked through, that a signal of disgust or contempt is about the only imaginable response. For instance, if someone were to argue, as Aristotle did, that slavery is excusable because some races are naturally subservient, you could be forgiven for lacking the patience to thoughtfully scrutinize the underlying premises. The problem, however, is that prematurely declaring an end to the controversy and then moving on to blanket moral condemnation of anyone who disagrees has become a worryingly common rhetorical tactic. And in this age of increasingly segmented and polarized political factions it’s more important than ever that we check our impulse toward sanctimony—even though it’s perhaps also harder than ever to do so.

Once a proponent of some unpopular idea starts to be seen as not merely mistaken but dishonest, corrupt, or bigoted, then playing fair begins to seem less obligatory for anyone wishing to challenge that idea. You can learn from casual Twitter browsing or from reading any number of posts on Salon.com that Sam Harris advocates a nuclear first strike against radical Muslims, supported the Bush administration’s use of torture, and carries within his heart an abiding hatred of Muslim people, all billion and a half of whom he believes are virtually indistinguishable from the roughly 20,000 militants making up ISIS. You can learn these things, none of which is true, because some people dislike Harris’s ideas so much they feel it’s justifiable, even imperative, to misrepresent his views, lest the true, more reasonable-sounding versions reach a wider receptive audience. And it’s not just casual bloggers and social media mavens who feel no qualms about spreading what they know to be distortions of Harris’s views; religious scholar Reza Aslan and journalist Glenn Greenwald both saw fit to retweet the verdict that he is a “genocidal fascist maniac,” accompanied by an egregiously misleading quote as evidence—even though Harris had by then discussed his views at length with both of these men.

 It’s easy to imagine Ben Affleck doing some cursory online research to prep for his appearance on Real Time with Bill Maher and finding plenty of savory tidbits to prejudice him against Harris before either of them stepped in front of the cameras. But we might hope that a scholar of Noam Chomsky’s caliber wouldn’t be so quick to form an opinion of someone based on hearsay. Nonetheless, Chomsky responded to Harris’s recent overture to begin an email exchange to help them clear up their misconceptions about each other’s ideas by writing: “Perhaps I have some misconceptions about you. Most of what I’ve read of yours is material that has been sent to me about my alleged views, which is completely false”—this despite Harris having just quoted Chomsky calling him a “religious fanatic.” We must wonder, where might that characterization have come from if he’d read so little of Harris’s work?

 Political and scholarly discourse would benefit immensely from a more widespread recognition of our natural temptation to recast points of intellectual disagreement as moral offenses, a temptation which makes it difficult to resist the suspicion that anyone espousing rival beliefs is not merely mistaken but contemptibly venal and untrustworthy. In philosophy and science, personal or so-called ad hominem accusations and criticisms are considered irrelevant and thus deemed out of bounds—at least in principle. But plenty of scientists and academics of every stripe routinely succumb to the urge to moralize in the midst of controversy. Thus begins the lamentable process by which reasoned arguments are all but inevitably overtaken by competing campaigns of character assassination. In service to these campaigns, we have an ever growing repertoire of incendiary labels with ever lengthening lists of criteria thought to reasonably warrant their application, so if you want to discredit an opponent all that’s necessary is a little creative interpretation, and maybe some selective quoting.

The really tragic aspect of this process is that as scrupulous and fair-minded as any given interlocutor may be, it’s only ever a matter of time before an unpopular message broadcast to a wider audience is taken up by someone who feels duty-bound to kill the messenger—or at least to besmirch the messenger’s reputation. And efforts at turning thoughtful people away from troublesome ideas before they ever even have a chance to consider them all too often meet with success, to everyone’s detriment. Only a small percentage of unpopular ideas may merit acceptance, but societies can’t progress without them.

Once we appreciate that we’re all susceptible to this temptation to moralize, the next most important thing for us to be aware of is that it becomes more powerful the moment we begin to realize ours are the weaker arguments. People in individualist cultures already tend to more readily rate themselves as exceptionally moral than as exceptionally intelligent. Psychologists call this tendency the Muhammad Ali effect (because the famous boxer once responded to a journalist’s suggestion that he’d purposely failed an Army intelligence test by quipping, “I only said I was the greatest, not the smartest”). But when researchers Jens Möller and Karel Savyon had study participants rate themselves after performing poorly on an intellectual task, they found that the effect was even more pronounced. Subjects in studies of the Muhammad Ali effect report believing that moral traits like fairness and honesty are more socially desirable than intelligence. They also report believing these traits are easier for an individual to control, while at the same time being more difficult to measure. Möller and Savyon theorize that participants in their study were inflating their already inflated sense of their own moral worth to compensate for their diminished sense of intellectual worth. While researchers have yet to examine whether this amplification of the effect makes people more likely to condemn intellectual rivals on moral grounds, the idea that a heightened estimation of moral worth could make us more likely to assert our moral authority seems a plausible enough extrapolation from the findings.

            That Ben Affleck felt intimidated by the prospect of having to intelligently articulate his reasons for rejecting Harris’s positions, however, seems less likely than that he was prejudiced to the point of outrage against Harris sometime before encountering him in person. At one point in the interview he says, “You’re making a career out of ISIS, ISIS, ISIS,” a charge of pandering that suggests he knows something about Harris’s work (though Harris doesn't discuss ISIS in any of his books). Unfortunately, Affleck’s passion and the sneering tone of his accusations were probably more persuasive for many in the audience than any of the substantive points made on either side. But, amid Affleck’s high dudgeon, it’s easy to sift out views that are mainstream among liberals. The argument Harris makes at the outset of the segment that first sets Affleck off—though it seemed he’d already been set off by something—is in fact a critique of those same views. He says,

When you want to talk about the treatment of women and homosexuals and freethinkers and public intellectuals in the Muslim world, I would argue that liberals have failed us. [Affleck breaks in here to say, “Thank God you’re here.”] And the crucial point of confusion is that we have been sold this meme of Islamophobia, where every criticism of the doctrine of Islam gets conflated with bigotry toward Muslims as people.

This is what Affleck says is “gross” and “racist.” The ensuing debate, such as it is, focuses on the appropriateness—and morality—of criticizing the Muslim world for crimes only a subset of Muslims are guilty of. But how large is that subset?

Harris (along with Maher) makes two important points: first, he states over and over that it’s Muslim beliefs he’s criticizing, not the Muslim people, so if a particular Muslim doesn’t hold to the belief in question he or she is exempt from the criticism. Harris is ready to cite chapter and verse of Islamic holy texts to show that the attitudes toward women and homosexuals he objects to aren’t based on the idiosyncratic characters of a few sadistic individuals but are rather exactly what’s prescribed by religious doctrine. A passage from his book The End of Faith makes the point eloquently.

It is not merely that we are at war with an otherwise peaceful religion that has been “hijacked” by extremists. We are at war with precisely the vision of life that is prescribed to all Muslims in the Koran, and further elaborated in the literature of the hadith, which recounts the sayings and actions of the Prophet. A future in which Islam and the West do not stand on the brink of mutual annihilation is a future in which most Muslims have learned to ignore most of their canon, just as most Christians have learned to do. (109-10)

But most secularists and moderate Christians in the U.S. have a hard time appreciating how seriously most Muslims take their Koran. There are of course passages in the Bible that are simply obscene, and Christians have certainly committed their share of atrocities at least in part because they believed their God commanded them to. But, whereas almost no Christians today advocate stoning their brothers, sisters, or spouses to death for coaxing them to worship other gods (Deuteronomy 13:6, 8-15), a significant number of people in Islamic populations believe apostates and “innovators” deserve to have their heads lopped off.

            The second point Harris makes is that, while Affleck is correct in stressing how few Muslims make up or support the worst of the worst groups like Al Qaeda and ISIS, the numbers who believe women are essentially the property of their fathers and husbands, that homosexuals are vile sinners, or that atheist bloggers deserve to be killed are much higher. “We have to empower the true reformers in the Muslim world to change it,” as Harris insists. The journalist Nicholas Kristof says this is a mere “caricature” of the Muslim world. But Harris’s goal has never been to promote a negative view of Muslims, and he at no point suggests his criticisms apply to all Muslims, all over the world. His point, as he stresses multiple times, is that Islamic doctrine is inspiring large numbers of people to behave in appalling ways, and this is precisely why he’s so vocal in his criticisms of those doctrines.

Part of the difficulty here is that liberals (including this one) face a dilemma anytime they’re forced to account for the crimes of non-whites in non-Western cultures. In these cases, their central mission of standing up for the disadvantaged and the downtrodden runs headlong into their core principle of multiculturalism, which makes it taboo for them to speak out against another society’s beliefs and values. Guys like Harris are permitted to criticize Christianity when it’s used to justify interference in women’s sexual decisions or discrimination against homosexuals, because a white Westerner challenging white Western culture is just the system attempting to correct itself. But when Harris speaks out against Islam and the far worse treatment of women and homosexuals—and infidels and apostates—it prescribes, his position is denounced as “gross” and “racist” by the likes of Ben Affleck, with the encouragement of guys like Reza Aslan and Glenn Greenwald. A white American male casting his judgment on a non-Western belief system strikes them as the first step along the path to oppression that ends in armed invasion and possibly genocide. (Though, it should be noted, multiculturalists even attempt to silence female critics of Islam from the Muslim world.)

The biggest problem with this type of slippery-slope presumption isn’t just that it’s sloppy thinking—rejecting arguments because of alleged similarities to other, more loathsome ideas, or because of some imagined consequence should those ideas fall into the wrong hands. The bigger problem is that it time and again provides a rationale for opponents of an idea to silence and defame anyone advocating it. Unless someone is explicitly calling for mistreatment or aggression toward innocents who pose no threat, there’s simply no way to justify violating anyone’s rights to free inquiry and free expression—principles that should supersede multiculturalism because they’re the foundation and guarantors of so many other rights. Instead of using our own delusive moral authority in an attempt to limit discourse within the bounds we deem acceptable, we have a responsibility to allow our intellectual and political rivals the space to voice their positions, trusting in our fellow citizens’ ability to weigh the merits of competing arguments.

But few intellectuals are willing to admit that they place multiculturalism before truth and the right to seek and express it. And, for those who are reluctant to fly publicly into a rage or to haphazardly apply any of the growing assortment of labels for the myriad varieties of bigotry, there are now a host of theories that serve to reconcile competing political values. The multicultural dilemma probably makes all of us liberals too quick to accept explanations of violence or extremism—or any other bad behavior—emphasizing the role of external forces, whether external to the individual or external to the culture. Accordingly, to combat Harris’s arguments about Islam, many intellectuals insist that religion simply does not cause violence. They argue instead that the real cause is something like resource scarcity, a history of oppression, or the prolonged occupation of Muslim regions by Western powers.

            If the arguments in support of the view that religion plays a negligible role in violence were as compelling as proponents insist they are, then it’s odd that they should so readily resort to mischaracterizing Harris’s positions when he challenges them. Glenn Greenwald, a journalist who believes religion is such a small factor that anyone who criticizes Islam is suspect, argues his case against Harris within an almost exclusively moral framework—not whether Harris is right, but whether he’s an anti-Muslim bigot. The religious scholar Reza Aslan quotes Harris out of context to give the appearance that he advocates preemptive strikes against Muslim groups. But Aslan’s real point of disagreement with Harris is impossible to pin down. He writes,

After all, there’s no question that a person’s religious beliefs can and often do influence his or her behavior. The mistake lies in assuming there is a necessary and distinct causal connection between belief and behavior.

Since he doesn’t explain what he means by “necessary and distinct,” we’re left with little more than the vague objection that religion’s role in motivating violence is more complex than some people seem to imagine. To make this criticism apply to Harris, however, Aslan is forced to erect a straw man—and to double down on the tactic after Harris has pointed out his error, suggesting that his misrepresentation is deliberate.

Few commenters on this debate appreciate just how radical Aslan’s and Greenwald’s (and Karen Armstrong’s) positions are. The straw men notwithstanding, Harris readily admits that religion is but one of many factors that play a role in religious violence. But this doesn’t go far enough for Aslan and Greenwald. While they acknowledge religion must fit somewhere in the mix, they insist its role is so mediated and mixed up with other factors that its influence is all but impossible to discern. Religion in their minds is a pure social construct, so intricately woven into the fabric of a culture that it could never be untangled. As evidence of this irreducible complexity, they point to the diverse interpretations of the Koran made by the wide variety of Muslim groups all over the world. There’s an undeniable kernel of truth in this line of thinking. But is religion really reconstructed from scratch in every culture?

One of the corollaries of this view is that all religions are essentially equal in their propensity to inspire violence, and therefore, if adherents of one particular faith happen to engage in disproportionate levels of violence, we must look to other cultural and political factors to explain it. That would also mean that what any given holy text actually says in its pages is completely immaterial. (This from a scholar who sticks to a literal interpretation of a truncated section of a book even though the author assures him he’s misreading it.) To highlight the absurdity of this idea, Harris likes to cite the Jains as an example. Mahavira, a Jain patriarch, gave this commandment: “Do not injure, abuse, oppress, enslave, insult, torment, or kill any creature or living being.” How plausible is the notion that adherents of this faith are no more and no less likely to commit acts of violence than those whose holy texts explicitly call for them to murder apostates? “Imagine how different our world might be if the Bible contained this as its central precept” (23), Harris writes in Letter to a Christian Nation.

            Since the U.S. is in fact a Christian nation, and since it has throughout its history displaced, massacred, invaded, occupied, and enslaved people from nearly every corner of the globe, many raise the question of what grounds Harris, or any other American, has for judging other cultures. And this is where the curious email exchange Harris began with the linguist and critic of American foreign policy Noam Chomsky picks up. Harris reached out to Chomsky hoping to begin an exchange that might help to clear up their differences, since he figured they have a large number of readers in common. Harris had written critically of Chomsky’s book about 9/11 in The End of Faith, his own later book on the topic of religious extremism. Chomsky’s argument seems to have been that the U.S. routinely commits atrocities on a scale similar to that of 9/11, and that the Al Qaeda attacks were an expectable consequence of our nation’s bullying presence in global affairs. Instead of dealing with foreign threats then, we should be concentrating our efforts on reforming our own foreign policy. But Harris points out that, while it’s true the U.S. has caused the deaths of countless innocents, the intention of our leaders wasn’t to kill as many people as possible to send a message of terror, making such actions fundamentally different from those of the Al Qaeda terrorists.

The first thing to note in the email exchange is that Harris proceeds on the assumption that any misunderstanding of his views by Chomsky is based on an honest mistake, while Chomsky immediately takes for granted that Harris’s alleged misrepresentations are deliberate (even though, since Harris sends him the excerpt from his book, that would mean he’s presenting the damning evidence of his own dishonesty). In other words, Chomsky switches into moralizing mode at the very outset of the exchange. The substance of the disagreement mainly concerns the U.S.’s 1998 bombing of the al-Shifa pharmaceutical factory in Sudan. According to Harris’s book, Chomsky argues this attack was morally equivalent to the attacks by Al Qaeda on 9/11. But in focusing merely on body counts, Harris charges that Chomsky is neglecting the far more important matter of intention.

Noam Chomsky
Chomsky insists after reading the excerpt, however, that he never claimed the two attacks were morally equivalent, and that furthermore he in fact did consider, and write at length about, the intentions of the Clinton administration officials who decided to bomb al-Shifa—just not in the book cited by Harris. In this other book, which Chomsky insists Harris is irresponsible for not having referenced, he argues that the administration’s claim that it received intelligence about the factory manufacturing chemical weapons was a lie and that the bombing was actually meant as retaliation for an earlier attack on the U.S. Embassy. Already at this point in the exchange Chomsky is writing to Harris as if he were guilty of dishonesty, unscholarly conduct, and collusion in covering up the crimes of the American government. 

But which is it? Is Harris being dishonest when he says Chomsky is claiming moral equivalence? Or is he being dishonest when he fails to cite an earlier source arguing that in fact what the U.S. did was morally worse? The more important question, however, is why does Chomsky assume Harris is being dishonest, especially in light of how complicated his position is? Here’s what Chomsky writes in response to Harris pressing him to answer directly the question about moral equivalence:

Clinton bombed al-Shifa in reaction to the Embassy bombings, having discovered no credible evidence in the brief interim of course, and knowing full well that there would be enormous casualties. Apologists may appeal to undetectable humanitarian intentions, but the fact is that the bombing was taken in exactly the way I described in the earlier publication which dealt the question of intentions in this case, the question that you claimed falsely that I ignored: to repeat, it just didn’t matter if lots of people are killed in a poor African country, just as we don’t care if we kill ants when we walk down the street. On moral grounds, that is arguably even worse than murder, which at least recognizes that the victim is human. That is exactly the situation.

Most of the rest of the exchange consists of Harris trying to figure out Chomsky’s views on the role of intention in moral judgment, and Chomsky accusing Harris of dishonesty and evasion for not acknowledging and exploring the implications of the U.S.’s culpability in the al-Shifa atrocity. When Harris tries to explain his view on the bombing by describing a hypothetical scenario in which one group stages an attack with the intention of killing as many people as possible, comparing it to another scenario in which a second group stages an attack with the intention of preventing another, larger attack, killing as few people as possible in the process, Chomsky will have none of it. He insists Harris’s descriptions are “so ludicrous as to be embarrassing,” because they’re nothing like what actually happened. We know Chomsky is an intelligent enough man to understand perfectly well how a thought experiment works. So we’re left asking, what accounts for his mindless pounding on the drum of the U.S.’s greater culpability? And, again, why is he so convinced Harris is carrying on in bad faith?

What seems to be going on here is that Chomsky, a long-time critic of American foreign policy, actually began with the conclusion he sought to arrive at. After arguing for decades that the U.S. was the ultimate bad guy in the geopolitical sphere, his first impulse after the attacks of 9/11 was to salvage his efforts at casting the U.S. as the true villain. Toward that end, he lighted on al-Shifa as the ideal crime to offset any claim to innocent victimhood. He’s actually been making this case for quite some time, and Harris is by no means the first to insist that the intentions behind the two attacks should make us judge them very differently. Either Chomsky felt he knew enough about Harris to treat him like a villain himself, or he has simply learned to bully and level accusations against anyone pursuing a line of questioning that will expose the weakness of his idea—he likens Harris’s arguments at one point to “apologetics for atrocities”—a tactic he keeps getting away with because he has a large following of liberal academics who accept his moral authority.

Harris saw clear through to the endgame of his debate with Chomsky, and it’s quite possible Chomsky in some murky way did as well. The reason he was so sneeringly dismissive of Harris’s attempts to bring the discussion around to intentions, the reason he kept harping on how evil America had been in bombing al-Shifa, is that by focusing on this one particular crime he was avoiding the larger issue of competing ideologies. Chomsky’s account of the bombing is not as certain as he makes out, to say the least. An earlier claim he made about a Human Rights Watch report on the death toll, for instance, turned out to be completely fictitious. But even if the administration really was lying about its motives, it’s noteworthy that a lie was necessary. When Bin Laden announced his goals, he did so loudly and proudly.

Chomsky’s one defense of his discounting of the attackers’ intentions (yes, he defends it, even though he accused Harris of being dishonest for pointing it out) is that everyone claims to have good intentions, so intentions simply don’t matter. This is shockingly facile coming from such a renowned intellectual—it would be shockingly facile coming from anyone. Of course Harris isn’t arguing that we should take someone’s own word for whether their intentions are good or bad. What Harris is arguing is that we should examine someone’s intentions in detail and make our own judgment about them. Al Qaeda’s plan to maximize terror by maximizing the death count of their attacks can only be seen as a good intention in the context of the group’s extreme religious ideology. That’s precisely why we should be discussing and criticizing that ideology, criticism which should extend to the more mainstream versions of Islam it grew out of.

Taking a step back from the particulars, we see that Chomsky believes the U.S. is guilty of far more and far graver acts of terror than any of the groups or nations officially designated as terrorist sponsors, and he seems unwilling to even begin a conversation with anyone who doesn’t accept this premise. Had he made some iron-clad case that the U.S. really did treat the pharmaceutical plant, and the thousands of lives that depended on its products, as pawns in some amoral game of geopolitical chess, he could have simply directed Harris to the proper source, or he could have reiterated key elements of that case. Regardless of what really happened with al-Shifa, we know full well what Al Qaeda’s intentions were, and Chomsky could have easily indulged Harris in discussing hypotheticals had he not feared that doing so would force him to undermine his own case. Is Harris an apologist for American imperialism? Here’s a quote from the section of his book discussing Chomsky's ideas:

We have surely done some terrible things in the past. Undoubtedly, we are poised to do terrible things in the future. Nothing I have written in this book should be construed as a denial of these facts, or as defense of state practices that are manifestly abhorrent. There may be much that Western powers, and the United States in particular, should pay reparations for. And our failure to acknowledge our misdeeds over the years has undermined our credibility in the international community. We can concede all of this, and even share Chomsky’s acute sense of outrage, while recognizing that his analysis of our current situation in the world is a masterpiece of moral blindness.

To be fair, lines like this last one are inflammatory, so it was understandable that Chomsky was miffed, up to a point. But Harris is right to point to his moral blindness, the same blindness that makes Aslan, Affleck, and Greenwald unable to see that the specific nature of beliefs and doctrines and governing principles actually matters. If we believe it’s evil to subjugate women, abuse homosexuals, and murder freethinkers, the fact that our country does lots of horrible things shouldn’t stop us from speaking out against these practices to people of every skin color, in every culture, on every part of the globe.

            Sam Harris is no passive target in all of this. In a debate, he gives as good as he gets, or better, and he has a penchant for finding the most provocative way to phrase his points—like calling Islam “the mother lode of bad ideas.” He doesn’t hesitate to call people out for misrepresenting his views and defaming him as a person, but I’ve yet to see him try to win an argument by going after the person making it. And I’ve never seen him try to sabotage an intellectual dispute with a cheap performance of moral outrage, or discredit opponents by fixing them with labels they don’t deserve. Reading his writings and seeing him lecture or debate, you get the sense that he genuinely wants to test the strength of ideas against each other and see what new insight such exchanges may bring. That’s why it’s frustrating to see these discussions again and again go off the rails because his opponent feels justified in dismissing and condemning him based on inaccurate portrayals, from an overweening and unaccountable sense of self-righteousness.

Ironically, honoring the type of limits to calls for greater social justice that Aslan and Chomsky take as sacrosanct—where the West forbears to condescend to the rest—serves more than anything else to bolster the sense of division and otherness that makes many in the U.S. care so little about things like what happened at al-Shifa. As technology pushes on the transformation of our far-flung societies and diverse cultures into a global community, we ought naturally to start seeing people from Northern Africa and the Middle East—and anywhere else—not as scary and exotic ciphers, but as fellow citizens of the world, as neighbors even. This same feeling of connection that makes us all see each other as more human, more worthy of each other’s compassion and protection, simultaneously opens us up to each other’s criticisms and moral judgments. Chomsky is right that we Americans are far too complacent about our country’s many crimes. But opening the discussion up to our own crimes opens it likewise to other crimes that cannot be tolerated anywhere on the globe, regardless of the culture, regardless of any history of oppression, and regardless too of any sanction delivered from the diverse landscape of supposedly sacred realms.

Other popular posts like this:


Medieval vs Enlightened: Sorry, Medievalists, Dan Savage Was Right

            A letter from an anonymous scholar of the medieval period to the sex columnist Dan Savage has been making the rounds of social media lately. Responding to a letter from a young woman asking how she should handle sex for the first time with her Muslim boyfriend, who happened to be a virgin, Savage wrote, “If he’s still struggling with the sex-negative, woman-phobic zap that his upbringing (and a medieval version of his faith) put on his head, he needs to work through that crap before he gets naked with you.” The anonymous writer bristles in bold lettering at Savage’s terminology: “I’m a medievalist, and this is one of the things about our current discourse on religion that drives me nuts. Contemporary radical Christianity, Judaism, and Islam are all terrible, but none of them are medieval, especially in terms of sexuality.” Oddly, however, the letter, published under the title, “A Medievalist Schools Dan on Medieval Attitudes toward Sex,” isn’t really as much about correcting popular misconceptions about sex in the Middle Ages as it is about promoting a currently fashionable but highly dubious way of understanding radical religion in the various manifestations we see today.

            While the medievalist’s overall argument is based far more on ideology than actual evidence, the letter does make one important and valid point. As citizens of a technologically advanced secular democracy, it’s tempting for us to judge other cultures by the standards of our own. Just as each of us expects every young person we encounter to follow a path to maturity roughly identical to the one we’ve taken ourselves, people in advanced civilizations tend to think of less developed societies as occupying one or another of the stages that brought us to our own current level of progress. This not only inspires a condescending attitude toward other cultures; it also often leads to an overly simplified understanding of our own culture’s history. The letter to Savage explains:

I’m not saying that the Middle Ages was a great period of freedom (sexual or otherwise), but the sexual culture of 12th-century France, Iraq, Jerusalem, or Minsk did not involve the degree of self-loathing brought about by modern approaches to sexuality. Modern sexual purity has become a marker of faith, which it wasn’t in the Middle Ages. (For instance, the Bishop of Winchester ran the brothels in South London—for real, it was a primary and publicly acknowledged source of his revenue—and one particularly powerful Bishop of Winchester was both the product of adultery and the father of a bastard, which didn’t stop him from being a cardinal and papal legate.) And faith, especially in modern radical religion, is a marker of social identity in a way it rarely was in the Middle Ages.

If we imagine the past as a bad dream of sexual repression from which our civilization has only recently awoken, historical tidbits about the prevalence and public acceptance of prostitution may come as a surprise. But do these revelations really undermine any characterization of the period as marked by religious suppression of sexual freedom?

            Obviously, the letter writer’s understanding of the Middle Ages is more nuanced than most of ours, but the argument reduces to pointing out a couple of random details to distract us from the bigger picture. The passage quoted above begins with an acknowledgement that the Middle Ages was not a time of sexual freedom, and isn’t it primarily that lack of freedom that Savage was referring to when he used the term medieval? The point about self-loathing is purely speculative if taken to apply to the devout generally, and simply wrong with regard to ascetics who wore hairshirts, flagellated themselves, or practiced other forms of mortification of the flesh. In addition, we must wonder how much those prostitutes enjoyed the status conferred on them by the society that was supposedly so accepting of their profession; we must also wonder if this medievalist is aware of what medieval Islamic scholars like Imam Malik (711-795) and Imam Shafi (767-820) wrote about homosexuality. The letter writer is on shaky ground yet again with regard to the claim that sexual purity wasn’t a marker of faith (though it’s hard to know precisely what the phrase even means). There were all kinds of strange prohibitions in Christendom against sex on certain days of the week, certain times of the year, and in any position outside of missionary. Anyone watching the BBC’s adaptation of Wolf Hall knows how much virginity was prized in women—as King Henry VIII could only be wed to a woman who’d never had sex with another man. And there’s obviously an Islamic tradition of favoring virgins, or else why would so many of them be promised to martyrs? Finally, of course faith wasn’t a marker of social identity—nearly everyone in every community was of the same faith. If you decided to take up another set of beliefs, chances are you’d have been burned as a heretic or beheaded as an apostate.

            The letter writer is eager to make the point that the sexual mores espoused by modern religious radicals are not strictly identical to the ones people lived according to in the Middle Ages. Of course, the varieties of religion in any one time aren’t even identical to each other. Does anyone really believe otherwise? The important question is whether there’s enough similarity between modern religious beliefs on the one hand and medieval religious beliefs on the other for the use of the term to be apposite. And the answer is a definitive yes. So what is the medievalist’s goal in writing to correct Savage? The letter goes on,

The Middle Eastern boyfriend wasn’t taught a medieval version of his faith, and radical religion in the West isn’t a retreat into the past—it is a very modern way of conceiving identity. Even something like ISIS is really just interested in the medieval borders of their caliphate; their ideology developed out of 18th- and 19th-century anticolonial sentiment. The reason why this matters (beyond medievalists just being like, OMG no one gets us) is that the common response in the West to religious radicalism is to urge enlightenment, and to believe that enlightenment is a progressive narrative that is ever more inclusive. But these religions are responses to enlightenment, in fact often to The Enlightenment.

The Enlightenment, or Age of Reason, is popularly thought to have been the end of the Middle or so-called Dark Ages. The story goes that the medieval period was a time of Catholic oppression, feudal inequality, stunted innovation, and rampant violence. Then some brilliant philosophers woke the West up to the power of reason, science, and democracy, thus marking the dawn of the modern world. Historians and academics of various stripes like to sneer at this story of straightforward scientific and moral progress. It’s too simplistic. It ignores countless atrocities perpetrated by those supposedly enlightened societies. And it undergirds an ugly contemptuousness toward less advanced cultures. But is the story of the Enlightenment completely wrong?

            The medievalist letter writer makes no bones about the source of his ideas, writing in a parenthetical, “Michel Foucault does a great job of talking about these developments, and modern sexuality, including homosexual and heterosexual identity, as well—and I’m stealing and watering down his thoughts here.” Foucault, though he eschewed the label, is a leading figure in poststructuralist and postmodern schools of thought. His abiding interest throughout his career was with the underlying dynamics of social power as they manifested themselves in the construction of knowledge. He was one of those French philosophers who don’t believe in things like objective truth, human nature, or historical progress of any kind.

Foucault and the scores of scholars inspired by his work take it as their mission to expose all the hidden justifications for oppression in our culture’s various media for disseminating information. Why they would bother taking on this mission in the first place, though, is a mystery, beginning as they do from the premise that any notion of moral progress can only be yet another manifestation of one group’s power over another. If you don’t believe in social justice, why pursue it? If you don’t believe in truth, why seek it out? And what are Foucault’s ideas about the relationship between knowledge and power but theories of human nature? Despite this fundamental incoherence, many postmodern academics today are eager to the point of zealousness when it comes to opportunities to chastise scientists, artists, and other academics for alleged undercurrents in their work of sexism, racism, homophobia, Islamophobia, or some other oppressive ideology. Few sectors of academia remain untouched by this tradition, and its influence leads legions of intellectuals to unselfconsciously substitute sanctimony for real scholarship.

            So how do Foucault and the medievalist letter writer view the Enlightenment? The letter refers vaguely to “concepts of mass culture and population.” Already, it seems we’re getting far afield of how most historians and philosophers characterize the Enlightenment, not to mention how most Enlightenment figures themselves described their objectives. The letter continues,

Its narrative depends upon centralized control: It gave us the modern army, the modern prison, the mental asylum, genocide, and totalitarianism as well as modern science and democracy. Again, I’m not saying that I’d prefer to live in the 12th century (I wouldn’t), but that’s because I can imagine myself as part of that center. Educated, well-off Westerners generally assume that they are part of the center, that they can affect the government and contribute to the progress of enlightenment. This means that their identity is invested in the social form of modernity.

It’s true that the terms Enlightenment and Dark Ages were first used by Western scholars in the nineteenth century as an exercise in self-congratulation, and it’s also true that any moral progress that was made over the period occurred alongside untold atrocities. But neither of these complications to the oversimplified version of the narrative establishes in any way that the Enlightenment never really occurred—as the letter writer’s repeated assurances that it’s preferable to be alive today ought to make clear. What’s also clear is that this medievalist is deliberately conflating enlightenment with modernity, so that all the tragedies and outrages of the modern world can be laid at the feet of enlightenment thinking. How else could he describe the Enlightenment as being simultaneously about both totalitarianism and democracy? But not everything that happened after the Enlightenment was necessarily caused by it, nor should every social institution that arose from the late 19th to the early 20th century be seen as representative of enlightenment thinking.

            The medievalist letter writer claims that being “part of the center” is what makes living in the enlightened West preferable to living in the 12th century. But there’s simply no way whoever wrote the letter actually believes this. If you happen to be poor, female, a racial or religious minority, a homosexual, or a member of any other marginalized group, you’d be far more loath to return to the Middle Ages than those of us comfortably ensconced in this notional center, just as you’d be loath to relocate to any society not governed by Enlightenment principles today. The medievalist insists that groups like ISIS follow an ideology that dates to the 18th and 19th centuries and arose in response to colonialism, implying that Islamic extremism would be just another consequence of the inherently oppressive nature of the West and its supposedly enlightened ideas. “Radical religion,” from this Foucauldian perspective,

offers a social identity to those excluded (or who feel excluded) from the dominant system of Western enlightenment capitalism. It is a modern response to a modern problem, and by making it seem like some medieval holdover, we cover up the way in which our social power produces the conditions for this kind of identity, and make violence appear as the only response for these recalcitrant “holdouts.”

This is the position of scholars and journalists like Reza Aslan and Glenn Greenwald as well. It’s emblematic of the same postmodern ideology that forces on us the conclusion that if chimpanzees are violent to one another, it must be the result of contact with primatologists and other humans; if indigenous people in traditionalist cultures go to war with their neighbors, it must be owing to contact with missionaries and anthropologists; and if radical Islamists are killing their moderate co-religionists, kidnapping women, or throwing homosexuals from rooftops, well, it can only be the fault of Western colonialism. Never mind that these things are prescribed by holy texts dating from—you guessed it—the Middle Ages. The West, to postmodernists, is the source of all evil, because the West has all the power.

But the letter writer’s fear that thinking of radical religion as a historical holdover will inevitably lead us to conclude military action is the only solution is based on an obvious non sequitur. There’s simply no reason someone who sees religious radicalism as medieval must advocate further violence to stamp it out. And that brings up another vital question: what solution do the postmodernists propose for things like religious violence in the Middle East and Africa? They seem to think that if they can only convince enough people that Western culture is inherently sexist, racist, violent, and so on—basically a gargantuan engine of oppression—then every geopolitical problem will take care of itself somehow.

            If it’s absurd to believe that everything that comes from the West is good and pure and true just because it comes from the West, it’s just as absurd to believe that everything that comes from the West is evil and tainted and false for the same reason. Had the medievalist spent some time reading the webpage on the Enlightenment so helpfully hyperlinked to in the letter, he or she might have realized how far off the mark Foucault’s formulation was. The letter writer gets it exactly wrong in the part about mass culture and population, since the movement is actually associated with individualism, including individual rights. But what best distinguishes enlightenment thinking from medieval thinking, in any region or era, is the conviction that knowledge, justice, and better lives for everyone in the society are achievable through the application of reason, science, and skepticism, while medieval cultures rely instead on faith, scriptural or hierarchical authority, and tradition. The two central symbols of the Enlightenment are Galileo declaring that the church was wrong to dismiss the idea of a heliocentric cosmos and the Founding Fathers appending the Bill of Rights to the U.S. Constitution. You can argue that it’s only owing to a history of colonialism that Western democracies today enjoy the highest standard of living among all the nations of the globe. But even the medievalist letter writer attests to how much better it is to live in enlightened countries today than in the same countries in the Middle Ages.

            The postmodernism of Foucault and his kindred academics is not now, and has not ever been, compelling on intellectual grounds, which leaves open the question of why so many scholars have turned against the humanist and Enlightenment ideals that once gave them their raison d’être. I can’t help suspecting that the appeal of postmodernism stems from certain religious qualities of the worldview, qualities that ironically make it resemble certain aspects of medieval thought: the bowing to the authority of celebrity scholars (mostly white males), the cloistered obsession with esoteric texts, rituals of expiation and self-abasement, and competitive finger-wagging. There’s even a core belief in something very like original sin; only in this case it consists of being born into the ranks of a privileged group whose past members were guilty of some unspeakable crime. Postmodern identity politics seems to appeal most strongly to whites with an overpowering desire for acceptance by those less fortunate, as if they were looking for some kind of forgiveness or redemption only the oppressed have the power to grant. That’s why these academics are so quick to be persuaded they should never speak up unless it’s on behalf of some marginalized group, as if good intentions were proof against absurdity. As safe and accommodating and well-intentioned as this stance sounds, though, in practice it amounts to little more than moral and intellectual cowardice.

Life really has gotten much better since the Enlightenment, and it really does continue to get better for an increasing number of formerly oppressed groups of people today. All this progress has been made, and continues being made, precisely because there are facts and ideas—scientific theories, human rights, justice, and equality—that transcend the social conditions surrounding their origins. Accepting this reality doesn’t in any way mean seeing violence as the only option for combatting religious extremism, despite many academics’ insistence to the contrary. Nor does it mean abandoning the cause of political, cultural, and religious pluralism. But, if we continue disavowing the very ideals that have driven this progress, however fitfully and haltingly it has occurred, if we continue denying that it can even be said to have occurred at all, then what hope can we possibly have of pushing it even further along in the future?   



And: Napoleon Chagnon's Crucible and the Ongoing Epidemic of Moralizing Hysteria in Academia

On ISIS's explicit avowal of adherence to medieval texts: "What ISIS Really Wants" by Graeme Wood of the Atlantic

Are 1 in 5 Women Really Sexually Assaulted on College Campuses?

            If you were a university administrator and you wanted to know how prevalent a particular experience was for students on campus, you would probably conduct a survey that asked a few direct questions about that experience—foremost among them the question of whether the student had at some point had the experience you’re interested in. Obvious, right? Recently, we’ve been hearing from many news media sources, and even from President Obama himself, that one in five college women experience sexual assault at some time during their tenure as students. It would be reasonable to assume that the surveys used to arrive at this ratio actually asked the participants directly whether or not they had been assaulted. 

            But it turns out the web survey that produced the one-in-five figure did no such thing. Instead, it asked students whether they had had any of several categories of experience the study authors later classified as sexual assault, or attempted sexual assault, in their analysis. This raises the important question of how we should define sexual assault when we’re discussing the issue—along with the related question of why we’re not talking about a crime that’s more clearly defined, like rape. Of course, whatever you call it, sexual violence is such a horrible crime that most of us are willing to forgive anyone who exaggerates the numbers or paints an overly frightening picture of reality in an attempt to prevent future cases. (The issue is so serious that PolitiFact refrained from applying their trademark Truth-O-Meter to the one-in-five figure.) 

            But there are four problems with this attitude. The first is that for every supposed assault there is an alleged perpetrator. Dramatically overestimating the prevalence of the crime comes with the attendant risk of turning public perception against the accused, making it more difficult for the innocent to convince anyone of their innocence. The second problem is that by exaggerating the danger in an effort to protect college students we’re sabotaging any opportunity these young adults may have to make informed decisions about the risks they take on. No one wants students to die in car accidents either, but we don’t manipulate the statistics to persuade them one in five drivers will die in a crash before they graduate from college. The third problem is that going to college and experimenting with sex are for many people a wonderful set of experiences they remember fondly for the rest of their lives. Do we really want young women to barricade themselves in their dorms? Do we want young men to feel like they have to get signed and notarized documentation of consent before they try to kiss anyone? The fourth problem I’ll get to in a bit.

            We need to strike some appropriate balance in our efforts to raise awareness without causing paranoia or inspiring unwarranted suspicion. And that balance should be represented by the results of our best good-faith effort to arrive at as precise an understanding of the risk as our most reliable methods allow. For this purpose, the Department of Justice’s Campus Sexual Assault Study, the source of the oft-cited statistic, is all but completely worthless. It has limitations, to begin with, when it comes to representativeness, since it surveyed students on just two university campuses. And, while the overall sample was chosen randomly, the 42% response rate implies a great deal of self-selection on the part of the participants. The researchers did compare late responders to early ones to see if there was a systematic difference in their responses. But this doesn’t by any means rule out the possibility that many students chose categorically not to respond because they had nothing to say, and therefore had no interest in the study. (Some may have even found it offensive.) These are difficulties common to this sort of simple web-based survey, and they make interpreting the results problematic enough to recommend against their use in informing policy decisions.
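To make the self-selection worry concrete, here’s a minimal back-of-the-envelope sketch of how differential response rates can inflate a prevalence estimate. Every number in it is made up for illustration; nothing here comes from the CSA data itself.

```python
# Hypothetical illustration (made-up numbers, not CSA data): if students
# who had the experience being measured are more motivated to complete
# the survey than students who didn't, the rate observed among
# respondents will overshoot the true rate in the population.

def observed_rate(true_rate, respond_if_affected, respond_if_not):
    """Prevalence seen among survey respondents when response propensity
    differs between affected and unaffected students."""
    affected = true_rate * respond_if_affected
    unaffected = (1 - true_rate) * respond_if_not
    return affected / (affected + unaffected)

# Suppose the true rate were 8%, but affected students responded at 60%
# versus 40% for everyone else (yielding an overall response rate near
# the study's 42%). The survey would report roughly 11.5%, not 8%.
print(round(observed_rate(0.08, 0.60, 0.40), 3))  # → 0.115
```

The comparison of early versus late responders the researchers ran doesn’t address this, because both groups are still drawn from the self-selected 42% who responded at all.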

            The biggest problems with the study, however, are not with the sample but with the methods. The survey questions appear to have been deliberately designed to generate inflated incidence rates. The basic strategy of avoiding direct questions about whether the students had been the victims of sexual assault is often justified with the assumption that many young people can’t be counted on to know what actions constitute rape and assault. But attempting to describe scenarios in survey items to get around this challenge opens the way for multiple interpretations and discounts the role of countless contextual factors. The CSA researchers write, “A surprisingly large number of respondents reported that they were at a party when the incident happened.” Cathy Young, a contributing editor at Reason magazine who analyzed the study all the way back in 2011, wrote that

the vast majority of the incidents it uncovered involved what the study termed “incapacitation” by alcohol (or, rarely, drugs): 14 percent of female respondents reported such an experience while in college, compared to six percent who reported sexual assault by physical force. Yet the question measuring incapacitation was framed ambiguously enough that it could have netted many “gray area” cases: “Has someone had sexual contact with you when you were unable to provide consent or stop what was happening because you were passed out, drugged, drunk, incapacitated, or asleep?” Does “unable to provide consent or stop” refer to actual incapacitation – given as only one option in the question – or impaired judgment?  An alleged assailant would be unlikely to get a break by claiming he was unable to stop because he was drunk.

This type of confusion is why it’s important to design survey questions carefully. That the items in the CSA study failed to make the kind of fine distinctions that would allow for more conclusive interpretations suggests the researchers had other goals in mind.
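The arithmetic behind the headline figure is worth spelling out. Using the two percentages Young cites from the study (14 percent reporting “incapacitated” incidents, 6 percent reporting assault by physical force), the ambiguously worded incapacitation category supplies the large majority of the total:

```python
# Composition of the headline figure, using the percentages Young cites:
# 14% "incapacitated" incidents, 6% assault by physical force.
incapacitated = 0.14
physical_force = 0.06

total = incapacitated + physical_force        # the "one in five"
share_ambiguous = incapacitated / total       # share from the ambiguous category

print(round(total, 2))            # → 0.2
print(round(share_ambiguous, 2))  # → 0.7
```

In other words, seven-tenths of the one-in-five statistic rests on the very question whose wording Young argues could sweep in “gray area” cases.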

The researchers’ use of the blanket term “sexual assault,” and their grouping of attempted with completed assaults, is equally suspicious. Any survey designer cognizant of all the difficulties of web surveys would likely try to narrow the focus of the study as much as possible, and they would also try to eliminate as many sources of confusion with regard to definitions or descriptions as possible. But, as Young points out,

The CSA Study’s estimate of sexual assault by physical force is somewhat problematic as well – particularly for attempted sexual assaults, which account for nearly two-thirds of the total. Women were asked if anyone had ever had or attempted to have sexual contact with them by using force or threat, defined as “someone holding you down with his or her body weight, pinning your arms, hitting or kicking you, or using or threatening to use a weapon.” Suppose that, during a make-out session, the man tries to initiate sex by rolling on top of the woman, with his weight keeping her from moving away – but once she tells him to stop, he complies. Would this count as attempted sexual assault?

The simplest way to get around many of these difficulties would have been to ask the survey participants directly whether they had experienced the category of crime the researchers were interested in. If the researchers were concerned that the students might not understand that being raped while drunk still counts as rape, why didn’t they just ask the participants a question to that effect? It’s a simple enough question to devise.

The study did pose a follow-up question to participants it classified as victims of forcible assault, the responses to which hint at the students’ actual thoughts about the incidents. It turns out 37 percent of so-called forcible assault victims explained that they hadn’t contacted law enforcement because they didn’t think the incident constituted a crime. That bears repeating: over a third of the students the study says were forcibly assaulted didn’t think any crime had occurred. With regard to another category of victims, those of incapacitated assault, Young writes, “Not surprisingly, three-quarters of the female students in this category did not label their experience as rape.” Of those the study classified as actually having been raped while intoxicated, only 37 percent believed they had in fact been raped. Nearly two-thirds of the women the study labels as incapacitated rape victims didn’t believe they had been raped. Why so much disagreement on such a serious issue? Of the entire incapacitated sexual assault victim category, Young writes,

Two-thirds said they did not report the incident to the authorities because they didn’t think it was serious enough. Interestingly, only two percent reported having suffered emotional or psychological injury – a figure so low that the authors felt compelled to include a footnote asserting that the actual incidence of such trauma was undoubtedly far higher.

So the largest category making up the total one-in-five statistic is predominantly composed of individuals who didn’t think what happened to them was serious enough to report. And nearly all of them came away unscathed, both physically and psychologically.

            The impetus behind the CSA study was a common narrative about a so-called “rape culture” in which sexual violence is accepted as normal and young women fail to report incidents because they’re convinced you’re just supposed to tolerate it. That was the researchers’ rationale for using their own classification scheme for the survey participants’ experiences even when it was at odds with the students’ beliefs. But researchers have been doing this same dance for thirty years. As Young writes,

When the first campus rape studies in the 1980s found that many women labeled as victims by researchers did not believe they had been raped, the standard explanation was that cultural attitudes prevent women from recognizing forced sex as rape if the perpetrator is a close acquaintance. This may have been true twenty-five years ago, but it seems far less likely in our era of mandatory date rape and sexual assault workshops and prevention programs on college campuses.

The CSA also surveyed a large number of men, almost none of whom admitted to assaulting women. The researchers hypothesize that the men may have feared the survey wasn’t really anonymous, but that would mean they knew the behaviors in question were wrong. Again, if the researchers are really worried about mistaken beliefs regarding the definition of rape, they could investigate the issue with a few added survey items.

The huge discrepancies between incidences of sexual violence as measured by researchers and as reported by survey participants become even more suspicious in light of the history of similar studies. Those campus rape studies Young refers to from the 1980s produced a ratio of one in four. Their credibility was likewise undermined by later surveys that found that most of the supposed victims didn’t believe they’d been raped, and around forty percent of them went on to have sex with their alleged assailants again. A more recent study by the CDC used similar methods—a phone survey with a low response rate—and concluded that one in five women has been raped at some time in her life. Looking closer at this study, feminist critic and critic of feminism Christina Hoff Sommers attributes this finding as well to “a non-representative sample and vaguely worded questions.” It turns out activists have been conducting different versions of this same survey, and getting similarly wildly inflated results, for decades.

            Sommers challenges the CDC findings in a video everyone concerned with the issue of sexual violence should watch. We all need to understand that well-intentioned and intelligent people can, and often do, get carried away with activism that seems to have laudable goals but ends up doing more harm than good. Some people even build entire careers on this type of crusading. And PR has become so sophisticated that we never need to let a shortage, or utter lack, of evidence keep us from advocating for our favorite causes. But there’s still a fourth problem with wildly exaggerated risk assessments—they obscure issues of real importance, making it more difficult to come up with real solutions. As Sommers explains,

To prevent rape and sexual assault we need state-of-the-art research. We need sober estimates. False and sensationalist statistics are going to get in the way of effective policies. And unfortunately, when it comes to research on sexual violence, exaggeration and sensation are not the exception; they are the rule. If you hear about a study that shows epidemic levels of sexual violence against American women, or college students, or women in the military, I can almost guarantee the researchers used some version of the defective CDC methodology. Now by this method, known as advocacy research, you can easily manufacture a women’s crisis. But here’s the bottom line: this is madness. First of all it trivializes the horrific pain and suffering of survivors. And it sends scarce resources in the wrong direction. Sexual violence is too serious a matter for antics, for politically motivated posturing. And right now the media, politicians, rape culture activists—they are deeply invested in these exaggerated numbers.

So while more and more normal, healthy, and consensual sexual practices are considered crimes, actual acts of exploitation and violence are becoming all the more easily overlooked in the atmosphere of paranoia. And college students face the dilemma of either risking assault or accusation by going out to enjoy themselves or succumbing to the hysteria and staying home, missing out on some of the richest experiences college life has to offer.

            One in five is a truly horrifying ratio. As conservative crime researcher Heather Mac Donald points out, “Such an assault rate would represent a crime wave unprecedented in civilized history. By comparison, the 2012 rape rate in New Orleans and its immediately surrounding parishes was .0234 percent; the rate for all violent crimes in New Orleans in 2012 was .48 percent.” I don’t know how a woman could pass a man on a sidewalk after hearing such numbers and not look at him with suspicion. Most of the reforms rape culture activists are pushing for now chip away at due process and strip away the rights of the accused. No one wants to make coming forward any more difficult for actual victims, but our first response to anyone making such a grave accusation—to anyone making any accusation—should be skepticism. Victims suffer severe psychological trauma, but then so do the falsely accused. The strongest evidence of an honest accusation is often the fact that the accuser must incur some cost in making it. That’s why we say victims who come forward are heroic. That’s the difference between a victim and a survivor.

Trumpeting crazy numbers creates the illusion that a large percentage of men are monsters, and this fosters an us-versus-them mentality that obliterates any appreciation for the difficulty of establishing guilt. That would be a truly scary world to live in. Fortunately, we in the US don’t really live in such a world. Sex doesn’t have to be that scary. It’s usually pretty damn fun. And the vast majority of men you meet—the vast majority of women as well—are good people. In fact, I’d wager most men would step in if they were around when some psychopath was trying to rape someone.



Lab Flies: Joshua Greene’s Moral Tribes and the Contamination of Walter White

Walter White’s Moral Math

In an episode near the end of Breaking Bad’s fourth season, the drug kingpin Gus Fring gives his meth cook Walter White an ultimatum. Walt’s brother-in-law Hank is a DEA agent who has been getting close to discovering the high-tech lab Gus has created for Walt and his partner Jesse, and Walt, despite his best efforts, hasn’t managed to put him off the trail. Gus decides that Walt himself has likewise become too big a liability, and he has found that Jesse can cook almost as well as his mentor. The only problem for Gus is that Jesse, even though he too is fed up with Walt, will refuse to cook if anything happens to his old partner. So Gus has Walt taken at gunpoint to the desert where he tells him to stay away from both the lab and Jesse. Walt, infuriated, goads Gus with the fact that he’s failed to turn Jesse against him completely, to which Gus responds, “For now,” before going on to say,

In the meantime, there’s the matter of your brother-in-law. He is a problem you promised to resolve. You have failed. Now it’s left to me to deal with him. If you try to interfere, this becomes a much simpler matter. I will kill your wife. I will kill your son. I will kill your infant daughter.

In other words, Gus tells Walt to stand by and let Hank be killed or else he will kill his wife and kids. Once he’s released, Walt immediately has his lawyer Saul Goodman place an anonymous call to the DEA to warn them that Hank is in danger. Afterward, Walt plans to pay a man to help his family escape to a new location with new, untraceable identities—but he soon discovers the money he was going to use to pay the man has already been spent (by his wife Skyler). Now it seems all five of them are doomed. This is when things get really interesting.

            Walt devises an elaborate plan to convince Jesse to help him kill Gus. Jesse knows that Gus would prefer for Walt to be dead, and both Walt and Gus know that Jesse would go berserk if anyone ever tried to hurt his girlfriend’s nine-year-old son Brock. Walt’s plan is to make it look like Gus is trying to frame him for poisoning Brock with ricin. The idea is that Jesse would suspect Walt of trying to kill Brock as punishment for Jesse’s betraying him and going to work with Gus. But Walt will convince Jesse that this is really just Gus’s ploy to trick Jesse into doing the one thing Jesse has forbidden up till now—killing Walt himself. Once Jesse concludes that it was Gus who poisoned Brock, he will understand that his new boss has to go, and he will accept Walt’s offer to help him perform the deed. Walt will then be able to get Jesse to give him the crucial information he needs to figure out a way to kill Gus.

It’s a brilliant plan. The one problem is that it involves poisoning a nine-year-old child. Walt comes up with an ingenious trick which allows him to use a less deadly poison while still making it look like Brock has ingested the ricin, but for the plan to work the boy has to be made deathly ill. So Walt is faced with a dilemma: if he goes through with his plan, he can save Hank, his wife, and his two kids, but to do so he has to deceive his old partner Jesse in just about the most underhanded way imaginable—and he has to make a young boy very sick by poisoning him, with the attendant risk that something will go wrong and the boy, or someone else, or everyone else, will die anyway. The math seems easy: either four people die, or one person gets sick. The option recommended by the math is greatly complicated, however, by the fact that it involves an act of violence against an innocent child.

In the end, Walt chooses to go through with his plan, and it works perfectly. In another ingenious move, though, this time on the part of the show’s writers, Walt’s deception isn’t revealed until after his plan has been successfully implemented, which makes for an unforgettable shock at the end of the season. Unfortunately, this revelation after the fact, at a time when Walt and his family are finally safe, makes it all too easy to forget what made the dilemma so difficult in the first place—and thus makes it all too easy to condemn Walt for resorting to such monstrous means to see his way through.

Joshua Greene
            Fans of Breaking Bad who read about the famous thought-experiment called the footbridge dilemma in Harvard psychologist Joshua Greene’s multidisciplinary and momentously important book Moral Tribes: Emotion, Reason, and the Gap between Us and Them will immediately recognize the conflicting feelings underlying our responses to questions about serving some greater good by committing an act of violence. Here is how Greene describes the dilemma:

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can’t jump yourself because you, without a backpack, are not big enough to stop the trolley, and there’s no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death? (113-4)

As was the case for Walter White when he faced his child-poisoning dilemma, the math is easy: you can save five people—strangers in this case—through a single act of violence. One of the fascinating things about common responses to the footbridge dilemma, though, is that the math is all but irrelevant to most of us; no matter how many people we might save, it’s hard for us to see past the murderous deed of pushing the man off the bridge. The answer for a large majority of people faced with this dilemma, even in the case of variations which put the number of people who would be saved much higher than five, is no, pushing the stranger to his death is not morally acceptable.

Another fascinating aspect of our responses is that they change drastically with the modification of a single detail in the hypothetical scenario. In the switch dilemma, a trolley is heading for five people again, but this time you can hit a switch to shift it onto another track where there happens to be a single person who would be killed. Though the math and the underlying logic are the same—you save five people by killing one—something about pushing a person off a bridge strikes us as far worse than pulling a switch. A large majority of people say killing the one person in the switch dilemma is acceptable. To figure out which specific factors account for the different responses, Greene and his colleagues tweak various minor details of the trolley scenario before posing the dilemma to test participants. By now, so many experiments have relied on these scenarios that Greene calls trolley dilemmas the fruit flies of the emerging field known as moral psychology.

The Automatic and Manual Modes of Moral Thinking

One hypothesis for why the footbridge case strikes us as unacceptable is that it involves using a human being as an instrument, a means to an end. So Greene and his fellow trolleyologists devised a variation called the loop dilemma, which still has participants pulling a hypothetical switch, but this time the lone victim on the alternate track must stop the trolley from looping back around onto the original track. In other words, you’re still hitting the switch to save the five people, but you’re also using a human being as a trolley stop. People nonetheless tend to respond to the loop dilemma in much the same way they do the switch dilemma. So there must be some factor other than the prospect of using a person as an instrument that makes the footbridge version so objectionable to us.

Greene’s own theory for why our intuitive responses to these dilemmas are so different begins with what Daniel Kahneman, one of the founders of behavioral economics, labeled the two-system model of the mind. The first system, a sort of autopilot, is the one we operate in most of the time. We only use the second system when doing things that require conscious effort, like multiplying 26 by 47. While system one is quick and intuitive, system two is slow and demanding. Greene proposes as an analogy the automatic and manual settings on a camera. System one is point-and-click; system two, though more flexible, requires several calibrations and adjustments. We usually only engage our manual systems when faced with circumstances that are either completely new or particularly challenging.

According to Greene’s model, our automatic settings have functions that go beyond the rapid processing of information to encompass our basic set of moral emotions, from indignation to gratitude, from guilt to outrage, which motivate us to behave in ways that over evolutionary history have helped our ancestors transcend their selfish impulses to live in cooperative societies. Greene writes,

According to the dual-process theory, there is an automatic setting that sounds the emotional alarm in response to certain kinds of harmful actions, such as the action in the footbridge case. Then there’s manual mode, which by its nature tends to think in terms of costs and benefits. Manual mode looks at all of these cases—switch, footbridge, and loop—and says “Five for one? Sounds like a good deal.” Because manual mode always reaches the same conclusion in these five-for-one cases (“Good deal!”), the trend in judgment for each of these cases is ultimately determined by the automatic setting—that is, by whether the myopic module sounds the alarm. (233)

What makes the dilemmas difficult then is that we experience them in two conflicting ways. Most of us, most of the time, follow the dictates of the automatic setting, which Greene describes as myopic because its speed and efficiency come at the cost of inflexibility and limited scope for extenuating considerations.

The reason our intuitive settings sound an alarm at the thought of pushing a man off a bridge but remain silent about hitting a switch, Greene suggests, is that our ancestors evolved to live in cooperative groups where some means of preventing violence between members had to be in place to avoid dissolution—or outright implosion. One of the dangers of living with a bunch of upright-walking apes who possess the gift of foresight is that any one of them could at any time be plotting revenge for some seemingly minor slight, or conspiring to get you killed so he can move in on your spouse or inherit your belongings. For a group composed of individuals with the capacity to hold grudges and calculate future gains to function cohesively, the members must have in place some mechanism that affords a modicum of assurance that no one will murder them in their sleep. Greene writes,

To keep one’s violent behavior in check, it would help to have some kind of internal monitor, an alarm system that says “Don’t do that!” when one is contemplating an act of violence. Such an action-plan inspector would not necessarily object to all forms of violence. It might shut down, for example, when it’s time to defend oneself or attack one’s enemies. But it would, in general, make individuals very reluctant to physically harm one another, thus protecting individuals from retaliation and, perhaps, supporting cooperation at the group level. My hypothesis is that the myopic module is precisely this action-plan inspecting system, a device for keeping us from being casually violent. (226)

Hitting a switch to transfer a train from one track to another seems acceptable, even though a person ends up being killed, because nothing our ancestors would have recognized as violence is involved.

Many philosophers cite our different responses to the various trolley dilemmas as support for deontological systems of morality—those based on the inherent rightness or wrongness of certain actions—since we intuitively know the choices suggested by a consequentialist approach are immoral. But Greene points out that this argument begs the question of how reliable our intuitions really are. He writes,

I’ve called the footbridge dilemma a moral fruit fly, and that analogy is doubly appropriate because, if I’m right, this dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good. The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good. However, our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible. As a result, it’s all but guaranteed that we can dream up examples that exploit the cognitive inflexibility of our moral intuitions. It’s all but guaranteed that we can dream up a hypothetical action that is actually good but that seems terribly, horribly wrong because it pushes our moral buttons. I know of no better candidate for this honor than the footbridge dilemma. (251)

The obverse is that many of the things that seem morally acceptable to us actually do cause harm to people. Greene cites the example of a man who lets a child drown because he doesn’t want to ruin his expensive shoes, which most people agree is monstrous, even though we think nothing of spending money on things we don’t really need when we could be sending that money to save sick or starving children in some distant country. Then there are crimes against the environment, which always seem to rank low on our list of priorities even though their future impact on real human lives could be devastating. We have our justifications for such actions or omissions, to be sure, but how valid are they really? Is distance really a morally relevant factor when we let children die? Does the diffusion of responsibility among so many millions change the fact that we personally could have a measurable impact?

These black marks notwithstanding, cooperation, and even a certain degree of altruism, come naturally to us. To demonstrate this, Greene and his colleagues have devised some clever methods for separating test subjects’ intuitive responses from their more deliberate and effortful decisions. The experiments begin with a multi-participant exchange scenario developed by economic game theorists called the Public Goods Game, which has a number of anonymous players contribute to a common bank whose sum is then doubled and distributed evenly among them. Like the more famous game theory exchange known as the Prisoner’s Dilemma, the outcomes of the Public Goods Game reward cooperation, but only when a threshold number of fellow cooperators is reached. The flip side, however, is that any individual who decides to be stingy can get a free ride on everyone else’s contributions and make an even greater profit. What tends to happen is that, over multiple rounds, the number of players opting for stinginess increases until the game is ruined for everyone, a process analogous to a phenomenon in economics known as the Tragedy of the Commons. Everyone wants to graze a few more sheep on the commons than can be sustained fairly, so eventually the grounds are left barren.
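The game’s payoff structure is simple enough to sketch in a few lines of code. This is a minimal illustration, not a reconstruction of Greene’s actual experiments; the four players, the endowment of 10, and the doubling multiplier are illustrative assumptions.

```python
def public_goods_round(contributions, endowment=10, multiplier=2.0):
    """One round: pooled contributions are multiplied, then split evenly."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # Payoff = whatever a player kept back, plus an equal share of the pot.
    return [endowment - c + share for c in contributions]

# Full cooperation is best for the group as a whole...
print(public_goods_round([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
# ...but a lone free rider does better still, at everyone else's expense.
print(public_goods_round([10, 10, 10, 0]))   # [15.0, 15.0, 15.0, 25.0]
```

The free rider’s 25 beats the 20 he would have earned by cooperating, so defection pays for the individual even as it shrinks the pot for the group. That is the incentive that ruins later rounds.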

The Biological and Cultural Evolution of Morality

            Greene believes that humans evolved emotional mechanisms to prevent the various analogs of the Tragedy of the Commons from occurring so that we can live together harmoniously in tight-knit groups. The outcomes of multiple rounds of the Public Goods Game, for instance, tend to be far less dismal when players are given the opportunity to devote a portion of their own profits to punishing free riders. Most humans, it turns out, will be motivated by the emotion of anger to expend their own resources for the sake of enforcing fairness. Over several rounds, cooperation becomes the norm. Such an outcome has been replicated again and again, but researchers are always interested in factors that influence players’ strategies in the early rounds. Greene describes a series of experiments he conducted with David Rand and Martin Nowak, which were reported in an article in Nature in 2012. He writes,

…we conducted our own Public Goods Games, in which we forced some people to decide quickly (less than ten seconds) and forced others to decide slowly (more than ten seconds). As predicted, forcing people to decide faster made them more cooperative and forcing people to slow down made them less cooperative (more likely to free ride). In other experiments, we asked people, before playing the Public Goods Game, to write about a time in which their intuitions served them well, or about a time in which careful reasoning led them astray. Reflecting on the advantages of intuitive thinking (or the disadvantages of careful reflection) made people more cooperative. Likewise, reflecting on the advantages of careful reasoning (or the disadvantages of intuitive thinking) made people less cooperative. (62)

These results offer strong support for Greene’s dual-process theory of morality, and they even hint at the possibility that the manual mode is fundamentally selfish or amoral—in other words, that the philosophers have been right all along in deferring to human intuitions about right and wrong.
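The punishment variant Greene describes, in which players spend a portion of their own profits to penalize free riders, can be sketched the same way. Again this is a minimal illustration: the assumption that every contributor punishes every free rider, and the sizes of the fine and the fee, are placeholders rather than parameters from the actual studies.

```python
def round_with_punishment(contributions, endowment=10, multiplier=2.0,
                          fine=4.0, fee=1.0):
    """A Public Goods round followed by a punishment stage."""
    share = sum(contributions) * multiplier / len(contributions)
    payoffs = [endowment - c + share for c in contributions]
    cooperators = [i for i, c in enumerate(contributions) if c > 0]
    free_riders = [i for i, c in enumerate(contributions) if c == 0]
    # Each cooperator pays `fee` for every free rider punished; each free
    # rider is docked `fine` once per punishing cooperator.
    for i in cooperators:
        payoffs[i] -= fee * len(free_riders)
    for j in free_riders:
        payoffs[j] -= fine * len(cooperators)
    return payoffs

# With punishment, the free rider from the earlier example finishes last.
print(round_with_punishment([10, 10, 10, 0]))  # [14.0, 14.0, 14.0, 13.0]
```

With these numbers free riding no longer pays (13 versus 14). Note that punishing is itself costly: each cooperator nets 14 rather than 15, which is why players’ willingness to incur that cost, motivated by anger in Greene’s account, is the striking empirical finding.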

            As good as our intuitive moral sense is for preventing the Tragedy of the Commons, however, when given free rein in a society composed of many large groups of mutual strangers, each group with its own culture and priorities, our natural moral settings bring about an altogether different tragedy. Greene labels it the Tragedy of Commonsense Morality. He explains,

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). (23)

Expanding on the story behind the Tragedy of the Commons, Greene describes what would happen if several groups, each having developed its own unique solution for making sure the commons were protected from overgrazing, were suddenly to come into contact with one another on a transformed landscape called the New Pastures. Each group would likely harbor suspicions against the others, and when it came time to negotiate a new set of rules to govern everybody the groups would all show a significant, though largely unconscious, bias in favor of their own members and their own ways.

            The origins of moral psychology as a field can be traced to both developmental and evolutionary psychology. Seminal research conducted at Yale’s Infant Cognition Center, led by Karen Wynn, Kiley Hamlin, and Paul Bloom (and which Bloom describes in a charming and highly accessible book called Just Babies), has demonstrated that children as young as six months possess what we can easily recognize as a rudimentary moral sense. These findings suggest that much of the behavior we might have previously ascribed to lessons learned from adults is actually innate. Experiments based on game theory scenarios and thought-experiments like the trolley dilemmas are likewise thought to tap into evolved patterns of human behavior. Yet when University of British Columbia psychologist Joseph Henrich teamed up with several anthropologists to see how people living in various small-scale societies responded to game theory scenarios like the Prisoner’s Dilemma and the Public Goods Game, they discovered a great deal of variation. On the one hand, then, human moral intuitions seem to be rooted in emotional responses present at, or at least close to, birth; on the other hand, cultures vary widely in their conventional responses to classic dilemmas. These differences between cultural conceptions of right and wrong are in large part responsible for the conflict Greene envisions in his Parable of the New Pastures.
Joseph Henrich

But how can a moral sense be both innate and culturally variable?  “As you might expect,” Greene explains, “the way people play these games reflects the way they live.” People in some cultures rely much more heavily on cooperation to procure their sustenance, as is the case with the Lamelara of Indonesia, who live off the meat of whales they hunt in groups. Cultures also vary in how much they rely on market economies as opposed to less abstract and less formal modes of exchange. Just as people adjust the way they play economic games in response to other players’ moves, people acquire habits of cooperation based on the circumstances they face in their particular societies. Regarding the differences between small-scale societies in common game theory strategies, Greene writes,

Henrich and colleagues found that payoffs to cooperation and market integration explain more than two thirds of the variation across these cultures. A more recent study shows that, across societies, market integration is an excellent predictor of altruism in the Dictator Game. At the same time, many factors that you might expect to be important predictors of cooperative behavior—things like an individual’s sex, age, and relative wealth, or the amount of money at stake—have little predictive power. (72)

In much the same way humans are programmed to learn a language and acquire a set of religious beliefs, they also come into the world with a suite of emotional mechanisms that make up the raw material for what will become a culturally calibrated set of moral intuitions. The specific language and religion we end up with is of course dependent on the social context of our upbringing, just as our specific moral settings will reflect those of other people in the societies we grow up in.

Jonathan Haidt and Tribal Righteousness

Jonathan Haidt
In our modern industrial society, we actually have some degree of choice when it comes to our cultural affiliations, and this freedom opens the way for heritable differences between individuals to play a larger role in our moral development. Such differences are nowhere as apparent as in the realm of politics, where nearly all citizens occupy some point on a continuum between conservative and liberal. According to Greene’s fellow moral psychologist Jonathan Haidt, we have precious little control over our moral responses because, in his view, reason only comes into play to justify actions and judgments we’ve already made. In his fascinating 2012 book The Righteous Mind, Haidt insists,

Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth. (50)

To explain the moral divide between right and left, Haidt points to the findings of his own research on what he calls Moral Foundations, six dimensions underlying our intuitions about moral and immoral actions. Conservatives tend to endorse judgments based on all six of the foundations, valuing loyalty, authority, and sanctity much more than liberals, who focus more exclusively on care for the disadvantaged, fairness, and freedom from oppression. Since our politics emerge from our moral intuitions, and since reason merely serves as a sort of PR agent rationalizing judgments after the fact, Haidt enjoins us to be more accepting of rival political groups—after all, you can’t reason with them.

            Greene objects both to Haidt’s Moral Foundations theory and to his prescription for a politics of complementarity. The responses to questions representing all the moral dimensions in Haidt’s studies form two clusters on a graph, Greene points out, not six, suggesting that the differences between conservatives and liberals are attributable to some single overarching variable rather than to several individual tendencies. Furthermore, the specific content of the questions Haidt uses to flesh out his respondents’ values has a critical limitation. Greene writes,

According to Haidt, American social conservatives place greater value on respect for authority, and that’s true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. (339)

The same limitation applies to the loyalty and sanctity foundations. Conservatives feel little loyalty toward the atheists and Muslims among their fellow Americans. Nor do they recognize the sanctity of mosques or Hindu holy texts. Greene goes on,

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists—loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal. In this they’re akin to the world’s other socially conservative tribes, from the Taliban in Afghanistan to European nationalists. According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise may be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic. (340)

Greene believes such persuasion is possible, even with regard to emotionally and morally charged controversies, because he sees our manual-mode thinking as playing a potentially much greater role than Haidt sees it playing.

Metamorality on the New Pastures

            Throughout The Righteous Mind, Haidt argues that the moral philosophers who laid the foundations of modern liberal and academic conceptions of right and wrong gave short shrift to emotions and intuitions—that they gave far too much credit to our capacity for reason. To be fair, Haidt does honor the distinction between descriptive and prescriptive theories of morality, but he nonetheless gives the impression that he considers liberal morality to be somewhat impoverished. Greene sees this attitude as thoroughly wrongheaded. Responding to Haidt’s metaphor comparing his Moral Foundations to taste buds—with the implication that the liberal palate is more limited in the range of flavors it can appreciate—Greene writes,

The great philosophers of the Enlightenment wrote at a time when the world was rapidly shrinking, forcing them to wonder whether their own laws, their own traditions, and their own God(s) were any better than anyone else’s. They wrote at a time when technology (e.g., ships) and consequent economic productivity (e.g., global trade) put wealth and power into the hands of a rising educated class, with incentives to question the traditional authorities of king and church. Finally, at this time, natural science was making the world comprehensible in secular terms, revealing universal natural laws and overturning ancient religious doctrines. Philosophers wondered whether there might also be universal moral laws, ones that, like Newton’s law of gravitation, applied to members of all tribes, whether or not they knew it. Thus, the Enlightenment philosophers were not arbitrarily shedding moral taste buds. They were looking for deeper, universal moral truths, and for good reason. They were looking for moral truths beyond the teachings of any particular religion and beyond the will of any earthly king. They were looking for what I’ve called a metamorality: a pan-tribal, or post-tribal, philosophy to govern life on the new pastures. (338-9)

While Haidt insists we must recognize the centrality of intuitions even in this civilization nominally ruled by reason, Greene points out that it was skepticism of old, seemingly unassailable and intuitive truths that opened up the world and made modern industrial civilization possible in the first place.  

            As Haidt explains, though, conservative morality serves people well in certain regards. Christian churches, for instance, encourage charity and foster a sense of community few secular institutions can match. But these advantages at the level of parochial groups have to be weighed against the problems tribalism inevitably leads to at higher levels. This is, in fact, precisely the point Greene created his Parable of the New Pastures to make. He writes,

The Tragedy of the Commons is averted by a suite of automatic settings—moral emotions that motivate and stabilize cooperation within limited groups. But the Tragedy of Commonsense Morality arises because of automatic settings, because different tribes have different automatic settings, causing them to see the world through different moral lenses. The Tragedy of the Commons is a tragedy of selfishness, but the Tragedy of Commonsense Morality is a tragedy of moral inflexibility. There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives. How should they think? The answer is now obvious: They should shift into manual mode. (172)

Greene argues that whenever we, as a society, are faced with a moral controversy—as with issues like abortion, capital punishment, and tax policy—our intuitions will not suffice because our intuitions are the very basis of our disagreement.

            Watching the conclusion to season four of Breaking Bad, most viewers probably responded to finding out that Walt had poisoned Brock by thinking that he’d become a monster—at least at first. Indeed, the currently dominant academic approach to art criticism involves taking a stance, both moral and political, with regard to a work’s models and messages. Writing for The New Yorker, Emily Nussbaum, for instance, disparages viewers of Breaking Bad for failing to condemn Walt, writing,

When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)

To arrive at such an assessment, Nussbaum must reduce the show to the impact she assumes it will have on less sophisticated fans’ attitudes and judgments. But the really troubling aspect of this type of criticism is that it encourages scholars and critics to indulge their impulse toward self-righteousness when faced with challenging moral dilemmas; in other words, it encourages them to give voice to their automatic modes precisely when they should be shifting to manual mode. Thus, Nussbaum neglects outright the very details that make Walt’s scenario compelling, completely forgetting that by making Brock sick—and, yes, risking his life—he was able to save Hank, Skyler, and his own two children.

But how should we go about resolving moral dilemmas and political controversies if we agree we can’t always trust our intuitions? Greene believes that, while our automatic modes recognize certain acts as wrong and certain others as a matter of duty to perform, in keeping with deontological ethics, whenever we switch to manual mode the focus shifts to weighing the relative desirability of each option's outcomes. In other words, manual-mode thinking is consequentialist. And, since we tend to assess outcomes according to their impact on other people, favoring those that improve the quality of their experiences the most, or detract from it the least, Greene argues that whenever we slow down and think through moral dilemmas deliberately we become utilitarians. He writes,

If I’m right, this convergence between what seems like the right moral philosophy (from a certain perspective) and what seems like the right moral psychology (from a certain perspective) is no accident. If I’m right, Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode. They put aside their inflexible automatic settings and instead asked two very abstract questions. First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality. Combining these two ideas, we get utilitarianism: We should maximize the quality of our experience, giving equal weight to the experience of each person. (173)

If you cite an authority recognized only by your own tribe—say, the Bible—in support of a moral argument, then members of other tribes will either simply discount your position or counter it with pronouncements by their own authorities. If, on the other hand, you argue for a law or a policy by citing evidence that implementing it would mean longer, healthier, happier lives for the citizens it affects, then only those seeking to establish the dominion of their own tribe can discount your position (which of course isn’t to say they can’t offer rival interpretations of your evidence).

            If we turn commonsense morality on its head and evaluate the consequences of giving our intuitions priority over utilitarian accounting, we can find countless circumstances in which being overly moral is to everyone’s detriment. Ideas of justice and fairness allow far too much space for selfish and tribal biases, whereas the math behind mutually optimal outcomes based on compromise tends to be harder to fudge. Greene reports, for instance, the findings of a series of experiments conducted by Fieke Harinck and colleagues at the University of Amsterdam in 2000. Lawyers negotiating on behalf of either the prosecution or the defense were told either to focus on serving justice or to get the best outcome for their clients. The negotiations in the first condition almost always ended at loggerheads. Greene explains,

Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. (88)

Framing an argument as an effort to establish who was right and who was wrong is like drawing a line in the sand—it activates tribal attitudes pitting us against them, while treating negotiations more like an economic exchange circumvents these tribal biases.

Challenges to Utilitarianism

But do we really want to suppress our automatic moral reactions in favor of deliberative accountings of the greatest good for the greatest number? Deontologists have posed some effective challenges to utilitarianism in the form of thought-experiments that seem to show efforts to improve the quality of experiences would lead to atrocities. For instance, Greene recounts how, in a high school debate, he was confronted with the hypothetical case of a surgeon who could save five sick people by killing one healthy one. Then there’s the so-called Utility Monster, who experiences such happiness when eating humans that it quantitatively outweighs the suffering of those being eaten. More down-to-earth examples feature a scapegoat convicted of a crime to prevent rioting by people who are angry about police ineptitude, and the use of torture to extract information from a prisoner that could prevent a terrorist attack. The most influential challenge to utilitarianism, however, was leveled by the political philosopher John Rawls when he pointed out that it could be used to justify the enslavement of a minority by a majority.

Greene’s responses to these criticisms make up one of the most surprising, important, and fascinating parts of Moral Tribes. First, highly contrived thought-experiments about Utility Monsters and circumstances in which pushing a guy off a bridge is guaranteed to stop a trolley may indeed prove that utilitarianism is not true in any absolute sense. But whether or not such moral absolutes even exist is a contentious issue in its own right. Greene explains,

I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough. (275-6)

One source of confusion regarding the slavery issue is the equation of happiness with money; slave owners probably could make profits in excess of the losses sustained by the slaves. But money is often a poor index of happiness. Greene underscores this point by asking us to consider how much someone would have to pay us to sell ourselves into slavery. “In the real world,” he writes, “oppression offers only modest gains in happiness to the oppressors while heaping misery upon the oppressed” (284-5).

            Another failing of the thought-experiments marshaled against utilitarianism is the shortsightedness of the supposedly obvious responses. The crimes of the murderous doctor and the scapegoating law officers may indeed produce short-term increases in happiness, but if the secret gets out, healthy and innocent people will live in fear, knowing they can’t trust doctors and law officers. The same logic applies to the objection that utilitarianism would force us to become infinitely charitable, since we can almost always afford to be more generous than we currently are. But how long could we serve as so-called happiness pumps before burning out, becoming miserable, and thus losing the capacity to make anyone else happier? Greene writes,

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits. (258)

Greene seems to be endorsing what philosophers call “rule utilitarianism.” We could approach every choice by calculating the likely outcomes, but as a society we would be better served by deciding on some rules for everyone to adhere to. It may just be possible for a doctor to increase happiness through murder in a particular set of circumstances—but most people would vociferously object to a rule legitimizing the practice.

            The concept of human rights may present another challenge to Greene in his championing of consequentialism over deontology. It is our duty, after all, to recognize the rights of every human, and we ourselves have no right to disregard someone else’s rights no matter what benefit we believe might result from doing so. In his book The Better Angels of Our Nature, Steven Pinker, Greene’s colleague at Harvard, attributes much of the steep decline in rates of violent death over the past three centuries to a series of what he calls Rights Revolutions, the first of which began during the Enlightenment. But the problem with arguments that refer to rights, Greene explains, is that controversies arise for the very reason that people don’t agree on which rights we should recognize. He writes,

Thus, appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a “right to life.” If you feel that outlawing abortion is wrong, you can talk about a “right to choose.” If you’re Iran, you can talk about your “nuclear rights,” and if you’re Israel you can talk about your “right to self-defense.” “Rights” are nothing short of brilliant. They allow us to rationalize our gut feelings without doing any additional work. (302)

The only way to resolve controversies over which rights we should actually recognize and which rights we should prioritize over others, Greene argues, is to apply utilitarian reasoning.

Ideology and Epistemology

            In his discussion of the proper use of the language of rights, Greene comes closer than in any other section of Moral Tribes to explicitly articulating what strikes me as the most revolutionary idea that he and his fellow moral psychologists are suggesting—albeit as yet only implicitly. In his advocacy for what he calls “deep pragmatism,” Greene isn’t merely applying evolutionary theories to an old philosophical debate; he’s actually posing a subtly different question. The numerous thought-experiments philosophers use to poke holes in utilitarianism may not have much relevance in the real world—but they do undermine any claim utilitarianism may have on absolute moral truth. Greene’s approach is therefore to eschew any effort to arrive at absolute truths, including truths pertaining to rights. Instead, in much the same way scientists accept that our knowledge of the natural world is mediated by theories, which only approach the truth asymptotically, never capturing it with any finality, Greene intimates that the important task in the realm of moral philosophy isn’t to arrive at a full accounting of moral truths but rather to establish a process for resolving moral and political dilemmas.

What’s needed, in other words, isn’t a rock-solid moral ideology but a workable moral epistemology. And, just as empiricism serves as the foundation of the epistemology of science, Greene makes a convincing case that we could use utilitarianism as the basis of an epistemology of morality. Pursuing the analogy between scientific and moral epistemologies even further, we can compare theories, which stand or fall according to their empirical support, to individual human rights, which we afford and affirm according to their impact on the collective happiness of every individual in the society. Greene writes,

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is, once again, because we have no non-question-begging (and utilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it’s not worth arguing—either because the question has been settled or because our opponents can’t be reasoned with—then it’s time to start rallying the troops. It’s time to affirm our moral commitments, not with wonky estimates of probabilities but with words that stir our souls. (308-9)

Rights may be the closest thing we have to moral truths, just as theories serve as our stand-ins for truths about the natural world, but even more important than rights or theories are the processes we rely on to establish and revise them.

A New Theory of Narrative

            As if a philosophical revolution weren’t enough, moral psychology is also putting in place what could be the foundation of a new understanding of the role of narratives in human lives. At the heart of every story is a conflict between competing moral ideals. In commercial fiction, there tends to be a character representing each side of the conflict, and audiences can be counted on to favor one side over the other—the good, altruistic guys over the bad, selfish guys. In more literary fiction, on the other hand, individual characters are faced with dilemmas pitting various modes of moral thinking against each other. In season one of Breaking Bad, for instance, Walter White famously writes out, on a notepad, a list of the pros and cons of murdering the drug dealer restrained in Jesse’s basement. Everyone, including Walt, feels that killing the man is wrong, but if they let him go, Walt and his family will be at risk of retaliation. This dilemma is in fact quite similar to ones he faces in each of the following seasons, right up until he has to decide whether or not to poison Brock. Trying to work out what Walt should do, and anxiously anticipating what he will do, are mental exercises few can help engaging in as they watch the show.

            The current reigning conception of narrative in academia explains the appeal of stories by suggesting it derives from their tendency to fulfill conscious and unconscious desires, most troublesomely our desires to have our prejudices reinforced. We like to see men behaving in ways stereotypically male, women stereotypically female, minorities stereotypically black or Hispanic, and so on. Cultural products like works of art, and even scientific findings, are embraced, the reasoning goes, because they cement the hegemony of various dominant categories of people within the society. This tradition in arts scholarship and criticism can in large part be traced back to psychoanalysis, but it has developed over the last century to incorporate both the predominant view of language in the humanities and the cultural determinism espoused by many scholars in the service of various forms of identity politics. 

            The postmodern ideology that emerged from the convergence of these schools is marked by a suspicion that science is often little more than a veiled effort at buttressing the political status quo, and its preeminent thinkers deliberately set themselves apart from the humanist and Enlightenment traditions that held sway in academia until the middle of the last century by writing in byzantine, incoherent prose. Even though there could be no rational way either to support or to challenge postmodern ideas, scholars still take them as cause for accusing both scientists and storytellers of using their work to further reactionary agendas.

            For anyone who recognizes the unparalleled power of science both to advance our understanding of the natural world and to improve the conditions of human lives, postmodernism stands out as a catastrophic wrong turn, not just in academic history but in the moral evolution of our civilization. The equation of narrative with fantasy is a bizarre fantasy in its own right. Attempting to explain the appeal of a show like Breaking Bad by suggesting that viewers have an unconscious urge to be diagnosed with cancer and to subsequently become drug manufacturers is symptomatic of intractable institutional delusion. And, as Pinker recounts in Better Angels, literature, and novels in particular, were likely instrumental in bringing about the shift in consciousness toward greater compassion for greater numbers of people that resulted in the unprecedented decline in violence beginning in the second half of the nineteenth century.

Yet, when it comes to arts scholarship, postmodernism is just about the only game in town. Granted, the writing in this tradition has progressed a great deal toward greater clarity, but the focus on identity politics has intensified to the point of hysteria: you’d be hard-pressed to find a major literary figure who hasn’t been accused of misogyny at one point or another, and any scientist who dares study something like gender differences can count on having her motives questioned and her work lampooned by well-intentioned, well-indoctrinated squads of well-poisoning liberal wags.

            When Emily Nussbaum complains about viewers of Breaking Bad being lulled by the “masculine adventure” and the digs against “nagging wives” into becoming enablers of Walt’s bad behavior, she’s doing exactly what so many of us were taught to do in academic courses on literary and film criticism, applying a postmodern feminist ideology to the show—and completely missing the point. As the series opens, Walt is deliberately portrayed as downtrodden and frustrated, and Skyler’s bullying is an important part of that dynamic. But the pleasure of watching the show doesn’t come so much from seeing Walt get out from under Skyler’s thumb—he never really does—as it does from anticipating and fretting over how far Walt will go in his criminality, goaded on by all that pent-up frustration. Walt shows a similar concern for himself, worrying over what effect his exploits will have on who he is and how he will be remembered. We see this in season three when he becomes obsessed by the “contamination” of his lab—which turns out to be no more than a house fly—and at several other points as well. Viewers are not concerned with Walt because he serves as an avatar acting out their fantasies (or else the show would have a lot more nude scenes with the principal of the school he teaches in). They’re concerned because, at least at the beginning of the show, he seems to be a good person and they can sympathize with his tribulations.

The much more solidly grounded view of narrative inspired by moral psychology suggests that common themes in fiction are not reflections or reinforcements of some dominant culture, but rather emerge from aspects of our universal human psychology. Our feelings about characters, according to this view, aren’t determined by how well they coincide with our abstract prejudices; on the contrary, we favor the types of people in fiction we would favor in real life. Indeed, if the story is any good, we will have to remind ourselves that the people whose lives we’re tracking really are fictional. Greene doesn’t explore the intersection between moral psychology and narrative in Moral Tribes, but he does give a nod to what we can only hope will be a burgeoning field when he writes, 

Nowhere is our concern for how others treat others more apparent than in our intense engagement with fiction. Were we purely selfish, we wouldn’t pay good money to hear a made-up story about a ragtag group of orphans who use their street smarts and quirky talents to outfox a criminal gang. We find stories about imaginary heroes and villains engrossing because they engage our social emotions, the ones that guide our reactions to real-life cooperators and rogues. We are not disinterested parties. (59)

Many people ask why we care so much about people who aren’t even real, but we only ever reflect on the fact that what we’re reading or viewing is a simulation when we’re not sufficiently engrossed by it.

            Was Nussbaum completely wrong to insist that Walt went beyond the pale when he poisoned Brock? She certainly showed the type of myopia Greene attributes to the automatic mode by forgetting Walt saved at least four lives by endangering the one. But most viewers probably had a similar reaction. The trouble wasn’t that she was appalled; it was that her postmodernism encouraged her to unquestioningly embrace and give voice to her initial feelings. Greene writes, 

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise. Our moral alarm systems think that the difference between pushing and hitting a switch is of great moral significance. More important, our moral alarm systems see no difference between self-serving murder and saving a million lives at the cost of one. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. (253)

Greene doesn’t reveal whether he’s a Breaking Bad fan or not, but his discussion of the footbridge dilemma gives readers a good indication of how he’d respond to Walt’s actions.

If you don’t feel that it’s wrong to push the man off the footbridge, there’s something wrong with you. I, too, feel that it’s wrong, and I doubt that I could actually bring myself to push, and I’m glad that I’m like this. What’s more, in the real world, not pushing would almost certainly be the right decision. But if someone with the best of intentions were to muster the will to push the man off the footbridge, knowing for sure that it would save five lives, and knowing for sure that there was no better alternative, I would approve of this action, although I might be suspicious of the person who chose to perform it. (251)

The overarching theme of Breaking Bad, which Nussbaum fails utterly to comprehend, is the transformation of a good man into a bad one. When Walt poisons Brock, we’re glad he succeeded in saving his family, and some of us are even okay with his methods, but we’re worried—suspicious even—about what his ability to go through with it says about him. Over the course of the series, we’ve found ourselves rooting for Walt, and we’ve come to really like him. We don’t want to see him break too bad. And however bad he does break, we can’t help hoping for his redemption. Since he’s a valuable member of our tribe, we’re loath even to consider that it might be time for him to go.




Healthcare.gov and the Government We Deserve

No matter what changes you make to a system as large and complicated as the healthcare industry, there are going to be winners and losers. The president was wrong to claim otherwise. Republicans were wrong to make all the misleading or mendacious claims that put the president in the position of having to counteract them by overselling the Affordable Care Act. Now we’re hearing every day about how millions of people are getting the cancellation notices Obama said they didn’t need to worry about getting, and they can’t sign on to healthcare.gov to shop for new plans because the website didn’t work when it was launched. The first point to make is that those who have been opposed to Obamacare all along were primed to pounce on any issues that arose—knowing full well that issues were bound to arise. Anyone who pronounced the reform a failure before the ink was dry on the bill forfeited whatever credibility he might otherwise have had when he later pronounced it a failure owing to complications in the implementation.

The blame for the widespread worries and furor, however, can’t be laid solely at the feet of Republicans, because in claiming that there would be no losers Obama was effectively handing over a bunch of perfect props for the catastrophe narrative. If you claim not a single premium will go up, everyone who sees theirs increase can expect an invitation to go on a conservative pundit’s radio show. If you say not a single person will lose their coverage, everyone who receives a cancellation notice can expect to be prodded to go on TV to brandish it. Never mind the question of whether far more premiums go down than up. Never mind that millions more will acquire access to affordable coverage for the first time than will lose cheap plans which were full of holes anyway. Stories are what sway people, not numbers. And it’s too late now to roll out counter-stories about all the people who thought they were happy with their plans—until they actually got sick.

The second important point is that it’s not all that surprising that a website tasked with integrating inputs as complicated, from sources as far-flung, relying on the cooperation of private institutions as diverse, and serving visitors as numerous as those involved with healthcare.gov would have serious issues at launch. It’s also not surprising that people would start pointing fingers as soon as those issues arise, each voicing an opinion about how the design and launch should have been managed while expressing something between longsuffering contempt and unchecked outrage that any other approach was taken. The delays and frustration with the site are undeniably unfortunate because many people are forming their first impressions of healthcare reform in general based on it—even though many of its other policies have been in place for a while. But a problematic website launch is hardly the scandal it’s been made out to be.

Which brings us to the third point that needs to be made—however bad the governance behind healthcare reform really is or isn’t, the problems are our fault. That’s right, our fault—yours and mine. I’m not making a partisan point here; the problem wasn’t caused by people voting one way or another in any particular election. The problem that has led to this and countless other government failures is that we all, as American citizens, think of government as just another consumer product. Just like when we pull up to the drive-through at Burger King, we want government services our way, right away, and beyond that we don’t want to be bothered by them. Whenever we hear an elected official saying something we want to believe or already agree with, we give him or her our support and do no further investigation. Whenever they say something we disagree with, or whenever something goes wrong, we raise holy hell, and do no further investigation. And this is why our government almost never engages in anything even remotely resembling what a rational person would describe as a practical decision-making or problem-solving process. Instead, the only process that takes place in the federal government is a never-ending, no-holds-barred, winner-take-all campaign for votes, lobbying money, and public approval.

If our politicians can no longer govern, it’s because we force them to perform on the sleazy reality shows our media outlets have become, which value controversies and scandals infinitely more than the information and context that would empower us to make prudent decisions about candidates and policies. Why have news shows sunk so low? Because they’re pandering to our dual need for validation and entertainment. Because if they don’t tell us what we want to hear, we surf the channels for someone who will. Because if we don’t see two volatile, nearly hysterical people exchanging zingers, we flip to some other reality show featuring some poor souls purchasing a few moments of fame for the small price of their dignity. Educate and inform? That might make us feel ignorant, or worse, stupid. It might make us feel like we’re in school—that dreadful ego-crushing realm of anti-entertainment the industry has been doing its best for decades to make us loathe. Even when news shows oblige us with daily affirmations about how smart we already are and let us feel blithely superior to the people we elect to govern us, it’s still overwhelmingly the most ideologically erect who tune in, hoping to see their guys scoring points against the other guys, points they hope to score themselves at the next family gathering or friendly debate. Is it any wonder our so-called deliberative processes look more like WWE Smackdown?

The news isn’t—at least it shouldn’t be—a consumer product any more than government is. As citizens in a democracy, it is our responsibility, our duty to stay informed. It is the role of journalists and news agencies to help us do that. But for the past few decades we haven’t let them. Instead, those of us who actually bother to watch the news have been demanding precisely the type of news we’re getting. The news has resorted to scandalmongering, insane hyperbole, personal attacks, shouting matches, and cheerleading for reality-challenged tribal ideologues—to lure us away from actual news, to rivet us to the screen. And democracy is suffering for it.

Thinking of democracy as a set of consumer goods—government services your way, right away—transforms a system whose fundamental purpose is to serve collective interests into a commodity to be purchased by individuals. We think that by paying taxes we’re participating in some type of commerce, so when our taxes serve collective goals we don’t directly benefit from, it sours our attitude toward government. The problem is we’re too short-sighted and take far too much for granted to appreciate how much we truly benefit. While for some Americans government is a byword for limitations on individual freedom, government is in fact a fundamental necessity of life in complex societies. Without institutions empowered with the authority to direct collective action and to make and enforce rules, any challenge or problem that arises as a result of large numbers of people living together in circumscribed regions would be insurmountable. We would have no roads, no power grids, no schools, no currency, no military to protect us from foreign invasion. We would also have no science, no advanced technology like the device you’re probably using to read this essay, and no scientifically advanced healthcare.

Far from curtailing our freedoms, government is actually one of the main reasons why we Americans are currently enjoying a standard of living nearly unprecedented in the history of our species, surpassed only by those in some European nations whose governments are even larger as a proportion of their economies than ours. But in America we take the benefits afforded us by our wildly successful democracy for granted. Political conservatives fail to see the contradiction in revering our Founding Fathers and our Constitution while reviling the government they established as a hostile tribe of outsiders; while liberals perversely insist that civilizing institutions like governments function as engines of oppression for everyone but the moneyed white men who found them—even as these same liberals go about trying to convince the citizenry that government is the best way to combat injustice. The impression that government has either failed to meet the challenges of a modernizing world or been commandeered by members of a rival tribe results in both widespread apathy and the election of officials who deliberately try to undermine the very institutions they serve, making our government, in the most literal way, a victim of its own success.

The fundamental point we’re missing is that we can’t escape the need for actions that solve collective problems—even though every such action will by necessity create both winners and losers. We have to accept this tradeoff because overall we’re all much better off living in an advanced civilization, with things like cars and contact lenses, that’s governed democratically. The free market is a wonderful force for freedom, but it cannot solve collective action problems. And public health is not just an individual concern. When people who can’t afford routine doctor visits end up in the ER, we all pay for that. When people can’t pay their medical bills and declare bankruptcy, we all pay for that. When people overuse antibiotics and inadvertently create strains we can no longer treat, that may in the short term benefit the individual, but ultimately we all pay for that.

Healthcare, like news, education, and government in general, cannot be thought of as a consumer product. I don’t care who you are: when you get sick, you don’t shop around for the best services—you go to the doctor and do whatever he tells you to do. If, as healthy Americans, we persist in asking, “What can healthcare reform do for me?” instead of asking, “What can we do for healthcare reform?” then we are quite simply terrible citizens, and we don’t deserve to live in this paradise that is our advanced democracy. If all we can think to ask of any government initiative is what it can do for me, then we’re nothing but parasites, sucking up all the vital force from our society, as we hypocritically stand back pointing our fingers and getting howling mad about the sorry state of our government and the ugly nature of our politics. 

For more about collective action problems read: What's Wrong with the Darwin Economy?

For more on why government shouldn't be considered an enemy tribe read: A Zero-Sum Game with Obamacare?

And to find out why civilizations, particularly democratically governed ones are too great to take for granted read: The Self-Righteousness Instinct: Steven Pinker on the Better Angels of Modernity and the Evils of Morality 

A Zero-Sum Game with Obamacare?

The main reason conservatives are so steadfast in their opposition to Obamacare is that they believe in the hard-and-fast rule that more government equals less freedom for citizens. In other words, conservatives see the relationship between the government and the people as a zero-sum game, with each win for one amounting to a loss for the other. There’s an obvious kernel of truth to this reasoning: government exists to govern after all, to set limits, to establish guidelines and create an incentive system to get the people to adhere to them. At a fundamental level, then, government exists to control behavior; therefore less government ought to mean more freedom. At the same time, governments inevitably become their own separate entities demanding a share of a society’s resources to sustain themselves.

More government means more taxes, which in turn means less take-home pay and less freedom to spend and invest. To make matters worse, governments tend to creep. Every time society faces a new challenge, there’s a temptation to create a new government agency to address it, and every time you create a new agency you effectively create a new interest group made up of those it employs and those it serves. So our task as citizens, conservative thinking goes, is to resist any increase in the size of government and do whatever we can to shrink the government we already have. This is the conservative philosophy in a nutshell—and it shows why for conservatives Obamacare was a catastrophe even before the ink was dry.

As citizens, though, we must ask whether the relationship between the government and the people really is a zero-sum game. With a moment’s reflection it’s relatively easy to come up with some scenarios in which government is actually playing a positive-sum game with the people—a win for one also means a win for the other. A store owner pays taxes, limiting his freedom, but those taxes go to funding the construction of better roads, which makes it easier for customers to drive to the store, increasing his profits. A driver has to obey strict rules on the road, but those rules make driving much safer. We pay a percentage of our income to the government so law enforcement officers prevent thieves from stealing everything we own. What about those government employees and the beneficiaries of their services? Even if we assume the agency in question in no way benefits anyone beyond its reach, those beneficiaries will still have more to spend at other people’s stores, and they’ll be less prone to committing acts of violence to procure the things they need to survive, so it won’t be a total loss.

            By acknowledging that not every win for government is a loss for the people, we’ve reframed the question about whether a bigger government can ever be a good thing into one about how effective government is at improving the lives of the people it serves. The notion that Obamacare can’t be a good idea simply because it means more government really doesn’t make sense outside the conservatives’ ideological antagonism toward government. We must ask instead what the consequences of Obamacare going into effect will be for American citizens. To answer that, we need to examine all the provisions included in the law. Of course, things start getting complicated as soon as we begin such an examination—and that’s precisely why simple hard-and-fast rules like more government means less freedom get so much traction in public debates. They serve as time- and attention-saving heuristics, and we’re all desperate for those. But we all also know that attempts at finding shortcuts often backfire, and we often can’t trust the people who point them out to us.

            Since our government is of the people, by the people, and for the people, the question of its ideal size is nonsensical. We must rather ask whether and how well each proposed law or agency will serve the people, whether it will serve them fairly, and whether it’s worth the costs. But, if we still want to come up with some basis for estimating the optimal size of government, keeping in mind that the more-government-less-freedom rule is built on faulty premises, we could compare governments against the well-being of their citizens across all the nations of the world. We can all probably agree at the outset that the extremes of anarchy in places like Somalia on the one side and totalitarianism in places like North Korea on the other are less than ideal. But countries that seem to be doing much better than the US on several measures—happiness, education outcomes, relative safety—actually have governments that are much bigger than ours. Tax revenue as a percentage of GDP in Norway is 43.5, in Switzerland 29.2, in Finland 43.9, in Sweden 49.5, and in America 27.3.  These correlations between larger governments and measures of citizens’ well-being involve far too many confounding factors to suggest a hard-and-fast rule in the direction opposite to the one conservatives apply, but they really do show that the question of the ideal size of government is essentially pointless.    

            So let’s stop talking about Obamacare, or any other law, in terms of its impact on the size of government and focus instead on the impact it will have on our lives. 

Also read: What's Wrong with the Darwin Economy?

The Self-Righteousness Instinct: Steven Pinker on the Better Angels of Modernity and the Evils of Morality

Steven Pinker is one of the few scientists who can write a really long book and still expect a significant number of people to read it. But I have a feeling many who might be vaguely intrigued by the buzz surrounding his 2011 book The Better Angels of Our Nature: Why Violence Has Declined wonder why he had to make it nearly seven hundred outsized pages long. Many curious folk likely also wonder why a linguist who proselytizes for psychological theories derived from evolutionary or Darwinian accounts of human nature would write a doorstop drawing on historical and cultural data to describe the downward trajectories of rates of the worst societal woes. The message that violence of pretty much every variety is at unprecedentedly low rates comes as quite a shock, as it runs counter to our intuitive, news-fueled sense of being on a crash course for Armageddon. So part of the reason behind the book’s heft is that Pinker has to bolster his case with lots of evidence to get us to rethink our views. But flipping through the book you find that somewhere between half and a third of its mass is devoted, not to evidence of the decline, but to answering the questions of why the trend has occurred and why it gives every indication of continuing into the foreseeable future. So is this a book about how evolution has made us violent or about how culture is making us peaceful?

The first thing that needs to be said about Better Angels is that you should read it. Despite its girth, it’s at no point the least bit cumbersome to read, and at many points it’s so fascinating that, weighty as it is, you’ll have a hard time putting it down. Pinker has mastered a prose style that’s simple and direct to the point of feeling casual without ever wanting for sophistication. You can also rest assured that what you’re reading is timely and important because it explores aspects of history and social evolution that impact pretty much everyone in the world but that have gone ignored—if not censoriously denied—by most of the eminences contributing to the zeitgeist since the decades following the last world war.

            Still, I suspect many people who take the plunge into the first hundred or so pages are going to feel a bit disoriented as they try to figure out what the real purpose of the book is, and this may cause them to falter in their resolve to finish reading. The problem is that the resistance Better Angels devotes such a prodigious page count to anticipating and responding to doesn’t come from the news media or the blinkered celebrities in the carnivals of sanctimonious imbecility that are political talk shows. It comes from Pinker’s fellow academics. The overall point of Better Angels remains obscure owing to some deliberate caginess on the author’s part when it comes to identifying the true targets of his arguments.

            This evasiveness doesn’t make the book difficult to read, but a quality of diffuseness to the theoretical sections, a multitude of strands left dangling, does at points make you doubt whether Pinker had a clear purpose in writing, which makes you doubt your own purpose in reading. With just a little tying together of those strands, however, you start to see that while on the surface he’s merely righting the misperception that over the course of history our species has been either consistently or increasingly violent, what he’s really after is something different, something bigger. He’s trying to instigate, or at least play a part in instigating, a revolution—or more precisely a renaissance—in the way scholars and intellectuals think not just about human nature but about the most promising ways to improve the lot of human societies.

The longstanding complaint about evolutionary explanations of human behavior is that by focusing on our biology as opposed to our supposedly limitless capacity for learning they imply a certain level of fixity to our nature, and this fixedness is thought to further imply a limit to what political reforms can accomplish. The reasoning goes, if the explanation for the way things are is to be found in our biology, then, unless our biology changes, the way things are is the way they’re going to remain. Since biological change occurs at the glacial pace of natural selection, we’re pretty much stuck with the nature we have. 

            Historically, many scholars have made matters worse for evolutionary scientists today by applying ostensibly Darwinian reasoning to what seemed at the time obvious biological differences between human races in intelligence and capacity for acquiring the more civilized graces, making no secret of their conviction that the differences justified colonial expansion and other forms of oppressive rule. As a result, evolutionary psychologists of the past couple of decades have routinely had to defend themselves against charges that they’re secretly trying to advance some reactionary (or even genocidal) agenda. Considering Pinker’s choice of topic in Better Angels in light of this type of criticism, we can start to get a sense of what he’s up to—and why his efforts are discombobulating.

If you’ve spent any time on a university campus in the past forty years, particularly if it was in a department of the humanities, then you have been inculcated with an ideology that was once labeled postmodernism but that eventually became so entrenched in academia, and in intellectual culture more broadly, that it no longer requires a label. (If you took a class with the word "studies" in the title, then you got a direct shot to the brain.) Many younger scholars actually deny any espousal of it—“I’m not a pomo!”—with reference to a passé version marked by nonsensical tangles of meaningless jargon and the conviction that knowledge of the real world is impossible because “the real world” is merely a collective delusion or social construction put in place to perpetuate societal power structures. The disavowals notwithstanding, the essence of the ideology persists in an inescapable but unremarked obsession with those same power structures—the binaries of men and women, whites and blacks, rich and poor, the West and the rest—and the abiding assumption that texts and other forms of media must be assessed not just according to their truth content, aesthetic virtue, or entertainment value, but also with regard to what we imagine to be their political implications. Indeed, those imagined political implications are often taken as clear indicators of the author’s true purpose in writing, which we must sniff out—through a process called “deconstruction,” or its anemic offspring “rhetorical analysis”—lest we complacently succumb to the subtle persuasion.

In the late nineteenth and early twentieth centuries, faith in what we now call modernism inspired intellectuals to assume that the civilizations of Western Europe and the United States were on a steady march of progress toward improved lives for all their own inhabitants as well as the world beyond their borders. Democracy had brought about a new age of government in which rulers respected the rights and freedom of citizens. Medicine was helping ever more people live ever longer lives. And machines were transforming everything from how people labored to how they communicated with friends and loved ones. Everyone recognized that the driving force behind this progress was the juggernaut of scientific discovery. But jump ahead a hundred years to the early twenty-first century and you see a quite different attitude toward modernity. As Pinker explains in the closing chapter of Better Angels,

A loathing of modernity is one of the great constants of contemporary social criticism. Whether the nostalgia is for small-town intimacy, ecological sustainability, communitarian solidarity, family values, religious faith, primitive communism, or harmony with the rhythms of nature, everyone longs to turn back the clock. What has technology given us, they say, but alienation, despoliation, social pathology, the loss of meaning, and a consumer culture that is destroying the planet to give us McMansions, SUVs, and reality television? (692)

The social pathology here consists of all the inequities and injustices suffered by the people on the losing side of those binaries all us closet pomos go about obsessing over. Then of course there’s industrial-scale war and all the other types of modern violence. With terrorism, the War on Terror, the civil war in Syria, the Israel-Palestine conflict, genocides in the Sudan, Kosovo, and Rwanda, and the marauding bands of drugged-out gang rapists in the Congo, it seems safe to assume that science and democracy and capitalism have contributed to the construction of an unsafe global system with some fatal, even catastrophic design flaws. And that’s before we consider the two world wars and the Holocaust. So where the hell is this decline Pinker refers to in his title?
            One way to think about the strain of postmodernism or anti-modernism with the most currency today (and if you’re reading this essay you can just assume your views have been influenced by it) is that it places morality and politics—identity politics in particular—atop a hierarchy of guiding standards above science and individual rights. So, for instance, concerns over the possibility that a negative image of Amazonian tribespeople might encourage their further exploitation trump objective reporting on their culture by anthropologists, even though there’s no evidence to support those concerns. And evidence that the disproportionate number of men in STEM fields reflects average differences between men and women in lifestyle preferences and career interests is ignored out of deference to a political ideal of perfect parity. The urge to grant moral and political ideals veto power over science is justified in part by all the oppression and injustice that abounds in modern civilizations—sexism, racism, economic exploitation—but most of all it’s rationalized with reference to the violence thought to follow in the wake of any movement toward modernity. Pinker writes,

“The twentieth century was the bloodiest in history” is a cliché that has been used to indict a vast range of demons, including atheism, Darwin, government, science, capitalism, communism, the ideal of progress, and the male gender. But is it true? The claim is rarely backed up by numbers from any century other than the 20th, or by any mention of the hemoclysms of centuries past. (193)

He gives the question even more gravity when he reports that all those other areas in which modernity is alleged to be such a colossal failure tend to improve in the absence of violence. “Across time and space,” he writes in the preface, “the more peaceable societies also tend to be richer, healthier, better educated, better governed, more respectful of their women, and more likely to engage in trade” (xxiii). So the question isn’t just about what the story with violence is; it’s about whether science, liberal democracy, and capitalism are the disastrous blunders we’ve learned to think of them as or whether they still just might hold some promise for a better world.
*******
            It’s in about the third chapter of Better Angels that you start to get the sense that Pinker’s style of thinking is, well, way out of style. He seems to be marching to the beat not of his own drummer but of some drummer from the nineteenth century. In the previous chapter, he drew a line connecting the violence of chimpanzees to that in what he calls non-state societies, and the images he’s left you with are savage indeed. Now he’s bringing in the philosopher Thomas Hobbes’s idea of a government Leviathan that, once established, immediately works to curb the violence that characterizes us humans in states of nature and anarchy. According to sociologist Norbert Elias’s The Civilizing Process, first published in 1939 and translated into English in 1969, a work whose thesis plays a starring role throughout Better Angels, the consolidation of a Leviathan in England set in motion a trend toward pacification, beginning with the aristocracy no less, before spreading down to the lower ranks and radiating out to the countries of continental Europe and thence to other parts of the world. You can treat your feelings of unease in response to Pinker’s civilizing scenario as a proxy for how thoroughly steeped you are in postmodernism.
            The two factors missing from his account of the civilizing pacification of Europe that distinguish it from the self-congratulatory and self-exculpatory sagas of centuries past are the innate superiority of the paler stock and the special mission of conquest and conversion commissioned by a Christian god. In a later chapter, Pinker violates the contemporary taboo against discussing—or even thinking about—the potential role of average group (racial) differences in a propensity toward violence, but he concludes the case for any such differences is unconvincing: “while recent biological evolution may, in theory, have tweaked our inclinations toward violence and nonviolence, we have no good evidence that it actually has” (621). The conclusion that the Civilizing Process can’t be contingent on congenital characteristics follows from the observation of how readily individuals from far-flung regions acquire local habits of self-restraint and fellow-feeling when they’re raised in modernized societies. As for religion, Pinker includes it in a category of factors that are “Important but Inconsistent” with regard to the trend toward peace, dismissing the idea that atheism leads to genocide by pointing out that “Fascism happily coexisted with Catholicism in Spain, Italy, Portugal, and Croatia, and though Hitler had little use for Christianity, he was by no means an atheist, and professed that he was carrying out a divine plan.” Though he cites several examples of atrocities incited by religious fervor, he does credit “particular religious movements at particular times in history” with successfully working against violence (677).

            Despite his penchant for blithely trampling on the taboos of the liberal intelligentsia, Pinker refuses to cooperate with our reflex to pigeonhole him with imperialists or far-right traditionalists past or present. He continually holds up to ridicule the idea that violence has any redeeming effects. In a section on the connection between increasing peacefulness and rising intelligence, he suggests that our violence-tolerant “recent ancestors” can rightly be considered “morally retarded” (658). He singles out George W. Bush as an unfortunate and contemptible counterexample in a trend toward more complex political rhetoric among our leaders. And if one gender comes out looking less virtuous in Better Angels, it ain’t the distaff one. Pinker is difficult to categorize politically because he’s a scientist through and through. What he’s after are reasoned arguments supported by properly weighed evidence.

But there is something going on in Better Angels beyond a mere accounting for the ongoing decline in violence most of us don’t even realize we’re the beneficiaries of. For one, there’s a challenge to the taboo status of topics like genetic differences between groups, or differences between individuals in IQ, or differences between genders. And there’s an implicit challenge as well to the complementary premises, which he took on more directly in his earlier book The Blank Slate, that biological theories of human nature always lead to oppressive politics and that theories of the infinite malleability of human behavior always lead to progress (communism relies on a blank slate theory, and it inspired the likes of Stalin, Mao, and Pol Pot to murder untold millions). But the most interesting and important task Pinker has set for himself with Better Angels is a restoration of the Enlightenment, with its twin pillars of science and individual rights, to its rightful place atop the hierarchy of our most cherished guiding principles, the position we as a society misguidedly allowed to be usurped by postmodernism, with its own dual pillars of relativism and identity politics.

 But, while the book succeeds handily in undermining the moral case against modernism, it does so largely by stealth, with only a few explicit references to the ideologies whose advocates have dogged Pinker and his fellow evolutionary psychologists for decades. Instead, he explores how our moral intuitions and political ideals often inspire us to make profoundly irrational arguments for positions that rational scrutiny reveals to be quite immoral, even murderous. As one illustration of how good causes can be taken to silly, but as yet harmless, extremes, he gives the example of how “violence against children has been defined down to dodgeball” (415) in gym classes all over the US, writing that

The prohibition against dodgeball represents the overshooting of yet another successful campaign against violence, the century-long movement to prevent the abuse and neglect of children. It reminds us of how a civilizing offensive can leave a culture with a legacy of puzzling customs, peccadilloes, and taboos. The code of etiquette bequeathed to us by this and other Rights Revolutions is pervasive enough to have acquired a name. We call it political correctness. (381)

Such “civilizing offensives” are deliberately undertaken counterparts to the fortuitously occurring Civilizing Process Elias proposed to explain the jagged downward slope in graphs of relative rates of violence beginning in the Middle Ages in Europe. The original change Elias describes came about as a result of rulers consolidating their territories and acquiring greater authority. As Pinker explains,

Once Leviathan was in charge, the rules of the game changed. A man’s ticket to fortune was no longer being the baddest knight in the area but making a pilgrimage to the king’s court and currying favor with him and his entourage. The court, basically a government bureaucracy, had no use for hotheads and loose cannons, but sought responsible custodians to run its provinces. The nobles had to change their marketing. They had to cultivate their manners, so as not to offend the king’s minions, and their empathy, to understand what they wanted. The manners appropriate for the court came to be called “courtly” manners or “courtesy.” (75)

And this higher premium on manners and self-presentation among the nobles would lead to a cascade of societal changes.
Elias first lighted on his theory of the Civilizing Process as he was reading some of the etiquette guides which survived from that era. It’s striking to us moderns to see that knights of yore had to be told not to dispose of their snot by shooting it into their host’s table cloth, but that simply shows how thoroughly people today internalize these rules. As Elias explains, they’ve become second nature to us. Of course, we still have to learn them as children. Pinker prefaces his discussion of Elias’s theory with a recollection of his bafflement at why it was so important for him as a child to abstain from using his knife as a backstop to help him scoop food off his plate with a fork. Table manners, he concludes, reside on the far end of a continuum of self-restraint at the opposite end of which are once-common practices like cutting off the nose of a dining partner who insults you. Likewise, protecting children from the perils of flying rubber balls is the product of a campaign against the once-common custom of brutalizing them. The centrality of self-control is the common underlying theme: we control our urge to misuse utensils, including their use in attacking our fellow diners, and we control our urge to throw things at our classmates, even if it’s just in sport. The effect of the Civilizing Process in the Middle Ages, Pinker explains, was that “A culture of honor—the readiness to take revenge—gave way to a culture of dignity—the readiness to control one’s emotions” (72). In other words, diplomacy became more important than deterrence.

            What we’re learning here is that even an evolved mind can adjust to changing incentive schemes. Chimpanzees have to control their impulses toward aggression, sexual indulgence, and food consumption in order to survive in hierarchical bands with other chimps, many of whom are bigger, stronger, and better connected. Much of the violence in chimp populations takes the form of adult males vying for positions in the hierarchy so they can enjoy the perquisites males of lower status must forgo to avoid being brutalized. Lower-ranking males meanwhile bide their time, deferring gratification until they grow stronger or the alpha grows weaker. In humans, the capacity for impulse-control and the habit of delaying gratification are even more important because we live in even more complex societies. Those capacities can either lie dormant or be developed to their full potential depending on exactly how complex the society is in which we come of age. Elias noticed a connection between the move toward more structured bureaucracies, less violence, and an increasing focus on etiquette, and he concluded that self-restraint in the form of adhering to strict codes of comportment was both an advertisement of, and a type of training for, the impulse-control that would make someone a successful bureaucrat.

            Aside from children who can’t fathom why we’d futz with our forks trying to capture recalcitrant peas, we normally take our society’s rules of etiquette for granted, no matter how inconvenient or illogical they are, seldom thinking twice before drawing unflattering conclusions about people who don’t bother adhering to them, the ones for whom they aren’t second nature. And the importance we place on etiquette goes beyond table manners. We judge people according to the discretion with which they dispose of any and all varieties of bodily effluent, as well as the delicacy with which they discuss topics sexual or otherwise basely instinctual. 

            Elias and Pinker’s theory is that, while the particular rules are largely arbitrary, the underlying principle of transcending our animal nature through the application of will, motivated by an appreciation of social convention and the sensibilities of fellow community members, is what marked the transition of certain constituencies of our species from a violent non-state existence to a relatively peaceful, civilized lifestyle. To Pinker, the uptick in violence that ensued once the counterculture of the 1960s came into full blossom was no coincidence. The squares may not have been as exciting as the rock stars who sang their anthems to hedonism and the liberating thrill of sticking it to the man. But a society of squares has certain advantages—a lower probability for each of its citizens of getting beaten or killed foremost among them.

            The Civilizing Process as Elias and Pinker, along with Immanuel Kant, understand it picks up momentum as levels of peace conducive to increasingly complex forms of trade are achieved. To understand why the move toward markets or “gentle commerce” would lead to decreasing violence, we pomos have to swallow—at least momentarily—our animus for Wall Street and all the corporate fat cats in the top one percent of the wealth distribution. The basic dynamic underlying trade is that one person has access to more of something than they need, but less of something else, while another person has the opposite balance, so a trade benefits them both. It’s a win-win, or a positive-sum game. The hard part for educated liberals is to appreciate that economies work to increase total wealth; there isn’t a fixed quantity everyone has to divvy up in a zero-sum game, an exchange in which every gain for one is a loss for another. And Pinker points to another benefit:

Positive-sum games also change the incentives for violence. If you’re trading favors or surpluses with someone, your trading partner suddenly becomes more valuable to you alive than dead. You have an incentive, moreover, to anticipate what he wants, the better to supply it to him in exchange for what you want. Though many intellectuals, following in the footsteps of Saints Augustine and Jerome, hold businesspeople in contempt for their selfishness and greed, in fact a free market puts a premium on empathy. (77)

The Occupy Wall Street crowd will want to jump in here with a lengthy list of examples of businesspeople being unempathetic in the extreme. But Pinker isn’t saying commerce always forces people to be altruistic; it merely encourages them to exercise their capacity for perspective-taking. Discussing the emergence of markets, he writes,

The advances encouraged the division of labor, increased surpluses, and lubricated the machinery of exchange. Life presented people with more positive-sum games and reduced the attractiveness of zero-sum plunder. To take advantage of the opportunities, people had to plan for the future, control their impulses, take other people’s perspectives, and exercise the other social and cognitive skills needed to prosper in social networks. (77)

And these changes, the theory suggests, will tend to make merchants less likely on average to harm anyone. As bad as bankers can be, they’re not out sacking villages.

            Once you have commerce, you also have a need to start keeping records. And once you start dealing with distant partners it helps to have a mode of communication that travels. As writing moved out of the monasteries, and as technological advances in transportation brought more of the world within reach, ideas and innovations collided to inspire sequential breakthroughs and discoveries. Every advance could be preserved, dispersed, and ratcheted up. Pinker focuses on two relatively brief historical periods that witnessed revolutions in the way we think about violence, and both came in the wake of major advances in the technologies involved in transportation and communication. The first is the Humanitarian Revolution that occurred in the second half of the eighteenth century, and the second covers the Rights Revolutions in the second half of the twentieth. The Civilizing Process and gentle commerce weren’t sufficient to end age-old institutions like slavery and the torture of heretics. But then came the rise of the novel as a form of mass entertainment, and with all the training in perspective-taking readers were undergoing the hitherto unimagined suffering of slaves, criminals, and swarthy foreigners became intolerably imaginable. People began to agitate and change ensued.

            The Humanitarian Revolution occurred at the tail end of the Age of Reason and is recognized today as part of the period known as the Enlightenment. According to some scholarly scenarios, the Enlightenment, for all its successes like the American Constitution and the abolition of slavery, paved the way for all those allegedly unprecedented horrors in the first half of the twentieth century. Notwithstanding all this ivory tower traducing, the Enlightenment emerged from dormancy after the Second World War and gradually gained momentum, delivering us into a period Pinker calls the New Peace. Just as the original Enlightenment was preceded by increasing cosmopolitanism, improving transportation, and an explosion of literacy, the transformations that brought about the New Peace followed a burst of technological innovation. For Pinker, this is no coincidence. He writes,

If I were to put my money on the single most important exogenous cause of the Rights Revolutions, it would be the technologies that made ideas and people increasingly mobile. The decades of the Rights Revolutions were the decades of the electronics revolutions: television, transistor radios, cable, satellite, long-distance telephones, photocopiers, fax machines, the Internet, cell phones, text messaging, Web video. They were the decades of the interstate highway, high-speed rail, and the jet airplane. They were the decades of the unprecedented growth in higher education and in the endless frontier of scientific research. Less well known is that they were also the decades of an explosion in book publishing. From 1960 to 2000, the annual number of books published in the United States increased almost fivefold. (477)

Violence got slightly worse in the 60s. But the Civil Rights Movement was underway, Women’s Rights were being extended into new territories, and people even began to acknowledge that animals could suffer, prompting arguments that we shouldn’t make them suffer needlessly. Today the push for Gay Rights continues. By 1990, the uptick in violence was over, and so far the move toward peace is looking like an ever greater success. Ironically, though, all the new types of media bringing images from all over the globe into our living rooms and pockets contribute to the sense that violence is worse than ever.
*******

            Three factors brought about a reduction in violence over the course of history, then: strong government, trade, and communications technology. These factors had the impact they did because they interacted with two of our innate propensities, impulse-control and perspective-taking, giving individuals both the motivation and the wherewithal to develop each to ever greater degrees. It’s difficult to draw a clear line between developments that were driven by chance or coincidence and those driven by deliberate efforts to transform societies. But Pinker does credit political movements based on moral principles with having played key roles:

Insofar as violence is immoral, the Rights Revolutions show that a moral way of life often requires a decisive rejection of instinct, culture, religion, and standard practice. In their place is an ethics that is inspired by empathy and reason and stated in the language of rights. We force ourselves into the shoes (or paws) of other sentient beings and consider their interests, starting with their interest in not being hurt or killed, and we ignore superficialities that may catch our eye such as race, ethnicity, gender, age, sexual orientation, and to some extent, species. (475)

Some of the instincts we must reject in order to bring about peace, however, are actually moral instincts.

Pinker is setting up a distinction here between different kinds of morality. The kind based on perspective-taking—which, evidence he presents later suggests, inspires sympathy—and “stated in the language of rights” is the one he credits with transforming the world for the better. Of the idea that superficial differences shouldn’t distract us from our common humanity, he writes,

This conclusion, of course, is the moral vision of the Enlightenment and the strands of humanism and liberalism that have grown out of it. The Rights Revolutions are liberal revolutions. Each has been associated with liberal movements, and each is currently distributed along a gradient that runs, more or less, from Western Europe to the blue American states to the red American states to the democracies of Latin America and Asia and then to the more authoritarian countries, with Africa and most of the Islamic world pulling up the rear. In every case, the movements have left Western cultures with excesses of propriety and taboo that are deservedly ridiculed as political correctness. But the numbers show that the movements have reduced many causes of death and suffering and have made the culture increasingly intolerant of violence in any form. (475-6)

So you’re not allowed to play dodgeball at school or tell off-color jokes at work, but that’s a small price to pay. The most remarkable part of this passage, though, is that gradient he describes; it suggests the most violent regions of the globe are also the ones where people are the most obsessed with morality, with things like Sharia and so-called family values. It also suggests that academic complaints about the evils of Western culture are unfounded and startlingly misguided. As Pinker casually points out in his section on Women’s Rights, “Though the United States and other Western nations are often accused of being misogynistic patriarchies, the rest of the world is immensely worse” (413).
Jonathan Haidt
            The Better Angels of Our Nature came out about a year before Jonathan Haidt’s The Righteous Mind, but Pinker’s book beats Haidt’s to the punch by identifying a serious flaw in Haidt’s reasoning. The Righteous Mind explores how liberals and conservatives conceive of morality differently, and Haidt argues that each conception is equally valid so we should simply work to understand and appreciate opposing political views. It’s not like you’re going to change anyone’s mind anyway, right? But the liberal ideal of resisting certain moral intuitions tends to bring about a rather important change wherever it’s allowed to be realized. Pinker writes that

right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence. And that retraction is precisely the agenda of classical liberalism: a freedom of individuals from tribal and authoritarian force, and a tolerance of personal choices as long as they do not infringe on the autonomy and well-being of others. (637)

Classical liberalism—which Pinker distinguishes from contemporary political liberalism—can even be viewed as an effort to move morality away from the realm of instincts and intuitions into the more abstract domains of law and reason. The perspective-taking at the heart of Enlightenment morality can be said to consist of abstracting yourself from your identifying characteristics and immediate circumstances to imagine being someone else in unfamiliar straits. A man with a job imagines being a woman who can’t get one. A white man on good terms with law enforcement imagines being a black man who gets harassed. This practice of abstracting experiences and distilling individual concerns down to universal principles is the common thread connecting Enlightenment morality to science.

            So it’s probably no coincidence, Pinker argues, that as we’ve gotten more peaceful, people in Europe and the US have been getting better at abstract reasoning as well, a trend which has been going on for as long as researchers have had tests to measure it. Psychologists over the course of the twentieth century have had to re-norm IQ tests a few points every generation (the average is kept at 100 by design) because scores on a few subsets of questions have kept going up. The regular rising of scores is known as the Flynn Effect, after psychologist James Flynn, who was one of the first researchers to realize the trend was more than methodological noise. Having posited a possible connection between scientific and moral reasoning, Pinker asks, “Could there be a moral Flynn Effect?” He explains,

We have several grounds for supposing that enhanced powers of reason—specifically, the ability to set aside immediate experience, detach oneself from a parochial vantage point, and frame one’s ideas in abstract, universal terms—would lead to better moral commitments, including an avoidance of violence. And we have just seen that over the course of the 20th century, people’s reasoning abilities—particularly their ability to set aside immediate experience, detach themselves from a parochial vantage point, and think in abstract terms—were steadily enhanced. (656)

Pinker cites evidence from an array of studies showing that high-IQ people tend to have high moral IQs as well. One of them, an infamous study by psychologist Satoshi Kanazawa based on data from over twenty thousand young adults in the US, demonstrates that exceptionally intelligent people tend to hold a particular set of political views. And just as Pinker finds it necessary to distinguish between two different types of morality he suggests we also need to distinguish between two different types of liberalism:

Intelligence is expected to correlate with classical liberalism because classical liberalism is itself a consequence of the interchangeability of perspectives that is inherent to reason itself. Intelligence need not correlate with other ideologies that get lumped into contemporary left-of-center political coalitions, such as populism, socialism, political correctness, identity politics, and the Green movement. Indeed, classical liberalism is sometimes congenial to the libertarian and anti-political-correctness factions in today’s right-of-center coalitions. (662)

And Kanazawa’s findings bear this out. It’s not liberalism in general that increases steadily with intelligence, but a particular kind of liberalism, the type focusing more on fairness than on ideology.
*******

Following the chapters devoted to historical change, from the early Middle Ages to the ongoing Rights Revolutions, Pinker includes two chapters on psychology, the first on our “Inner Demons” and the second on our “Better Angels.” Ideology gets some prime real estate in the Demons chapter, because, he writes, “the really big body counts in history pile up” when people believe they’re serving some greater good. “Yet for all that idealism,” he explains, “it’s ideology that drove many of the worst things that people have ever done to each other.” Christianity, Nazism, communism—they all “render opponents of the ideology infinitely evil and hence deserving of infinite punishment” (556). Pinker’s discussion of morality, on the other hand, is more complicated. It begins, oddly enough, in the Demons chapter, but stretches into the Angels one as well. This is how the section on morality in the Angels chapter begins:

The world has far too much morality. If you added up all the homicides committed in pursuit of self-help justice, the casualties of religious and revolutionary wars, the people executed for victimless crimes and misdemeanors, and the targets of ideological genocides, they would surely outnumber the fatalities from amoral predation and conquest. The human moral sense can excuse any atrocity in the minds of those who commit it, and it furnishes them with motives for acts of violence that bring them no tangible benefit. The torture of heretics and conversos, the burning of witches, the imprisonment of homosexuals, and the honor killing of unchaste sisters and daughters are just a few examples. (622)

The postmodern push to give precedence to moral and political considerations over science, reason, and fairness may seem like a good idea at first. But political ideologies can’t be defended on the grounds of their good intentions—they all have those. And morality has historically caused more harm than good. It’s only the minimalist, liberal morality that has any redemptive promise:

Though the net contribution of the human moral sense to human well-being may well be negative, on those occasions when it is suitably deployed it can claim some monumental advances, including the humanitarian reforms of the Enlightenment and the Rights Revolutions of recent decades. (622)

            One of the problems with ideologies Pinker explores is that they lend themselves too readily to for-us-or-against-us divisions which piggyback on all our tribal instincts, leading to dehumanization of opponents as a step along the path to unrestrained violence. But, we may ask, isn’t the Enlightenment just another ideology? If not, is there some reliable way to distinguish an ideological movement from a “civilizing offensive” or a “Rights Revolution”? Pinker doesn’t answer these questions directly, but it’s in his discussion of the demonic side of morality that Better Angels offers its most profound insights—and it’s also where we start to be able to piece together the larger purpose of the book. He writes,

In The Blank Slate I argued that the modern denial of the dark side of human nature—the doctrine of the Noble Savage—was a reaction against the romantic militarism, hydraulic theories of aggression, and glorification of struggle and strife that had been popular in the late 19th and early 20th centuries. Scientists and scholars who question the modern doctrine have been accused of justifying violence and have been subjected to vilification, blood libel, and physical assault. The Noble Savage myth appears to be another instance of an antiviolence movement leaving a cultural legacy of propriety and taboo. (488)

Since Pinker figured that what he and his fellow evolutionary psychologists kept running up against was akin to the repulsion people feel toward poor table manners or kids winging balls at each other in gym class, he reasoned that he ought to be able to simply explain to the critics that evolutionary psychologists have no intention of justifying, or even encouraging complacency toward, the dark side of human nature. “But I am now convinced,” he writes after more than a decade of trying to explain himself, “that a denial of the human capacity for evil runs even deeper, and may itself be a feature of human nature” (488). That feature, he goes on to explain, makes us feel compelled to label as evil anyone who tries to explain evil scientifically—because evil as a cosmic force beyond the reach of human understanding plays an indispensable role in group identity.

Roy Baumeister
            Pinker began to fully appreciate the nature of the resistance to letting biology into discussions of human harm-doing when he read about the work of psychologist Roy Baumeister exploring the wide discrepancies in accounts of anger-inducing incidents between perpetrators and victims. The first studies looked at responses to minor offenses, but Baumeister went on to present evidence that the pattern, which Pinker labels the “Moralization Gap,” can be scaled up to describe societal attitudes toward historical atrocities. Pinker explains,

The Moralization Gap consists of complementary bargaining tactics in the negotiation for recompense between a victim and a perpetrator. Like opposing counsel in a lawsuit over a tort, the social plaintiff will emphasize the deliberateness, or at least the depraved indifference, of the defendant’s action, together with the pain and suffering the plaintiff endures. The social defendant will emphasize the reasonableness or unavoidability of the action, and will minimize the plaintiff’s pain and suffering. The competing framings shape the negotiations over amends, and also play to the gallery in a competition for their sympathy and for a reputation as a responsible reciprocator. (491)

Another of the Inner Demons Pinker suggests plays a key role in human violence is the drive for dominance, which he explains operates not just at the level of the individual but at that of the group to which he or she belongs. We want our group, however we understand it in the immediate context, to rest comfortably atop a hierarchy of other groups. What happens is that the Moralization Gap gets mingled with this drive to establish individual and group superiority. You see this dynamic playing out even in national conflicts. Pinker points out,

The victims of a conflict are assiduous historians and cultivators of memory. The perpetrators are pragmatists, firmly planted in the present. Ordinarily we tend to think of historical memory as a good thing, but when the events being remembered are lingering wounds that call for redress, it can be a call to violence. (493)

Name a conflict and with little effort you’ll likely also be able to recall contentions over historical records associated with it.

            The outcome of the Moralization Gap being taken to the group historical level is what Pinker and Baumeister call the “Myth of Pure Evil.” Harm-doing narratives start to take on religious overtones as what began as a conflict between regular humans pursuing or defending their interests, in ways they probably reasoned were just, transforms into an eternal struggle against inhuman and sadistic agents of chaos. And Pinker has come to realize that it is this Myth of Pure Evil that behavioral scientists ineluctably end up blaspheming:

Baumeister notes that in the attempt to understand harm-doing, the viewpoint of the scientist or scholar overlaps with the viewpoint of the perpetrator. Both take a detached, amoral stance toward the harmful act. Both are contextualizers, always attentive to the complexities of the situation and how they contributed to the causation of the harm. And both believe that the harm is ultimately explicable. (495)

This is why evolutionary psychologists who study violence inspire what Pinker in The Blank Slate called “political paranoia and moral exhibitionism” (106) on the part of us naïve pomos, ravenously eager to showcase our valor by charging once more into the breach against the mythical malevolence. All the while, our impregnable assurance of our own righteousness is born of the conviction that we’re standing up for the oppressed. Pinker writes,

The viewpoint of the moralist, in contrast, is the viewpoint of the victim. The harm is treated with reverence and awe. It continues to evoke sadness and anger long after it was perpetrated. And for all the feeble ratiocination we mortals throw at it, it remains a cosmic mystery, a manifestation of the irreducible and inexplicable existence of evil in the universe. Many chroniclers of the Holocaust consider it immoral even to try to explain it. (495-6)

We simply can’t help inflating the magnitude of the crime in our attempt to convince our ideological opponents of their folly—though what we’re really inflating is our own, and our group’s, glorification—and so we can’t abide anyone puncturing our overblown conception because doing so lends credence to the opposition, making us look a bit foolish in the process for all our exaggerations.

            Reading Better Angels, you get the sense that Pinker experienced some genuine surprise and some real delight in discovering more and more corroboration for the idea that rates of violence have been trending downward in nearly every domain he explored. But things get tricky as you proceed through the pages because many of his arguments take on opposing positions he avoids naming. He seems to have seen the trove of evidence for declining violence as an opportunity to outflank the critics of evolutionary psychology in leftist, postmodern academia (to use a martial metaphor). Instead of calling them out directly, he circles around to chip away at the moral case for their political mission. We see this, for example, in his discussion of rape, which psychologists get into all kinds of trouble for trying to explain. After examining how scientists seem to be taking the perspective of perpetrators, Pinker goes on to write,

The accusation of relativizing evil is particularly likely when the motive the analyst imputes to the perpetrator appears to be venial, like jealousy, status, or retaliation, rather than grandiose, like the persistence of suffering in the world or the perpetuation of race, class, or gender oppression. It is also likely when the analyst ascribes the motive to every human being rather than to a few psychopaths or to the agents of a malignant political system (hence the popularity of the doctrine of the Noble Savage). (496)

In his earlier section on Women’s Rights and the decline of rape, he attributed the difficulty in finding good data on the incidence of the crime, as well as some of the “preposterous” ideas about what motivates it, to the same kind of overextensions of anti-violence campaigns that lead to arbitrary rules about the use of silverware and proscriptions against dodgeball:

Common sense never gets in the way of a sacred custom that has accompanied a decline in violence, and today rape centers unanimously insist that “rape or sexual assault is not an act of sex or lust—it’s about aggression, power, and humiliation, using sex as the weapon. The rapist’s goal is domination.” (To which the journalist Heather MacDonald replies: “The guys who push themselves on women at keggers are after one thing only, and it’s not a reinstatement of the patriarchy.”) (406)

Jumping ahead to Pinker’s discussion of the Moralization Gap, we see that the theory that rape is about power, as opposed to the much more obvious theory that it’s about sex, is an outgrowth of the Myth of Pure Evil, an inflation of the mundane drives that lead some pathetic individuals to commit horrible crimes into eternal cosmic forces, inscrutable and infinitely punishable.

            When feminists impute political motives to rapists, they’re crossing the boundary from Enlightenment morality to the type of moral ideology that inspires dehumanization and violence. The good news is that it’s not difficult to distinguish between the two. From the Enlightenment perspective, rape is indefensibly wrong because it violates the autonomy of the victim—it’s an act of violence perpetrated by one individual against another. From the ideological perspective, every rape must be understood in the context of the historical oppression of women by men; it transcends the individuals involved as a representation of a greater evil. The rape-as-a-political-act theory also comes dangerously close to implying a type of collective guilt, which is a clear violation of individual rights.

Scholars already make the distinction between three different waves of feminism. The first two fall within Pinker’s definition of Rights Revolutions; they encompassed pushes for suffrage, marriage rights, and property rights, and then the rights to equal pay and equal opportunity in the workplace. The third wave is avowedly postmodern, its advocates committed to the ideas that gender is a pure social construct and that suggesting otherwise is an act of oppression. What you come away from Better Angels realizing, even though Pinker doesn’t say it explicitly, is that somewhere between the second and third waves feminists effectively turned against the very ideas and institutions that had been most instrumental in bringing about the historical improvements in women’s lives from the Middle Ages to the turn of the twenty-first century. And so it is with all the other ideologies on the postmodern roster.

Another misguided propaganda tactic that dogged Pinker’s efforts to identify historical trends in violence can likewise be understood as an instance of inflating the severity of crimes on behalf of a moral ideology, complete with the attendant taboo against puncturing the bubble or vitiating the purity of evil with evidence and theories of venial motives. As he explains in the preface, “No one has ever recruited activists to a cause by announcing that things are getting better, and bearers of good news are often advised to keep their mouths shut lest they lull people into complacency” (xxii). Here again the objective researcher can’t escape the appearance of trying to minimize the evil, and therefore risks being accused of looking the other way, or even of complicity. But in an earlier section on genocide Pinker provides the quintessential Enlightenment rationale for the clear-eyed scientific approach to studying even the worst atrocities. He writes,

The effort to whittle down the numbers that quantify the misery can seem heartless, especially when the numbers serve as propaganda for raising money and attention. But there is a moral imperative in getting the facts right, and not just to maintain credibility. The discovery that fewer people are dying in wars all over the world can thwart cynicism among compassion-fatigued news readers who might otherwise think that poor countries are irredeemable hellholes. And a better understanding of what drove the numbers down can steer us toward doing things that make people better off rather than congratulating ourselves on how altruistic we are. (320)

This passage can be taken as the underlying argument of the whole book. And it gestures toward some far-reaching ramifications of the idea that exaggerated numbers are a product of the same impulse that causes us to inflate crimes to the status of pure evil.

Could it be that the nearly universal misperception that violence is getting worse all over the world, that we’re doomed to global annihilation, and that everywhere you look is evidence of the breakdown in human decency—could it be that the false impression Pinker set out to correct with Better Angels is itself a manifestation of a natural urge in all of us to seek out evil and aggrandize ourselves by unconsciously overestimating it? Pinker himself never goes as far as suggesting the mass ignorance of waning violence is a byproduct of an instinct toward self-righteousness. Instead, he writes of the “gloom” about the fate of humanity,

I think it comes from the innumeracy of our journalistic and intellectual culture. The journalist Michael Kinsley recently wrote, “It is a crushing disappointment that Boomers entered adulthood with Americans killing and dying halfway around the world, and now, as Boomers reach retirement and beyond, our country is doing the same damned thing.” This assumes that 5,000 Americans dying is the same damned thing as 58,000 Americans dying, and that a hundred thousand Iraqis being killed is the same damned thing as several million Vietnamese being killed. If we don’t keep an eye on the numbers, the programming policy “If it bleeds it leads” will feed the cognitive shortcut “The more memorable, the more frequent,” and we will end up with what has been called a false sense of insecurity. (296)

Pinker probably has a point, but the self-righteous undertone of Kinsley’s “same damned thing” is unmistakable. He’s effectively saying, I’m such an outstanding moral being the outrageous evilness of the invasion of Iraq is blatantly obvious to me—why isn’t it to everyone else? And that same message seems to underlie most of the statements people make expressing similar sentiments about how the world is going to hell.

            Though Pinker neglects to tie all the strands together, he still manages to suggest that the drive to dominance, ideology, tribal morality, and the Myth of Pure Evil are all facets of the same disastrous flaw in human nature—an instinct for self-righteousness. Progress on the moral front—real progress like fewer deaths, less suffering, and more freedom—comes from something much closer to utilitarian pragmatism than activist idealism. Yet the activist tradition is so thoroughly enmeshed in our university culture that we’re taught to exercise our powers of political righteousness even while engaging in tasks as mundane as reading books and articles. 

            If the decline in violence and the improvement of the general weal in various other areas are attributable to the Enlightenment, then many of the assumptions underlying postmodernism are turned on their heads. If social ills like warfare, racism, sexism, and child abuse exist in cultures untouched by modernism—and they in fact not only exist but tend to be much worse—then science can’t be responsible for creating them; indeed, if they’ve all trended downward with the historical development of all the factors associated with male-dominated western culture, including strong government, market economies, runaway technology, and scientific progress, then postmodernism not only has everything wrong but threatens the progress achieved by the very institutions it depends on, emerged from, and squanders innumerable scholarly careers maligning.

Of course some Enlightenment figures and some scientists do evil things. Of course living even in the most Enlightened of civilizations is no guarantee of safety. But postmodernism is an ideology based on the premise that we ought to discard a solution to our societal woes for not working perfectly and immediately, substituting instead remedies that have historically caused orders of magnitude more problems than they solved. The argument that there’s a core to the Enlightenment that some of its representatives have been faithless to when they committed atrocities may seem reminiscent of apologies for Christianity based on the fact that Crusaders and Inquisitors weren’t loving their neighbors as Christ enjoined. The difference is that the Enlightenment works—in just a few centuries it’s transformed the world and brought about a reduction in violence no religion has been able to match in millennia. If anything, the big monotheistic religions brought about more violence.

Embracing Enlightenment morality or classical liberalism doesn’t mean we should give up our efforts to make the world a better place. As Pinker describes the transformation he hopes to encourage with Better Angels,

As one becomes aware of the decline of violence, the world begins to look different. The past seems less innocent; the present less sinister. One starts to appreciate the small gifts of coexistence that would have seemed utopian to our ancestors: the interracial family playing in the park, the comedian who lands a zinger on the commander in chief, the countries that quietly back away from a crisis instead of escalating to war. The shift is not toward complacency: we enjoy the peace we find today because people in past generations were appalled by the violence in their time and worked to reduce it, and so we should work to reduce the violence that remains in our time. Indeed, it is a recognition of the decline of violence that best affirms that such efforts are worthwhile. (xxvi)

Our task for the remainder of this century is to extend the reach of science, literacy, and the recognition of universal human rights farther and farther along the Enlightenment gradient until they're able to grant the same increasing likelihood of a long peaceful life to every citizen of every nation of the globe. And since the key to accomplishing this task lies in fomenting future Rights Revolutions while at the same time recognizing, so as to be better equipped to rein in, our drive for dominance as manifested in our more deadly moral instincts, I for one am glad Steven Pinker has the courage to violate so many of the outrageously counterproductive postmodern taboos while having the grace to resist succumbing himself, for the most part, to the temptation of self-righteousness.




Projecting Power, Competing for Life, & Supply Side Math


Some issues I feel are being skirted in the debates:
1. How the Toughest Guy Projects his Power
The Republican position on national security is that the best way to achieve peace is by “projecting power,” and they are fond of saying that Democrats invite aggression by “projecting weakness.” The idea is that no one will start a fight he knows he won’t win, nor will he threaten to start a fight with someone he knows will call his bluff. This is why Republican presidents often suffer from Cowboy Syndrome.
In certain individual relationships, this type of dynamic actually does establish itself—or rather the dominant individual establishes this type of dynamic. But in the realm of national security we aren’t dealing with individuals. With national security, we’re basically broadcasting to the other nations of the world how much respect we have for them. If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.
The idea that shouting out to the world that the US is the toughest country around and we’re ready to prove it is somehow going to deter Al Qaeda militants and others like them is dangerously naïve. We can’t hope for all the nations of the world to fall into some analog of Battered Wife Syndrome. Think about it this way: everyone knows that the heavyweight champion MMA guy is the toughest fighter in the world. If you want to project power, there’s no better way to do it than by winning that belt. Now we have to ask ourselves: Do fewer people want to fight the champion? We might also ask: Does a country like Canada get attacked more because of its lame military?
The very reason organizations like Al Qaeda ever came into existence was that America was projecting its power too much. The strategy of projecting power may as well have been devised by teenage boys—and it continues to appeal to people with that mindset.


2. Supplying more Health and Demanding not to Die

Paul Ryan knows that his voucher system for Medicare is going to run into the problem that increasing healthcare costs will quickly surpass whatever amount is allotted to individuals in the vouchers—that’s the source of the savings the program achieves. But it’s not that he wants to shortchange seniors. Rather, he’s applying a principle from his economic ideology, the one that says the best way to control costs is to make providers compete. If people can shop around, the reasoning goes, they’ll flock toward the provider with the lowest prices—the same way we all do with consumer products. Over time, all the providers have to find ways to become more efficient so they can cut costs and stay in business.
Sounds good, right? But the problem is that healthcare services aren’t anything like consumer goods. Supply and demand doesn’t work in the realm of life and death. Maybe, before deciding which insurance company should get our voucher, we’ll do some research. But how do you know what types of services you’re going to need before you sign up? You’re not going to find out that your plan doesn’t cover the service you need until you need the service. And at that point the last thing you’re going to want to do is start shopping around again. Think about it: people shop around for private insurance now—are insurance companies paragons of efficiency?
Another problem is that you can’t shop around to find better services once industry standards have set in. For example—if you don’t like how impersonal your cell phone service is, can you just drop your current provider and go to another? If you do, you’re just going to run into the same problem again. What’s the lowest price you can pay for cable or internet services? The reason Comcast and Dish Network keep going back and forth with their commercials about whose service is better is that there is fundamentally very little difference.
Finally, insurance is so complicated that only people who can afford accountants or financial advisors, only people who are educated and have the time to research their options, basically only people with resources are going to be able to make prudent decisions. This is why the voucher system, over time, is just going to lead to further disadvantages for the poor and uneducated, bring about increased inequality, and exacerbate all the side effects of inequality, like increased violent crime.


3. Demand Side Never Shows up for the Debate

Tax Cuts Don't Lead to Economic Growth
The reason Romney and Ryan aren’t specifying how they’re going to pay for their tax cuts, while at the same time increasing the budget for the military, while at the same time decreasing the deficit, is that they believe, again based on their economic ideology, that the tax cuts will automatically lead to economic growth. The reasoning is that if people have more money after taxes, they’ll be more likely to spend it. This includes business owners who will put the money toward expanding their businesses, which of course entails hiring new workers. All this cycles around to more money for everyone, more people paying that smaller percentage but on larger incomes, so more revenue comes in, and now we can sit back and watch the deficit go down. This is classic supply side economics.
Sounds good, right? The problem is that businesses only invest in expansion when there’s increasing demand for their products or services, and the tax cuts for lower earners won’t be enough to significantly increase that demand. If there's no demand, rich people don't invest and hire; they buy bigger houses and such. The supply side theory has been around for a long time—and it simply doesn’t work. The only reliable outcome of supply side policies is increasing wealth inequality.
What works is increasing demand—that’s demand side economics. You do this by investing in education, public resources, and infrastructure. Those construction workers building roads and bridges and maintaining parks and monuments get jobs when their companies are hired by the government—meaning they get paid with our tax money. Of course, they get taxed on it, thus helping to support more projects. Meanwhile, unemployment goes down by however many people are hired. These people have more income, and thus create more demand. The business owners expand their businesses—hire more people. As the economy grows, the government can scale back its investment.
Demand side economics can also focus on human capital—including healthcare, because it’s hard to work when you’re sick or dying, and you’re not going to be creating any demand when you’re bankrupt from hospital and insurance payments. Government can also help the economy by investing in people’s education, because educated people tend to get better jobs, make more money, and—wait for it—create more demand. (Not to mention innovation.) Job training can work the same way.
Supply side versus demand side is at the heart of most policy debates. The supply side ideology has all kinds of popular advocates, from Ayn Rand to Rush Limbaugh. The demand siders seem more mum, but that might just be because I live in Indiana. In any case, the demand siders have much better evidence supporting their ideas, even though they lose in terms of rhetoric, as the knee-jerk response to their ideas is to (stupidly, inaccurately) label them socialist. As Bill Clinton pointed out and the fact checkers corroborated, Democrats do a better job creating jobs.

4. Climate Change?

The Imp of the Underground and the Literature of Low Status

Image courtesy of Block Magazine
The one overarching theme in literature, and I mean all literature since there’s been any to speak of, is injustice. Does the girl get the guy she deserves? If so, the work is probably commercial, as opposed to literary, fiction. If not, then the reason bears pondering. Maybe she isn’t pretty enough, despite her wit and aesthetic sophistication, so we’re left lamenting the shallowness of our society’s males. Maybe she’s of a lower caste, despite her unassailable virtue, in which case we’re forced to question our complacency before morally arbitrary class distinctions. Or maybe the timing was just off—cursed fate in all her fickleness. Another literary work might be about a woman who ends up without the fulfilling career she longed for and worked hard to get, in which case we may blame society’s narrow conception of femininity, as evidenced by all those damn does-the-girl-get-the-guy stories.

            The prevailing theory of what arouses our interest in narratives focuses on the characters’ goals, which magically, by some as yet undiscovered cognitive mechanism, become our own. But plots often catch us up before any clear goals are presented to us, and our partisanship on behalf of a character easily endures shifting purposes. We as readers and viewers are not swept into stories through the transubstantiation of someone else’s striving into our own, with the protagonist serving as our avatar as we traverse the virtual setting and experience the pre-orchestrated plot. Rather, we reflexively monitor the character for signs of virtue and for a capacity to contribute something of value to his or her community, the same way we, in our nonvirtual existence, would monitor and assess a new coworker, classmate, or potential date. While suspense in commercial fiction hinges on high-stakes struggles between characters easily recognizable as good and those easily recognizable as bad, and comfortably condemnable as such, forward momentum in literary fiction—such as it is—depends on scenes in which the protagonist is faced with temptations, tests of virtue, moral dilemmas.

The strain and complexity of coming to some sort of resolution to these dilemmas often serves as a theme in itself, a comment on the mad world we live in, where it’s all but impossible to discern between right and wrong. Indeed, the most common emotional struggle depicted in literature is that between the informal, even intimate handling of moral evaluation—which comes naturally to us owing to our evolutionary heritage as a group-living species—and the official, systematized, legal or institutional channels for determining merit and culpability that became unavoidable as societies scaled up exponentially after the advent of agriculture. These burgeoning impersonal bureaucracies are all too often ill-equipped to properly weigh messy mitigating factors, and they’re all too vulnerable to subversion by unscrupulous individuals who know how to game them. Psychopaths who ought to be in prison instead become CEOs of multinational investment firms, while sensitive and compassionate artists and humanitarians wind up taking lowly day jobs at schools or used book stores. But the feature of institutions and bureaucracies—and of complex societies more generally—that takes the biggest toll on our Pleistocene psyches, the one that strikes us as the most glaring injustice, is their stratification, their arrangement into steeply graded hierarchies.

Unlike our hierarchical ape cousins, the members of all present-day societies still living in small groups as nomadic foragers, like those our ancestors lived in throughout the epoch that gave rise to the suite of traits we recognize as uniquely human, collectively enforce an ethos of egalitarianism. As anthropologist Christopher Boehm explains in his book Hierarchy in the Forest: The Evolution of Egalitarianism,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

Since humans evolved from a species that was ancestral to both chimpanzees and gorillas, we carry in us many of the emotional and behavioral capacities that support hierarchies. But, during all those millennia of egalitarianism, we also developed an instinctive distaste for behaviors that undermine an individual’s personal sovereignty. “On their list of serious moral transgressions,” Boehm explains,

hunter-gatherers regularly proscribe the enactment of behavior that is politically overbearing. They are aiming at upstarts who threaten the autonomy of other group members, and upstartism takes various forms. An upstart may act the bully simply because he is disposed to dominate others, or he may become selfishly greedy when it is time to share meat, or he may want to make off with another man’s wife by threat or by force. He (or sometimes she) may also be a respected leader who suddenly begins to issue direct orders… An upstart may simply take on airs of superiority, or may aggressively put others down and thereby violate the group’s idea of how its main political actors should be treating one another. (43)

In a band of thirty people, it’s possible to keep a vigilant eye on everyone and head off potential problems. But, as populations grow, encounters with strangers in settings where no one knows one another open the way for threats to individual autonomy and casual insults to personal dignity. And, as professional specialization and institutional complexity increase in pace with technological advancement, power structures become necessary for efficient decision-making. Economic inequality then takes hold as a corollary of professional inequality.

None of this is to suggest that the advance of civilization inevitably leads to increasing injustice. In fact, per capita murder rates are much higher in hunter-gatherer societies. Nevertheless, the impersonal nature of our dealings with others in the modern world often strikes us as overly conducive to perverse incentives and unfair outcomes. And even the most mundane signals of superior status or the most subtle expressions of power, though officially sanctioned, can be maddening. Compare this famous moment in literary history to Boehm’s account of hunter-gatherer political philosophy:

I was standing beside the billiard table, blocking the way unwittingly, and he wanted to pass; he took me by the shoulders and silently—with no warning or explanation—moved me from where I stood to another place, and then passed by as if without noticing. I could have forgiven a beating, but I simply could not forgive his moving me and in the end just not noticing me. (49)

The billiard player’s failure to acknowledge the narrator’s autonomy outrages him, and he considers attacking the man who has treated him with such disrespect. But he can’t bring himself to do it. He explains,

I turned coward not from cowardice, but from the most boundless vanity. I was afraid, not of six-foot-tallness, nor of being badly beaten and chucked out the window; I really would have had physical courage enough; what I lacked was sufficient moral courage. I was afraid that none of those present—from the insolent marker to the last putrid and blackhead-covered clerk with a collar of lard who was hanging about there—would understand, and that they would all deride me if I started protesting and talking to them in literary language. Because among us to this day it is impossible to speak of a point of honor—that is, not honor, but a point of honor (point d’honneur) otherwise than in literary language. (50)

The languages of law and practicality are the only ones whose legitimacy is recognized in modern societies. The language of morality used to describe sentiments like honor has been consigned to literature. This man wants to exact his revenge for the slight he suffered, but that would require his revenge to be understood by witnesses as such. The derision he can count on from all the bystanders would just compound the slight. In place of a close-knit moral community, there is only a loose assortment of strangers. And so he has no recourse.

            The character in this scene could be anyone. Males may be more keyed into the physical dimension of domination and more prone to react with physical violence, but females likewise suffer from slights and belittlements, and react aggressively, often by attacking their tormenter's reputation through gossip. Treating a person of either gender as an insensate obstacle is easier when that person is a stranger you’re unlikely ever to encounter again. But another dynamic is at play in the scene which makes it still easier—almost inevitable. After being unceremoniously moved aside, the narrator becomes obsessed with the man who treated him so dismissively. Desperate to even the score, he ends up stalking the man, stewing resentfully, trying to come up with a plan. He writes,

And suddenly… suddenly I got my revenge in the simplest, the most brilliant way! The brightest idea suddenly dawned on me. Sometimes on holidays I would go to Nevsky Prospect between three and four, and stroll along the sunny side. That is, I by no means went strolling there, but experienced countless torments, humiliations and risings of bile: that must have been just what I needed. I darted like an eel among the passers-by, in a most uncomely fashion, ceaselessly giving way now to generals, now to cavalry officers and hussars, now to ladies; in those moments I felt convulsive pains in my heart and a hotness in my spine at the mere thought of the measliness of my attire and the measliness and triteness of my darting little figure. This was a torment of torments, a ceaseless, unbearable humiliation from the thought, which would turn into a ceaseless and immediate sensation, of my being a fly before that whole world, a foul, obscene fly—more intelligent, more developed, more noble than everyone else—that went without saying—but a fly, ceaselessly giving way to everyone, humiliated by everyone, insulted by everyone. (52)

So the indignity, it seems, was born not of being moved aside like a piece of furniture so much as of being afforded absolutely no status. That’s why being beaten would have been preferable; a beating implies a modicum of worthiness in that it demands recognition, effort, even risk, no matter how slight.

            The idea that occurs to the narrator for the perfect revenge requires that he first remedy the outward signals of his lower social status, “the measliness of my attire and the measliness… of my darting little figure,” as he calls them. The catch is that to don the proper attire for leveling a challenge, he has to borrow money from a man he works with—which only adds to his daily feelings of humiliation. Psychologists Derek Rucker and Adam Galinsky have conducted experiments demonstrating that people display a disturbing readiness to compensate for feelings of powerlessness and low status by making pricey purchases, even though in the long run such expenditures only serve to perpetuate their lowly economic and social straits. The irony is heightened in the story when the actual revenge itself, the trappings for which were so dearly purchased, turns out to be so bathetic.

Suddenly, within three steps of my enemy, I unexpectedly decided, closed my eyes, and—we bumped solidly shoulder against shoulder! I did not yield an inch and passed by on perfectly equal footing! He did not even look back and pretended not to notice: but he only pretended, I’m sure of that. To this day I’m sure of it! Of course, I got the worst of it; he was stronger, but that was not the point. The point was that I had achieved my purpose, preserved my dignity, yielded not a step, and placed myself publicly on an equal social footing with him. I returned home perfectly avenged for everything. (55)

But this perfect vengeance has cost him not only the price of a new coat and hat; it has cost him a full two years of obsession, anguish, and insomnia as well. The implication is that being of lowly status is a constant psychological burden, one that makes people so crazy they become incapable of making rational decisions.

            Literature buffs will have recognized these scenes from Dostoevsky’s Notes from Underground (as translated by Richard Pevear and Larissa Volokhonsky), which satirizes the idea of a society based on the principle of “rational egoism,” as symbolized by N.G. Chernyshevsky’s image of a “crystal palace” (25), a well-ordered utopia in which every citizen pursues his or her own rational self-interest. Dostoevsky’s underground man hates the idea because, regardless of how effectively such a society may satisfy people’s individual needs, the rigid conformity it would demand would be intolerable. The supposed utopia, then, could never satisfy people’s true interests. He argues,

That’s just the thing, gentlemen, that there may well exist something that is dearer for almost every man than his very best profit, or (so as not to violate logic) that there is this one most profitable profit (precisely the omitted one, the one we were just talking about), which is chiefer and more profitable than all other profits, and for which a man is ready, if need be, to go against all laws, that is, against reason, honor, peace, prosperity—in short, against all these beautiful and useful things—only so as to attain this primary, most profitable profit which is dearer to him than anything else. (22)

The underground man cites examples of people behaving against their own best interests in this section, which serves as a preface to the story of his revenge against the billiard player who so blithely moves him aside. The way he explains this “very best profit” which makes people like himself behave in counterproductive, even self-destructive ways is to suggest that nothing else matters unless everyone’s freedom to choose how to behave is held inviolate. He writes,

One’s own free and voluntary wanting, one’s own caprice, however wild, one’s own fancy, though chafed sometimes to the point of madness—all this is that same most profitable profit, the omitted one, which does not fit into any classification, and because of which all systems and theories are constantly blown to the devil… Man needs only independent wanting, whatever this independence may cost and wherever it may lead. (25-6)
Arthur Rackham’s “The Imp of the Perverse” illustration

Notes from Underground was originally published in 1864. But the underground man echoes, wittingly or not, the narrator of Edgar Allan Poe’s story from almost twenty years earlier, "The Imp of the Perverse," who posits an innate drive to perversity, explaining,

Through its promptings we act without comprehensible object. Or if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say that through its promptings we act for the reason that we should not. In theory, no reason can be more unreasonable, but in reality there is none so strong. With certain minds, under certain circumstances, it becomes absolutely irresistible. I am not more sure that I breathe, than that the conviction of the wrong or impolicy of an action is often the one unconquerable force which impels us, and alone impels us, to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution to ulterior elements. (403)

This narrator’s suggestion of the irreducibility of the impulse notwithstanding, it’s noteworthy how often the circumstances that induce its expression include the presence of an individual of higher status.

            The famous shoulder bump in Notes from Underground has an uncanny parallel in experimental psychology. In 1996, Dov Cohen, Richard Nisbett, and their colleagues published the research article “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” in which they report the results of a comparison of the cognitive and physiological responses of southern males to being bumped in a hallway and casually called an asshole with those of northern males. The study showed that whereas men from northern regions were usually amused by the run-in, southern males were much more likely to see it as an insult and a threat to their manhood, and they were much more likely to respond violently. The cortisol and testosterone levels of southern males spiked—the clever experimental setup allowed measures before and after the encounter—and these men reported believing physical confrontation was the appropriate way to redress the insult. The way Cohen and Nisbett explain the difference is that the “culture of honor” that emerged in southern regions originally developed as a safeguard for men who lived as herders. Cultures that arise in farming regions place less emphasis on manly honor because farmland is difficult to steal. But if word gets out that a herder is soft, then his livelihood is at risk. Cohen and Nisbett write,

Such concerns might appear outdated for southern participants now that the South is no longer a lawless frontier based on a herding economy. However, we believe these experiments may also hint at how the culture of honor has sustained itself in the South. It is possible that the culture-of-honor stance has become “functionally autonomous” from the material circumstances that created it. Culture of honor norms are now socially enforced and perpetuated because they have become embedded in social roles, expectations, and shared definitions of manhood. (958)

            More recently, in a 2009 article titled “Low-Status Compensation: A Theory for Understanding the Role of Status in Cultures of Honor,” psychologist P.J. Henry takes another look at Cohen and Nisbett’s findings and offers another interpretation based on his own further experimentation. Henry’s key insight is that herding peoples are often considered to be of lower status than people with other professions and lifestyles. After establishing that the southern communities with a culture of honor are often stigmatized with negative stereotypes—drawling accents signaling low intelligence, high incidence of incest and drug use, etc.—both in the minds of outsiders and those of the people themselves, Henry suggests that a readiness to resort to violence probably isn’t now and may not ever have been adaptive in terms of material benefits.

An important perspective of low-status compensation theory is that low status is a stigma that brings with it lower psychological worth and value. While it is true that stigma also often accompanies lower economic worth and, as in the studies presented here, is sometimes defined by it (i.e., those who have lower incomes in a society have more of a social stigma compared with those who have higher incomes), low-status compensation theory assumes that it is psychological worth that is being protected, not economic or financial worth. In other words, the compensation strategies used by members of low-status groups are used in the service of psychological self-protection, not as a means of gaining higher status, higher income, more resources, etc. (453)

And this conception of honor brings us closer to the observations of the underground man and Poe’s boastful murderer. If psychological worth is what’s being defended, then economic considerations fall by the wayside. Unfortunately, since our financial standing tends to be so closely tied to our social standing, our efforts to protect our sense of psychological worth have a nasty tendency to backfire in the long run.

            Henry found evidence for the importance of psychological reactance, as opposed to cultural norms, in causing violence when he divided the participants of his study into high and low status categories and then had them answer questions about how likely they would be to respond to insults with physical aggression. But before being asked about the propriety of violent reprisals, half of the members of each group were asked to recall as vividly as they could a time in their lives when they felt valued by their community. Henry describes the findings thus:

When lower status participants were given the opportunity to validate their worth, they were less likely to endorse lashing out aggressively when insulted or disrespected. Higher status participants were unaffected by the manipulation. (463)

The implication is that people who feel less valuable than others, a condition that tends to be associated with low socioeconomic status, are quicker to retaliate because they are almost constantly on edge, preoccupied at almost every moment with assessments of their standing in relation to others. Aside from a readiness to engage in violence, this type of obsessive vigilance for possible slights, and the feeling of powerlessness that attends it, can be counted on to keep people in a constant state of stress. The massive longitudinal study of British Civil Service employees called the Whitehall Study, which tracks the health outcomes of people at the various levels of the bureaucratic hierarchy, has found that the stress associated with low status also has profound effects on our physical well-being.
When Americans are asked to imagine an ideal distribution of wealth, the results show far less stratification than actually exists.

            Though it may seem that violence-prone poor people occupying lowly positions on societal and professional totem poles are responsible for aggravating and prolonging their own misery because they tend to spend extravagantly and lash out at their perceived overlords with nary a concern for the consequences, the regularity with which low status leads to self-defeating behavior suggests the impulses are much more deeply rooted than some lazily executed weighing of pros and cons. If the type of wealth or status inequality the underground man finds himself on the short end of had begun to take root in societies like the ones Christopher Boehm describes, a high-risk attempt at leveling the playing field would not only have been understandable—it would have been morally imperative. In a group of nomadic foragers, a man endeavoring to knock a would-be alpha down a few pegs would be able to count on the endorsement of most of the other group members. And the success rate for re-establishing and maintaining egalitarianism would have been heartening. Today, we are forced to live with inequality, even though beyond a certain point most people (regardless of political affiliation) see it as an injustice.

            Some of the functions of literature, then, are to help us imagine just how intolerable life on the bottom can be, sympathize with those who get trapped in downward spirals of self-defeat, and begin to imagine what a more just and equitable society might look like. The catch is that we will be put off by characters who mistreat others or simply show a dearth of redeeming qualities.




What's the Point of Difficult Reading?


          You sit reading the first dozen or so pages of some celebrated classic and gradually realize that having to sort out how the ends of the long sentences fix to their beginnings is taking just enough effort to distract you entirely from the setting or character you’re supposed to be getting to know. After a handful of words you swear are made up and a few tangled metaphors you find yourself riddling over with nary a resolution, the dread sinks in. Is the whole book going to be like this? Is it going to be one of those deals where you get to what’s clearly meant to be a crucial turning point in the plot but for you is just another riddle without a solution, sending you paging back through the forest of verbiage in search of some key succession of paragraphs you spaced out while reading the first time through? Then you wonder if you’re missing some other kind of key, like maybe the story’s an allegory, a reference to some historical event like World War II or some Revolution you once had to learn about but have since lost all recollection of. Maybe the insoluble similes are allusions to some other work you haven’t read or can’t recall. In any case, you’re not getting anything out of this celebrated classic but frustration leading to the dual suspicion that you’re too ignorant or stupid to enjoy great literature and that the whole “great literature” thing is just a conspiracy to trick us into feeling dumb so we’ll defer to the pseudo-wisdom of Ivory Tower elites.

            If enough people of sufficient status get together and agree to extol a work of fiction, they can get almost everyone else to agree. The readers who get nothing out of it but frustration and boredom assume that since their professors or some critic in a fancy-pants magazine or the judges of some literary award committee think it’s great they must simply be missing something. They dutifully continue reading it, parrot a few points from a review that sound clever, and afterward toe the line by agreeing that it is indeed a great work of literature, clearly, even if it doesn’t speak to them personally. For instance, James Joyce’s Ulysses, utterly nonsensical to anyone without at least a master’s degree, tops the Modern Library’s list of 100 best novels in the English language. Responding to the urging of his friends to write out an explanation of the novel, Joyce scoffed, boasting, “I’ve put in so many enigmas and puzzles that it will keep the professors busy for centuries arguing over what I meant, and that’s the only way of ensuring one’s immortality.” He was right. To this day, professors continue to love him even as Ulysses and the even greater monstrosity Finnegans Wake do nothing but bore and befuddle everyone else—or else, more fittingly, sit inert or unchecked-out on the shelf, gathering well-deserved dust.

            Joyce’s later novels are not literature; they are lengthy collections of loosely connected literary puzzles. But at least his puzzles have actual solutions—or so I’m told. Ulysses represents the apotheosis of the tradition in literature called modernism. What came next, postmodernism, is even more disconnected from the universal human passion for narrative. Even professors aren’t sure what to do with it, so they simply throw their hands up, say it’s great, and explain that the source of its greatness is its very resistance to explanation. Jonathan Franzen, whose 2001 novel The Corrections represented a major departure from the postmodernism he began his career experimenting with, explained the following year in The New Yorker how he’d turned away from the tradition. He’d been reading the work of William Gaddis “as a kind of penance” (101) and not getting any meaning out of it. Of the final piece in the celebrated author’s oeuvre, Franzen writes,

The novel is an example of the particular corrosiveness of literary postmodernism. Gaddis began his career with a Modernist epic about the forgery of masterpieces. He ended it with a pomo romp that superficially resembles a masterpiece but punishes the reader who tries to stay with it and follow its logic. When the reader finally says, Hey, wait a minute, this is a mess, not a masterpiece, the book instantly morphs into a performance-art prop: its fraudulence is the whole point! And the reader is out twenty hours of good-faith effort. (111)

In other words, reading postmodern fiction means not only forgoing the rewards of narrative in favor of the more taxing endeavor of solving multiple riddles in succession, but discovering that those riddles don’t even have answers. What’s the point of reading this crap? Exactly. Get it?

            You can dig deeper into the meaningless meanderings of pomos and discover there is in fact an ideology inspiring all the infuriating inanity. The super smart people who write and read this stuff point to the willing, eager complicity of the common reader in the propagation of all the lies that sustain our atrociously unjust society (but atrociously unjust compared to what?). Franzen refers to this as the Fallacy of the Stupid Reader,

wherein difficulty is a “strategy” to protect art from cooptation and the purpose of art is to “upset” or “compel” or “challenge” or “subvert” or “scar” the unsuspecting reader; as if the writer’s audience somehow consisted, again and again, of Charlie Browns running to kick Lucy’s football; as if it were a virtue in a novelist to be the kind of boor who propagandizes at friendly social gatherings. (109)

But if the author is worried about art becoming a commodity does making the art shitty really amount to a solution? And if the goal is to make readers rethink something they take for granted why not bring the matter up directly, or have a character wrestle with it, or have a character argue with another character about it? The sad fact is that these authors probably just suck, that, as Franzen suspects, “literary difficulty can operate as a smoke screen for an author who has nothing interesting, wise, or entertaining to say” (111).

            Not all difficulty in fiction is a smoke screen though. Not all the literary emperors are naked. Franzen writes that “there is no headache like the headache you get from working harder on deciphering a text than the author, by all appearances, has worked on assembling it.” But the essay, titled “Mr. Difficult,” begins with a reader complaint sent not to Gaddis but to Franzen himself. And the reader, a Mrs. M. from Maryland, really gives him the business:

Who is it that you are writing for? It surely could not be the average person who enjoys a good read… The elite of New York, the elite who are beautiful, thin, anorexic, neurotic, sophisticated, don’t smoke, have abortions tri-yearly, are antiseptic, live in penthouses, this superior species of humanity who read Harper’s and The New Yorker. (100)

In this first part of the essay, Franzen introduces a dilemma that sets up his explanation of why he turned away from postmodernism—he’s an adherent of the “Contract model” of literature, whereby the author agrees to share, on equal footing, an entertaining or in some other way gratifying experience, as opposed to the “Status model,” whereby the author demonstrates his or her genius and if you don’t get it, tough. But his coming to a supposed agreement with Mrs. M. about writers like Gaddis doesn’t really resolve Mrs. M.’s conflict with him. The Corrections, after all, the novel she was responding to, represents his turning away from the tradition Gaddis wrote in. (It must be said, though, that Freedom, Franzen’s next novel, is written in a still more accessible style.)

            The first thing we must do to respond properly to Mrs. M. is break down each of Franzen’s models into two categories. The status model includes writers like Gaddis whose difficulty serves no purpose but to frustrate and alienate readers. But Franzen’s own type specimen for this model is Flaubert, much of whose writing, though difficult at first, rewards any effort to re-read and further comprehend with a more profound connection. So it is for countless other writers, Fitzgerald for instance, whose Gatsby sits at number two on the Modern Library’s ranking. As for the contract model, Franzen admits,

Taken to its free-market extreme, Contract stipulates that if a product is disagreeable to you the fault must be the product’s. If you crack a tooth on a hard word in a novel, you sue the author. If your professor puts Dreiser on your reading list, you write a harsh student evaluation… You’re the consumer; you rule. (100)

Franzen, in declaring himself a “Contract kind of person,” assumes that the free-market extreme can be dismissed for its extremity. But Mrs. M. would probably challenge him on that. For many, particularly right-leaning readers, the market not only can but should be relied on to determine which books are good and which ones belong in some tiny niche. When the Modern Library conducted a readers’ poll to create a popular ranking to balance the one made by experts, the ballot was stuffed by Ayn Rand acolytes and Scientologists. Mrs. M. herself leaves little doubt as to her political sympathies. For her and her fellow travelers, things like literature departments, National Book Awards—like the one The Corrections won—Nobels and Pulitzers are all an evil form of intervention into the sacred workings of the divine free market: un-American, sacrilegious, communist. According to this line of thinking, authors aren’t much different from whores—except of course literal whoring is condemned in the Bible (except when it isn’t).

            A contract with readers who score high on the personality dimension of openness to new ideas and experiences (who tend to be liberal), those who have spent a lot of time in the past reading books like The Great Gatsby or Heart of Darkness or Lolita (the horror!), those who read enough to have developed finely honed comprehension skills—that contract is going to look quite a bit different from one with readers who attend Beck University, those for whom Atlas Shrugged is the height of literary excellence. At the same time, though, the cult of self-esteem is poisoning schools and homes with the idea that suggesting that a student or son or daughter is anything other than a budding genius is a form of abuse. Heaven forbid a young person feel judged or criticized while speaking or writing. And if an author makes you feel the least bit dumb or ignorant, well, it’s an outrage—heroes like Mrs. M. to the rescue.

            One of the problems with the cult of self-esteem is that anticipating criticism tends to make people more, not less, creative. And the link between low self-esteem and mental disorders is almost purely mythical. High self-esteem is correlated with school performance, but as far as researchers can tell it’s the performance causing the esteem, not the other way around. More insidious, though, is the tendency to view anything that takes a great deal of education or intelligence to accomplish as an affront to everyone less educated or intelligent. Conservatives complain endlessly about class warfare and envy of the rich—the financially elite—but they have no qualms about decrying intellectual elites and condemning them for flaunting their superior literary achievements. They see the elitist mote in the eye of Nobel laureates without noticing the beam in their own.

         What’s the point of difficult reading? Well, what’s the point of running five or ten miles? What’s the point of eating vegetables as opposed to ice cream or Doritos? Difficulty need not preclude enjoyment. And discipline in the present is often rewarded in the future. It very well may be that the complexity of the ideas you’re capable of understanding is influenced by how many complex ideas you attempt to understand. No matter how vehemently true believers in the magic of markets insist otherwise, markets don’t have minds. And though an individual’s intelligence need not be fixed, a good way to ensure children never get any smarter than they already are is to make them feel fantastically wonderful about their mediocrity. We just have to hope that despite these ideological traps there are enough people out there determined to wrap their minds around complex situations depicted in complex narratives about complex people told in complex language, people who will in the process develop the types of minds and intelligence necessary to lead the rest of our lazy asses into a future that’s livable and enjoyable. For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them. How do we get the rewards without wasting our time?


The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind

A Review of Jonathan Haidt's new book, The Righteous Mind: Why Good People Are Divided by Politics and Religion
            Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robbers Cave in southern Oklahoma where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and they each came up with a name for themselves, the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-of-war. The goal was to find out if animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.
           
            So do conservatives.
           
           This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, “Anyone who values truth should stop worshipping reason” (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in “moral matrices” that incorporate all six, while liberal morality rests disproportionately on just three. At times, Haidt intimates that more dimensions is better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.

            One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.

            Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out-of-touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?

The Elephant in the Room
            Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question:

when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. (84)

Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.

They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)

Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.

           Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you’ll never recognize them yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image for an earlier book on happiness, so the use of the GOP mascot was accidental. But because of the more intuitive nature of conservative beliefs it’s appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,

the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)

The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper “The Weirdest People in the World?” that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can’t argue that what’s popular is necessarily better. But he manages to convey that attitude implicitly, even though he can’t really share the attitude himself.
  
            Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression are involved—and insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:

Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)

This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,

we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)

As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)

What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.

A Taste for Self-Righteousness
            The divide over morality can largely be reduced to the differences between the urban educated and the poor not-so-educated. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:

But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say, You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)

The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)

Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:

There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)

But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments, he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.

            In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India. He went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” It was an earlier account of this sojourn Haidt had written for the online salon The Edge that first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”

On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)

The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to feel and experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what's known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order, not to the point of advocating hierarchy or rigid sex roles but seeing value in the complex network of interdependence.

            The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That’s where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the taste receptors that make up our flavor palate. Two foundations everyone shares, but that liberals prioritize whenever two or more suggest conflicting responses, are Care/Harm—hurting people is wrong and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity: loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe’s symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth foundation, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,

many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)

Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.

Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)

But if he really were restricting himself to description, he would have no beef with utilitarian ethicists like Mill, deontological ones like Kant, or, for that matter, with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe, as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, the father of utilitarianism, having been autistic (the psychological diagnosis du jour) (120). But, like a lawyer who blurts out a damning but inadmissible remark only to say “withdrawn” when opposing counsel objects, he assures us that he doesn’t mean the autism thing as an ad hominem.

            I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.

In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; it’s that liberals believe they should question whether acting on them is appropriate in a given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism Haidt extols—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.

           Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.

            Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.

Resistance to the Hive Switch is Futile

            “We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that

anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)

The most interesting idea in this section is that humans possess what Haidt calls a “hive switch” that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an “altered state of consciousness” when he was marching in formation with fellow soldiers in his army days. He describes it as a “sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life” (221). Sociologist Emile Durkheim referred to this same experience as “collective effervescence.” People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It’s a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.

            Surprisingly, the altruism inspired by this sense of the sacred, though primarily directed at fellow group members—parochial altruism—can also flow outward in ways that aren’t entirely tribal. Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,

These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)

The Sanctity foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that “just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.” Sosis went on to identify “one master variable” which accounted for the difference between success and failure for religious groups: “the number of costly sacrifices that each commune demanded from its members” (257). But sacrifices demanded by secular groups made no difference whatsoever. Haidt concludes,

In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)

This section captures the best and the worst of Haidt’s work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that’s a brilliant contribution to theories running from Darwin through Emile Durkheim to D. S. Wilson. Contemplating it sparks a sense of wonder that must itself emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.

            The costs critics of religion point to aren’t the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, “arbitrary” beliefs and practices of religious communities to tracking the movements of the football in hopes of understanding why people love watching games. What matters, he suggests, is the coming together as a group, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren’t what’s important, then there’s no reason they have to be arbitrary—and no reason they should entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for “worshipping reason” and celebrating the collective endeavor known as science? Why doesn’t he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by assuming that anyone who sees nothing wrong with flushing an American flag down the toilet has no sense of the sacred. He shakes his finger at them, effectively saying: rallying around a cause is what being human is all about, but what you flag-flushers hold sacred just isn’t worthy—even though it’s exactly what I hold sacred too, what I’ve devoted my career, and this very book, to.

            As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norm before it came about—and still are the norm in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. Surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, even sacred, ideal.

But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the burgeoning focus-groupocracy. Interestingly, the third stage of the Robbers Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. All it took was shared goals the two groups had to cooperate to achieve, as when their bus got stuck on the side of the road and all the boys had to work together to pull it free. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’d be there in no time.


Why I Am Not a Feminist—and You Shouldn’t Be Either part 3: Engendering Gender Madness


             "As a professional debunker I feel like I know bunk when I see it, and Wertheim has well captured the genre: 'In all likelihood there will be an abundant use of CAPITAL LETTERS and exclamation points!!! Important sections will be underlined or bolded, or circled, for emphasis.'"


This is from Skeptic editor Michael Shermer's review of a book on the demarcation problem, the thorny question of how to recognize whether ideas are revolutionary or just, well, bunk. Obviously, if someone's writing begs for attention in a way that seems meretricious or unhinged, you're likely dealing with a bunk peddler. What to make, then, of these lines, to which I have not added any formatting?

"Honestly, I can’t think of a better way to make a girl in grade school question whether she’ll have any interest in or aptitude for science than to present her with a 'science for girls' kit."

"And, science kits that police these gender stereotypes run the risk of alienating boys from science, too."

"I really don’t think that science kits should be segregated by gender, but if you are going to segregate them at least make the experiments for girls NOT SO LAME."

"If girls are at all interested in science, then it must be in a pretty, feminine way that reinforces notions of beauty. It’s mystical. The chemistry of perfumery is hidden behind 'perfection.' But boys get actual physics and chemistry—just like that, with no fancy modifiers. This division is NOT okay..."

To the first, I’d say, really? You must have a very limited imagination. To the second, I’d say, really? Isn’t “police” a strong term for science kits sold at a toy store? I agree with the third, but I think the author needs to settle down. And to the fourth, I’d say, well, if the kids really want kits of this nature—and if they don’t want them the manufacturer won’t be offering them for long—you’d have to demonstrate that they actually cause some harm before you can say, in capitals or otherwise, they’re not okay.

Were these breathless fulminations posted on the pages of some poststructuralist site for feminist rants? No. The first and second are from philosopher Janet Stemwedel’s blog at Scientific American. The third is from a blog hosted by the American Geophysical Union and was written by geologist Evelyn Mervine. And the fourth is from anthropologist Krystal D’Costa’s blog, also at Scientific American.

           You’d hope these blog posts, as emphatic as they are, would provide links to some pretty compelling research on the dangers of pandering to kids’ and parents’ gender stereotypes. One of the posts has a link to a podcast about research on how vaginas are supposed to smell. Another of Stemwedel’s posts on the issue links to yet another post, by Christie Wilcox, in which she not-so-gently takes the journal Nature to task for publishing what was supposed to be a humorous piece on gender differences. It’s only through this indirect route that you can find any actual evidence—in any of these posts—that stereotyping is harmful. “Reinforcing negative gender stereotypes is anything but harmless,” Wilcox declares. But does humor based on stereotypes in fact reinforce them, or does it make them seem ridiculous? How far are we really willing to go to put a stop to this type of humor? It seems to me that gender and racial and religious stereotypes are the bread-and-butter of just about every comedian in the business. 

            The science Wilcox refers to has nothing to do with humor but instead demonstrates a phenomenon psychologists call stereotype threat. It’s a fascinating topic—really one of the most fascinating in psychology in my opinion. It may even be an important factor in the underrepresentation of women in STEM fields. Still, the connection between research on stereotypes and performance—stereotype boost has also been documented—and humor is tenuous. And the connection with pink and pretty microscopes is even more nebulous.

           Helping women in STEM fields feel more welcome is a worthy cause. Gender stereotypes probably play some role in their current underrepresentation. I take these authors at their word that they routinely experience the ill effects of common misconceptions about women’s cognitive abilities, so I sympathize with their frustration to a degree. I even have to admit that it’s a testament to the success of past feminists that the societal injustices their modern counterparts rail against are so much less overt—so subtle. But they may actually be getting too subtle; decrying them sort of resembles the righteous, evangelical declaiming of conspiracy theorists. If you can imagine a way that somebody may be guilty of reinforcing stereotypes, you no longer even have to shoulder the burden of proving they’re guilty.

          The takeaway from all this righteously indignant finger-pointing is that you should never touch anything with even a remote resemblance to a stereotype. Allow me some ironic capitals of my own: STEREOTYPES BAD!!! This message, not surprisingly, even reaches into realms where a casual dismissal of science is fashionable, and skepticism about the value of empirical research, expressed in tortured prose, is an ascendant virtue—or maybe I have the direction of the influence backward.

           On two separate occasions now, one of my colleagues in the English department has posted the story of a baby named Storm on Facebook. Storm’s parents opted against revealing the newborn’s sex to anyone outside the immediate family, to protect her or him from those nasty stereotypes. In the comments under these links were various commendations and expressions of solidarity. Storm’s parents, most agreed, are heroes. Parents bragged about their own children’s androgynous behavior, expressing their desire to rub it in the faces of “gender nazis.”

             From what I can tell, Storm’s parents had no idea the story of their unorthodox parenting would go viral, so we probably shouldn’t condemn them for using their child to get media attention. And I don’t think the “experiment,” as some have called it, poses any direct threat to Storm’s psychological well-being. But Storm’s parents are tilting at windmills. They’re assuming that gender is something imposed on children by society—those chimerical gender nazis—through a process called socialization. The really disheartening thing is that even the bloggers at Scientific American make this mistake; they assume that sparkly pink science kits that help girls explore the chemistry of lipstick and perfume send direct messages about who and what girls should be, and that girls will receive and embrace these messages without resistance, as if the little tykes were noble savages with pristine spirits forever vulnerable to the tragic overvaluing of outward beauty.

            When they’re thinking clearly, all parents know a simple truth that gets completely discounted in discussions of gender—it’s really hard to get through to your kids even with messages you’re sending deliberately and explicitly. The notion that you can accidentally send some subtle cue that’s going to profoundly shape a child’s identity deserves a lot more skepticism than it gets (ask my conservative parents, especially my Catholic mom). This is because identity is something children actively create for themselves, not the sum total of all the cultural assumptions foisted on them as they grow up. Children’s minds are not receptacles for all our ideological garbage. They rummage around for their own ideological garbage, and they don’t just pick up whatever they find lying around.

            Psychologist John Money was a prominent advocate of the theory that gender is determined entirely through socialization. So he advised the parents of a six-month-old boy whose penis had been destroyed in a botched circumcision to have the testicles removed as well and to raise the boy as a girl. The boy, David Reimer, never thought of himself as a girl, despite his parents’ and Money’s efforts to socialize him as one. Money nevertheless kept declaring success, claiming Reimer (who was called Brenda at the time) proved his theory of gender development. By age 13, however, the poor kid was suicidal. At 14, he declared himself a boy, and later underwent further surgeries to reconstruct his genitals. In journalist John Colapinto’s account of the case, As Nature Made Him: The Boy Who Was Raised as a Girl, Reimer says that Money’s ministrations were in no way therapeutic—they were traumatic. Having read about Reimer in Steven Pinker’s book The Blank Slate: The Modern Denial of Human Nature, I thought of John Money every time I came across the term gender nazi in the Facebook comments about Storm (though I haven’t read Colapinto’s book in its entirety and don’t claim to know the case in enough detail to support such a severe charge).

            Reimer’s case is by no means the only evidence that gender identity and gender-typical behavior are heavily influenced by hormones. Psychiatrist William Reiner and urologist John Gearhart report that raising boys (who’ve been exposed in utero to more testosterone) as girls after surgery to remove underdeveloped sex organs tends not to result in feminine behaviors—or even feminine identity. Of the 16 boys in their study, two were raised as boys and 14 were raised as girls. Five of the 14 remained female throughout the study, but four spontaneously declared themselves to be male, and four others decided they were male after being informed of the surgery they’d undergone. All 16 of the children displayed “moderate to marked” degrees of male-typical behavior. The authors write, “At the initial assessment, the parents of only four subjects assigned to female sex reported that their child had never stated a wish to be a boy.”

            An earlier study of so-called pseudo-hermaphrodites, boys with a hormone disorder who are born looking like girls but who become more virile in adolescence, revealed that of 18 participants who were raised as girls, all but one changed their gender identity to male. There is also a condition some girls are born with called Congenital Adrenal Hyperplasia (CAH), which is characterized by an increased amount of male hormones in the body. It often leads to ambiguous genitalia and the need for surgery. But Sheri Berenbaum and J. Michael Bailey found that in the group of girls with CAH they studied, increased levels of male-typical behavior could not be explained by the development of masculinized genitalia or the age at surgery. The hormones themselves are the likely cause of the differences.

           One particularly fascinating finding about kids’ preferences for toys comes from the realm of ethology. It turns out that rhesus monkeys show preferences for certain types of toys depending on their sex—and they’re the same preferences you would expect. Females will play with both plush dolls and wheeled vehicles, but males are much more likely to go for the cars and trucks. And the difference is even more pronounced in vervet monkeys, with both females and males spending significantly more time with toys we might in other contexts call “stereotypical.” There’s even some good preliminary evidence that chimpanzees play with sticks differently depending on their sex, with males using them as tools or weapons and females cradling them like babies.

            Are gender roles based solely on stereotypes and cultural contingencies? In The Blank Slate, Pinker excerpts large sections of anthropologist Donald Brown’s inventory of behaviors that have been observed by ethnographers in all cultures that have been surveyed. Brown’s book is called Human Universals, and it casts serious doubt on theories that rule out every factor influencing development except socialization. Included in the inventory: “classification of sex,” “females do more direct child care,” “male and female and adult and child seen as having different natures,” “males more aggressive,” and “sex (gender) terminology is fundamentally binary” (435-8). These observations hold at the level of societies, not individuals, who vary far more dramatically from one to the next. The point isn’t that genes or biology determine behavioral outcomes; the relationship between biology and behavior isn’t mechanistic but probabilistic. Still, the probabilities tend to be much higher than anyone in an English department assumes—higher even than the bloggers at Scientific American assume.

            Interestingly, even though there are resilient differences in math test scores between boys and girls—with boys’ scores showing the same average but stretching farther at each tail of the bell curve—researchers exploring women’s underrepresentation in STEM fields have ruled out the higher aptitude of a small subset of men as the most important factor. They’ve also ruled out socialization. Reviewing multiple sources of evidence, Stephen Ceci and Wendy Williams find that

the omnipresent claim that sex differences in mathematics result from early socialization (i.e., parents and teachers inculcating a ‘‘math is for boys’’ attitude) fails empirical scrutiny. One cannot assert that socialization causes girls to opt out of math and science when girls take as many math and science courses as boys in grades K–12, achieve higher grades in them, and major in college math in roughly equal numbers to males. Moreover, survey evidence of parental attitudes and behaviors undermines the socialization argument, at least for recent cohorts. (3)

If it’s not ability, and it’s not socialization, then how do we explain men’s greater desire to pursue careers in math-intensive fields? Ceci and Williams believe it’s a combination of divergent preferences and the biological constraints of childbearing. Women tend to be more interested in fields focused on people, while men prefer fields focused on objects and abstractions. However, girls with CAH show preferences closer to those of boys. (Cool, huh?)

            Ceci and Williams also point out that women who excel at math tend to score highly on tests of verbal reasoning as well, giving them more fields to choose from. (Update, 3-26-2013: a recent longitudinal study replicates this finding.) This is interesting to me because if women are more likely to pursue careers dealing with people and words, they’re also more likely to be exposed to the strain of feminism that views science as just another male conspiracy to justify and perpetuate the patriarchal status quo. Poststructuralism and New Historicism are all the rage in the English department I study in, and deconstructing scientific texts is de rigueur. Might Derrida, Lacan, Foucault, and all their feminist successors be at fault for women’s underrepresentation in STEM fields at least as much as toys and stereotypes?

            I have little doubt that if society were arranged to optimize women’s interest in STEM fields, they would be much better represented in them. But society isn’t a very easy thing to manipulate. We have to consider the possibility that the victory would be Pyrrhic. In any case, we should avoid treating children like ideological chess pieces. There’s good evidence that we couldn’t keep little kids from seeking gender cues even if we tried, and trying strikes me as cruel. None of this is to say that biology determines everything, or that gender role development is simple. In fact, my problem with the feminist view of gender is that it’s far too crude to account for such a complex phenomenon. The feminists are armchair pontificators at best and conspiracy theorists at worst. They believe stereotypes can only be harmful. That’s akin to saying that the rules of grammar serve solely to curtail our ability to express ourselves freely. While grammar need not be as rigid as many once believed, doing away with it altogether would reduce language to meaningless babble. Humans need stereotypes and roles. We cannot live in a cultural vacuum.

            At the same time, in keeping with the general trend toward tribalism, the feminists’ complaints about pink microscopes are unfair to boys and young men. Imagine being a science-obsessed teenage boy who comes across a bunch of rants on the website for your favorite magazine. They all say, in capital and bolded letters, that suggesting to girls that trying to be pretty is a worthwhile endeavor represents some outrageous offense, that it will cause catastrophic psychological and economic harm to them. It doesn’t take a male or female genius to figure out that the main source of teenage girls’ desire to be pretty is the realization that pretty girls get more attention from hot guys. If a toy can arouse so much ire for suggesting a girl might like to be pretty, then young guys had better control their responses to hot girls—think of the message it sends. So we’re back to the idea that male attraction is inherently oppressive. Since most men can’t help being attracted to women, well, shame on them, right? 


(Full disclosure: probably as a result of a phenomenon called assortative mating, I find ignorance of science to be a huge turn-off.)
Check out part 2 on "The Objectionable Concept of Objectification."
And part 1 on earnings.
These posts have generated pretty lengthy comment threads on Facebook, so stay tuned as well for updates based on my concession of points and links to further evidence.
And, as always, tell me what you think and share this with anyone you think would rip it apart (or anyone who might just enjoy it).
Update: Just a few minutes after posting this, I came across evolutionary psychologist Jesse Bering's Facebook update saying he was being unfairly attacked by feminists over his own Scientific American blog. If you'd like to show your solidarity, go to http://blogs.scientificamerican.com/bering-in-mind/.
Go here to read my response to commenters.

Why I Am Not a Feminist - and You Shouldn't Be Either part 2: The Objectionable Concept of Objectification

[Image from eatingdisordersfacts.org]
          Feminists theorize that one of the ways men subjugate women is by objectifying them. The idea is that a man, as part of the wider male conspiracy, makes a point of letting girls and young women know they’re constantly being ogled by people who are evaluating them and comparing them to other women—based solely on their physical features. Even compliments can contribute to this heightened awareness and concern for appearance, since they let women know what aspects of their persons are attention-worthy. The most heinous example of objectification is the casual dismissal of a woman’s ideas in the workplace and the substitution of some remark about her appearance in place of the serious consideration her idea deserves.


            Or maybe the most heinous example of objectification is the parading of impossibly attractive actresses and dangerously thin models all over the media landscape, setting the standards of beauty so high young women can never even hope to compete. In Hollywood, directors are fond of lovingly sweeping their cameras over their favorite parts of a female’s anatomy to let every young woman viewing the films know precisely what men find most appealing. The lustful male gaze is thus a powerful tool of oppression because it causes women to feel self-conscious and insecure—or so the feminist theory suggests (or, rather, one of the feminist theories).

            Looking through the ever-looming feminist lens at statistics about how much more common self-esteem issues and eating disorders are among girls has a predictable impact on how we view boys. “Inevitably, boys are resented,” writes Christina Hoff Sommers in her book The War Against Boys: How Misguided Feminism Is Harming Our Young Men, setting them up to be

seen both as the unfairly privileged gender and as obstacles on the path to gender justice for girls. There is an understandable dialectic: the more girls are portrayed as diminished, the more boys are regarded as needing to be taken down a notch and reduced in importance. (23-4) (excerpt)

The effect on young boys of being taught this theory of oppression by objectification must be akin to the effect of Catholic preaching about the fallen state of man and the danger to the soul of succumbing to the temptations of carnal desire. At some point, boys are going to start experiencing that desire, they’re not going to be able to do anything about it, and it’s going to make them feel pretty guilty. It’s also a bit similar to what young homosexuals must experience growing up in families who believe attraction toward members of the same sex is sinful and unnatural.

            Men look at women and assess their attractiveness. They even get aroused merely from the sight of women who have certain features. Movie-makers and marketers know all about men’s fondness for checking out women. I’m not going to cite any of the research from the field of evolutionary psychology that explores whether or not men’s passion for beautiful women is something that occurs reliably in diverse cultures, or whether or not there are certain features that are considered beautiful by men all over the world. I’m not going to recite the logic of natural selection as it pertains to mate selection and the relative cost of reproduction. You can find that stuff anywhere, and you’ve probably already got some response to it worked out.

I’m going to do my best, at as purely practical a level as I can manage, to explain why objectification can’t be a valid theory and why it doesn’t in any way establish the need for a social and political movement pitting the genders against each other.

Objectification goes wrong before even getting beyond the term itself. Men aren’t—humans aren’t—with a few rare exceptions, attracted to or sexually aroused by objects. By being attracted to or sexually aroused by a woman, a man is in fact acknowledging her humanity. We humans are physical beings, and sex is a physical act. It stands to reason that in assessing a potential sexual partner’s compatibility, we focus a great deal on physical attributes. Obviously we have to distinguish humans from objects, and we have to have some criteria on which to base our decisions about whom to couple with. For one thing, we need a way to figure out whether the prospective partner is mature enough for sex—so features signaling sexual maturity tend to be seen as attractive. And, since most people prefer to couple with members of one sex over the other, features signaling that membership will also tend to be seen as attractive.

Individualist feminist (there’s got to be a better term) Wendy McElroy, in a defense of pornography, points out the flaw in thinking of objectification as automatically and invariably degrading, using logic very similar to mine:


            The assumed degradation is often linked to the 'objectification' of women: that is, porn converts them into sexual objects. What does this mean? If taken literally, it means nothing because objects don't have sexuality; only beings do. But to say that porn portrays women as 'sexual beings' makes for poor rhetoric. Usually, the term 'sex objects' means showing women as 'body parts', reducing them to physical objects. What is wrong with this? Women are as much their bodies as they are their minds or souls. No one gets upset if you present women as 'brains' or as 'spiritual beings'. If I concentrated on a woman's sense of humor to the exclusion of her other characteristics, is this degrading? Why is it degrading to focus on her sexuality?

Few women, as far as I know, complain about being treated as sexual beings by men they happen to be attracted to. The trouble arises when they’re treated that way when it’s inappropriate, as in the work situation I’ve described. The problem in such situations—and of course I agree it’s a problem—isn’t that the woman is seen as an object; it’s not even that she’s being recognized as attractive; it’s that someone is refusing to see her as more than merely a sexual being.

            But why do men have to be so obsessed with sex? And why does it seem like a woman’s role as a sexual being takes precedence over her other roles so frequently? Practically speaking, if two people who don’t know each other are going to begin a physical relationship, at least one of them must be motivated to pursue and get to know the other. Since the pursuer doesn’t yet know anything about the pursued, all there is to go on is physical appearance. Think about this for a second or two and you’ll come to a realization most women take for granted and, as long as it’s not in the context of a discussion about gender oppression, freely admit: Being the one who is the most motivated to pursue a relationship puts you at a disadvantage. An attractive woman has the power to accept or reject overtures from any of her suitors—and the more attractive she is the more of them she’ll have to choose from.
[Image from the CDC]

           It’s just as legitimate to look at the numbers of women who suffer from eating disorders or undergo risky surgeries to improve their looks as evidence of an intense desire on the part of females to have the upper hand over men. The problem young girls face is the same problem young boys face—competition for attractive partners is unavoidable. Judging from suicide statistics, the consequences of this competition are even more dire for the boys. The likely explanation for girls’ increasing self-consciousness, and for their resorting more readily to extreme measures, is simply that media technology has opened the world up to everyone like never before, so that now the standards of beauty are determined by a contest with a much larger pool of contestants—not to mention the technological wonders of digital alteration.

            All the panic notwithstanding, this wider field of competition may actually be a societal boon. Some people of both genders harm themselves trying to be thin or athletic. At the same time, though, the obesity epidemic is doing even more harm. It’s easy to find stats and figures on anorexia, but how many people, after seeing a Victoria’s Secret model or that Twilight kid with his shredded abs, simply forgo that extra helping they were tempted to devour? And the competition extends beyond the realm of physical appearance. We don’t usually complain about how the work of geniuses makes it difficult for us to say anything interesting—even though we have to assume many first dates end in disappointment owing to lackluster conversation. What’s so special about attractiveness that it calls for protection from high standards? (This is not to say that there aren't plenty of other good reasons not to watch crap TV and read glitzy crap magazines.)

            Even if women admit that they like sex and that male attention is flattering, most of them will still attest to having experienced unwanted or inappropriate sexual attention or commentary at some point. While a lot of the time their complaints about this issue are probably bragging in disguise, that at least sometimes male attention can be downright scary or just outrageously inappropriate is undeniable. Still, women have to keep in mind that men like to tease their friends, often aggressively, and the point at which intimate liberty-taking shades into something more malicious is often ambiguous.

            And if you think a workplace dominated by females would be some kind of peaceful utopia, you probably haven’t spent much time around groups of women. If a man has a problem with you, he’s much more likely to tell you directly. Women, on the other hand, are much more likely to smile to your face and then attack your reputation when your back is turned. This is one of those patterns that emerge reliably across cultures; psychologists call it indirect aggression. I’m citing it because it’s not about beauty standards or male desire—and because it underscores the point that when a man makes some comment about a woman’s "proper role" it’s an act of aggression perpetrated by an individual, not an act of political or economic oppression for which the entire gender is guilty.
[Image from shortsupport.org]
            Those perpetrators are also much more likely to be at the bottom of the workplace hierarchy than they are at the top. Studies of natural hierarchy formation find that self-sacrifice and altruism are key determinants of status. There is also strong evidence that people resort to aggression primarily to compensate for low status. Although unwanted sexual advances aren’t acts of aggression, a rejected man’s effort to save face can certainly be frightening. The important thing to keep in mind, though, is that even these face-saving measures aren’t politically motivated. The guy’s not belittling the woman from a position of power; his position is in fact pitiable. (I'll even make a prediction: the guy who's bugging you--he's short, isn't he?)

            What do neuroscientists and psychologists say about the nature of men’s lustful gazes? A small preliminary imaging study presented at an AAAS meeting in Chicago by Susan Fiske seemed to offer some support for the idea of objectification. When men were put in scanners and allowed to look at pictures of women, the region of the brain that motivates and manages male conspiracies lit up like a Christmas tree--sorry, couldn't help myself. Here is the claim Fiske actually made: 

            I’m not saying that they literally think these photographs of women are photographs of tools per se, or photographs of non-humans, but what the brain imaging data allow us to do is to look at it as scientific metaphor. That is, they are reacting to these photographs as people react to objects. (Quoted here)

However, Fiske goes on to say that when she matched the scans with surveys of attitudes she discovered that “the hostile sexists were likely to deactivate the part of the brain that thinks about other people’s intentions.” In other words, along with activating the part of the brain associated with using tools, men who aren’t “hostile sexists” actually do think about naked people’s intentions. This finding has since been replicated.

            The most comprehensive study to date on how people’s attitudes are affected by viewing pictures of scantily clad women and men concludes that while seeing skin does in fact lead to a diminishment in assessments of agency, it leads to an increase in assessments of a capacity to experience either pleasure or pain. The authors write: 

            To the extent that this modified framework concerning perceptions of the mind and body turns out to be correct, it is inaccurate to describe the body focus as inducing “objectification.” People who seem especially embodied are not treated as mere physical objects but, instead, like nonhuman animals, as beings who are less capable of thinking or reasoning but who may be even more capable of desires, sensations, emotions, and passions. (12)

Looking at other humans like they’re animals isn’t much better than looking at them like objects—but the study was of people looking at pictures of individuals they’d never met. Assuming a capacity for desires, sensations, emotions, and passions is, at least in my opinion, a really good start considering the pictures are of naked people in sexually suggestive poses; people with more clothes were perceived to be more like robots. (So show more skin to hide your agendas, as if you didn't already know.) The authors not only take issue with the term objectification; they also found no justification for thinking the changes that occur in attitudes toward strangers based on how much skin they’re showing are experienced only by men:

Objectification is often discussed in terms of men objectifying women …, but we found that both men and women strip agency and confer experience to both men and women when a bodily focus is induced. (11)

This study’s findings dovetail almost perfectly with those of a study that found men who watch a moderate amount of pornography demonstrate less sexist attitudes in general, but when sexism does emerge in relation to porn it tends toward so-called "benevolent sexism," the supposedly paternalistic, protective, and worshipful variety (the measures for which are shot through with dubious feminist assumptions).

            Benevolent or not, men's feelings toward women in porn are probably the starkest proof that objectification is a nonsensical idea: if men were aroused by objects or instruments, the women in x-rated videos would be passive and inert as often as they are active and enthusiastic. I don't have any numbers to cite on this but I'd say most men, by far, cringe at the thought of taking pleasure without reciprocating. Advocates of objectification theory seem to worry that someone will sneak up behind a man and slap him on the back while he's looking at a woman as a sexual being, causing his mind to get stuck that way. I can't be the only man who on more than one occasion has had sex with one woman only to drive to work a short time afterward and speak to other women in a purely professional capacity. Guys looking at porn and then going to work--got to be happening millions of times a day. People shift modes all the time.
  
            The study that questions the term objectification is titled “More Than a Body: Mind Perception and the Nature of Objectification.” Tellingly, when Piercarlo Valdesolo reported on it for Scientific American, the headline read “How Our Brains Turn Women Into Objects.” In my future post on the hysteria (yes, I’m using this term with a sexist etymology ironically) over the “gendering” of children, I’m going to point out how this flagship publication for popular science seems to be bowing to pressure to be more feminist-friendly. Valdesolo, to be fair, did include a subheading: “There is, it turns out, more than one kind of ‘objectification’.” Those quotation marks notwithstanding, I still have to object—no, in fact, there aren’t any kinds of objectification. (A later “60-Second Mind” podcast has a much more accurate title and subheading: “How We View Half-Naked Men and Women: Research finds that scantily-clad women and men are judged in similar ways.”)

            Make no mistake, those hostile sexists are out there. But not all of them are men. Some people, women and men, are hell-bent on plunging this country back into the dark ages and on dispelling all the evolution craziness that gets taught in schools, all the global warming crap, all the godlessness. These people are sure to belittle and disparage anyone, woman or man, with more liberal or libertarian leanings (and we disparage them right back). Make no mistake on the point too that while mixing up objectification and attraction is wrong and offensive, there are acts that really do deny the humanity and sovereignty of women and men. In America, we can be glad that it's overwhelmingly more likely for the most disadvantaged people to be either the perpetrators or the victims of such acts. I believe, nonetheless, that by targeting the forces behind their disadvantage we can and should be doing more to prevent such acts.

           The stats on part 1 of Why I Am Not a Feminist: Earnings are still blowing up. But the comments have stopped coming in. Please let me know what you think. Feel free as well to share this post with anyone you think can tear it apart.
Read part 3: Engendering Gender Madness
Read my response to commenters.

Why I Am Not a Feminist—and You Shouldn’t Be Either part 1: Earnings

[Image from a Georgetown University study called "Education, Occupation, and Lifetime Earnings"]
           In order to establish beyond all doubt the continuing urgency of the battle for women’s equality, feminists rely heavily on data demonstrating an earnings discrepancy between the genders. Women make less money in America, and therefore women are not yet equal. If women aren’t making as much as men who work in the same industry, if women aren’t making as much as men with the same education level, isn’t that an injustice? So how can I claim something is wrong with feminism, a movement seeking equal rights and equal treatment and equal pay for half the population of the country?


            There’s a point at which dwelling on the crimes committed against a group of people becomes a subtle form of bigotry toward other groups. Jews like to rehearse their long history of persecution for a reason. Focusing on anti-Semitism can bolster solidarity among Jews—if for no other reason than that it fosters suspicion of gentiles. This is not to minimize the true horrors and hatreds faced by God’s chosen people, but rather to point out that no matter how horrible their past is it doesn’t justify atrocities against other groups of people.

            I’m not writing merely to bemoan male-bashing, and I'm not suggesting feminists are guilty of atrocities (though a case could be made that they are). I’m writing because the good cause of equal rights and equal pay shades with distressing frequency into sloppy thinking and unscientific, perfervid preaching. Feminism has become a free-floating ideology, a cause inspiring blind frenzies and impassioned pronouncements about mysterious evils unlikely to exist in the world of living, breathing humans. And, yes, it is unfair to men, mean to boys, and counterproductive to women.

            I am an advocate of universal human rights, and many of my positions overlap with those of feminists. A pregnant woman has the right to choose whether or not to carry her baby to term. Any type of legal or educational enforcement of gender roles is a violation of the right of individuals to choose their own lifestyles, educational trajectories, careers, and the nature of their relationships. But this freedom in regard to gender roles also means that girls and boys, women and men, have just as much of a right to choose to be traditional or stereotypical in any of these domains. Any law or educational policy that goes after any aspect of gender freely chosen or naturally occurring is just as much of an injustice as one that forces individuals to take on roles that don’t fit them.
[Image from a 2011 Gallup report]
           
          If it were true that the figures showing earnings discrepancies in fact represented compelling evidence of hiring or promoting biases favoring men, I would support the cause of reform—not in the name of women’s rights, but in the name of human rights, in the name of fairness. As stark an image as they paint, however, the results of the studies these figures come from are no more proof of bias than a study showing boys win more often in school sports would be proof of cheating. Just as you would have to address the question of how many girls are even playing sports, you have to ask how many women are applying for top-paying positions. Fortunately, several studies have looked at the application and hiring process directly—at least in academic fields.
[Image from a 2011 CDC report]
            Before discussing those results, though, I’d like to point out (only somewhat flippantly) that earnings aren’t the only area in which reliable gender differences occur. Men have more heart attacks than women. And men tend to die at an earlier age than women, heart disease being the single most common cause of death. One of the main concerns of feminists is the so-called objectification of women and, more specifically, the theory that media portrayals of underweight actresses and models instill in young girls the conviction that they must be dangerously skinny to be attractive. Might it also be the case that media portrayals of extremely wealthy men instill in boys the notion that in order to be attractive they must make extremely large incomes, incomes they go to dangerous lengths to secure, say, by working long hours, spending little time with family and friends, ignoring their health, stressing themselves out, and working themselves into early graves?

            A 2010 study published in the Proceedings of the National Academy of Sciences by Daniel Kahneman and Angus Deaton begins its discussion of results thus:

            More money does not necessarily buy more happiness, but less money is associated with emotional pain. Perhaps $75,000 is a threshold beyond which further increases in income no longer improve individuals’ ability to do what matters most to their emotional well-being, such as spending time with people they like, avoiding pain and disease, and enjoying leisure. According to the ACS, mean (median) US household income was $71,500 ($52,000) in 2008, and about a third of households were above the $75,000 threshold. It also is likely that when income rises beyond this value, the increased ability to purchase positive experiences is balanced, on average, by some negative effects. A recent psychological study using priming methods provided suggestive evidence of a possible association between high income and a reduced ability to savor small pleasures. (4)


Perhaps a monomaniacal lusting after money is a pathology, one that men suffer from in much greater numbers than women. But my point isn’t that I think we should try to do something to protect these men from harm; it’s rather that income is not necessarily an absolute good. So why should it be a benchmark for women’s rights that they make dollar for dollar what men make? We have to at least consider the possibility that women have it as good as or better than men already today.

            Still, if a woman wants to go toe-to-toe with her male counterparts to see who can earn more, there should be no institutional barriers hampering her ability to compete. Before we look at those earnings charts and imagine sinister cabals of Scotch-swigging conspirators, however, we must determine whether or not the numbers result from choices freely made by women. “Gender Differences at Critical Transitions in the Careers of Science, Math, and Engineering Faculty” is the 2010 report of a task force established to investigate this very question. The main finding:


For the most part, male and female faculty in science, engineering, and mathematics have enjoyed comparable opportunities within the university, and gender does not appear to have been a factor in a number of important career transitions and outcomes. (153)


How does the study account for the underrepresentation of women in these fields? “Women accounted for about 17 percent of applications for both tenure-track and tenured positions in the departments surveyed” (154). So the plain fact is that women apply for these positions less frequently. Could it be because they despair of their chances for getting an interview? It turns out that “The percentage of women who were interviewed for tenure-track or tenured positions was higher than the percentage of women who applied” (157), which does sound a bit like discrimination—against men. And it gets better (or worse): “For all disciplines the percentage of tenure-track women who received the first job offer was greater than the percentage in the interview pool” (157). Fewer women applying to positions in these fields, not discriminatory hiring or promoting, explains their underrepresentation.

            Reviewing this and several other research programs, Stephen Ceci and Wendy Williams, in a report likewise published in the Proceedings of the National Academy of Sciences titled "Understanding current causes of women's underrepresentation in science", explain that 

            Despite frequent assertions that women’s current underrepresentation in math-intensive fields is caused by sex discrimination by grant agencies, journal reviewers, and search committees, the evidence shows women fare as well as men in hiring, funding, and publishing (given comparable resources). That women tend to occupy positions offering fewer resources is not due to women being bypassed in interviewing and hiring or being denied grants and journal publications because of their sex. It is due primarily to factors surrounding family formation and childrearing, gendered expectations, lifestyle choices, and career preferences—some originating before or during adolescence—and secondarily to sex differences at the extreme right tail of mathematics performance on tests used as gateways to graduate school admission. As noted, women in math-intensive fields are interviewed and hired slightly in excess of their representation among PhDs applying for tenure-track positions. The primary factors in women’s underrepresentation are preferences and choices—both freely made and constrained: “Women choose at a young age not to pursue math-intensive careers, with few adolescent girls expressing desires to be engineers or physicists, preferring instead to be medical doctors, veterinarians, biologists, psychologists, and lawyers. Females make this choice despite earning higher math and science grades than males throughout schooling”. (5)

These "math-intensive" fields (Wall Street?) are central to our economy and accordingly tend to mean higher pay for those who choose them. Since the study that compared incomes by gender and education level failed to account for what field the education or the career was in, the differences in fields chosen probably explain the difference in pay. The PNAS study authors cite a Government Accountability Office report whose findings accord well with this explanation. Ceci and Williams write that

            the GAO report mentions studies of pay differentials, demonstrating that nearly all current salary differences can be accounted for by factors other than discrimination, such as women being disproportionately employed at teaching-intensive institutions paying less and providing less time for research. (4)

Conservatives are fond of the principle that equality of opportunity doesn’t mean equality of outcome. Though they are demonstrably wrong when it comes to economic inequality in general (since inequality and mobility are negatively correlated), the principle is completely sound. I have no doubt that some men are barring the doors of employment to some women in America today. There are probably places where the reverse is true as well. But feminism is a body of facile assumptions that leads to ready conclusions of questionable validity. The assumption of discrimination when faced with earnings discrepancies is just one example.

Feminism is the political and social effort to attain equality between the sexes. While this sounds perfectly innocuous, even admirable, it frames relations between women and men as fundamentally antagonistic; it’s us versus them. Even a whiff of tribalism tends to make otherwise admirable efforts take tragic turns. How many relationships have been undermined by the idea that difference means inequality means oppression, by the notion that within every man lurks the impulse to dehumanize and dominate women?

In future posts, I’m going to look at the faulty assumptions inspired by feminism in the realms of sex and attraction—i.e. the bizarre notion of objectification—and in the upbringing of children, where so much pointless hand-wringing takes place over whether gender stereotypes are being subtly imposed. For now, I’m going to close with some questions from a graduate-level textbook, Theory into Practice: An Introduction to Literary Criticism by Ann Dobie. They’re from a section devoted to helping burgeoning scholars learn to write feminist essays about literature. The idea is to pose these questions to yourself as you’re reading. See if you can spot the assumptions. See if you think they’re valid or fair.

-What stereotypes of women do you find? Are they oversimplified, demeaning, untrue? For example, are all blondes understood to be dumb?
-Examine the roles women play in a work. Are they minor, supportive, powerless, obsequious? Or are they independent and influential?
-How do the male characters talk about the female characters?
-How do the male characters treat the female characters?
-How do the female characters act toward the male characters?
-Who are the socially and politically powerful characters?
-What attitudes toward women are suggested by the answers to these questions?
-Do the answers to these questions indicate that the work lends itself more naturally to a study of differences between the male and female characters, a study of power imbalances between the sexes, or a study of unique female experience? (121-2)

In case you missed it, let me quote from the first page of the chapter: "The premise that unites those who call themselves feminist critics is the assumption that Western culture is fundamentally patriarchal, creating an imbalance of power which marginalizes women and their work" (104). While I acknowledge the assumption was historically justified, I have a feeling people will keep making it long after its promise of a better tomorrow is exhausted.
Read part 2: The Objectionable Concept of Objectification
and part 3: Engendering Gender Madness
Read my response to commenters.

Tax Demagoguery

Image Courtesy of historiann.com

        Robert Frank, in The Darwin Economy, begins with the premise that having a government is both desirable and unavoidable, and that to have a government we must raise revenue somehow. He then goes on to argue that, since taxes act as disincentives to whatever behavior is being taxed, we should tax behaviors that harm citizens. The U.S. government currently taxes behaviors we as citizens ought to encourage, like hiring workers and making lots of money through productive employment. Frank’s central proposal is to impose a progressive consumption tax. He believes this is the best way to discourage “positional arms races,” those situations in which trying to keep up with the Joneses leads to harmful waste with no net benefit as everyone's efforts cancel each other out. One of his examples is house size:

“The explosive growth of CEO pay in recent decades, for example, has led many executives to build larger and larger mansions. But those mansions have long since passed the point at which greater absolute size yields additional utility. Most executives need or want larger mansions simply because the standards that define large have changed” (61).

The crucial point here is that this type of wasteful spending doesn’t just harm the CEOs. Runaway spending at the top of the income ladder affects those on the lower rungs through a phenomenon Frank calls “expenditure cascades”:

“Top earners build bigger mansions simply because they have more money. The middle class shows little evidence of being offended by that. On the contrary, many seem drawn to photo essays and TV programs about the lifestyles of the rich and famous. But the larger mansions of the rich shift the frame of reference that defines acceptable housing for the near-rich, who travel in many of the same social circles… So the near-rich build bigger, too, and that shifts the relevant framework for others just below them, and so on, all the way down the income scale. By 2007, the median new single-family house built in the United States had an area of more than 2,300 square feet, some 50 percent more than its counterpart from 1970” (61-2).

This growth in house size has occurred despite the stagnation of incomes for median earners. In the wake of the collapse of the housing market, it’s easy to see how serious this type of damage can be to society.

           Frank closes a chapter titled “Taxing Harmful Activities” with a section whose heading poses the question, “A Slippery Slope?” You can imagine a tax system that socially engineers your choices down to the sugar content of your beverages. “It’s a legitimate concern,” he acknowledges (193). But taxing harmful activities is still a better idea than taxing saving and job creation. Like any new approach, it risks going off track or going too far, but for each proposed tax a cost-benefit analysis can be done. As I’ve tried over the past few days to arrive at a list of harmful activities that are in immediate need of having a tax imposed on them, one occurred to me that I haven’t seen mentioned anywhere else before: demagoguery.

Wouldn't want to appear partisan.
Image courtesy of tvguide.com
           Even bringing up the topic makes me uncomfortable. Free speech is one of the central pillars of our democracy. So the task becomes defining demagoguery in a way that doesn’t stifle the ready exchange of ideas. But first let me answer the question of why this particular behavior made my shortlist. A quick internet search will make it glaringly apparent that large numbers of Tea Party supporters believe things that are simply not true. And, having attended my local Occupy Wall Street protest, I can attest there were some wacky ideas being broadcast there as well. The current state of political discourse in America is chaotic at best and tribal at worst. Policies are being enacted every day based on ideas with no validity whatsoever. The promulgation of such ideas is doing serious harm to our society—and, worse, it’s making rational, substantive debate and collectively beneficial problem-solving impossible.

           So, assuming we can kill a couple of birds with a tax stone, how would we go about actually implementing the program? I propose forming a group of researchers and journalists whose task is to investigate complaints by citizens. Organizations like Factcheck.org and Politifact.com have already gone a long way toward establishing the feasibility of such a group. Membership will be determined by nominations from recognized research institutions like the American Association for the Advancement of Science and the Pew Research Center, to whom appeals can be made in the event of intensely contested rulings by the group itself. Anyone who's accepted payment for any type of political activism will be ineligible for membership. The money to pay for the group and provide it with the necessary resources can come from the tax itself (though that might create a perverse incentive if members' pay isn't independent of their findings) or from revenues raised by taxes on other harmful activities.

         The first step will be the complaint, which can be made by any citizen. If the number of complaints reaches some critical mass or if the complaints are brought by recognized experts in the relevant field, then the research group will investigate. Once the group has established with a sufficient degree of certainty that a claim is false, anyone who broadcasts the claim will be taxed an amount determined by the size of the audience. The complaints, reports of the investigations, and the findings can all be handled through a website. We may even want to give the individual who made the claim a chance to correct her- or himself before levying the tax. Legitimate news organizations already do this, so they’d have nothing to worry about.

           Talk show hosts who repeatedly make false claims will be classified as demagogues and pay a fixed rate, obviating any need for the research group to watch every show and investigate every claim. But anyone designated a demagogue must advertise the designation on the screen or at regular intervals on the air—along with a link or address for the research group's site, where the audience can view a list of the false claims that earned him or her the designation.
Individuals speaking to each other won’t be affected. And bloggers with small audiences, if they are taxed at all, won’t be taxed much—or they can simply correct their mistakes. Demagogues like Rush Limbaugh and Michael Moore will still be free to spew nonsense; but they’ll have to consider the costs—because the harm they cause by being sloppy or mendacious doesn’t seem to bother them.

Image Courtesy of crooksandliars.com
         Now, strictly speaking, a demagogue isn't defined as someone who makes false claims; a demagogue is someone who uses personal charisma and tactics for whipping people into emotional frenzies to win support for a cause. I believe the chief strategy of demagogues is to incite tribalism, a sense of us-vs-them. But making demagogues pay for their false claims would, I believe, go a long way toward undermining their corrosive influence on public discourse.

      Finally, I refer you, dear reader, to a video that highlights the problem. It's from a liberal whose show I don't watch, so I can't say how much of a demagogue she is, but I'm sure she'd find some whoppers on her own side of the divide were she inclined to look.

What's Wrong with The Darwin Economy?

          I can easily imagine a conservative catching a glimpse of the cover of Robert Frank’s new book and having his interest piqued. The title, The Darwin Economy, evokes that famous formulation, “survival of the fittest,” but in the context of markets, which suggests a perspective well in keeping with the anti-government principles Republicans and libertarians hold dear. The subtitle, Liberty, Competition, and the Common Good, further facilitates the judgment of the book by its cover as another in the long tradition of paeans to the glorious workings of unregulated markets.

            The Darwin Economy puts forth an argument that most readers, even those who keep abreast of the news and have a smidgen of background in economics, have probably never heard: namely, that the divergence between individual and collective interests, which Adam Smith famously suggested gets subsumed into market forces that inevitably redound to the common good, in fact leads predictably to outcomes that are detrimental to everyone involved. His chief example is a hypothetical business that can either pay to have guards installed to make its power saws safer for workers to operate or leave the saws as they are and pay the workers more for taking on the added risk.

            This is exactly the type of scenario libertarians love. What right does government have to force businesses in this industry to install the guards? Governmental controls end up curtailing the freedom of workers to choose whether to work for a company with better safety mechanisms or one that offers better pay. It robs citizens of the right to steer their own lives and puts decisions in the hands of those dreaded Washington bureaucrats. “The implication,” Frank writes, “is that, for well-informed workers at least, Adam Smith’s invisible hand would provide the best combinations of wages and safety even without regulation” (41).

            Frank challenges the invisible hand doctrine by demonstrating that it fails to consider the full range of the ramifications of market competition, most notably the importance of relative position. But The Darwin Economy offers no support for the popular liberal narrative about exploitative CEOs. Frank writes: “many of the explanations offered by those who have denounced market outcomes from the left fail the no-cash-on-the-table test. These critics, for example, often claim that we must regulate workplace safety because workers would otherwise be exploited by powerful economic elites” (36). But owners and managers are motivated by profits, not by some perverse desire to see their workers harmed.

"Mobility isn’t perfect, but people change jobs far more frequently than in the past. And even when firms know that most of their employees are unlikely to move, some do move and others eventually retire or die. So employers must maintain their ability to attract a steady flow of new applicants, which means they must nurture their reputations. There are few secrets in the information age. A firm that exploits its workers will eventually experience serious hiring difficulties" (38).

This is what Frank means by the no-cash-on-the-table test: companies who maintain a reputation for being good to their people attract more talented applicants, thus increasing productivity, thus increasing profits. There’s no incentive to exploit workers just for the sake of exploiting them, as many liberals seem to suggest.

            What makes Frank convincing, and what makes him something other than another liberal in the established line-up, is that he’s perfectly aware of the beneficial workings of the free market, as far as they go. He bases his policy analyses on a combination of John Stuart Mill’s harm principle—whereby the government only has the right to regulate the actions of a citizen if those actions are harmful to other citizens—and Ronald Coase’s insight that government solutions to harmful actions should mimic the arrangements that the key players would arrive at in the absence of any barriers to negotiation. “Before Coase,” Frank writes,  
       
"it was common for policy discussions of activities that cause harm to others to be couched in terms of perpetrators and victims. A factory that created noise was a perpetrator, and an adjacent physician whose practice suffered as a result was a victim. Coase’s insight was that externalities like noise or smoke are purely reciprocal phenomena. The factory’s noise harms the doctor, yes; but to invoke the doctor’s injury as grounds for prohibiting  the noise would harm the factory owner" (87).

This is a far cry from the naïve thinking of some liberal do-gooder. Frank, following Coase, goes on to suggest that what would formerly have been referred to as the victim should foot the bill for a remedy to the noise pollution if it’s cheaper for him than for the factory. At one point, Frank even gets some digs in on Ralph Nader for his misguided attempts to protect the poor from the option of accepting payments for seats when their flights are overbooked.

            Though he may be using the same market logic as libertarian economists, he nevertheless arrives at very different conclusions vis-à-vis the role and advisability of government intervention. Whether you accept his conclusions or not hinges on how convincing you find his thinking about the role of relative position. Getting back to the workplace safety issue, we might follow conventional economic theory and apply absolute values to the guards protecting workers from getting injured by saws. If the value of the added safety to an individual worker exceeds the dollar amount increase he or she can expect to get at a company without the guards, that worker should of course work at the safer company. Unfortunately, considerations of safety are abstract, and they force us to think in ways we tend not to be good at. And there are other, more immediate and concrete considerations that take precedence over most people’s desire for safety.

            If working at the company without the guards on the saws increases your income enough for you to move to a house in a better school district, thus affording your children a better education, then the calculations of the absolute worth of the guards’ added safety go treacherously awry. Frank explains

"the invisible-hand narrative assumes that extra income is valued only for the additional absolute consumption it supports. A higher wage, however, also confers a second benefit for certain (and right away) that safety only provides in the rare cases when the guard is what keeps the careless hand from the blade—the ability to consume more relative to others. That fact is nowhere more important than in the case of parents’ desires to send their children to the best possible schools…. And because school quality is an inherently relative concept, when others also trade safety for higher wages, no one will move forward in relative terms. They’d succeed only in bidding up the prices of houses in better school districts" (40).

Housing prices go up. Kids end up with no educational advantage. And workers are less safe. But any individual who opted to work at the safer company for less pay would still have to settle for an inferior school district. This is a collective action problem: individuals are trapped, which is of course precisely the kind of situation libertarians are most eager to avoid.
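The structure of the trap can be sketched with a toy payoff calculation (the numbers here are mine, purely illustrative, not Frank's): because school quality is relative, taking the riskier, higher-paying job is each worker's dominant choice, yet when everyone takes it, everyone ends up worse off than if all had kept the guards.

```python
# A toy two-player version of the saw-guard dilemma, with made-up
# illustrative numbers (mine, not Frank's). Each worker chooses the
# "safe" job (guarded saws, lower wage) or the "risky" job (higher
# wage). School quality is relative: you gain the better district only
# by out-earning the other parent, and fall behind if out-earned.

WAGES = {"safe": 100, "risky": 110}
SAFETY_VALUE = 15   # what a worker would pay for the guard
SCHOOL_STAKE = 20   # value of the better school district

def payoff(mine, theirs):
    wage = WAGES[mine]
    safety = SAFETY_VALUE if mine == "safe" else 0
    if WAGES[mine] > WAGES[theirs]:
        school = SCHOOL_STAKE      # out-earn the neighbor: better district
    elif WAGES[mine] < WAGES[theirs]:
        school = -SCHOOL_STAKE     # out-earned: worse district
    else:
        school = 0                 # tie: no relative gain or loss
    return wage + safety + school

# "Risky" is the dominant choice for each worker individually...
assert payoff("risky", "safe") > payoff("safe", "safe")    # 130 > 115
assert payoff("risky", "risky") > payoff("safe", "risky")  # 110 > 95
# ...yet both end up worse off than if both had chosen "safe".
assert payoff("risky", "risky") < payoff("safe", "safe")   # 110 < 115
```

Substitute antler length for wages and escaping wolves for safety, and the same payoff structure describes Frank's bull elk.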

            Frank draws an analogy with many of the bizarre products of what Darwin called sexual selection, most notably those bull elk battling it out on the cover of the book. Antler size places each male elk in a relative position; in their competition for mates, absolute size means nothing. So natural selection—here operating in place of Smith’s invisible hand—ensures that the bull with the largest antlers reproduces and that antler size accordingly undergoes runaway growth. But what’s good for mate competition is bad for a poor elk trying to escape from a pack of hungry wolves. If there were some way to negotiate a collective agreement forcing every last bull elk to reduce the size of his antlers by half, none would object, because they would all benefit. The same logic applies to regulating safety guards on saws. And Frank gives several other examples, both in the animal kingdom and in the realms of human interactions.

            I’m simply not qualified to assess Frank’s specific proposals, grounded in the Coase principle, to tax behaviors that have harmful externalities, like the production of CO2, or his progressive tax on consumption. But I can’t see any way around imposing something that achieves the same goals at some point in the near future.

            My main criticism of The Darwin Economy is that the first chapter, the one casual conservative readers will encounter once they’ve cracked the alluring cover, is the least interesting of the book, because it rehearses the standard litany of liberal complaints. A book as cogent and lucid as this one, a book which manages to take on abstract principles and complex scenarios while still being riveting, a book which contributes something truly refreshing and original to the exhausted and exhausting debates between liberals and conservatives, should do everything humanly possible to avoid being labeled into oblivion. Alas, the publishers and booksellers would never allow a difficult-to-place book to grace their shelves or online inventories.

Gravitating Toward Tribal: The Danger of Free-Floating Ideologies

Image from the movie Zardoz. Courtesy of Thersic.com
          Ideologies are usually conceived through a coupling of comfortable tradition with a calculation of self-interest. But they can also be born of good faith efforts at understanding. More important than their origin and development is the degree to which they are grounded. If you work out a comprehensive and adequately complex ideology that serves to explain an otherwise incomprehensible phenomenon and possibly even offers some guidance for dealing with an otherwise chaotic and frightening dynamic, you’ve created a theory that will appeal to human minds desperate for understanding and a sense, no matter how meager, of control. But does the ideology match up with reality? That’s an entirely different question.

            Free-floating ideologies, those that persist solely owing to the comforts they provide and the conveniences they secure, survive confrontations with reality and subsist despite vast lacunae in empirical support because human perception operates through a process of cross-referencing sensory inputs with prior knowledge. What we see is largely determined by what we’re looking for, and how we see it by what we believe about it. Patterns we perceive in what are actually random events often sustain our beliefs, since in most contexts humans are terrible at intuiting probabilities. A natural confirmation bias has us perceiving and remembering all the times predictions arising from our theories come to fruition while missing or forgetting all the times they fail. And we tend to enjoy the company of like-minded others, rather idiotically letting our convictions be bolstered by their common acceptance among those with whom we’ve chosen to associate.

            Unmoored ideologies gravitate toward certain predictable tracks in human cognition. We like to think there’s some sort of agency behind everything, an intelligence governing the universe. To think that no one’s in charge of all the swirling and colliding galaxies is variously unsettling and terrifying to us. So we take in the sublime beauty of quiet sunsets and wonder at the beneficence of the creator. Or we note coincidences in our lives, the way they fall together in a meaningful, beneficial way, and we feel a need to express gratitude to the guiding divinity. This is mostly innocent. Though it can lead to complacence and willful ignorance of entire regions where this supposedly beneficent guide has deigned never to set foot, and it can add an extra layer of grief in response to catastrophe, the comfort of believing in an invisible protector and guide has little immediate cost.

            Much more worrying is the gravitation of free-floating ideologies toward tribalism. The pseudo-scientific cult that has arisen around certain varieties of psychotherapy has bequeathed to our culture the horrifying belief that an unknown portion of the population, predominantly male, can induce the modern equivalent of demonic possession, severe psychological trauma, through an inverted laying-on of hands. The ideology has made monsters of men. The fetishizing of free markets likewise entails a belief in a loathsome variety of sub-humans. The economy, true believers assert, is a battle between the makers and the moochers, the producers and the parasites. As a conservative friend put it, in a discussion of healthcare reform, “Giving insurance to the slugs will just make them bigger slugs.”

            If you challenge someone’s beliefs by suggesting theirs is an ideology divorced from reality, as everyone does who advocates for one set of beliefs in opposition to another, the standard response is an insistence that the ideology emerged from an awareness of facts through inductive reasoning. But sunsets, no matter how sublime, don’t really provide any evidence for the existence of an intelligent agency behind the curtain of the cosmos. Troubled young women with histories of abuse don’t prove that sexual experiences in childhood cause a wild assortment of psychological maladjustments. And the higher incarceration rate for impoverished groups doesn’t in any way establish some fundamental divide between good and bad types of people.

            Once ideologies reach a certain stage of development, they become all but immune to contradictory evidence. When the facts cooperate, they are trumpeted. When they don’t, the devout have recourse to principles. I’ve referred advocates of particular varieties of psychotherapy to evidence that they’re ineffective. In response, I didn’t get references to other bodies of evidence supporting the beliefs and practices in question; rather, I got an explanation of how the therapeutic techniques were supposed to work. Present a free market purist with evidence that market competition doesn’t lead to innovation, or leads to detrimental innovations, and you’ll likely get a lecture explaining the principles behind how it’s supposed to work, according to the free market ideology, rather than evidence that it does, in fact, work in the theorized way. This convenient toggling back and forth between inductive and deductive reasoning allows us to explain away any disconnect between our ideologies and the world.

            It is the tendency of free-floating ideologies toward tribalism that leads me to advocate a strict adherence to science in matters of public concern. It wasn’t mere coincidence that the Enlightenment marked the inception of both the tradition of science and that of universal human rights, which have suffered through a traumatic childhood of their own and are now living out a tumultuous adolescence. The tendency toward tribalism is also why I’m wary of commercial fiction, which almost invariably makes characters represent ideas and personal qualities, only to pit the good guys against the bad. J.K. Rowling can claim all she wants that the Harry Potter books teach kids the evils of bigotry, but any work with goodies and baddies taps into tribal instincts. Literary fiction, on the other hand, at its best, is an exercise in empathy.

Occupy Fort Wayne

What is one to do when the purpose of events like this is to rouse passion for a cause, but he believes there's already too much passion, too little knowledge, too little thinking, too little reason?
Heather Bureau from WOWO put a camera in my face at one point. "Can you tell me why you're here today?"
I had no idea who she was. "Are you going to put it on TV?"
"No, on the radio--or on the website." She pulled her hair aside to show me the WOWO logo on her shirt.

"I wanted to check it out. We're here because of income inequality. And the sway of the rich in politics. Plus, I guess we're all liberals." I really wanted her to go away.
The first speaker filled us in on "Occupation Etiquette." You hold up your arm and wave your hand when you agree with what's been said.
And the speakers use short sentences the crowd repeats to make sure everyone can hear.
The call and response reminded me of church.

Rallies like this exist to gather crowds, so that when talking heads claim their views are the views of the people, they can point to the throngs who came to their gatherings.
But what is one to do about the guy carrying the sign complaining about illegal immigrants?
What about all the people saying we need to shut down the Fed?

What about the guy who says, "There's two kinds of people: the kind who work for money, and the kind who work for people"?
Why was I there? Well, I really do think inequality is a problem. But I support the Fed. And I'm first and foremost against tribalism. As soon as someone says, "There's two types of people...," I know I'm somewhere I don't belong.
We shouldn't need political rallies to whip us up into a frenzy. We're obligated as citizens to pay attention to what's going on--and to vote.
Maybe the Occupy Protests will get some issues into the news cycle that weren't there before. If so, I'm glad I went.
But politics is a battle of marketing strategies. Holding up signs and shouting to be heard--well, let's not pretend our voices are independent.
Some fool laughingly shouts about revolution. But I'm not willing to kill anyone over how much bankers make.
Why was I there today? It seemed like the first time someone was talking about the right issue. Sort of.

Tyler Durden and Occupy Wall Street

"Without pain, without sacrifice, we would have nothing."
Dear Occupy Wall Street,

It's true that wealth inequality in America is a disgrace.

It's true that we've allowed PR and marketing to become monsters.

We've given up our minds for the sake of convenience and entertainment.

It's true that far too many of us have bought into the bullshit narrative that markets have magical powers, that allowing businesses to poison the earth redounds to the collective benefit, that the existence of multibillionaires is somehow good for everyone.

I agree with you completely on these points. And I agree that the rich have far too much political sway.

But messages mean nothing unless they cost something.

Occupy Wall Street--good work so far, but way too few people really care what you have to say.

Being heard is not a matter of coming together and shouting. There's something pathetic about how much your protests resemble parties and festivals.

In Vietnam, monks protested by lighting themselves on fire. Gandhi went on hunger strikes.

It's not what you believe or what you're willing to shout or Sharpie on signs.

How seriously people take your message is a matter of how much you're willing to give up.

Sincerely,
Some boring writer

Disgraceful Business


I get spam emails asking me if I’d be willing to campaign for this or that political candidate. They always make me think for a minute: I have strong political views; I believe electing officials with certain beliefs is bad for the country; so maybe I should campaign.

But the fact is campaigns are a disgraceful business. Citizens shouldn’t need to be chased down and bombarded with marketing and PR gimmicks. It’s their duty as citizens to research the candidates and the issues on their own and to vote for the one they decide will best represent them.

This is naïve, I know. We have been trained by the media our whole lives. We want first to be entertained, second to be uplifted, and third to be told how great we are. When it comes to making decisions, we don’t want to have to take any active part in discovering the best course of action. We want to sit back and be allowed to play as passive a role as possible. We’re not looking to be convinced or persuaded—we’re looking to be sold.

In the past two weeks, I’ve gotten around fifty calls from some company whose purpose is to solicit donations on behalf of the charities that hire it. It turns out Doctors Without Borders hired them. And since I’ve donated to this cause in the past, they see me as a good target for their campaign. But all the other times I donated, I simply went online and entered an amount, without getting a single call. Since I’ve been getting calls, I haven’t made a donation.

I go to Sears and buy socks. Not a single brand offers any guarantee that they weren’t manufactured in sweatshops. Then I get to the register and I’m bombarded with more marketing. This is called POS, or point-of-sale, marketing. “Would you like to save 15 percent by signing up for…?” No, I just want to buy some fucking socks.

So many companies are vying for our attention and trying to squeeze money out of us that civic and economic life in this country can no longer deal with actual ideas or values. Every encounter is based on strategies and every strategy is contingent on some number it generates.

As we’re fattened on entertainment and passivity, businesses and political parties continue running their focus groups and assessing campaigns, figuring out better and better ways to parasitize us. And we sit cheering on our favorite football team.