READING SUBTLY

This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you're looking for.
 

Dennis Junk

Projecting Power, Competing for Life, & Supply Side Math

If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.

Some issues I feel are being skirted in the debates:

1. How the Toughest Guy Projects his Power

The Republican position on national security is that the best way to achieve peace is by “projecting power,” and they are fond of saying that Democrats invite aggression by “projecting weakness.” The idea is that no one will start a fight he knows he won’t win, nor will he threaten to start a fight with someone he knows will call his bluff. This is why Republican presidents often suffer from Cowboy Syndrome.

In certain individual relationships, this type of dynamic actually does establish itself—or rather the dominant individual establishes this type of dynamic. But in the realm of national security we aren’t dealing with individuals. With national security, we’re basically broadcasting to the other nations of the world the level of respect we have for them. If a George Bush or a Mitt Romney swaggers around projecting his strength by making threats and trying to push around intergovernmental organizations, some people will naturally be intimidated and back down. But those probably weren’t the people we needed to worry about in the first place.

The idea that shouting out to the world that the US is the toughest country around and we’re ready to prove it is somehow going to deter Al Qaeda militants and others like them is dangerously naïve. We can’t hope for all the nations of the world to fall into some analog of Battered Wife Syndrome. Think about it this way: everyone knows that the heavyweight champion MMA guy is the toughest fighter in the world. If you want to project power, there’s no better way to do it than by winning that belt. Now we have to ask ourselves: Do fewer people want to fight the champion? We might also ask: Does a country like Canada get attacked more because of its lame military?

The very reason organizations like Al Qaeda ever came into existence was that America was projecting its power too much. The strategy of projecting power may as well have been devised by teenage boys—and it continues to appeal to people with that mindset.

2. Supplying more Health and Demanding not to Die

Paul Ryan knows that his voucher system for Medicare is going to run into the problem that increasing healthcare costs will quickly surpass whatever amount is allotted to individuals in the vouchers—that’s the source of the savings the program achieves. But it’s not that he wants to shortchange seniors. Rather, he’s applying a principle from his economic ideology, the one that says the best way to control costs is to make providers compete. If people can shop around, the reasoning goes, they’ll flock toward the provider with the lowest prices—the same way we all do with consumer products. Over time, all the providers have to find ways to become more efficient so they can cut costs and stay in business.

Sounds good, right? But the problem is that healthcare services aren’t anything like consumer goods. Supply and demand doesn’t work in the realm of life and death. Maybe, before deciding which insurance company should get our voucher, we’ll do some research. But how do you know what types of services you’re going to need before you sign up? You’re not going to find out that your plan doesn’t cover the service you need until you need the service. And at that point the last thing you’re going to want to do is start shopping around again. Think about it: people shop around for private insurance now. Are insurance companies paragons of efficiency?

Another problem is that you can’t shop around to find better services once industry standards have set in. For example—if you don’t like how impersonal your cell phone service is, can you just drop your current provider and go to another? If you do, you’re just going to run into the same problem again. What’s the lowest price you can pay for cable or internet services? The reason Comcast and Dish Network keep going back and forth with their commercials about whose service is better is that there is fundamentally very little difference.

Finally, insurance is so complicated that only people who can afford accountants or financial advisors, only people who are educated and have the time to research their options, basically only people with resources are going to be able to make prudent decisions. This is why the voucher system, over time, is just going to lead to further disadvantages for the poor and uneducated, bring about increased inequality, and exacerbate all the side effects of inequality, like increased violent crime.

3. Demand Side Never Shows up for the Debate

The reason Romney and Ryan aren’t specifying how they’re going to pay for their tax cuts, while at the same time increasing the budget for the military, while at the same time decreasing the deficit, is that they believe, again based on their economic ideology, that the tax cuts will automatically lead to economic growth. The reasoning is that if people have more money after taxes, they’ll be more likely to spend it. This includes business owners who will put the money toward expanding their businesses, which of course entails hiring new workers. All this cycles around to more money for everyone, more people paying that smaller percentage but on larger incomes, so more revenue comes in, and now we can sit back and watch the deficit go down. This is classic supply side economics.
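
To make the arithmetic concrete (the numbers here are mine, purely illustrative, not anything from the Romney-Ryan plan): tax revenue is roughly the rate times taxable income, R = r × Y. Cut the rate from 30% to 25%, and total income has to grow by a full 20% just to bring in the same revenue as before:

0.25 × (1.20 × Y) = 0.30 × Y

So the supply side bet is that the cuts will spur growth on at least that scale, and then some, since the plan also promises deficit reduction.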

Sounds good, right? The problem is that businesses only invest in expansion when there’s increasing demand for their products or services, and the tax cuts for lower earners won’t be enough to significantly increase that demand. If there's no demand, rich people don't invest and hire; they buy bigger houses and such. The supply side theory has been around for a long time—and it simply doesn’t work. The only reliable outcome of supply side policies is increasing wealth inequality.

What works is increasing demand—that’s demand side economics. You do this by investing in education, public resources, and infrastructure. Those construction workers building roads and bridges and maintaining parks and monuments get jobs when their companies are hired by the government—meaning they get paid with our tax money. Of course, they get taxed on it, thus helping to support more projects. Meanwhile, unemployment goes down by however many people are hired. These people have more income, and thus create more demand. The business owners expand their businesses—hire more people. As the economy grows, the government can scale back its investment.

Demand side economics can also focus on human capital, including healthcare, because it’s hard to work when you’re sick or dying, and you’re not going to be creating any demand when you’re bankrupt from hospital and insurance payments. Government can also help the economy by investing in people’s education, because educated people tend to get better jobs, make more money, and, wait for it, create more demand. (Not to mention innovation.) Job training can work the same way.

Supply side versus demand side is at the heart of most policy debates. The supply side ideology has all kinds of popular advocates, from Ayn Rand to Rush Limbaugh. The demand siders seem more mum, but that might just be because I live in Indiana. In any case, the demand siders have much better evidence supporting their ideas, even though they lose in terms of rhetoric, since the knee-jerk response to their ideas is to (stupidly, inaccurately) label them socialist. As Bill Clinton pointed out and the fact checkers corroborated, Democrats do a better job creating jobs.

4. Climate Change?

Also read:
TED MCCORMICK ON STEVEN PINKER AND THE POLITICS OF RATIONALITY

THE IDIOCY OF OUTRAGE: SAM HARRIS'S RUN-INS WITH BEN AFFLECK AND NOAM CHOMSKY

WHAT'S WRONG WITH THE DARWIN ECONOMY?

FROM RAGS TO REPUBLICAN

Dennis Junk

Freud: The Falsified Cipher

Upon entering a graduate program in literature, I was appalled to find that Freud’s influence was alive and well in the department. Didn’t they know that nearly all of Freud’s theories have been disproven? Didn’t they know psychoanalysis is pseudoscience?

[As I'm hard at work on a story, I thought I'd post an essay from my first course as a graduate student on literary criticism. It was in the fall of 2009, and I was shocked and appalled that not only were Freud's ideas still being taught but there was no awareness whatsoever that psychology had moved beyond them. This is my attempt at righting the record while keeping my tone in check.]

The matter of epistemology in literary criticism is closely tied to the question of what end the discipline is supposed to serve. How critics decide what standard of truth to adhere to is determined by the role they see their work playing, both in academia and beyond. Freud stands apart as a literary theorist, professing in his works a commitment to scientific rigor in a field that generally holds belief in even the possibility of objectivity as at best naïve and at worst bourgeois or fascist. For the postmodernists, both science and literature are suspiciously shot through with the ideological underpinnings of capitalist European male hegemony, which they take it as their duty to undermine. Their standard of truth, therefore, seems to be whether a theory or application effectively exposes one or another element of that ideology to “interrogation.” Admirable as the values underlying this patently political reading of texts are, the science-minded critic might worry lest such an approach merely lead straight back to the a priori assumptions from which it set forth. Now, a century after Freud introduced the theory and practice of psychoanalysis, his attempt to interpret literature scientifically seems like one possible route of escape from the circularity (and obscurantism) of postmodernism. Unfortunately, Freud’s theories have suffered multiple devastating empirical failures, and Freud himself has been shown to be less a committed scientist than an ingenious fabulist. Still, it may be possible to salvage from the failures of psychoanalysis some key to a viable epistemology of criticism.

A text dating from early in the development of psychoanalysis shows both the nature of Freud’s methods and some of the most important substance of his supposed discoveries. Describing his theory of the Oedipus complex in The Interpretation of Dreams, Freud refers vaguely to “observations on normal children,” to which he compares his experiences with “psychoneurotics” to arrive at his idea that both display, to varying degrees, “feelings of love and hatred to their parents” (920). There is little to object to in this rather mundane observation, but Freud feels compelled to write that his discovery is confirmed by a legend,

…a legend whose profound and universal power to move can only be understood if the hypothesis I have put forward in regard to the psychology of children has an equally universal validity. (920)

He proceeds to relate the Sophocles drama from which his theory gets its name. In the story, Oedipus is tricked by fate into killing his father and marrying his mother. Freud takes this as evidence that the love and hatred he has observed in children are of a particular kind. According to his theory, any male child is fated to “direct his first sexual impulse towards his mother” and his “first murderous wish against his father” (921). But Freud originally poses this idea as purely hypothetical. What settles the issue is evidence he gleans from dream interpretations. “Our dreams,” he writes, “convince us that this is so” (921). Many men, it seems, confided to him that they dreamt of having sex with their mothers and killing their fathers.

Freud’s method, then, was to seek a thematic confluence between men’s dreams, the stories they find moving, and the behaviors they display as children, which he knew mostly through self-reporting years after the fact. Indeed, the entire edifice of psychoanalysis is purported to have been erected on this epistemic foundation. In his later essay “The Uncanny,” Freud makes the sources of his ideas even more explicit. “We know from psychoanalytic experience,” he writes, “that the fear of damaging or losing one’s eyes is a terrible one in children” (35). A few lines down, he claims that “A study of dreams, phantasies and myths has taught us that anxiety about one’s eyes…is a substitute for the dread of being castrated” (36). Here he’s referring to another facet of the Oedipus complex, according to which the child keeps his sexual desire for his mother in check because of the threat of castration posed by his jealous father. It is through this fear of his father, which transforms into grudging respect, and then into emulation, that the boy learns his role as a male in society. And it is through the act of repressing his sexual desire for his mother that he first develops his unconscious, which will grow into a general repository of unwanted desires and memories (Eagleton 134).

But what led Freud to this theory of repression, which suggests that we have the ability to willfully forget troubling incidents and drive urges into some portion of our minds to which we have no conscious access? He must have arrived at an understanding of this process in the same stroke that led to his conclusions about the Oedipus complex, because, in order to put forth the idea that as children we all hated one parent and wanted to have sex with the other, he had to contend with the fact that most people find the idea repulsive. What accounts for the dramatic shift between childhood desires and those of adults? What accounts for our failure to remember the earlier stage? The concept of repression had to be firmly established before Freud could make such claims. Of course, he could have simply imported the idea from another scientific field, but there is no evidence he did so. So it seems that he relied on the same methods—psychoanalysis, dream interpretation, and the study of myths and legends—to arrive at his theories as he did to test them. Inspiration and confirmation were one and the same.

Notwithstanding Freud’s claim that the emotional power of the Oedipus legend “can only be understood” if his hypothesis about young boys wanting to have sex with their mothers and kill their fathers has “universal validity,” there is at least one alternative hypothesis which has the advantage of not being bizarre. It could be that the point of Sophocles’s drama was that fate is so powerful it can bring about exactly the eventualities we most desire to avoid. What moves audiences and readers is not any sense of recognition of repressed desires, but rather compassion for the man who despite, even because of, his heroic efforts fell into this most horrible of traps. (Should we assume that the enduring popularity of W.W. Jacobs’s story, “The Monkey’s Paw,” which tells a similarly fated story about a couple who inadvertently wish their son dead, proves that all parents want to kill their children?) The story could be moving because it deals with events we would never want to happen. It is true however that this hypothesis fails to account for why people enjoy watching such a tragedy being enacted—but then so does Freud’s. If we have spent our conscious lives burying the memory of our childhood desires because they are so unpleasant to contemplate, it makes little sense that we should find pleasure in seeing those desires acted out on stage. And assuming this alternative hypothesis is at least as plausible as Freud’s, we are left with no evidence whatsoever to support his theory of repressed childhood desires.

To be fair, Freud did look beyond the dreams and myths of men of European descent to test the applicability of his theories. In his book Totem and Taboo he inventories “savage” cultures and adduces the universality among them of a taboo against incest as further proof of the Oedipus complex. He even goes so far as to cite a rival theory put forth by a contemporary:

Westermarck has explained the horror of incest on the ground that “there is an innate aversion to sexual intercourse between persons living very closely together from early youth, and that, as such persons are in most cases related by blood, this feeling would naturally display itself in custom and law as a horror of intercourse between near kin.” (152)

To dismiss Westermarck’s theory, Freud cites J. G. Frazer, who argues that laws exist only to prevent us from doing things we would otherwise do or prod us into doing what we otherwise would not. That there is a taboo against incest must therefore signal that there is no innate aversion to incest, but rather a proclivity for it. Here it must be noted that the incest Freud had in mind includes not just lust for the mother but for sisters as well. “Psychoanalysis has taught us,” he writes, again vaguely referencing his clinical method, “that a boy’s earliest choice of objects for his love is incestuous and that those objects are forbidden ones—his mother and sister” (22). Frazer’s argument is compelling, but Freud’s test of the applicability of his theories is not the same as a test of their validity (though it seems customary in literary criticism to conflate the two).

As linguist and cognitive neuroscientist Steven Pinker explains in How the Mind Works, in tests of validity Westermarck beats Freud hands down. Citing the research of Arthur Wolf, he explains that without setting out to do so, several cultures have conducted experiments on the nature of incest aversion. Israeli kibbutzim, in which children grew up in close proximity to several unrelated agemates, and the Chinese and Taiwanese practice of adopting future brides for sons and raising them together as siblings are just two that Wolf examined. When children from the kibbutzim reached sexual maturity, even though there was no discouragement from adults for them to date or marry, they showed a marked distaste for each other as romantic partners. And compared to more traditional marriages, those in which the bride and groom grew up in conditions mimicking siblinghood were overwhelmingly “unhappy, unfaithful, unfecund, and short” (459). The effect of proximity in early childhood seems to apply to parents as well, at least when it comes to fathers’ sexual feelings for their daughters. Pinker cites research that shows the fathers who sexually abuse their daughters tend to be the ones who have spent the least time with them as infants, while the stepdads who actually do spend a lot of time with their stepdaughters are no more likely to abuse them than biological fathers are. These studies not only favor Westermarck’s theory; they also provide a counter to Frazer’s objection to it. Human societies are so complex that we often grow up in close proximity with people who are unrelated, or don’t grow up with people who are, and therefore it is necessary for there to be a cultural proscription—a taboo—against incest in addition to the natural mechanism of aversion.

Among biologists and anthropologists, what is now called the Westermarck effect has displaced Freud’s Oedipus complex as the best explanation for incest avoidance. Since Freud’s theory of childhood sexual desires has been shown to be false, the question arises of where this leaves his concept of repression. According to literary critic—and critic of literary criticism—Frederick Crews, repression came to serve in the 1980s and ’90s a role equivalent to the “spectral evidence” used in the Salem witch trials. Several psychotherapists latched on to the idea that children can store reliable information in their memories, especially when that information is too terrible for them to consciously handle. And the testimony of these therapists has led to many convictions and prison sentences. But the evidence for this notion of repression is solely clinical—modern therapists base their conclusions on interactions with patients, just as Freud did. Unfortunately, researchers outside the clinical setting are unable to find any phenomenon answering to the description of repressed but retrievable memories. Crews points out that there are plenty of people who are known to have survived traumatic experiences: “Holocaust survivors make up the most famous class of such subjects, but whatever group or trauma is chosen, the upshot of well-conducted research is always the same” (158). That upshot:

Unless a victim received a physical shock to the brain or was so starved or sleep deprived as to be thoroughly disoriented at the time, those experiences are typically better remembered than ordinary ones. (159, emphasis in original)

It seems here, as with incest aversion, Freud got the matter exactly wrong—and with devastating fallout for countless families and communities. But Freud was sketchy about whether what got repressed were memories of actual events or just fantasies. The crux of his argument was that we repress unacceptable and inappropriate drives and desires.

And the concept of repressed desires is integral to the use of psychoanalysis in literary criticism. In The Interpretation of Dreams, Freud distinguishes between the manifest content of dreams and their latent content. Having been exiled from consciousness, troublesome desires press against the bounds of the ego, Freud’s notional agent in charge of tamping down uncivilized urges. In sleep, the ego relaxes, allowing the desires of the id, from whence all animal drives emerge, an opportunity for free play. Even in dreams, though, full transparency of the id would be too disconcerting for the conscious mind to accept, so the ego disguises all the elements which surface with a kind of code. Breaking this code is the work of psychoanalytic dream interpretation. It is also the basis for Freud’s analysis of myths and the underlying principle of Freudian literary criticism. (In fact, the distinction between manifest and latent content is fundamental to many schools of literary criticism, though they each have their own version of the true nature of the latent content.) Science writer Steven Johnson compares Freud’s conception of repressed impulses to compressed gas seeping through the cracks of the ego’s defenses, emerging as slips of the tongue or baroque dream imagery. “Build up enough pressure in the chamber, though, and the whole thing explodes—into uncontrolled hysteria, anxiety, madness” (191). The release of pressure, as it were, through dreams and through various artistic media, is sanity-saving.

Johnson’s book, Mind Wide Open: Your Brain and the Neuroscience of Everyday Life, takes the popular currency of Freud’s ideas as a starting point for his exploration of modern science. The subtitle is a homage to Freud’s influential work The Psychopathology of Everyday Life. Perhaps because he is not a working scientist, Johnson is able to look past the shaky methodological foundations of psychoanalysis and examine how accurately its tenets map onto the modern findings of neuroscience. Though he sees areas of convergence, like the idea of psychic conflict and that of the unconscious in general, he has to admit in his conclusion that “the actual unconscious doesn’t quite look like the one Freud imagined” (194). Rather than a repository of repressed fantasies, the unconscious is more of a store of implicit, or procedural, knowledge. Johnson explains, “Another word for unconscious is ‘automated’—the things you do so well you don’t even notice doing them” (195). And what happens to all the pressurized psychic energy resulting from our repression of urges? “This is one of those places,” Johnson writes, “where Freud’s metaphoric scaffolding ended up misleading him” (198). Instead of a steam engine, neuroscientists view the brain as a type of ecosystem, with modules competing for resources; if a module goes unused—its neurons failing to fire—the strength of their connections diminishes.

What are the implications of this new conception of how the mind works for the interpretation of dreams and works of art? Without the concept of repressed desires, is it still possible to maintain a distinction between the manifest and latent content of mental productions? Johnson suggests that there are indeed meaningful connections that can be discovered in dreams and slips of the tongue. To explain them, he points again to the neuronal ecosystem, and to the theory that “Neurons that fire together wire together.” He writes:

These connections are not your unconscious speaking in code. They’re much closer to free-associating. These revelations aren’t the work of some brilliant cryptographer trying to get a message to the frontlines without enemy detection. They’re more like echoes, reverberations. One neuronal group fires, and a host of others join in the chorus. (200-201)

Mind Wide Open represents Johnson’s attempt to be charitable to the century-old, and now popularly recognized, ideas of psychoanalysis. But in this description of the shortcomings of Freud’s understanding of the unconscious and how it reveals itself, he effectively discredits the epistemological underpinnings of any application of psychoanalysis to art. It’s not only the content of the unconscious that Freud got outrageously wrong, but the very nature of its operations. And if Freud could so confidently look into dreams and myths and legends and find in them material that simply wasn’t there, it is cause for us to marvel at the power of his preconceptions to distort his perceptions.

Ultimately, psychoanalysis failed to move from the realm of proto-science to that of methodologically well-founded science, relegated instead to the backwater of pseudoscience by the hubris of its founder. And yet, if Freud had relied on good science, his program of interpreting literature in terms of the basic themes of human nature, and even his willingness to let literature inform his understanding of those themes, might have matured into a critical repertoire free of the obscurantist excesses and reality-denying absurdities of postmodernism. (Anthropologist Clifford Geertz once answered a postmodernist critic of his work by acknowledging that perfect objectivity is indeed impossible, but then so is a perfectly germ-free operating room; that shouldn’t stop us from trying to be as objective and as sanitary as our best methods allow.)

Critics could feasibly study the production of novels by not just one or a few authors, but a large enough sample—possibly extending across cultural divides—to analyze statistically. They could pose questions systematically to even larger samples of readers. And they could identify the themes in any poem or novel which demonstrate the essential (in the statistical sense) concerns of humanity that have been studied by behavioral scientists, themes like status-seeking, pair-bonding, jealousy, and even the overwhelming strength of the mother-infant bond. “The human race has produced only one successfully validated epistemology,” writes Frederick Crews (362). That epistemology encompasses a great variety of specific research practices, but they all hold as inviolable the common injunction “to make a sharp separation between hypothesis and evidence” (363). Despite his claims to scientific legitimacy, Freud failed to distinguish himself from other critical theorists, relying too much on his own intuitive powers, a reliance that all but guarantees succumbing to the natural human tendency to discover in complex fields precisely what you’ve come to them seeking.

Also read:

WHY SHAKESPEARE NAUSEATED DARWIN: A REVIEW OF KEITH OATLEY'S "SUCH STUFF AS DREAMS"

NICE GUYS WITH NOTHING TO SAY: BRETT MARTIN’S DIFFICULTY WITH “DIFFICULT MEN” AND THE FAILURE OF ARTS SCHOLARSHIP

GETTING GABRIEL WRONG: PART 1 OF 3

Dennis Junk

The Truth about Grownups

A poem about how some of the things young people think about old people are true… but also about how some of what old people think of young people is true too.


What you suspect of us grownups is true,

at least of some of us—we

just want you to do exactly

what we say, because we sort of

hate you for being young

and feel the balance should be 

struck by your obedience.


We want you to think what

we think—because you allowing

us to convince you makes us feel

wise and smart and like we have something

to show for all that youth we wasted.

We’re jailors and slave-drivers,

self-righteous power-trippers,

bent on punishing you for the

disappointment and mediocrity

of our lame-ass grownup lives, seeking

in our control over you some semblance of

vindication or salvation.

And, oh yes, your first thought

should be resist, escape,

recriminate—doubt and question.


Why should you follow our

instruction, respect our

decisions, follow our example—

unless you want to end up

like us?

Old and boring and bossy.

No, you’re not condemned to

be like us, not quite,

but the generations shift

with no one’s consent,

dumping you in a place

bearing no mark of your own design,

and looking around in

the vast indifference, the struggle

lost without your ever really

sensing you’d adequately

taken it up—there is

something like concern,

something like worry,

something like a genuine

wish to pass on whatever

you can of

preparedness.

All your discoveries

will seem worthy of

handing down - 

even the ones that get

thrown back in your face.

What we think of you kids

is right too, at least some of you:

you’re oblivious to

your own inconsequence—

have no sense of what

anything’s worth, can’t

imagine losing

sight of a promise

that vanishes in the distance

or recedes like a mirage

on the horizon.

Also read:
GRACIE - INVISIBLE FENCES

Secret Dancers

IN HONOR OF CHARLES DICKENS ON THE 200TH ANNIVERSARY OF HIS BIRTH

THE TREE CLIMBER: A STORY INSPIRED BY W.S. MERWIN

Dennis Junk

Bedtime Ghost Story for Adults

A couple meets a nice old lady who lives in the apartment behind theirs. Little do they know they’ll end up adopting her cat when she dies. This is the setup to the story a man tells his girlfriend when she asks him to improvise one. It’s good enough to surprise them both.

I had just moved into the place on Berry Street with my girlfriend and her two cats. A very old lady lived in the apartment behind us. She came out to the dumpster while I was breaking down boxes and throwing them in. “Can you take those out?” she asked in her creaky voice. I explained I had nowhere else to put them if I did. “But it gets filled up and I can’t get anything in there,” she complained. I said she could come knock on our door if she ever had to throw something in the dumpster and it was too full. I’d help her.

A couple nights later, just as we were about to go to bed, my girlfriend asked me to tell her a story. When we first started dating, I would improvise elaborate stories at her request—to impress her and because it was fun. I hadn’t done it in a while.

******

“There was a couple who just moved into a new apartment,” I began as we climbed into bed.

“Uh-huh,” she said, already amused.

“And this apartment was at the front part of a really old house, and there was a really old lady who lived in the apartment behind theirs. Well, they got all their stuff moved in and they thought their place was really awesome and everything was going great. And the old lady liked the couple a lot… She liked them because she liked their cat.”

“Oh, they have a cat, huh? You didn’t say anything about a cat.”

“I just did.”

“What color is this cat?”

“Orange.”

“Oh, okay.”

“What happened was that one day the cat went missing and it turned out the cat had wandered to the old lady’s porch and she let it in her apartment. And she really liked it. But the girl was like, ‘Where’s my cat?’ and she went looking for it and got all worried. Finally, she knocked on the old lady’s door and asked if she’d seen it.

“The old lady invited the girl in to give her her cat back and while they were talking the old lady was thinking, wow, I really like this girl and she has a really nice cat and I liked having the cat over here. And the old lady had grown up in New Orleans, so she and her sisters were all into voodoo and hoodoo and spells and stuff. They were witches.”

“Oh man.”

“Yeah, so the old lady was a witch. And since she liked the young girl so much she decided to do something for her, so while she was talking to her she had something in her hand. And she held up her hand and blew it in the girl’s face. It was like water and ashes or something. The girl had no idea what it was and she was really weirded out and like, ‘What the hell did she do that for?’ But she figured it was no big deal. The lady was really old and probably a little dotty. But she still kind of hurried up and got her cat and went home.

“Well, everything was normal until the boyfriend came home, and then the girl was all crazy and had to have sex with him immediately. They ended up having sex all night. And from then on it was like whenever they saw each other they couldn’t help themselves and they were just having sex all the time.”

“Oh boy.”

“Eventually, it was getting out of hand because they were both exhausted all day and they never talked to their friends and they started missing work and stuff. But they were really happy. It was great. So the girl started wondering if maybe the old lady had done something to her when she blew that stuff in her face. And then she thought maybe she should go and ask her, the old lady, if that’s what had happened. And if it was she thought, you know, she should thank her. She thought about all this for a long time, but then she would see the boyfriend and of course after that she would forget everything and eventually she just stopped thinking about it.

“Then one day their cat went missing, their other cat.”

“What color is this one?”

“Black. And, since she found the other cat at the old lady’s before, the girl thought maybe she should go and ask the old lady again. So one day when she was getting home from work she saw the old lady sitting on her porch and she goes up to talk to her. And she’s trying to make small talk and tell the old lady about the cat and ask her if she’s seen it when the old lady turns around and, like, squints and wrinkles her nose and kind of goes like this—looking back—and says, ‘You didn’t even thank me!’ before walking away and going in her door.”

“Ahh.”

“Yeah, and the girl’s all freaked out by it too.”

“Oh!—I’m gonna have to roll over and make sure she’s not out there.”

“Okay… So the girl’s all freaked out, but she’s still like, ‘Where’s my cat?’ So one time after they just had sex for like the umpteenth time she tells her boyfriend we gotta find the cat. And the boyfriend is like, ‘All right, I’m gonna go talk to this old lady and find out what the hell happened to our cat.’”

“Oh! What did you do to Mikey?”

“I didn’t do anything. Just listen… Anyway, he’s determined to find out if the cat’s in this old lady’s apartment. So he goes and knocks on her door and is all polite and everything. But the old lady just says, ‘You didn’t even thank me!’ and slams the door on him. He doesn’t know what else to do at this point so he calls the police, and he tells them that their cat’s missing and the last time, when the other cat was missing, it turned up at the old lady’s house. And he told them the old lady was acting all weird and stuff too.

“But of course the police can’t really do anything because there’s no way anyone knows the cat’s in the old lady’s house and they tell him to just wait and see if maybe the cat ran away or whatever. And the girl’s all upset and the guy’s getting all pissed off and trying to come up with some kind of scheme to get into the old lady’s house.

“–But they never actually get around to doing anything because they’re having so much sex and, even though they still miss the cat and everything, a lot of the time they almost forget about it. And it just goes on like this for a long time with the couple suspicious of the old lady and wondering where their cat is but not being able to do anything.

“And this goes on until one day—when the old lady just mysteriously dies. When the police get to her apartment, sure enough there’s the couple’s black cat.”

“Ooh, Mikey.”

“So the police come and tell the guy, you know, hey, we found your cat, just like you said. And the guy goes and gets the cat and brings it home. But while he’s in the old lady’s apartment he’s wondering the whole time about the spell she put on him and his girlfriend, and he’s a little worried that maybe since she died the spell might be broken. But he gets the cat and takes it home. And when his girlfriend comes home it’s like she gets all excited to see it, but only for like a minute, and then it’s like before and they can’t help themselves. They have to have sex.

“Well, this goes on and on and things get more and more out of hand until both of them lose their jobs, their friends just drift away because they never talk to them, and eventually they can’t pay their rent so they lose their apartment. So they get their cats and as much of their stuff as they can and they go to this spot they know by the river where some of their hippie friends used to camp. And they just live there like before, with their cats, just having sex all the time.

“One night after they just had sex again, they’re sitting by the campfire and the guy says, ‘You know, we lost our jobs and our friends and our apartment, and we’re living in the woods here by the river, and you’d think we’d be pretty miserable. But I think I have everything I need right here.’ He’s thinking about having sex again even as he’s saying this. And he’s like, ‘Really, I’m happy as hell. I don’t remember ever being this happy.’

“And the girl is like, ‘Yeah, me too. I actually kind of like living out here with you.’

“So they’re about to start having sex again when the black cat turns and looks at them and says, ‘And you didn’t even thank me!’”

Also read:

THE SMOKING BUDDHA: ANOTHER GHOST STORY FOR ADULTS (AND YOUNG ADULTS TOO)

And

THE TREE CLIMBER: A STORY INSPIRED BY W.S. MERWIN

Dennis Junk

How to be Interesting: Dead Poets and Causal Inferences

Henry James famously wrote, “The only obligation to which in advance we may hold a novel without incurring the accusation of being arbitrary, is that it be interesting.” But how does one be interesting? The answer probably lies in staying one step ahead of your readers, but every reader moves at their own pace.

No one writes a novel without ever having read one. Though storytelling comes naturally to us as humans, our appreciation of the lengthy, intricately rendered narratives we find spanning the hundreds of pages between book covers is contingent on a long history of crucial developments, literacy for instance. In the case of an individual reader, the faithfulness with which ontogeny recapitulates phylogeny will largely determine the level of interest taken in any given work of fiction. In other words, to appreciate a work, it is necessary to have some knowledge of the literary tradition to which it belongs. T.S. Eliot’s famous 1919 essay “Tradition and the Individual Talent” eulogizes great writers as breathing embodiments of the entire history of their art. “The poet must be very conscious of the main current,” Eliot writes,

which does not at all flow invariably through the most distinguished reputations. He must be quite aware of the obvious fact that art never improves, but that the material of art is never quite the same. He must be aware that the mind of Europe—the mind of his own country—a mind which he learns in time to be much more important than his own private mind—is a mind which changes, and that this change is a development which abandons nothing en route, which does not superannuate either Shakespeare, or Homer, or the rock drawing of the Magdalenian draughtsmen.

Though Eliot probably didn’t mean to suggest that to write a good poem or novel you have to have thoroughly mastered every word of world literature, a condition that would’ve excluded most efforts even at the time he wrote the essay, he did believe that to fully understand a work you have to be able to place it in its proper historical context. “No poet,” he wrote,

no artist of any art, has his complete meaning alone. His significance, his appreciation is the appreciation of his relation to the dead poets and artists. You cannot value him alone; you must set him, for contrast and comparison, among the dead.

If this formulation for what goes into the appreciation of art is valid, then as time passes and historical precedents accumulate, the burden of knowledge that must be shouldered to sustain adequate interest in or appreciation for works in the tradition gets constantly bigger. Accordingly, the number of people who can manage it gets constantly smaller.

But what if there is something like a threshold awareness of literary tradition—or even of current literary convention—beyond which the past ceases to be the most important factor influencing your appreciation for a particular work? Once your reading comprehension is up to snuff and you’ve learned how to deal with some basic strategies of perspective—first person, third person omniscient, etc.—then you’re free to interpret stories not merely as representative of some tradition but of potentially real people and events, reflective of some theme that has real meaning in most people’s lives. Far from seeing the task of the poet or novelist as serving as a vessel for artistic tradition, Henry James suggests in his 1884 essay “The Art of Fiction” that

The only obligation to which in advance we may hold a novel without incurring the accusation of being arbitrary, is that it be interesting. That general responsibility rests upon it, but it is the only one I can think of. The ways in which it is at liberty to accomplish this result (of interesting us) strike me as innumerable and such as can only suffer from being marked out, or fenced in, by prescription. They are as various as the temperament of man, and they are successful in proportion as they reveal a particular mind, different from others. A novel is in its broadest definition a personal impression of life; that, to begin with, constitutes its value, which is greater or less according to the intensity of the impression.

Writing for dead poets the way Eliot suggests may lead to works that are historically interesting. But a novel whose primary purpose is to represent, say, Homer’s Odyssey in some abstract way, a novel which, in other words, takes a piece of literature as its subject matter rather than some aspect of life as it is lived by humans, will likely only ever be interesting to academics. This isn’t to say that writers of the past ought to be ignored; rather, their continuing relevance is likely attributable to their works’ success in being interesting. So when you read Homer you shouldn’t be wondering how you might artistically reconceptualize his epics—you should be attending to the techniques that make them interesting and wondering how you might apply them in your own work, which strives to artistically represent some aspect of life. You go to past art for technical or thematic inspiration, not for traditions with which to carry on some dynamic exchange.

Representation should, as a rule of thumb, take priority over tradition. And to insist, as Eliot does, as an obvious fact or otherwise, that artistic techniques never improve is to admit defeat before taking on the challenge. But this leaves us with the question of how, beyond a devotion to faithful representations of recognizably living details, one manages to be interesting. Things tend to interest us when they’re novel or surprising. That babies direct their attention to incidents which go against their expectations is what allows us to examine what those expectations are. Babies, like their older counterparts, stare longer at bizarre occurrences. If a story consisted of nothing but surprising incidents, however, we would probably lose interest in it pretty quickly because it would strike us as chaotic and incoherent. Citing research showing that surprise is necessary for securing the interest of readers but not sufficient, Sung-Il Kim, a psychologist at Korea University, explains that whatever incongruity causes the surprise must somehow be resolved. In other words, the surprise has to make sense in the shifted context.

In Aspects of the Novel, E.M. Forster makes his famous distinction between flat and round characters with reference to the latter’s ability to surprise readers. He notes however that surprise is only half the formula, since a character who only surprises would seem merely erratic—or would seem like something other than a real person. He writes,

The test of a round character is whether it is capable of surprising in a convincing way. If it never surprises, it is flat. If it does not convince, it is a flat pretending to be round. It has the incalculability of life about it—life within the pages of a book. And by using it sometimes alone, more often in combination with the other kind, the novelist achieves his task of acclimatization and harmonizes the human race with the other aspects of his work. (78)

Kim discovered that this same dynamic is at play even in the basic unit of a single described event, suggesting that the convincing surprise is important for all aspects of the story, not just character. He went on to test the theory that what lies at the heart of our interest in these seeming incongruities that are in time resolved is our tendency to anticipate the resolution. When a brief description involves some element that must be inferred, it is considered more interesting, and it proves more memorable, than when the same incident is described in full detail without any demand for inference. However, when researchers rudely distract readers in experiments, keeping them from being able to infer, the differences in recall and reported interest vanish.

Kim proposes a “causal bridging inference” theory to explain what makes a story interesting. If there aren’t enough inferences to be made, the story seems boring and banal. But if there are too many then the reader gets overwhelmed and spaces out. “Whether inferences are drawn or not,” Kim writes,

depends on two factors: the amount of background knowledge a reader possesses and the structure of a story… In a real life situation, for example, people are interested in new scientific theories, new fashion styles, or new leading-edge products only when they have an adequate amount of background knowledge on the domain to fill the gap between the old and the new… When a story contains such detailed information that there is no gap to fill in, a reader does not need to generate inferences. In this case, the story would not be interesting even if the reader possessed a great deal of background knowledge. (69)

One old-fashioned and intuitive way of thinking about causal bridge inference theory is to see the task of a writer as keeping one or two steps ahead of the reader. If the story runs ahead by more than a few steps it risks being too difficult to follow and the reader gets lost. If it falls behind, it drags, like the boor who relishes the limelight and so stretches out his anecdotes with excruciatingly superfluous detail.

For a writer, the takeaway is that you want to shock and surprise your readers, which means making your story take unexpected, incongruous turns, but you should also seed the narrative with what in hindsight can be seen as hints to what’s to come so that the surprises never seem random or arbitrary—and so that the reader is trained to seek out further clues to make further inferences. This is what Forster meant when he said characters should change in ways that are both surprising and convincing. It’s perhaps a greater achievement to have character development, plot, and language integrated so that an inevitable surprise in one of these areas has implications for or bleeds into both the others. But we as readers can enjoy on its own an unlikely observation or surprising analogy that we discover upon reflection to be fitting. And of course we can enjoy a good plot twist in isolation too—witness Hollywood and genre fiction.

Naturally, some readers can be counted on to be better at making inferences than others. As Kim points out, this greater ability may be based on a broader knowledge base; if the author makes an allusion, for instance, it helps to know about the subject being alluded to. It can also be based on comprehension skills, awareness of genre conventions, understanding of the physical or psychological forces at play in the plot, and so on. The implication is that keeping those crucial two steps ahead, no more no less, means targeting readers who are just about as good at making inferences as you are and working hard through inspiration, planning, and revision to maintain your lead. If you’re erudite and agile of mind, you’re going to bore yourself trying to write for those significantly less so—and those significantly less so are going to find what is keenly stimulating for you to write all but impossible to comprehend.

Interestingness is also influenced by fundamental properties of stories like subject matter—Percy Fawcett explores the Amazon in search of the lost City of Z is more interesting than Margaret goes grocery shopping—and the personality traits of characters that influence the degree to which we sympathize with them. But technical virtuosity often supersedes things like topic and character. A great writer can write about a boring character in an interesting way. Interestingly, however, the benefit in interest won through mastery of technique will only be appreciated by those capable of inferring the meaning hinted at by the narration, those able to make the proper conceptual readjustments to accommodate surprising shifts in context and meaning. When mixed martial arts first became popular, for instance, audiences roared over knockouts and body slams, and yawned over everything else. But as Joe Rogan has remarked from ringside at events over the past few years, fans have become so sophisticated that they cheer when one fighter passes the other’s guard.

What this means is that no matter how steadfast your devotion to representation, assuming your skills continually develop, there will be a point of diminishing returns, a point where improving as a writer will mean your work has greater interest but to a shrinking audience. My favorite illustration of this dilemma is Steven Millhauser’s parable “In the Reign of Harad IV,” in which “a maker of miniatures” carves and sculpts tiny representations of a king’s favorite possessions. Over time, though, the miniaturist ceases to care about any praise he receives from the king or anyone else at court and begins working to satisfy an inner “stirring of restlessness.” His creations become smaller and smaller, necessitating greater and greater magnification tools to appreciate. No matter how infinitesimal he manages to make his miniatures, upon completion of each work he seeks “a farther kingdom.” It’s one of the most interesting short stories I’ve read in a while.

[Diagram: Causal Bridging Inference Model]

Dennis Junk

What's the Point of Difficult Reading?

For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them.

You sit reading the first dozen or so pages of some celebrated classic and gradually realize that having to sort out how the ends of the long sentences fix to their beginnings is taking just enough effort to distract you entirely from the setting or character you’re supposed to be getting to know. After a handful of words you swear are made up and a few tangled metaphors you find yourself riddling over with nary a resolution, the dread sinks in. Is the whole book going to be like this? Is it going to be one of those deals where you get to what’s clearly meant to be a crucial turning point in the plot but for you is just another riddle without a solution, sending you paging back through the forest of verbiage in search of some key succession of paragraphs you spaced out while reading the first time through? Then you wonder if you’re missing some other kind of key, like maybe the story’s an allegory, a reference to some historical event like World War II or some Revolution you once had to learn about but have since lost all recollection of. Maybe the insoluble similes are allusions to some other work you haven’t read or can’t recall. In any case, you’re not getting anything out of this celebrated classic but frustration leading to the dual suspicion that you’re too ignorant or stupid to enjoy great literature and that the whole “great literature” thing is just a conspiracy to trick us into feeling dumb so we’ll defer to the pseudo-wisdom of Ivory Tower elites.

If enough people of sufficient status get together and agree to extol a work of fiction, they can get almost everyone else to agree. The readers who get nothing out of it but frustration and boredom assume that since their professors or some critic in a fancy-pants magazine or the judges of some literary award committee think it’s great they must simply be missing something. They dutifully continue reading it, parrot a few points from a review that sound clever, and afterward toe the line by agreeing that it is indeed a great work of literature, clearly, even if it doesn’t speak to them personally. For instance, James Joyce’s Ulysses, utterly nonsensical to anyone without at least a master’s degree, tops the Modern Library’s list of 100 best novels in the English language. Responding to the urging of his friends to write out an explanation of the novel, Joyce scoffed, boasting,

I’ve put in so many enigmas and puzzles that it will keep the professors busy for centuries arguing over what I meant, and that’s the only way of ensuring one’s immortality.

He was right. To this day, professors continue to love him even as Ulysses and the even greater monstrosity Finnegans Wake do nothing but bore and befuddle everyone else—or else, more fittingly, sit inert or unchecked-out on the shelf, gathering well-deserved dust.

Joyce’s later novels are not literature; they are lengthy collections of loosely connected literary puzzles. But at least his puzzles have actual solutions—or so I’m told. Ulysses represents the apotheosis of the tradition in literature called modernism. What came next, postmodernism, is even more disconnected from the universal human passion for narrative. Even professors aren’t sure what to do with it, so they simply throw their hands up, say it’s great, and explain that the source of its greatness is its very resistance to explanation. Jonathan Franzen, whose 2001 novel The Corrections represented a major departure from the postmodernism he began his career experimenting with, explained the following year in The New Yorker how he’d turned away from the tradition. He’d been reading the work of William Gaddis “as a kind of penance” (101) and not getting any meaning out of it. Of the final piece in the celebrated author’s oeuvre, Franzen writes,

The novel is an example of the particular corrosiveness of literary postmodernism. Gaddis began his career with a Modernist epic about the forgery of masterpieces. He ended it with a pomo romp that superficially resembles a masterpiece but punishes the reader who tries to stay with it and follow its logic. When the reader finally says, Hey, wait a minute, this is a mess, not a masterpiece, the book instantly morphs into a performance-art prop: its fraudulence is the whole point! And the reader is out twenty hours of good-faith effort. (111)

In other words, reading postmodern fiction means not only forgoing the rewards of narratives, having them replaced by the more taxing endeavor of solving multiple riddles in succession, but those riddles don’t even have answers. What’s the point of reading this crap? Exactly. Get it?

You can dig deeper into the meaningless meanderings of pomos and discover there is in fact an ideology inspiring all the infuriating inanity. The super smart people who write and read this stuff point to the willing, eager complicity of the common reader in the propagation of all the lies that sustain our atrociously unjust society (but atrociously unjust compared to what?). Franzen refers to this as the Fallacy of the Stupid Reader,

wherein difficulty is a “strategy” to protect art from cooptation and the purpose of art is to “upset” or “compel” or “challenge” or “subvert” or “scar” the unsuspecting reader; as if the writer’s audience somehow consisted, again and again, of Charlie Browns running to kick Lucy’s football; as if it were a virtue in a novelist to be the kind of boor who propagandizes at friendly social gatherings. (109)

But if the author is worried about art becoming a commodity, does making the art shitty really amount to a solution? And if the goal is to make readers rethink something they take for granted, why not bring the matter up directly, or have a character wrestle with it, or have a character argue with another character about it? The sad fact is that these authors probably just suck, that, as Franzen suspects, “literary difficulty can operate as a smoke screen for an author who has nothing interesting, wise, or entertaining to say” (111).

Not all difficulty in fiction is a smoke screen though. Not all the literary emperors are naked. Franzen writes that “there is no headache like the headache you get from working harder on deciphering a text than the author, by all appearances, has worked on assembling it.” But the essay, titled “Mr. Difficult,” begins with a reader complaint sent not to Gaddis but to Franzen himself. And the reader, a Mrs. M. from Maryland, really gives him the business:

Who is it that you are writing for? It surely could not be the average person who enjoys a good read… The elite of New York, the elite who are beautiful, thin, anorexic, neurotic, sophisticated, don’t smoke, have abortions tri-yearly, are antiseptic, live in penthouses, this superior species of humanity who read Harper’s and The New Yorker. (100)

In this first part of the essay, Franzen introduces a dilemma that sets up his explanation of why he turned away from postmodernism—he’s an adherent of the “Contract model” of literature, whereby the author agrees to share, on equal footing, an entertaining or in some other way gratifying experience, as opposed to the “Status model,” whereby the author demonstrates his or her genius and if you don’t get it, tough. But his coming to a supposed agreement with Mrs. M. about writers like Gaddis doesn’t really resolve Mrs. M.’s conflict with him.

The Corrections, after all, the novel she was responding to, represents his turning away from the tradition Gaddis wrote in. (It must be said, though, that Freedom, Franzen’s next novel, is written in a still more accessible style.)

The first thing we must do to respond properly to Mrs. M. is break down each of Franzen’s models into two categories. The status model includes writers like Gaddis whose difficulty serves no purpose but to frustrate and alienate readers. But Franzen’s own type specimen for this model is Flaubert, much of whose writing, though difficult at first, rewards any effort to re-read and further comprehend with a more profound connection. So it is for countless other writers, the one behind number two on the Modern Library’s ranking for instance—Fitzgerald and Gatsby. As for the contract model, Franzen admits,

Taken to its free-market extreme, Contract stipulates that if a product is disagreeable to you the fault must be the product’s. If you crack a tooth on a hard word in a novel, you sue the author. If your professor puts Dreiser on your reading list, you write a harsh student evaluation… You’re the consumer; you rule. (100)

Franzen, in declaring himself a “Contract kind of person,” assumes that the free-market extreme can be dismissed for its extremity. But Mrs. M. would probably challenge him on that. For many, particularly right-leaning readers, the market not only can but should be relied on to determine which books are good and which ones belong in some tiny niche. When the Modern Library conducted a readers’ poll to create a popular ranking to balance the one made by experts, the ballot was stuffed by Ayn Rand acolytes and Scientologists. Mrs. M. herself leaves little doubt as to her political sympathies. For her and her fellow travelers, things like literature departments, National Book Awards—like the one The Corrections won—Nobels and Pulitzers are all an evil form of intervention into the sacred workings of the divine free market, un-American, sacrilegious, communist. According to this line of thinking, authors aren’t much different from whores—except, of course, that literal whoring is condemned in the Bible (except when it isn’t).

A contract with readers who score high on the personality dimension of openness to new ideas and experiences (who tend to be liberal), those who have spent a lot of time in the past reading books like The Great Gatsby or Heart of Darkness or Lolita (the horror!), those who read enough to have developed finely honed comprehension skills—that contract is going to look quite a bit different from one with readers who attend Beck University, those for whom Atlas Shrugged is the height of literary excellence. At the same time, though, the cult of self-esteem is poisoning schools and homes with the idea that suggesting that a student or son or daughter is anything other than a budding genius is a form of abuse. Heaven forbid a young person feel judged or criticized while speaking or writing. And if an author makes you feel the least bit dumb or ignorant, well, it’s an outrage—heroes like Mrs. M. to the rescue.

One of the problems with the cult of self-esteem is that anticipating criticism tends to make people more, not less, creative. And the link between low self-esteem and mental disorders is almost purely mythical. High self-esteem is correlated with school performance, but as far as researchers can tell, it’s the performance causing the esteem, not the other way around. More invidious, though, is the tendency to view anything that takes a great deal of education or intelligence to accomplish as an affront to everyone less educated or intelligent. Conservatives complain endlessly about class warfare and envy of the rich—the financially elite—but they have no qualms about decrying intellectual elites and condemning them for flaunting their superior literary achievements. They see the elitist mote in the eye of Nobel laureates without noticing the beam in their own.

What’s the point of difficult reading? Well, what’s the point of running five or ten miles? What’s the point of eating vegetables as opposed to ice cream or Doritos? Difficulty need not preclude enjoyment. And discipline in the present is often rewarded in the future. It very well may be that the complexity of the ideas you’re capable of understanding is influenced by how many complex ideas you attempt to understand. No matter how vehemently true believers in the magic of markets insist otherwise, markets don’t have minds. And though an individual’s intelligence need not be fixed, a good way to ensure children never get any smarter than they already are is to make them feel fantastically wonderful about their mediocrity. We just have to hope that, despite these ideological traps, there are enough people out there determined to wrap their minds around complex situations depicted in complex narratives about complex people told in complex language, people who will in the process develop the types of minds and intelligence necessary to lead the rest of our lazy asses into a future that’s livable and enjoyable. For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them. How do we get the rewards without wasting our time?

Dennis Junk

Can’t Win for Losing: Why There Are So Many Losers in Literature and Why It Has to Change

David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike.

Ironically, the author of The Golden Notebook, celebrating its 50th anniversary this year, and considered by many a “feminist bible,” happens to be an outspoken critic of feminism. When asked in a 1982 interview with Lesley Hazelton about her response to readers who felt some of her later works were betrayals of the women whose cause she once championed, Doris Lessing replied,

What the feminists want of me is something they haven't examined because it comes from religion. They want me to bear witness. What they would really like me to say is, ‘Ha, sisters, I stand with you side by side in your struggle toward the golden dawn where all those beastly men are no more.’ Do they really want people to make oversimplified statements about men and women? In fact, they do. I've come with great regret to this conclusion.

Lessing has also been accused of being overly harsh—“castrating”—to men, too many of whom she believes roll over a bit too easily when challenged by women aspiring to empowerment. As a famous novelist, however, who would go on to win the Nobel prize in literature in 2007, she got to visit a lot of schools, and it gradually dawned on her that it wasn’t so much that men were rolling over but rather that they were being trained from childhood to be ashamed of their maleness. In a lecture she gave to the Edinburgh book festival in 2001, she said,

Great things have been achieved through feminism. We now have pretty much equality at least on the pay and opportunities front, though almost nothing has been done on child care, the real liberation. We have many wonderful, clever, powerful women everywhere, but what is happening to men? Why did this have to be at the cost of men? I was in a class of nine- and 10-year-olds, girls and boys, and this young woman was telling these kids that the reason for wars was the innately violent nature of men. You could see the little girls, fat with complacency and conceit while the little boys sat there crumpled, apologising for their existence, thinking this was going to be the pattern of their lives.

Lessing describes how the teacher kept casting glances expectant of her approval as she excoriated these impressionable children. 

Elaine Blair, in “Great American Losers,” an essay that’s equal parts trenchant and infuriatingly obtuse, describes a dynamic in contemporary fiction that’s similar to the one Lessing saw playing out in the classroom.

The man who feels himself unloved and unlovable—this is a character that we know well from the latest generation or two of American novels. His trials are often played for sympathetic laughs. His loserdom is total: it extends to his stunted career, his squalid living quarters, his deep unease in the world.

At the heart of this loserdom is his auto-manifesting knowledge that women don’t like him. As opposed to men of earlier generations who felt entitled to a woman’s respect and admiration, Blair sees this modern male character as being “the opposite of entitled: he approaches women cringingly, bracing for a slap.” This desperation on the part of male characters to avoid offending women, to prove themselves capable of sublimating their own masculinity so they can be worthy of them, finds its source in the authors themselves. Blair writes,

Our American male novelists, I suspect, are worried about being unloved as writers—specifically by the female reader. This is the larger humiliation looming behind the many smaller fictional humiliations of their heroes, and we can see it in the way the characters’ rituals of self-loathing are tacitly performed for the benefit of an imagined female audience.

Blair quotes a review David Foster Wallace wrote of a John Updike novel to illustrate how conscious males writing literature today are of their female readers’ hostility toward men who write about sex and women without apologizing for liking sex and women—sometimes even outside the bounds of caring, committed relationships. Labeling Updike as a “Great Male Narcissist,” a distinction he shares with writers like Philip Roth and Norman Mailer, Wallace writes,

Most of the literary readers I know personally are under forty, and a fair number are female, and none of them are big admirers of the postwar GMNs. But it’s John Updike in particular that a lot of them seem to hate. And not merely his books, for some reason—mention the poor man himself and you have to jump back:

“Just a penis with a thesaurus.”

“Has the son of a bitch ever had one unpublished thought?”

“Makes misogyny seem literary the same way Rush [Limbaugh] makes fascism seem funny.”

And trust me: these are actual quotations, and I’ve heard even worse ones, and they’re all usually accompanied by the sort of facial expressions where you can tell there’s not going to be any profit in appealing to the intentional fallacy or talking about the sheer aesthetic pleasure of Updike’s prose.

Since Wallace is ready to “jump back” at the mere mention of Updike’s name, it’s no wonder he’s given to writing about characters who approach women “cringingly, bracing for a slap.”

Blair goes on to quote from Jonathan Franzen’s novel The Corrections, painting a plausible picture of male writers who fear not only that their books will be condemned if too misogynistic—a relative term which has come to mean “not as radically feminist as me”—but that they themselves will be rejected. In Franzen’s novel, Chip Lambert has written a screenplay and asked his girlfriend Julia to give him her opinion. She holds off doing so, however, until after she breaks up with him and is on her way out the door. “For a woman reading it,” she says, “it’s sort of like the poultry department. Breast, breast, breast, thigh, leg” (26). Franzen describes his character’s response to the critique:

It seemed to Chip that Julia was leaving him because “The Academy Purple” had too many breast references and a draggy opening, and that if he could correct these few obvious problems, both on Julia’s copy of the script and, more important, on the copy he’d specially laser-printed on 24-pound ivory bond paper for [the film producer] Eden Procuro, there might be hope not only for his finances but also for his chances of ever again unfettering and fondling Julia’s own guileless, milk-white breasts. Which by this point in the day, as by late morning of almost every day in recent months, was one of the last activities on earth in which he could still reasonably expect to take solace for his failures. (28)

If you’re reading a literary work like The Corrections, chances are you’ve at some point sat in a literature class—or even a sociology or cultural studies class—and been instructed that the proper way to fulfill your function as a reader is to critically assess the work in terms of how women (or minorities) are portrayed. Both Chip and Julia have sat through such classes. And you’re encouraged to express disapproval, even outrage, if something like a traditional role is enacted—or, gasp, objectification occurs. Blair explains how this affects male novelists:

When you see the loser-figure in a novel, what you are seeing is a complicated bargain that goes something like this: yes, it is kind of immature and boorish to be thinking about sex all the time and ogling and objectifying women, but this is what we men sometimes do and we have to write about it. We fervently promise, however, to avoid the mistake of the late Updike novels: we will always, always, call our characters out when they’re being self-absorbed jerks and louts. We will make them comically pathetic, and punish them for their infractions a priori by making them undesirable to women, thus anticipating what we imagine will be your judgments, female reader. Then you and I, female reader, can share a laugh at the characters’ expense, and this will bring us closer together and forestall the dreaded possibility of your leaving me.

In other words, these male authors are the grownup versions of those poor school boys Lessing saw forced to apologize for their own existence. Indeed, you can feel this dynamic, this bargain, playing out when you’re reading these guys’ books. Blair’s description of the problem is spot on. Her theory of what caused it, however, is laughable.

Because of the GMNs, these two tendencies—heroic virility and sexist condescension—have lingered in our minds as somehow yoked together, and the succeeding generations of American male novelists have to some degree accepted the dyad as truth. Behind their skittishness is a fearful suspicion that if a man gets what he wants, sexually speaking, he is probably exploiting someone.

The dread of slipping down the slope from attraction to exploitation has nothing to do with John Updike. Rather, it is embedded in terms at the very core of feminist ideology. Misogyny, for instance, is frequently deemed an appropriate label for men who indulge in lustful gazing, even in private. And the term objectification implies that the female whose subjectivity isn’t being properly revered is the victim of oppression. The main problem with this idea—and there are several—is that the term objectification is synonymous with attraction. The deluge of details about the female body in fiction by male authors can just as easily be seen as a type of confession, an unburdening of guilt by the offering up of sins. The female readers respond by assigning the writers some form of penance: never writing or thinking like that again without flagellating themselves.

The conflict between healthy male desire and disapproving feminist prudery doesn’t just play out in the tortured psyches of geeky American male novelists. A.S. Byatt, in her Booker Prize-winning novel Possession, satirizes scholars steeped in literary theory as “papery” and sterile. But the novel ends with a male scholar named Roland overcoming his theory-induced self-consciousness to initiate sex with another scholar named Maud. Byatt describes the encounter:

And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)

The literary critic Monica Flegel cites this passage as an example of how Byatt’s old-fashioned novel features “such negative qualities of the form as its misogyny and its omission of the lower class.” Flegel is particularly appalled by how “stereotypical gender roles are reaffirmed” in the sex scene. “Maud is reduced in the end,” Flegel alleges, “to being taken possession of by her lover…and assured that Roland will ‘take care of her.’” How, we may wonder, did a man assuring a woman he would take care of her become an act of misogyny?

Perhaps critics like Flegel occupy some radical fringe; Byatt’s book was, after all, a huge success with audiences and critics alike, and it did win the Booker. The novelist Martin Amis, however, isn’t one to describe his assaults as indirect. He routinely dares to feature men who actually do treat women poorly in his novels—without any authorial condemnation.

Martin Goff, the non-intervening director of the Booker Prize committee, tells the story of the 1989 controversy over whether or not Amis’s London Fields should be on the shortlist. Maggie Gee, a novelist, and Helen McNeil, a professor, simply couldn’t abide Amis’s treatment of his women characters. “It was an incredible row,” says Goff.

Maggie and Helen felt that Amis treated women appallingly in the book. That is not to say they thought books which treated women badly couldn't be good, they simply felt that the author should make it clear he didn't favour or bless that sort of treatment. Really, there was only two of them and they should have been outnumbered as the other three were in agreement, but such was the sheer force of their argument and passion that they won. David [Lodge] has told me he regrets it to this day, he feels he failed somehow by not saying, “It's two against three, Martin's on the list”.

In 2010, Amis explained his career-spanning failure to win a major literary award, despite enjoying robust book sales, thus:

There was a great fashion in the last century, and it's still with us, of the unenjoyable novel. And these are the novels which win prizes, because the committee thinks, “Well it's not at all enjoyable, and it isn't funny, therefore it must be very serious.”

Brits like Hilary Mantel and especially Ian McEwan are working to turn this dreadful trend around. But when McEwan dared to write a novel about a neurosurgeon who prevails in the end over an afflicted, less privileged tormenter, he was condemned by critic Jennifer Szalai in the pages of Harper’s Magazine for his “blithe, bourgeois sentiments.” If you’ve read Saturday, you know the sentiments are anything but blithe, and if you read Szalai’s review you’ll be taken aback by her articulate blindness.

Amis is probably right in suggesting that critics and award committees have a tendency to mistake misery for profundity. But his own case, along with several others like it, hints at something even more disturbing: a shift in the very idea of what role fictional narratives play in our lives.

The sad new reality is that, owing to the growing influence of ideologically extreme and idiotically self-righteous activist professors, literature is no longer read for pleasure and enrichment—it’s no longer even read as a challenging exercise in outgroup empathy. Instead, reading literature is supposed by many to be a ritual of male western penance. Prior to taking an interest in literary fiction, you must first be converted to the proper ideologies, made to feel sufficiently undeserving yet privileged, the beneficiary of a long history of theft and population displacement, the scion and gene-carrier of rapists and genocidaires—the horror, the horror. And you must be taught to systematically overlook and remain woefully oblivious of all the evidence that the Enlightenment was the best fucking thing that ever happened to the human species. Once you’re brainwashed into believing that so-called western culture is evil and that you’ve committed the original sin of having been born into it, you’re ready to perform your acts of contrition by reading horrendously boring fiction that forces you to acknowledge and reflect upon your own fallen state.

Fittingly, the apotheosis of this new literary tradition won the Booker in 1999, and its author, like Lessing, is a Nobel laureate. J.M. Coetzee’s Disgrace chronicles in exquisite free indirect discourse the degradation of David Lurie, a white professor in Cape Town, South Africa, beginning with his somewhat pathetic seduction of a black student, a crime for which he pays with the loss of his job, his pension, and his reputation, and moving on to the aftermath of his daughter’s rape at the hands of three black men who proceed to rob her, steal his car, douse him with spirits and light him on fire. What’s unsettling about the novel—and it is a profoundly unsettling novel—is that its structure implies that everything that David and Lucy suffer flows from his original offense of lusting after a young black woman. This woman, Melanie, is twenty years old, and though she is clearly reluctant at first to have sex with her teacher, there’s never any force involved. At one point, she shows up at David’s house and asks to stay with him. It turns out she has a boyfriend who is refusing to let her leave him without a fight. It’s only after David unheroically tries to wash his hands of the affair to avoid further harassment from this boyfriend—while stooping so low as to insist that Melanie make up a test she missed in his class—that she files a complaint against him.

David immediately comes clean to university officials and admits to taking advantage of his position of authority. But he stalwartly refuses to apologize for his lust, or even for his seduction of the young woman. This refusal makes him complicit, the novel suggests, in all the atrocities of colonialism. As he’s awaiting a hearing to address Melanie’s complaint, David gets a message:

On campus it is Rape Awareness Week. Women Against Rape, WAR, announces a twenty-four-hour vigil in solidarity with “recent victims”. A pamphlet is slipped under his door: ‘WOMEN SPEAK OUT.’ Scrawled in pencil at the bottom is a message: ‘YOUR DAYS ARE OVER, CASANOVA.’ (43)

During the hearing, David confesses to doctoring the attendance ledgers and entering a false grade for Melanie. As the attendees become increasingly frustrated with what they take to be evasions, he goes on to confess to becoming “a servant of Eros” (52). But this confession only enrages the social sciences professor Farodia Rassool:

Yes, he says, he is guilty; but when we try to get specificity, all of a sudden it is not abuse of a young woman he is confessing to, just an impulse he could not resist, with no mention of the pain he has caused, no mention of the long history of exploitation of which this is part. (53)

There’s also no mention, of course, of the fact that David has already gone through more suffering than Melanie has, or that her boyfriend deserves a great deal of the blame, or that David is an individual, not a representative of his entire race who should be made to answer for the sins of his forefathers.

After resigning from his position in disgrace, David moves out to the country to live with his daughter on a small plot of land. The attack occurs only days after he’s arrived. David wants Lucy to pursue some sort of justice, but she refuses. He wants her to move away because she’s clearly not safe, but she refuses. She even goes so far as to accuse him of being in the wrong for believing he has any right to pronounce what happened an injustice—and for thinking it is his place to protect his daughter. And if there’s any doubt about the implication of David’s complicity she clears it up. As he’s pleading with her to move away, they begin talking about the rapists’ motivation. Lucy says to her father,

When it comes to men and sex, David, nothing surprises me anymore. Maybe, for men, hating the woman makes sex more exciting. You are a man, you ought to know. When you have sex with someone strange—when you trap her, hold her down, get her under you, put all your weight on her—isn’t it a bit like killing? Pushing the knife in; exiting afterwards, leaving the body behind covered in blood—doesn’t it feel like murder, like getting away with murder? (158)

The novel is so engrossing and so disturbing that it’s difficult to tell what the author’s position is vis-à-vis his protagonist’s degradation or complicity. You can’t help sympathizing with him and feeling his treatment at the hands of Melanie, Farodia, and Lucy is an injustice. But are you supposed to question that feeling in light of the violence Melanie is threatened with and Lucy is subjected to? Are you supposed to reappraise altogether your thinking about the very concept of justice in light of the atrocities of history? Are we to see David Lurie as an individual or as a representative of western male colonialism, deserving of whatever he’s made to suffer and more?

Personally, I think David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike. It’s really no surprise that the most significant developments in the realm of narratives lately haven’t occurred in novels at all. Insofar as the cable series contributing to the new golden age of television can be said to adhere to a formula, it’s this: begin with a badass male lead who doesn’t apologize for his own existence and has no qualms about expressing his feelings toward women. As far as I know, these shows are just as popular with women viewers as they are with the guys.

When David first arrives at Lucy’s house, they take a walk and he tells her a story about a dog he remembers from a time when they lived in a neighborhood called Kenilworth.

It was a male. Whenever there was a bitch in the vicinity it would get excited and unmanageable, and with Pavlovian regularity the owners would beat it. This went on until the poor dog didn’t know what to do. At the smell of a bitch it would chase around the garden with its ears flat and its tail between its legs, whining, trying to hide…There was something so ignoble in the spectacle that I despaired. One can punish a dog, it seems to me, for an offence like chewing a slipper. A dog will accept the justice of that: a beating for a chewing. But desire is another story. No animal will accept the justice of being punished for following its instincts.

Lucy breaks in, “So males must be allowed to follow their instincts unchecked? Is that the moral?” David answers,

No, that is not the moral. What was ignoble about the Kenilworth spectacle was that the poor dog had begun to hate its own nature. It no longer needed to be beaten. It was ready to punish itself. At that point it would be better to shoot it.

“Or have it fixed,” Lucy offers. (90)

Also Read:

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

And:
FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

Dennis Junk

The Storytelling Animal: a Light Read with Weighty Implications

The Storytelling Animal is not groundbreaking. But the style of the book contributes something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams, through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be. The effect is that we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe.

A review of Jonathan Gottschall's The Storytelling Animal: How Stories Make Us Human

Vivian Paley, like many other preschool and kindergarten teachers in the 1970s, was disturbed by how her young charges always separated themselves by gender at playtime. She was further disturbed by how closely the play of each gender group hewed to the old stereotypes about girls and boys. Unlike most other teachers, though, Paley tried to do something about it. Her 1984 book Boys and Girls: Superheroes in the Doll Corner demonstrates in microcosm how quixotic social reforms can be when they’re inspired by the assumption that all behaviors are shaped solely by upbringing and culture. Eventually, Paley realized that it wasn’t the children who needed to learn new ways of thinking and behaving, but herself. What happened in her classrooms in the late 70s, developmental psychologists have reliably determined, is the same thing that happens when you put kids together anywhere in the world. As Jonathan Gottschall explains,

Dozens of studies across five decades and a multitude of cultures have found essentially what Paley found in her Midwestern classroom: boys and girls spontaneously segregate themselves by sex; boys engage in more rough-and-tumble play; fantasy play is more frequent in girls, more sophisticated, and more focused on pretend parenting; boys are generally more aggressive and less nurturing than girls, with the differences being present and measurable by the seventeenth month of life. (39)

Paley’s study is one of several you probably wouldn’t expect to find discussed in a book about our human fascination with storytelling. But, as Gottschall makes clear in The Storytelling Animal: How Stories Make Us Human, there really aren’t many areas of human existence that aren’t relevant to a discussion of the role stories play in our lives. Those rowdy boys in Paley’s classes were playing recognizable characters from current action and sci-fi movies, and the fantasies of the girls were right out of Grimm’s fairy tales (it’s easy to see why people might assume these cultural staples were to blame for the sex differences). And the play itself was structured around one of the key ingredients—really the key ingredient—of any compelling story, trouble, whether in the form of invading pirates or people trying to poison babies.

The Storytelling Animal is the book to start with if you have yet to cut your teeth on any of the other recent efforts to bring the study of narrative into the realm of cognitive and evolutionary psychology. Gottschall covers many of the central themes of this burgeoning field without getting into the weedier territories of game theory or selection at multiple levels. While readers accustomed to more technical works may balk at wading through all the author’s anecdotes about his daughters, Gottschall’s keen sense of measure and the light touch of his prose keep the book from getting bogged down in frivolousness. This applies as well to the sections in which he succumbs to a temptation any writer faces when trying to explain one or another aspect of storytelling: the temptation to pen a few abortive, experimental plots of his own.

None of the central theses of The Storytelling Animal is groundbreaking. But the style and layout of the book contribute something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion, the way most science books do. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams—which, contra Freud, are seldom centered on wish-fulfillment—through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be (or actually is, if you’ve read D.F. Wallace’s last novel about an IRS clerk). The effect is that, instead of simply having a new idea to toss around, we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe. And we appreciate just how integral story is to almost everything we do.

This gloss of Gottschall’s approach gives a sense of what is truly original about The Storytelling Animal—it doesn’t seal off narrative as discrete from other features of human existence but rather shows how stories permeate every aspect of our lives, from our dreams to our plans for the future, even our sense of our own identity. In a chapter titled “Life Stories,” Gottschall writes,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all of the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television, while they eat pork rinds dipped in Miracle Whip. (171)

If you find this observation a tad unsettling, imagine it situated on a page underneath a mug shot of John Wayne Gacy with a caption explaining how he thought of himself “more as a victim than as a perpetrator.” For the most part, though, stories follow an easily identifiable moral logic, which Gottschall demonstrates with a short plot of his own based on the hypothetical situations Jonathan Haidt designed to induce moral dumbfounding. This almost inviolable moral underpinning of narratives suggests to Gottschall that one of the functions of stories is to encourage a sense of shared values and concern for the wider community, a role similar to the one D.S. Wilson sees religion as having played, and continuing to play, in human evolution.

Though Gottschall stays away from the inside baseball stuff for the most part, he does come down firmly on one issue in opposition to at least one of the leading lights of the field. Gottschall imagines a future “exodus” from the real world into virtual story realms that are much closer to the holodecks of Star Trek than to current World of Warcraft interfaces. The assumption here is that people’s emotional involvement with stories results from audience members imagining themselves to be the protagonist. But interactive videogames are probably much closer to actual wish-fulfillment than the more passive approaches to attending to a story—hence the god-like powers and grandiose speechifying.

William Flesch challenges the identification theory in his own (much more technical) book Comeuppance. He points out that films that have experimented with a first-person approach to camera work failed to capture audiences (think of the complicated contraption that filmed Will Smith’s face as he was running from the zombies in I Am Legend). Flesch writes, “If I imagined I were a character, I could not see her face; thus seeing her face means I must have a perspective on her that prevents perfect (naïve) identification” (16). One of the ways we sympathize with others, though, is to mirror them—to feel, at least to some degree, their pain. That makes the issue a complicated one. Flesch believes our emotional involvement comes not from identification but from a desire to see virtuous characters come through the troubles of the plot unharmed, vindicated, maybe even rewarded. Attending to a story therefore entails tracking characters’ interactions to see if they are in fact virtuous, then hoping desperately to see their virtue rewarded.

Gottschall does his best to avoid dismissing the typical obsessive LARPer (live-action role player) as the “stereotypical Dungeons and Dragons player” who “is a pimply, introverted boy who isn’t cool and can’t play sports or attract girls” (190). And he does his best to end his book on an optimistic note. But the exodus he writes about may be an example of another phenomenon he discusses. First the optimism:

Humans evolved to crave story. This craving has, on the whole, been a good thing for us. Stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures. Stories have been a great boon to our species. (197)

But he then makes an analogy with food cravings, which likewise evolved to serve a beneficial function yet in the modern world are wreaking havoc with our health. Just as there is junk food, so there is such a thing as “junk story,” possibly leading to what Brian Boyd, another luminary in evolutionary criticism, calls a “mental diabetes epidemic” (198). In the context of America’s current education woes, and with how easy it is to conjure images of glazy-eyed zombie students, the idea that video games and shows like Jersey Shore are “the story equivalent of deep-fried Twinkies” (197) makes an unnerving amount of sense.

Here, as in the section on how our personal histories are more fictionalized rewritings than accurate recordings, Gottschall manages to achieve something the playful tone and off-handed experimentation don't prepare you for. The surprising accomplishment of this unassuming little book (200 pages) is that it never stops being a light read even as it takes on discoveries with extremely weighty implications. The temptation to eat deep-fried Twinkies is only going to get more powerful as story-delivery systems become more technologically advanced. Might we have already begun the zombie apocalypse without anyone noticing—and, if so, are there already heroes working to save us we won’t recognize until long after the struggle has ended and we’ve begun weaving its history into a workable narrative, a legend?

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

Dennis Junk

Madness and Bliss: Critical versus Primitive Readings in A.S. Byatt’s Possession: a Romance

Possession can be read as the novelist’s narrative challenge to the ascendant critical theories, an “undermining of facile illusions” about language and culture and politics—a literary refutation of current approaches to literary criticism.

Part 1 of 2

“You have one of the gifts of the novelist at least,” Christabel LaMotte says to her cousin Sabine de Kercoz in A.S. Byatt’s Possession: a Romance, “you persist in undermining facile illusions” (377). LaMotte is staying with her uncle and cousin, Sabine later learns, because she is carrying the child of the renowned, and married, poet Randolph Henry Ash. The affair began when the two met at a breakfast party where they struck up an impassioned conversation that later prompted Ash to instigate a correspondence. LaMotte too was a poet, so each turned out to be an ideal reader for the other’s work. Just over a hundred years after this initial meeting, in the present day of Byatt’s narrative, the literary scholar Roland Michell finds two drafts of Ash’s first letter to LaMotte tucked away in the pages of a book he’s examining for evidence about the great poet’s life, and the detective work begins.

Roland, an unpaid research assistant financially dependent on the girlfriend he’s in a mutually unfulfilling relationship with, is overtaken with curiosity and embarks on a quest to piece together the story of what happened between LaMotte and Ash. Knowing next to nothing about LaMotte, Michell partners with the feminist scholar Maud Bailey, whom one character describes as “a chilly mortal” (159), and a stilted romance develops between them as they seek out the clues to the earlier, doomed relationship. Through her juxtaposition of the romance between the intensely passionate, intensely curious nineteenth-century couple and the subdued, hyper-analytic, and sterile modern one, the novelist Byatt does some undermining of facile illusions of her own.

Both of the modern characters are steeped in literary theory, but Byatt’s narrative suggests that their education and training are more a hindrance than an aid to true engagement with literature, and with life. It is only by breaking with professional protocol—by stealing the drafts of the letter from Ash to LaMotte—and breaking away from his mentor and fellow researchers that Roland has a chance to read, and experience, the story that transforms him. “He had been taught that language was essentially inadequate, that it could never speak what was there, that it only spoke itself” (513). But over the course of the story Roland comes to believe that this central tenet of poststructuralism is itself inadequate, along with the main tenets of other leading critical theories, including psychoanalysis. Byatt, in a later book of criticism, counts herself among the writers of fiction who “feel that powerful figures in the modern critical movements feel almost a gladiatorial antagonism to the author and the authority the author claims” (6).

Indeed, Possession can be read as the novelist’s narrative challenge to the ascendant critical theories, an “undermining of facile illusions” about language and culture and politics—a literary refutation of current approaches to literary criticism. In the two decades since the novel’s publication, critics working in these traditions have been unable to adequately respond to Byatt’s challenge because they’ve been unable to imagine that their ideas are not simply impediments to pleasurable reading but that they’re both wrong and harmful to the creation and appreciation of literature.

The possession of the title refers initially to how the story of LaMotte and Ash’s romance takes over Maud and Roland—in defiance of the supposed inadequacy of language. If words only speak themselves, then true communication would be impossible. But, as Roland says to Maud after they’ve discovered some uncanny correspondences between each of the two great poets’ works and the physical setting the modern scholars deduce they must’ve visited together, “People’s minds do hook together” (257). This hooking-together is precisely what inspires them to embark on their mission of discovery in the first place. “I want to—to—follow the path,” Maud says to Roland after they’ve read the poets’ correspondence together.

I feel taken over by this. I want to know what happened, and I want it to be me that finds out. I thought you were mad when you came to Lincoln with your piece of stolen letter.

Now I feel the same. It isn’t professional greed. It’s something more primitive. (239)

Roland interrupts to propose the label “Narrative curiosity” for her feeling of being taken over, to which she responds, “Partly” (239). Later in the story, after several more crucial discoveries, Maud proposes revealing all they’ve learned to their academic colleagues and returning to their homes and their lives. Roland worries doing so would mean going back “Unenchanted.” “Are we enchanted?” Maud replies. “I suppose we must start thinking again, sometime” (454). But it’s the primitive, enchanted, supposedly unthinking reading of the biographical clues about the poets that has brought the two scholars to where they are, and their journey ends up resulting in a transformation that allows Maud and Roland to experience the happy ending LaMotte and Ash were tragically deprived of.

Before discovering and being possessed by the romance of the nineteenth century poets, both Maud and Roland were living isolated and sterile lives. Maud, for instance, always has her hair covered in a kind of “head-binding” and twisted in tightly regimented braids that cause Roland “a kind of sympathetic pain on his own skull-skin” (282). She later reveals that she has to cover it because her fellow feminists always assume she’s “dyeing it to please men.” “It’s exhausting,” Roland has just said. “When everything’s a deliberate political stance. Even if it’s interesting” (295). Maud’s bound head thus serves as a symbol (if read in precisely the type of way Byatt’s story implicitly admonishes her audience to avoid) of the burdensome and even oppressive nature of an ideology that supposedly works for the liberation and wider consciousness of women.

Meanwhile, Roland is troubling himself about the implications of his budding romantic feelings for Maud. He has what he calls a “superstitious dread” of “repeating patterns,” a phrase he repeats over and over again throughout the novel. Thinking of his relations with Maud, he muses,

“Falling in love,” characteristically, combs the appearances of the world, and of the particular lover’s history, out of a random tangle and into a coherent plot. Roland was troubled that the opposite might be true. Finding themselves in a plot, they might suppose it appropriate to behave as though it was a sort of plot. And that would be to compromise some kind of integrity they had set out with. (456)

He later wrestles with the idea that “a Romance was one of the systems that controlled him, as the expectations of Romance control almost everyone in the Western world” (460). Because of his education, he cannot help doubting his own feelings, suspecting that giving in to their promptings would have political implications, and worrying that doing so would result in a compromising of his integrity (which he must likewise doubt) and his free will. Roland’s self-conscious lucubration forms a stark contrast to what Randolph Henry Ash wrote in an early letter to his wife Ellen: “I cannot get out of my mind—as indeed, how should I wish to, whose most ardent desire is to be possessed entirely by the pure thought of you—I cannot get out of my mind the entire picture of you” (500). It is only by reading letters like this, and by becoming more like Ash, turning away in the process from his modern learning, that Roland can come to an understanding of himself and accept his feelings for Maud as genuine and innocent.

Identity for modern literary scholars, Byatt suggests, is a fraught and complicated issue. At different points in the novel, both Maud and Roland engage in baroque, abortive efforts to arrive at a sense of who they are. Maud, reflecting on how another scholar’s writing about Ash says more about the author than about the subject, meditates,

Narcissism, the unstable self, the fractured ego, Maud thought, who am I? A matrix for a susurration of texts and codes? It was both a pleasant and an unpleasant idea, this requirement that she think of herself as intermittent and partial. There was the question of the awkward body. The skin, the breath, the eyes, the hair, their history, which did seem to exist. (273)

Roland later echoes this head-binding poststructuralist notion of the self as he continues to dither over whether or not he should act on his feelings for Maud.

Roland had learned to see himself, theoretically, as a crossing-place for a number of systems, all loosely connected. He had been trained to see his idea of his “self” as an illusion, to be replaced by a discontinuous machinery and electrical message-network of various desires, ideological beliefs and responses, language forms and hormones and pheromones. Mostly he liked this. He had no desire for any strenuous Romantic self-assertion. (459)

But he wrongly takes that lack of desire for self-assertion as genuine, when in fact it is born of his theory-induced self-doubt. He will have to discover in himself that very desire to assert or express himself if he wants to escape his lifeless, menial occupation and end his sexless isolation. He and Maud both have to learn how to integrate their bodies and their desires into their conceptions of themselves.

Unfortunately, thinking about sex is even more fraught with exhausting political implications for Byatt’s scholars than thinking about the self. While on a trek to retrace the steps they believe LaMotte and Ash took in the hills of Yorkshire, Roland considers the writing of a psychoanalytic theorist. Disturbed, he asks Maud, “Do you never have the sense that our metaphors eat up our world?” (275). He goes on to explain that, no matter what they tried to discuss,

It all reduced like boiling jam to—human sexuality… And then, really, what is it, what is this arcane power we have, when we see everything is human sexuality? It’s really powerlessness… We are so knowing… Everything relates to us and so we’re imprisoned in ourselves—we can’t see things. (276)

The couple is coming to realize that they can in fact see things, the same things that the couple whose story they're tracking down saw over a century ago. This budding realization inspires in Roland an awareness of how limiting, even incapacitating, the dubious ideas of critical theorizing can be. Through the distorting prism of psychoanalysis, “Sexuality was like thick smoked glass; everything took on the same blurred tint through it. He could not imagine a pool with stones and water” (278).

The irony is that, for all its faux sophistication, psychoanalytic sexual terminology engenders in both Roland and Maud nothing but bafflement and an aversion to actual sex. Roland highlights this paradox later, thinking,

They were children of a time and culture that mistrusted love, “in love,” romantic love, romance in toto, and which nevertheless in revenge proliferated sexual language, linguistic sexuality, analysis, dissection, deconstruction, exposure. (458)

Maud sums up the central problem when she says to Roland, “And desire, that we look into so carefully—I think all the looking-into has some very odd effects on the desire” (290). In that same scene, while still in Yorkshire trying to find evidence of LaMotte’s having accompanied Ash on his trip, the two modern scholars discover they share a fantasy, not a sexual fantasy, but one involving “An empty clean bed,” “An empty bed in an empty room,” and they wonder if “they’re symptomatic of whole flocks of exhausted scholars and theorists” (290-1).

Guided by their intense desire to be possessed by the two poets of the previous century, Maud and Roland try to imagine how they would have seen the world, and in so doing they try to imagine what it would be like not to believe in the poststructuralist and psychoanalytic theories they’ve been inculcated with. At first Maud tells Roland, “We live in the truth of what Freud discovered. Whether or not we like it. However we’ve modified it. We aren’t really free to suppose—to imagine—he could possibly have been wrong about human nature” (276). But after they’ve discovered a cave with a pool whose reflected light looks like white fire, a metaphor that both LaMotte and Ash used in poems written around the time they would’ve come to that very place, prompting Maud to proclaim, “She saw this. I’m sure she saw this” (289), the two begin trying in earnest to imagine what it would be like to live without their theories. Maud explains to Roland,

We know all sorts of things, too—about how there isn’t a unitary ego—how we’re made up of conflicting, interacting systems of things—and I suppose we believe that? We know we’re driven by desire, but we can’t see it as they did, can we? We never say the word Love, do we—we know it’s a suspect ideological construct—especially Romantic Love—so we have to make a real effort of imagination to know what it felt like to be them, here, believing in these things—Love—themselves—that what they did mattered—(290)

Though many critics have pointed out how the affair between LaMotte and Ash parallels the one between Maud and Roland, in some ways the trajectories of the two relationships run in opposite directions. For instance, LaMotte leaves Ash as even more of a “chilly mortal” (310) than she was when she first met him. It turns out the term derives from a Mrs. Cammish, who lodged LaMotte and Ash while they were on their trip, and was handed down to the Lady Bailey, Maud’s relative, who applies it to her in a conversation with Roland. And whereas the ultimate falling out between LaMotte and Ash comes in the wake of Ash exposing a spiritualist, whose ideas and abilities LaMotte had invested a great deal of faith in, as a fraud, Roland’s counterpart disillusionment, his epiphany that literary theory as he has learned it is a fraud, is what finally makes the consummation of his relationship with Maud possible. Maud too has to overcome, to a degree, her feminist compunctions to be with Roland. Noting how this chilly mortal is warming over the course of their quest, Roland thinks how, “It was odd to hear Maud Bailey talking wildly of madness and bliss” (360). But at last she lets her hair down.

Sabine’s journal of the time her cousin Christabel stayed with her and her father on the Brittany coast, where she’d sought refuge after discovering she was pregnant, offers Roland and Maud a glimpse at how wrongheaded it can be to give precedence to their brand of critical reading over what they would consider a more primitive approach. Ironically, it is the young aspiring writer who gives them this glimpse as she chastises her high-minded poet cousin for her attempts to analyze and explain the meanings of the myths and stories she’s grown up with. “The stories come before the meanings,” Sabine insists to Christabel. “I do not believe all these explanations. They diminish. The idea of Woman is less than brilliant Vivien, and the idea of Merlin will not allegorise into male wisdom. He is Merlin” (384). These words come from the same young woman whom LaMotte earlier credited for her persistence “in undermining facile illusions” (377).

Readers of Byatt’s novel, though not Maud and Roland, both of whom likely already know of the episode, learn how Ash attended a séance and, reaching up to grab a supposedly levitating wreath, revealed it to be attached to a set of strings connected to the spiritualist. In a letter to Ruskin, which another modern scholar reads on behalf of Byatt’s readers, Ash expresses his outrage that someone would exploit the credulity and longing of the bereaved, especially mothers who’ve lost children. “If this is fraud, playing on a mother’s harrowed feelings, it is wickedness indeed” (423). He also wonders what the ultimate benefit would be if spiritualist inquiries into other realms proved to be valid. “But if it were so, if the departed spirits were called back—what good does it do? Were we meant to spend our days sitting and peering into the edge of the shadows?” (422). LaMotte, deeply disturbed by the revelation, parts ways with Ash for good after his exposure of the spiritualist as a charlatan. And, for the reader, the interlude serves as a reminder of past follies that today are widely acknowledged to have depended on trickery and impassioned credulity. So it might one day be for the ideas of Freud and Derrida and Lacan.

Roland arrives at the conclusion that this is indeed the case. Having been taught that language is inadequate, that it only ever speaks of itself and can’t speak about what really exists in the world, he gradually realizes he’s been disabused of the idea. “What happened to him was that the ways in which it could be said had become more interesting than the idea that it could not” (513). He has learned through his quest to discover what occurred between LaMotte and Ash that “It is possible for a writer to make, or remake at least, for a reader, the primary pleasures of eating, or drinking, or looking on, or sex.” People’s minds do in fact “hook together,” as he’d observed earlier, and they do it through language. Near the end of the book, the novel’s narrator intrudes to explain what Roland is coming to understand.

Now and then there are readings that make the hairs on the neck, the non-existent pelt, stand on end and tremble, when every word burns and shines hard and clear and infinite and exact, like stones of fire, like points of stars in the dark—readings when the knowledge that we shall know the writing differently or better or satisfactorily, runs ahead of any capacity to say what we know, or how. In these readings, a sense that the text has appeared to be wholly new, never before seen, is followed, almost immediately, by the sense that it was always there, that we the readers, knew it was always there, and have always known it was as it was, though we have now for the first time recognised, become fully cognisant of, our knowledge. (512) (Neuroscientists agree.)

The recognition the narrator refers to—which Roland is presumably experiencing in the scene—is of a shared human nature, and shared human experience, the notions of which are considered by most literary critics to be politically reactionary.

Though he earlier claimed to have no desire to assert himself, Roland discovers he has a desire to write poetry, and he decides to turn away from literary scholarship altogether to pursue it. He also asserts himself by finally taking charge and initiating sex with Maud.

And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)

This is in fact, except for a postscript focusing on Ash, the final scene of the novel, and it represents Roland’s total, and Maud’s partial, transcendence of the theories and habits that had hitherto made their lives so barren and lonely.

Read part 2

Related posts:

POSTSTRUCTURALISM: BANAL WHEN IT'S NOT BUSY BEING ABSURD

CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

Dennis Junk

Useless Art and Beautiful Minds

The paradox of art is that the artist conveys more of him or herself by focusing on the subject of the work. The artists who cast the most powerful spells are the ones who can get the most caught up in things other than themselves. Falling in love exposes as much of the lover as it does of the one who’s loved.

When you experience a work of art you can’t help imagining the mind behind it. Many people go so far as to imagine the events of the natural world as attempts by incorporeal minds to communicate their intentions. Creating art demands effort and clear intentionality, and so we automatically try to understand the creator’s message. Ghanaian artist El Anatsui says the difference between the tools and consumer artifacts that go into his creations and the creations themselves is that you are meant to use bottle caps and can lids, but you are meant to contemplate works of art. 

Ai Weiwei’s marble sculpture of a surveillance camera, for instance, takes its form from an object that has a clear function and transforms it into an object that stands inert, useless but for the irresistible compulsion it evokes to ponder what it means to live under the watchful gaze of an oppressive state. Mastering the challenge posed by his tormenters, taking their tools and turning them into objects of beauty and contemplation, is an obvious intention and thus an obvious message. We look at the sculpture and we feel we understand what Ai Weiwei meant in creating it. 

Not all art is conducive to such ease of recognition, and sometimes unsettledness of meaning is its own meaning. We are given to classifying objects or images primarily by their use. Asking the question, what is this, is usually the same as asking, what is it for? If we see an image surrounded by a frame hanging on a wall, even the least artistically inclined of us will assume the picture in some way pleases the man or woman who put it there. It could be a picture of a loved one. It could be an image whose symmetry and colors and complexity strike most people as beautiful. It could signify some aspect of group identity.

Not all art pleases, and sometimes the artist’s intention is to disturb. John Keats believed what he called negative capability, a state in which someone “is capable of being in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason,” to be central to the creation and appreciation of art. If everything we encounter fits neatly into our long-established categories, we will never experience such uncertainty. The temptation is always to avoid challenges to what we know because being so challenged can be profoundly discomfiting. But if our minds are never challenged they atrophy.

While artists often challenge us to contemplate topics like the slave trade’s connection to the manufacture of beer, the relation between industrial manufacturing and the natural world, or surveillance and freedom of expression, the mystery that lies at the foundation of art is the mind of the artist. Once we realize we’re to have an experience of art, we stop wondering what it’s for and begin pondering what it means.

Art isn’t, however, simply an expression of the artist’s thoughts or emotions, and neither should it be considered merely an attempt at rendering some aspect of the real world through one of the representational media. How Anatsui conveys his message about the slave trade is just as important as any attempt to decipher what that message might be. The paradox of art is that the artist conveys more of him or herself by focusing on the subject of the work. The artists who cast the most powerful spells are the ones who can get the most caught up in things other than themselves. Falling in love exposes as much of the lover as it does of the one who’s loved.

Music and narrative arts rely on the dimension of time, so they illustrate the point more effectively than static arts like painting or sculpture. The pace of the rhythms and the pitch of voices and instruments convey emotion with immediacy and force. Musicians must to some degree experience the emotions they hope to spread through their performances (though they may begin in tranquility and only succumb afterward, affected by their own performance). They are like actors. But audiences do not assume that the man who plays low-pitched, violent music is angry when the song is over. Nor do they assume the singer who croons a plangent love song is at that moment in her life in the throes of an infatuation. The musicians throw themselves into their performances, and perhaps into the writing and composing of the songs, and, to the extent that we forget we’re listening to a performer as we feel or relive the anger or the pangs of love their music immerses us in, they achieve a transcendence we recognize as an experience of true art. We in the audience attribute that transcendence to the musicians, and infer that even though they may not currently be in the state their song inspired they must know a great deal about it.

Likewise a fiction writer accomplishes the most by betraying little or no interest in him or herself. Line by line, scene by scene, if the reader is thinking of the author and not the characters the work is a failure. When the story ceases to be a story and the fates of characters become matters of real concern for the reader, the author has achieved that same artistic transcendence as the musician whose songs take hold of our hearts and make us want to rage, to cry, to dance. But, finishing the chapter, leaving the company of the characters, reemerging from the story, we can marvel at the seeming magic that so consumed us. Contemplating the ultimate outcomes as the unfolding of the plot comes to an end, we’re free to treat the work holistically and see in it the vision of the writer.

The presence of the artist’s mind need not distract from the subject we are being asked to contemplate. But all art can be thought of as an exercise in empathy. More than that, though, the making strange of familiar objects and experiences, the communion of minds assumed to be separated by some hitherto unbridgeable divide, these experiences inspire an approach to living and socializing that is an art in its own right. Sometimes to see something clearly we have to be able to imagine it otherwise. To really hear someone else we have to appreciate ourselves. To break free of our own habits, it helps to know how others think and live.

Also read:
WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

Dennis Junk

The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind

Jonathan Haidt extends an olive branch to conservatives by acknowledging their morality has more dimensions than the morality of liberals. But is he mistaking what’s intuitive for what’s right? A critical, yet admiring review of The Righteous Mind.

A Review of Jonathan Haidt's new book,

The Righteous Mind: Why Good People are Divided by Politics and Religion

Back in the early 1950s, Muzafer Sherif and his colleagues conducted a now-infamous experiment that validated the central premise of Lord of the Flies. Two groups of 12-year-old boys were brought to a camp called Robber’s Cave in southern Oklahoma where they were observed by researchers as the members got to know each other. Each group, unaware at first of the other’s presence at the camp, spontaneously formed a hierarchy, and they each came up with a name for themselves, the Eagles and the Rattlers. That was the first stage of the study. In the second stage, the two groups were gradually made aware of each other’s presence, and then they were pitted against each other in several games like baseball and tug-o-war. The goal was to find out if animosity would emerge between the groups. This phase of the study had to be brought to an end after the groups began staging armed raids on each other’s territory, wielding socks they’d filled with rocks. Prepubescent boys, this and several other studies confirm, tend to be highly tribal.

            So do conservatives.

           This is what University of Virginia psychologist Jonathan Haidt heroically avoids saying explicitly for the entirety of his new 318-page, heavily endnoted The Righteous Mind: Why Good People Are Divided by Politics and Religion. In the first of three parts, he takes on ethicists like John Stuart Mill and Immanuel Kant, along with the so-called New Atheists like Sam Harris and Richard Dawkins, because, as he says in a characteristically self-undermining pronouncement, “Anyone who values truth should stop worshipping reason” (89). Intuition, Haidt insists, is more worthy of focus. In part two, he lays out evidence from his own research showing that all over the world judgments about behaviors rely on a total of six intuitive dimensions, all of which served some ancestral, adaptive function. Conservatives live in “moral matrices” that incorporate all six, while liberal morality rests disproportionally on just three. At times, Haidt intimates that more dimensions is better, but then he explicitly disavows that position. He is, after all, a liberal himself. In part three, he covers some of the most fascinating research to emerge from the field of human evolutionary anthropology over the past decade and a half, concluding that tribalism emerged from group selection and that without it humans never would have become, well, human. Again, the point is that tribal morality—i.e. conservatism—cannot be all bad.

One of Haidt’s goals in writing The Righteous Mind, though, was to improve understanding on each side of the central political divide by exploring, and even encouraging an appreciation for, the moral psychology of those on the rival side. Tribalism can’t be all bad—and yet we need much less of it in the form of partisanship. “My hope,” Haidt writes in the introduction, “is that this book will make conversations about morality, politics, and religion more common, more civil, and more fun, even in mixed company” (xii). Later he identifies the crux of his challenge, “Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide” (49). There are plenty of books by conservative authors which gleefully point out the contradictions and errors in the thinking of naïve liberals, and there are plenty by liberals returning the favor. What Haidt attempts is a willful disregard of his own politics for the sake of transcending the entrenched divisions, even as he’s covering some key evidence that forms the basis of his beliefs. Not surprisingly, he gives the impression at several points throughout the book that he’s either withholding the conclusions he really draws from the research or exercising great discipline in directing his conclusions along paths amenable to his agenda of bringing about greater civility.

Haidt’s focus is on intuition, so he faces the same challenge Daniel Kahneman did in writing Thinking, Fast and Slow: how to convey all these different theories and findings in a book people will enjoy reading from first page to last? Kahneman’s attempt was unsuccessful, but his encyclopedic book is still readable because its topic is so compelling. Haidt’s approach is to discuss the science in the context of his own story of intellectual development. The product reads like a postmodern hero’s journey in which the unreliable narrator returns right back to where he started, but with a heightened awareness of how small his neighborhood really is. It’s a riveting trip down the rabbit hole of self-reflection where the distinction between is and ought gets blurred and erased and reinstated, as do the distinctions between intuition and reason, and even self and other. Since, as Haidt reports, liberals tend to score higher on the personality trait called openness to new ideas and experiences, he seems to have decided on a strategy of uncritically adopting several points of conservative rhetoric—like suggesting liberals are out-of-touch with most normal people—in order to subtly encourage less open members of his audience to read all the way through. Who, after all, wants to read a book by a liberal scientist pointing out all the ways conservatives go wrong in their thinking?

The Elephant in the Room

Haidt’s first move is to challenge the primacy of thinking over intuiting. If you’ve ever debated someone into a corner, you know simply demolishing the reasons behind a position will pretty much never be enough to change anyone’s mind. Citing psychologist Tom Gilovich, Haidt explains that when we want to believe something, we ask ourselves, “Can I believe it?” We begin a search, “and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have justification, in case anyone asks.” But if we don’t like the implications of, say, global warming, or the beneficial outcomes associated with free markets, we ask a different question: when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. (84)

Haidt’s early research was designed to force people into making weak moral arguments so that he could explore the intuitive foundations of judgments of right and wrong. When presented with stories involving incest, or eating the family dog, which in every case were carefully worded to make it clear no harm would result to anyone—the incest couldn’t result in pregnancy; the dog was already dead—“subjects tried to invent victims” (24). It was clear that they wanted there to be a logical case based on somebody getting hurt so they could justify their intuitive answer that a wrong had been done.

They said things like ‘I know it’s wrong, but I just can’t think of a reason why.’ They seemed morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively. These subjects were reasoning. They were working quite hard reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. (25)

Reading this section, you get the sense that people come to their beliefs about the world and how to behave in it by asking the same three questions they’d ask before deciding on a t-shirt: how does it feel, how much does it cost, and how does it make me look? Quoting political scientist Don Kinder, Haidt writes, “Political opinions function as ‘badges of social membership.’ They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support” (86)—or like the skinny jeans showing everybody how hip you are.

Kahneman uses the metaphor of two systems to explain the workings of the mind. System 1, intuition, does most of the work most of the time. System 2 takes a lot more effort to engage and can never manage to operate independently of intuition. Kahneman therefore proposes educating your friends about the common intuitive mistakes—because you’ll never recognize them yourself. Haidt uses the metaphor of an intuitive elephant and a cerebrating rider. He first used this image for an earlier book on happiness, so the use of the GOP mascot was accidental. But because of the more intuitive nature of conservative beliefs it’s appropriate. Far from saying that Republicans need to think more, though, Haidt emphasizes the point that rational thought is never really rational and never anything but self-interested. He argues,

the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (46)

The futility of trying to avoid motivated reasoning provides Haidt some justification of his own to engage in what can only be called pandering. He cites cultural psychologists Joe Henrich, Steve Heine, and Ara Norenzayan, who argued in their 2010 paper “The Weirdest People in the World?” that researchers need to do more studies with culturally diverse subjects. Haidt commandeers the acronym WEIRD—western, educated, industrialized, rich, and democratic—and applies it somewhat derisively for most of his book, even though it applies both to him and to his scientific endeavors. Of course, he can’t argue that what’s popular is necessarily better. But he manages to convey that attitude implicitly, even though he can’t really share the attitude himself.

Haidt is at his best when he’s synthesizing research findings into a holistic vision of human moral nature; he’s at his worst, his cringe-inducing worst, when he tries to be polemical. He succumbs to his most embarrassingly hypocritical impulses in what are transparently intended to be concessions to the religious and the conservative. WEIRD people are more apt to deny their intuitive, judgmental impulses—except where harm or oppression are involved—and insist on the fair application of governing principles derived from reasoned analysis. But apparently there’s something wrong with this approach:

Western philosophy has been worshipping reason and distrusting the passions for thousands of years. There’s a direct line running from Plato through Immanuel Kant to Lawrence Kohlberg. I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. (28)

This is disingenuous. For one thing, he doesn’t refer to the rationalist delusion throughout the book; it only shows up one other time. Both instances implicate the New Atheists. Haidt coins the term rationalist delusion in response to Dawkins’s The God Delusion. An atheist himself, Haidt is throwing believers a bone. To make this concession, though, he’s forced to seriously muddle his argument. “I’m not saying,” he insists,

we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. (90)

As far as I know, neither Harris nor Dawkins has ever declared himself dictator of reason—nor, for that matter, did Mill or Rawls (Hitchens might have). Haidt, in his concessions, is guilty of making points against arguments that were never made. He goes on to make a point similar to Kahneman’s.

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. (90)

What Haidt probably realizes but isn’t saying is that the environment he’s describing is a lot like scientific institutions in academia. In other words, if you hang out in them, you’ll be WEIRD.

A Taste for Self-Righteousness

The divide over morality can largely be reduced to the differences between the urban educated and the poor not-so-educated. As Haidt says of his research in South America, “I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university” (22). One of the major differences he and his research assistants serendipitously discovered was that educated people think it’s normal to discuss the underlying reasons for moral judgments while everyone else in the world—who isn’t WEIRD—thinks it’s odd:

But what I didn’t expect was that these working-class subjects would sometimes find my request for justifications so perplexing. Each time someone said that the people in a story had done something wrong, I asked, “Can you tell me why that was wrong?” When I had interviewed college students on the Penn campus a month earlier, this question brought forth their moral justifications quite smoothly. But a few blocks west, this same question often led to long pauses and disbelieving stares. Those pauses and stares seemed to say,

You mean you don’t know why it’s wrong to do that to a chicken? I have to explain it to you? What planet are you from? (95)

The Penn students “were unique in their unwavering devotion to the ‘harm principle,’” Mill’s dictum that laws are only justified when they prevent harm to citizens. Haidt quotes one of the students as saying, “It’s his chicken, he’s eating it, nobody is getting hurt” (96). (You don’t want to know what he did before cooking it.)

Having spent a little bit of time with working-class people, I can make a point that Haidt overlooks: they weren’t just looking at him as if he were an alien—they were judging him. In their minds, he was wrong just to ask the question. The really odd thing is that even though Haidt is the one asking the questions he seems at points throughout The Righteous Mind to agree that we shouldn’t ask questions like that:

There’s more to morality than harm and fairness. I’m going to try to convince you that this principle is true descriptively—that is, as a portrait of the moralities we see when we look around the world. I’ll set aside the question of whether any of these alternative moralities are really good, true, or justifiable. As an intuitionist, I believe it is a mistake to even raise that emotionally powerful question until we’ve calmed our elephants and cultivated some understanding of what such moralities are trying to accomplish. It’s just too easy for our riders to build a case against every morality, political party, and religion that we don’t like. So let’s try to understand moral diversity first, before we judge other moralities. (98)

But he’s already been busy judging people who base their morality on reason, taking them to task for worshipping it. And while he’s expending so much effort to hold back his own judgments he’s being judged by those whose rival conceptions he’s trying to understand. His open-mindedness and disciplined restraint are as quintessentially liberal as they are unilateral.

In the book’s first section, Haidt recounts his education and his early research into moral intuition. The second section is the story of how he developed his Moral Foundations Theory. It begins with his voyage to Bhubaneswar, the capital of Orissa in India. He went to conduct experiments similar to those he’d already been doing in the Americas. “But these experiments,” he writes, “taught me little in comparison to what I learned just from stumbling around the complex social web of a small Indian city and then talking with my hosts and advisors about my confusion.” It was an earlier account of this sojourn Haidt had written for the online salon The Edge that first piqued my interest in his work and his writing. In both, he talks about his two “incompatible identities.”

On one hand, I was a twenty-nine-year-old liberal atheist with very definite views about right and wrong. On the other hand, I wanted to be like those open-minded anthropologists I had read so much about and had studied with. (101)

The people he meets in India are similar in many ways to American conservatives. “I was immersed,” Haidt writes, “in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine” (102). The conversion to what he calls pluralism doesn’t lead to any realignment of his politics. But supposedly for the first time he begins to feel and experience the appeal of other types of moral thinking. He could see why protecting physical purity might be fulfilling. This is part of what's known as the “ethic of divinity,” and it was missing from his earlier way of thinking. He also began to appreciate certain aspects of the social order, not to the point of advocating hierarchy or rigid sex roles but seeing value in the complex network of interdependence.

The story is thoroughly engrossing, so engrossing that you want it to build up into a life-changing insight that resolves the crisis. That’s where the six moral dimensions come in (though he begins with just five and only adds the last one much later), which he compares to the different dimensions of taste that make up our flavor palette. The two that everyone shares, but that liberals give priority to whenever any two or more suggest different responses, are Care/Harm—hurting people is wrong and we should help those in need—and Fairness. The other three from the original set are Loyalty, Authority, and Sanctity: loyalty to the tribe, respect for the hierarchy, and recognition of the sacredness of the tribe’s symbols, like the flag. Libertarians are closer to liberals; they just rely less on the Care dimension and much more on the recently added sixth one, Liberty from Oppression, which Haidt believes evolved in the context of ancestral egalitarianism similar to that found among modern nomadic foragers. Haidt suggests that restricting yourself to one or two dimensions is like swearing off every flavor but sweet and salty, saying,

many authors reduce morality to a single principle, usually some variant of welfare maximization (basically, help people, don’t hurt them). Or sometimes it’s justice or related notions of fairness, rights, or respect for individuals and their autonomy. There’s The Utilitarian Grill, serving only sweeteners (welfare), and The Deontological Diner, serving only salts (rights). Those are your options. (113)

Haidt doesn’t make the connection between tribalism and the conservative moral trifecta explicit. And he insists he’s not relying on what’s called the Naturalistic Fallacy—reasoning that what’s natural must be right. Rather, he’s being, he claims, strictly descriptive and scientific.

Moral judgment is a kind of perception, and moral science should begin with a careful study of the moral taste receptors. You can’t possibly deduce the list of five taste receptors by pure reasoning, nor should you search for it in scripture. There’s nothing transcendental about them. You’ve got to examine tongues. (115)

But if he really were restricting himself to description he would have no beef with the utilitarian ethicists like Mill, the deontological ones like Kant, or for that matter with the New Atheists, all of whom are operating in the realm of how we should behave and what we should believe as opposed to how we’re naturally, intuitively primed to behave and believe. At one point, he goes so far as to present a case for Kant and Jeremy Bentham, father of utilitarianism, being autistic (the trendy psychological diagnosis du jour) (120). But, like a lawyer who throws out a damning but inadmissible comment only to say “withdrawn” when the defense objects, he assures us that he doesn’t mean the autism thing as an ad hominem.

From The Moral Foundations Website

I think most of my fellow liberals are going to think Haidt’s metaphor needs some adjusting. Humans evolved a craving for sweets because in our ancestral environment fruits were a rare but nutrient-rich delicacy. Likewise, our taste for salt used to be adaptive. But in the modern world our appetites for sugar and salt have created a health crisis. These taste receptors are also easy for industrial food manufacturers to exploit in a way that enriches them and harms us. As Haidt goes on to explain in the third section, our tribal intuitions were what allowed us to flourish as a species. But what he doesn’t realize or won’t openly admit is that in the modern world tribalism is dangerous and far too easily exploited by demagogues and PR experts.

In his story about his time in India, he makes it seem like a whole new world of experiences was opened to him. But this is absurd (and insulting). Liberals experience the sacred too; they just don’t attempt to legislate it. Liberals recognize intuitions pushing them toward dominance and submission. They have feelings of animosity toward outgroups and intense loyalty toward members of their ingroup. Sometimes, they even indulge these intuitions and impulses. The distinction is not that liberals don’t experience such feelings; they simply believe they should question whether acting on them is appropriate in the given context. Loyalty in a friendship or a marriage is moral and essential; loyalty in business, in the form of cronyism, is profoundly immoral. Liberals believe they shouldn’t apply their personal feelings about loyalty or sacredness to their judgments of others because it’s wrong to try to legislate your personal intuitions, or even the intuitions you share with a group whose beliefs may not be shared in other sectors of society. In fact, the need to consider diverse beliefs—the pluralism that Haidt extolls—is precisely the impetus behind the efforts ethicists make to pare down the list of moral considerations.

Moral intuitions, like food cravings, can be seen as temptations requiring discipline to resist. It’s probably no coincidence that the obesity epidemic tracks the moral divide Haidt found when he left the Penn campus. As I read Haidt’s account of Drew Westen’s fMRI experiments with political partisans, I got a bit anxious because I worried a scan might reveal me to be something other than what I consider myself. The machine in this case is a bit like the Sorting Hat at Hogwarts, and I hoped, like Harry Potter, not to be placed in Slytherin. But this hope, even if it stems from my wish to identify with the group of liberals I admire and feel loyalty toward, cannot be as meaningless as Haidt’s “intuitionism” posits.

Ultimately, the findings Haidt brings together under the rubric of Moral Foundations Theory don’t lend themselves in any way to his larger program of bringing about greater understanding and greater civility. He fails to understand that liberals appreciate all the moral dimensions but don’t think they should all be seen as guides to political policies. And while he may want there to be less tribalism in politics he has to realize that most conservatives believe tribalism is politics—and should be.

Resistance to the Hive Switch is Futile

“We are not saints,” Haidt writes in the third section, “but we are sometimes good team players” (191). Though his efforts to use Moral Foundations to understand and appreciate conservatives lead to some bizarre contortions and a profound misunderstanding of liberals, his synthesis of research on moral intuitions with research and theorizing on multi-level selection, including selection at the level of the group, is an important contribution to psychology and anthropology. He writes that

anytime a group finds a way to suppress selfishness, it changes the balance of forces in a multi-level analysis: individual-level selection becomes less important, and group-level selection becomes more powerful. For example, if there is a genetic basis for feelings of loyalty and sanctity (i.e., the Loyalty and Sanctity Foundations), then intense intergroup competition will make these genes become more common in the next generation. (194)

The most interesting idea in this section is that humans possess what Haidt calls a “hive switch” that gets flipped whenever we engage in coordinated group activities. He cites historian William McNeill, who recalls an “altered state of consciousness” from marching in formation with fellow soldiers in his army days. He describes it as a “sense of pervasive well-being…a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life” (221). Sociologist Emile Durkheim referred to this same experience as “collective effervescence.” People feel it today at football games, at concerts as they dance to a unifying beat, and during religious rituals. It’s a profoundly spiritual experience, and it likely evolved to create a greater sense of social cohesion within groups competing with other groups.

Surprisingly, the altruism inspired by this sense of the sacred and triggered by coordinated activity, though primarily directed at fellow group members—parochial altruism—can also flow outward in ways that aren’t entirely tribal.

Haidt cites political scientists Robert Putnam and David Campbell’s book, American Grace: How Religion Divides and Unites Us, where they report the finding that “the more frequently people attend religious services, the more generous and charitable they become across the board” (267); they do give more to religious charities, but they also give more to secular ones. Putnam and Campbell write that “religiously observant Americans are better neighbors and better citizens.” The really astonishing finding from Putnam and Campbell’s research, though, is that the social advantages enjoyed by religious people had nothing to do with the actual religious beliefs. Haidt explains,

These beliefs and practices turned out to matter very little. Whether you believe in hell, whether you pray daily, whether you are a Catholic, Protestant, Jew, or Mormon… none of these things correlated with generosity. The only thing that was reliably and powerfully associated with the moral benefits of religion was how enmeshed people were in relationships with their co-religionists. It’s the friendships and group activities, carried out within a moral matrix that emphasizes selflessness. That’s what brings out the best in people. (267)

The Sanctity foundation, then, is an integral aspect of our sense of community, as well as a powerful inspiration for altruism. Haidt cites the work of Richard Sosis, who combed through all the records he could find on communes in America. His central finding is that “just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.” Sosis went on to identify “one master variable” which accounted for the difference between success and failure for religious groups: “the number of costly sacrifices that each commune demanded from its members” (257). But sacrifices demanded by secular groups made no difference whatsoever. Haidt concludes,

In other words, the very ritual practices that the New Atheists dismiss as costly, inefficient, and irrational turn out to be a solution to one of the hardest problems humans face: cooperation without kinship. Irrational beliefs can sometimes help the group function more rationally, particularly when those beliefs rest upon the Sanctity foundation. Sacredness binds people together, and then blinds them to the arbitrariness of the practice. (257)

This section captures the best and the worst of Haidt's work. The idea that humans have an evolved sense of the sacred, and that it came about to help our ancestral groups cooperate and cohere—that’s a brilliant contribution to theories running through D.S. Wilson and Emile Durkheim all the way back to Darwin. Contemplating it sparks a sense of wonder that must emerge from that same evolved feeling for the sacred. But then he uses the insight in the service of a really lame argument.

The costs critics of religion point to aren’t the minor personal ones like giving up alcohol or fasting for a few days. Haidt compares studying the actual, “arbitrary” beliefs and practices of religious communities to observing the movements of a football in an attempt to understand why people love watching games. It’s the coming together as a group, he suggests, the sharing of goals and mutual direction of attention, the feeling of shared triumph or even disappointment. But if the beliefs and rituals aren’t what’s important, then there’s no reason they have to be arbitrary—and there’s no reason they should have to entail any degree of hostility toward outsiders. How then can Haidt condemn Harris and Dawkins for “worshipping reason” and celebrating the collective endeavor known as science? Why doesn’t he recognize that for highly educated people, especially scientists, discovery is sacred? He seriously mars his otherwise magnificent work by wrongly assuming that anyone who doesn’t think flushing an American flag down the toilet is wrong has no sense of the sacred, shaking his finger at them, effectively saying: rallying around a cause is what being human is all about, but what you flag-flushers think is important just isn’t worthy—even though it’s exactly what I think is important too, and what I’ve devoted my career, and this book you’re holding, to anyway.

As Kahneman stresses in his book, resisting the pull of intuition takes a great deal of effort. The main difference between highly educated people and everyone else isn’t a matter of separate moral intuitions. It’s a different attitude toward intuitions in general. Those of us who worship reason believe in the Enlightenment ideals of scientific progress and universal human rights. I think most of us even feel those ideals are sacred and inviolable. But the Enlightenment is a victim of its own success. No one remembers the unchecked violence and injustice that were the norms before it came about—and still are the norms in many parts of the world. In some academic sectors, the Enlightenment is even blamed for some of the crimes its own principles are used to combat, like patriarchy and colonialism. Intuitions are still very much a part of human existence, even among those who are the most thoroughly steeped in Enlightenment values. But worshipping them is far more dangerous than worshipping reason. As the world becomes ever more complicated, nostalgia for simpler times becomes an ever more powerful temptation. And surmounting the pull of intuition may ultimately be an impossible goal. But it’s still a worthy, and even sacred ideal.

But if Haidt’s attempt to inspire understanding and appreciation misfires, how are we to achieve the goal of greater civility and less partisanship? Haidt does offer some useful suggestions. Still, I worry that his injunction to “Talk to the elephant” will merely contribute to the growing sway of the burgeoning focus-groupocracy. Interestingly, the third stage of the Robber's Cave experiment may provide some guidance. Sherif and his colleagues did manage to curtail the escalating hostility between the Eagles and the Rattlers. And all it took was some shared goals the groups had to cooperate to achieve, like when their bus got stuck on the side of the road and all the boys in both groups had to work together to pull it free. Maybe it’s time for a mission to Mars all Americans could support (credit Neil deGrasse Tyson). Unfortunately, the conservatives would probably never get behind it. Maybe we should do another of our liberal conspiracy hoaxes to convince them China is planning to build a military base on the Red Planet. Then we’ll be there in no time.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

THE SELF-RIGHTEOUSNESS INSTINCT: STEVEN PINKER ON THE BETTER ANGELS OF MODERNITY AND THE EVILS OF MORALITY

And:

WHY TAMSIN SHAW IMAGINES THE PSYCHOLOGISTS ARE TAKING POWER

Dennis Junk

Who Needs Complex Narratives?: Tim Parks' Enlightened Cynicism

Identities can be burdensome, as Parks intimates his is when he reveals his story has brought him to a place where he’s making a living engaging in an activity that serves no need—and yet he can’t bring himself to seek out other employment. In an earlier, equally fascinating post titled "The Writer's Job," Parks interprets T.S. Eliot's writing about writers as suggesting that "only those who had real personality, special people like himself, would appreciate what a burden personality was and wish to shed it."

One of my professors asked our class last week how many of us were interested in writing fiction of our own. She was trying to get us to consider the implications of using one strategy for telling a story based on your own life over another. But I was left thinking instead about the implications of nearly everyone in the room raising a hand. Is the audience for any aspiring author’s work composed exclusively of other aspiring authors? If so, does that mean literature is no more than an exclusive society of the published and consumed, forever screening would-be initiates, forever dangling the prize of admission to their ranks, allowing only the elite to enter, and effectively sealed off from the world of the non-literary?

Most of our civilization has advanced beyond big books. People still love their stories, but everyone’s time is constrained, and the choices of entertainment are infinite. Reading The Marriage Plot is an extravagance. Reading Of Human Bondage, the book we’re discussing in my class, is only for students of college English and the middle-class white guys trying to impress them. Nevertheless, Jonathan Franzen, who’s written two lengthy, too lengthy works of fiction that enjoy a wide readership, presumably made up primarily of literary aspirants like me (I read and enjoyed both), told an Italian interviewer that “There is an enormous need for long, elaborate, complex stories, such as can only be written by an author concentrating alone, free from the deafening chatter of Twitter.”

British author Tim Parks quotes Franzen in a provocative post at The New York Review of Books titled “Do We Need Stories?” Parks notes that “as a novelist it is convenient to think that by the nature of the job one is on the side of the good, supplying an urgent and general need.” Though he’s written some novels of his own, and translated several others from Italian to English, Parks suspects that Franzen is wrong, that as much as we literary folk may enjoy them, we don’t really need complex narratives. We should note that just as Franzen is arguing on behalf of his own vocation Parks is arguing against his, thus effecting a type of enlightened cynicism toward his own work and that of others in the same field. “Personally,” he says, “I fear I’m too enmired in narrative and self narrative to bail out now. I love an engaging novel, I love a complex novel; but I am quite sure I don’t need it.”

         Parks’ argument is fascinating for what it reveals about what many fiction writers and aficionados believe they’re doing when they’re telling stories. It’s also fascinating for what it represents about authors and their attitudes toward writing. Parks rubs up against some profound insights, but then succumbs to some old-fashioned humanities nonsense. Recalling a time when he served as a judge for a literary award, Parks quotes the case made by a colleague on behalf of his or her favored work, which is excellent, it was insisted, “because it offers complex moral situations that help us get a sense of how to live and behave.” As life becomes increasingly complex, then, fraught with distractions like those incessant tweets, we need fictional accounts of complex moral dilemmas to help us train our minds to be equal to the task of living in the modern world. Parks points out two problems with this view: fiction isn’t the only source of stories, and behind all that complexity is the author’s take on the moral implications of the story’s events which readers must decide whether to accept or reject. We can’t escape complex moral dilemmas, so we may not really need any simulated training. And we have to pay attention lest we discover our coach has trained us improperly. The power of stories can, as Parks suggests, be “pernicious.” “In this view of things, rather than needing stories we need to learn how to smell out their drift and resist them.” (Yeah, but does anyone read Ayn Rand who isn't already convinced?)

But Parks doesn’t believe the true goal of either authors or readers is moral development or practical training. Instead, complex narratives give pleasure because they bolster our belief in complex selves. Words like God, angel, devil, and ghost, Parks contends, have to come with stories attached to them to be meaningful because they don’t refer to anything we can perceive. From this premise of one-word stories, he proceeds,

Arguably the most important word in the invented-referents category is “self.” We would like the self to exist perhaps, but does it really? What is it? The need to surround it with a lexical cluster of reinforcing terms—identity, character, personality, soul—all with equally dubious referents suggests our anxiety. The more words we invent, the more we feel reassured that there really is something there to refer to.

When my classmates and I raised our hands and acknowledged our shared desire to engage in the creative act of storytelling, what we were really doing, according to Parks, was expressing our belief in that fictional character we refer to reverentially as ourselves. 

One of the accomplishments of the novel, which as we know blossomed with the consolidation of Western individualism, has been to reinforce this ingenious invention, to have us believe more and more strongly in this sovereign self whose essential identity remains unchanged by all vicissitudes. Telling the stories of various characters in relation to each other, how something started, how it developed, how it ended, novels are intimately involved with the way we make up ourselves. They reinforce a process we are engaged in every moment of the day, self creation. They sustain the idea of a self projected through time, a self eager to be a real something (even at the cost of great suffering) and not an illusion.

Parks is just as much a product of that “Western individualism” as the readers he’s trying to enlighten as to the fictional nature of their essential being. As with his attempt at undermining the ultimate need for his own profession, there’s a quality of self-immolation in this argument—except of course there’s nothing, really, to immolate.

What exactly, we may wonder, is doing the reading, is so desperate to believe in its own reality? And why is that belief in its own reality so powerful that this thing, whatever it may be, is willing to experience great suffering to reinforce it? Parks suggests the key to the self is some type of unchanging and original coherence. So we like stories because we like characters who are themselves coherent and clearly delineated from other coherent characters.

The more complex and historically dense the stories are, the stronger the impression they give of unique and protracted individual identity beneath surface transformations, conversions, dilemmas, aberrations. In this sense, even pessimistic novels—say, J.M. Coetzee’s Disgrace—can be encouraging: however hard circumstances may be, you do have a self, a personal story to shape and live. You are a unique something that can fight back against all the confusion around. You have pathos.

In this author’s argument for the superfluity of authors, the centrality of pain and suffering to the story of the self is important to note. He makes the point even more explicit, albeit inadvertently, when he says, “If we asked the question of, for example, a Buddhist priest, he or she would probably tell us that it is precisely this illusion of selfhood that makes so many in the West unhappy.”

I don’t pretend to have all the questions surrounding our human fascination with narrative—complex and otherwise—worked out, but I do know Parks’ premise is faulty.

Unlike many professional scholars in the Humanities, Parks acknowledges that at least some words can refer to things in the world. But he goes wrong when he assumes that if there exists no physical object to refer to the word must have a fictional story attached to it. There is good evidence, for instance, that our notions of God and devils and spirits are not in fact based on stories, though stories clearly color their meanings. Our interactions with invisible beings are based on the same cognitive mechanisms that help us interact with completely visible fellow humans. What psychologists call theory of mind, our reading of intentions and mental states into others, likely extends into realms where no mind exists to have intentions and states. That’s where our dualistic philosophy comes from.

While Parks is right in pointing out that the words God and self don’t have physical referents—though most of us, I assume, think of our bodies as ourselves to some degree—he’s completely wrong in inferring these words only work as fictional narratives. People assume, wrongly, that God is a real being because they have experiences with him. In the same way, the self isn’t an object but an experience—and a very real experience. (Does the word fun have to come with a story attached?) The consistency across time and circumstance, the sense of unified awareness, these are certainly exaggerated at times. So too, though, is our sense of transformation, as anyone knows who’s discovered old writings from an earlier stage of life and thought, “Wow, I was thinking about the same stuff back then as I am now—even my writing style is similar!”

Parks is wrong too about so-called Western society, as pretty much everyone who uses that term is. It’s true that some Asian societies have a more collectivist orientation, but I’ve heard rumors that a few Japanese people actually enjoy reading novels. (The professor of the British Lit course I'm taking is Chinese.) Those Buddhists monks are deluded too. Ruut Veenhoven surveyed 43 nations in the early 1990s and discovered that as individualism increases, so too does happiness. “There is no pattern of diminishing returns,” Veenhoven writes. “This indicates that individualization has not yet passed its optimum.” What this means is that, assuming Parks is right in positing that novel-reading increases individualism, reading novels could make you happier. Unfortunately, a lot of high-brow, literary authors would bristle at this idea because it makes of their work less a heroic surveying of the abyss and more of a commodity.

Parks doesn’t see any meaningful distinction between self and identity, but psychologists would use the latter term to label his idea of a coherent self-story. Dan McAdams is the leading proponent of the idea that in addition to a unified and stable experience of ourselves we each carry with us a story whose central theme is our own uniqueness and how it developed. He writes in his book The Stories We Live By: Personal Myths and the Making of the Self that identity is “an inner story of the self that integrates the reconstructed past, perceived present, and anticipated future to provide a life with unity, purpose, and meaning.” But we don’t just tell these stories to ourselves, nor are we solely interested in our own story. One of the functions of identity is to make us seem compelling and attractive to other people. Parks, for instance, tells us the story of how he provides a service, writing and translating, that he understands isn’t necessary to anyone. And, if you’re like me, at least for a moment, you’re impressed with his ability to shoulder the burden of this enlightened cynicism. He’s a bit like those Buddhist monks who go to such great lengths to eradicate their egos.

The insight that Parks never quite manages to arrive at is that suffering is integral to stories of the self. If my story of myself, my identity, doesn’t feature any loss or conflict, then it’s not going to be very compelling to anyone. But what’s really compelling are the identities which somehow manage to cause the self whose stories they are their own pain. Identities can be burdensome, as Parks intimates his is when he reveals his story has brought him to a place where he’s making a living engaging in an activity that serves no need—and yet he can’t bring himself to seek out other employment. In an earlier, equally fascinating post titled "The Writer's Job," Parks interprets T.S. Eliot's writing about writers as suggesting that "only those who had real personality, special people like himself, would appreciate what a burden personality was and wish to shed it."

If we don’t suffer for our identities, then we haven’t earned them. Without the pain of initiation, we don’t really belong. We’re not genuinely who we claim to be. We’re tourists. We’re poseurs. Mitt Romney, for instance, is thought to be an inauthentic conservative because he hasn’t shown sufficient willingness to lose votes—and possibly elections—for the sake of his convictions. We can’t help but assume equivalence between cost and value. If your identity doesn’t entail some kind of cost, well, then it’s going to come off as cheap. So a lot of people play up, or even fabricate, the suffering in their lives.

What about Parks’ question? Are complex narratives necessary? Maybe, like identities, the narratives we tell, as well as the narratives we enjoy, work as costly signals, so that the complexity of the stories you like serves as a reliable indication of the complexity of your identity. If you can truly appreciate a complex novel, you can truly appreciate a complex individual. Maybe our complicated modern civilization, even with its tweets and Kindles, is more a boon than a hindrance to complexity and happiness. What this would mean is that if two people on the subway realize they’re both reading the same complex narrative they can be pretty sure they’re compatible as friends or lovers. Either that, or they’re both English professors and they have no idea what’s going on, in which case they’re still compatible but they’ll probably hate each other regardless.

At least, that's the impression I get from David Lodge's Small World, the latest complex narrative assigned in my English Lit course taught by a professor from an Eastern society.

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

WHAT'S THE POINT OF DIFFICULT READING?


HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity

We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds?

The appeal of post-apocalyptic stories stems from the joy of experiencing anew the birth of humanity. The renaissance never occurs in M.T. Anderson’s Feed, in which the main character is rendered hopelessly complacent by the entertainment and advertising beamed directly into his brain. And it is that very complacency, the product of our modern civilization's unfathomable complexity, that most threatens our sense of our own humanity. There was likely a time, though, when small groups composed of members of our species were beset by outside groups composed of individuals of a different nature, a nature that, when juxtaposed with ours, left no doubt as to who the humans were.

      In Suzanne Collins’ The Hunger Games, Katniss Everdeen reflects on how the life-or-death stakes of the contest she and her fellow “tributes” are made to participate in can transform teenage boys and girls into crazed killers. She’s been brought to a high-tech mega-city from District 12, a mining town as quaint as the so-called Capitol is futuristic. Peeta Mellark, who was chosen by lottery as the other half of the boy-girl pair of tributes from the district, has just said to her, “I want to die as myself…I don’t want them to change me in there. Turn me into some kind of monster that I’m not.” Peeta also wants “to show the Capitol they don’t own me. That I’m more than just a piece in their Games.” The idea startles Katniss, who at this point is thinking of nothing but surviving the games—knowing full well that there are twenty-two more tributes and only one will be allowed to leave the arena alive. Annoyed by Peeta’s pronouncement of a higher purpose, she thinks,

We will see how high and mighty he is when he’s faced with life and death. He’ll probably turn into one of those raging beast tributes, the kind who tries to eat someone’s heart after they’ve killed them. There was a guy like that a few years ago from District 6 called Titus. He went completely savage and the Gamemakers had to have him stunned with electric guns to collect the bodies of the players he’d killed before he ate them. There are no rules in the arena, but cannibalism doesn’t play well with the Capitol audience, so they tried to head it off. (141-3)

Cannibalism is the ultimate relinquishing of the mantle of humanity because it entails denying the humanity of those being hunted for food. It’s the most basic form of selfishness: I kill you so I can live.

The threat posed to humanity by hunger is also the main theme of Cormac McCarthy’s The Road, the story of a father and son wandering around the ruins of a collapsed civilization. The two routinely search abandoned houses for food and supplies, and in one they discover a bunch of people locked in a cellar. The gruesome clue to the mystery of why they’re being kept is that some have limbs amputated. The men keeping them are devouring the living bodies a piece at a time. After a harrowing escape, the boy, understandably disturbed, asks, “They’re going to kill those people, arent they?” His father, trying to protect him from the harsh reality, answers yes, but tries to be evasive, leading to this exchange:

Why do they have to do that?

I dont know.

Are they going to eat them?

I dont know.

They’re going to eat them, arent they?

Yes.

And we couldnt help them because then they’d eat us too.

Yes.

And that’s why we couldnt help them.

Yes.

Okay.

But of course it’s not okay. After they’ve put some more distance between them and the human abattoir, the boy starts to cry. His father presses him to explain what’s wrong:

Just tell me.

We wouldnt ever eat anybody, would we?

No. Of course not.

Even if we were starving?

We’re starving now.

You said we werent.

I said we werent dying. I didnt say we werent starving.

But we wouldnt.

No. We wouldnt.

No matter what.

No. No matter what.

Because we’re the good guys.

Yes.

And we’re carrying the fire.

And we’re carrying the fire. Yes.

Okay. (127-9)

And this time it actually is okay because the boy, like Peeta Mellark, has made it clear that if the choice is between dying and becoming a monster he wants to die.

This preference for death over depredation of others is one of the hallmarks of humanity, and it poses a major difficulty for economists and evolutionary biologists alike. How could this type of selflessness possibly evolve?

John von Neumann, one of the founders of game theory, served an important role in developing the policies that have so far prevented a real-life apocalypse from taking place. He is credited with the strategy of Mutually Assured Destruction, or MAD (he liked amusing acronyms), which prevailed during the Cold War. As the name implies, the goal was to assure the Soviets that if they attacked us everyone would die. Since the U.S. knew the same was true of any of our own plans to attack the Soviets, a tense peace, or Cold War, was the inevitable result. But von Neumann was not at all content with this peace. He devoted his twilight years to pushing for the development of Intercontinental Ballistic Missiles (ICBMs) that would allow the U.S. to bomb Russia without giving the Soviets a chance to respond. In 1950, he made the infamous remark that inspired Dr. Strangelove: “If you say why not bomb them tomorrow, I say, why not today. If you say today at five o’clock, I say why not one o’clock?”

Von Neumann’s eagerness to hit the Russians first was based on the logic of game theory, and that same logic is at play in The Hunger Games and other post-apocalyptic fiction. The problem with cooperation, whether between rival nations or between individual competitors in a game of life-or-death, is that it requires trust—and once one player begins to trust the other he or she becomes vulnerable to exploitation, the proverbial stab in the back from the person who’s supposed to be watching it. Game theorists model this dynamic with a thought experiment called the Prisoner’s Dilemma. Imagine two criminals are captured and taken to separate interrogation rooms. Each criminal has the option of either cooperating with the other criminal by remaining silent or betraying him or her by confessing. Here’s a graph of the possible outcomes:
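The payoff logic the graph captures is the standard one, and it can be sketched in a few lines of Python (the specific sentences are illustrative assumptions, not necessarily the numbers in the graph):

# A one-shot Prisoner's Dilemma with illustrative prison sentences.
# Payoffs are (my years, their years), so lower is better for me.
SENTENCES = {
    ("silent", "silent"):   (1, 1),    # mutual cooperation: light sentences
    ("silent", "confess"):  (10, 0),   # the silent partner takes the fall
    ("confess", "silent"):  (0, 10),   # the confessor walks free
    ("confess", "confess"): (5, 5),    # mutual defection
}

def best_response(other_move):
    """Return the move that minimizes my sentence given the other's move."""
    return min(("silent", "confess"),
               key=lambda my_move: SENTENCES[(my_move, other_move)][0])

for other_move in ("silent", "confess"):
    print(other_move, "->", best_response(other_move))  # "confess" both times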

No matter what the other player does, each of them achieves a better outcome by confessing. Von Neumann saw the standoff between the U.S. and the Soviets as a Prisoner’s Dilemma; by not launching nukes, each side was cooperating with the other. Eventually, though, one of them had to realize that the only rational thing to do was be the first to defect.

But the way humans play games is a bit different. As it turned out, von Neumann was wrong about the game theory implications of the Cold War—neither side ever did pull the trigger; both prisoners kept their mouths shut. In Collins' novel, Katniss faces a Prisoner's Dilemma every time she encounters another tribute who may be willing to team up with her in the hunger games. The graph for her and Peeta looks like this:
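Once each tribute’s survival depends partly on the other’s skills, the payoffs shift so that teaming up becomes the best reply to teaming up—closer to what game theorists call a stag hunt than to a true Prisoner’s Dilemma. A sketch, with survival odds invented purely for illustration:

# Katniss and Peeta's choices scored as rough survival odds
# (all probabilities are invented for illustration).
ODDS = {
    ("team_up", "team_up"): (0.40, 0.40),  # pooled skills, watched backs
    ("team_up", "betray"):  (0.05, 0.30),  # the trusting tribute is exposed
    ("betray",  "team_up"): (0.30, 0.05),
    ("betray",  "betray"):  (0.10, 0.10),  # each alone against the field
}

def best_response(other_move):
    """Return the move that maximizes my survival odds given the other's."""
    return max(("team_up", "betray"),
               key=lambda my_move: ODDS[(my_move, other_move)][0])

print(best_response("team_up"))  # team_up: betraying an ally costs more than it gains
print(best_response("betray"))   # betray: don't keep trusting a defector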

In the context of the hunger games, then, it makes sense to team up with rivals as long as they have useful skills, knowledge, or strength. Each tribute knows, furthermore, that as long as he or she is useful to a teammate, it would be irrational for that teammate to defect.

The Prisoner’s Dilemma logic gets much more complicated when you start having players try to solve it over multiple rounds of play. Game theorists refer to each time a player has to make a choice as an iteration. And to model human cooperative behavior you not only need multiple iterations but also a way to factor in each player’s awareness of how rivals have responded to the dilemma in the past. Humans have reputations. Katniss, for instance, doesn’t trust the Career tributes because they have a reputation for being ruthless. She even begins to suspect Peeta when she sees that he’s teamed up with the Careers. (His knowledge of Katniss is a resource to them, but he’s using that knowledge in an irrational way—to protect her instead of himself.) On the other hand, Katniss trusts Rue because she's young and dependent—and because she comes from an adjacent district not known for sending tributes who are cold-blooded.
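A toy version of iterated play with public reputations might look like the following (the dispositions, the reputation threshold, and the payoffs are all assumptions for illustration, not anything from Collins or the game theory literature):

import random
from collections import defaultdict

# Standard iterated Prisoner's Dilemma payoffs: C = cooperate, D = defect.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

defections = defaultdict(int)  # public record: rounds each player defected
encounters = defaultdict(int)  # rounds each player has played

def reputation(name):
    """Fraction of past rounds in which this player defected."""
    return defections[name] / encounters[name] if encounters[name] else 0.0

def choose(disposition, opponent):
    """Cooperate by default, but always defect against known defectors."""
    if reputation(opponent) > 0.5:
        return "D"
    return "D" if random.random() < disposition else "C"

def play_round(a, b, dispositions, scores):
    move_a, move_b = choose(dispositions[a], b), choose(dispositions[b], a)
    scores[a] += PAYOFFS[(move_a, move_b)]
    scores[b] += PAYOFFS[(move_b, move_a)]
    for name, move in ((a, move_a), (b, move_b)):
        encounters[name] += 1
        defections[name] += (move == "D")

dispositions = {"Katniss": 0.1, "Rue": 0.1, "Career": 0.9}  # chance of defecting
scores = defaultdict(int)
for _ in range(300):
    a, b = random.sample(list(dispositions), 2)
    play_round(a, b, dispositions, scores)
print(dict(scores))  # the habitual defector gets locked out of cooperation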

When you have multiple iterations and reputations, you also open the door for punishments and rewards. At the most basic level, people reward those who they witness cooperating by being more willing to cooperate with them. As we read or watch The Hunger Games, we can actually experience the emotional shift that occurs in ourselves as we witness Katniss’s cooperative behavior.

People punish those who defect by being especially reluctant to trust them. At this point, the analysis is still within the realm of the purely selfish and rational. But you can’t stay in that realm for very long when you’re talking about the ways humans respond to one another.

            Each time Katniss encounters another tribute in the games she faces a Prisoner’s Dilemma. Until the final round, the hunger games are not a zero-sum contest—which means that a gain for one doesn’t necessarily mean a loss for the other. Ultimately, of course, Katniss and Peeta are playing a zero-sum game; since only one tribute can win, one of any two surviving players at the end will have to kill the other (or let him die). Every time one tribute kills another, the math of the Prisoner’s Dilemma has to be adjusted. Peeta, for instance, wouldn’t want to betray Katniss early on, while there are still several tributes trying to kill them, but he would want to balance the benefits of her resources with whatever advantage he could gain from her unsuspecting trust—so as they approach the last few tributes, his temptation to betray her gets stronger. Of course, Katniss knows this too, and so the same logic applies for her.
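The shifting math can be mocked up with toy numbers (all invented): an ally’s value falls as the field thins, while the payoff for striking first stays roughly constant, so the temptation to betray flips from negative to positive near the end.

def ally_value(n_remaining):
    """An ally's protection matters in proportion to the outside threats."""
    return 0.05 * (n_remaining - 2)  # tributes other than you and your ally

def betrayal_temptation(n_remaining, surprise_bonus=0.15):
    """Net gain from striking first instead of keeping the alliance."""
    return surprise_bonus - ally_value(n_remaining)

for n in (12, 8, 4, 3, 2):
    print(n, round(betrayal_temptation(n), 2))
# Negative with a crowded field, positive near the end; at two tributes
# the game is fully zero-sum and the alliance has no value left.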

As everyone who’s read the novel or seen the movie knows, however, this isn’t how either Peeta or Katniss plays in the hunger games. And we already have an idea of why that is: Peeta has said he doesn’t want to let the games turn him into a monster. Figuring out the calculus of the most rational decisions is well and good, but humans are often moved by their emotions—fear, affection, guilt, indebtedness, love, rage—to behave in ways that are completely irrational—at least in the near term. Peeta is in love with Katniss, and though she doesn’t quite trust him at first, she proves willing to sacrifice herself in order to help him survive. This goes well beyond cooperation to serve purely selfish interests.

Many evolutionary theorists believe that at some point in our evolutionary history, humans began competing with each other to see who could be the most cooperative. This paradoxical idea emerges out of a type of interaction between and among individuals called costly signaling. Many social creatures must decide who among their conspecifics would make the best allies. And all sexually reproducing animals have to have some way to decide with whom to mate. Determining who would make the best ally or who would be the fittest mate is so important that only the most reliable signals are given any heed. What makes the signals reliable is their cost—only the fittest can afford to engage in costly signaling. Some animals have elaborate feathers that are conspicuous to predators; others have massive antlers. This is known as the handicap principle. In humans, the theory goes, altruism somehow emerged as a costly signal, so that the fittest demonstrate their fitness by engaging in behaviors that benefit others to their own detriment. The boy in The Road, for instance, isn’t just upset by the prospect of having to turn to cannibalism himself; he’s sad that he and his father weren’t able to help the other people they found locked in the cellar.
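The handicap logic is easy to make concrete with a toy model (every number below is an assumption chosen only to show the mechanism): if the same display costs the unfit more than the fit, then only the genuinely fit come out ahead by displaying, and the signal stays honest.

# Handicap-principle sketch: a display stays honest when its cost
# falls steeply with fitness (benefit and cost are toy numbers).
BENEFIT = 10  # value of being believed fit by prospective mates or allies

def display_cost(fitness):
    """The same display is cheap for the fit and ruinous for the unfit."""
    return 30 / fitness

def pays_to_display(fitness):
    return BENEFIT - display_cost(fitness) > 0

for fitness in (2, 3, 5, 8):
    print(fitness, pays_to_display(fitness))
# Only individuals with fitness above 3 profit from displaying,
# so the display reliably separates the fit from the bluffers.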

We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds? But when it really comes down to it, when it really matters most, both Katniss and Peeta demonstrate that they’re willing to protect one another even at a cost to themselves.

The birth of humanity occurred, somewhat figuratively, when people refused to play the game of me versus you and determined instead to play us versus them. Humans don’t like zero-sum games, and whenever possible they try to change the rules so there can be more than one winner. To do that, though, they have to make it clear that they would rather die than betray their teammates. In The Road, the father and his son continue to carry the fire, and in The Hunger Games Peeta gets his chance to show he’d rather die than be turned into a monster. By the end of the story, it’s really no surprise what Katniss chooses to do either. Saving her sister may not have been purely altruistic from a genetic standpoint. But Peeta isn’t related to her, nor is he her only—or even her most eligible—suitor. Still, her moments of cold strategizing notwithstanding, we've had her picked as an altruist all along.

Of course, humanity may have begun with the sense that it’s us versus them, but as it’s matured the us has grown to encompass an ever wider assortment of people and the them has receded to include more and more circumscribed groups of evil-doers. Unfortunately, there are still all too many people who are overly eager to treat unfamiliar groups as rival tribes, and all too many people who believe that the best governing principle for society is competition—the war of all against all. Altruism is one of the main hallmarks of humanity, and yet some people are simply more altruistic than others. Let’s just hope that it doesn’t come down to us versus them…again.

Also read:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY


Life's White Machine: James Wood and What Doesn't Happen in Fiction

For James Wood, fiction is communion. This view has implications about what constitutes the best literature—all the elements from description to dialogue should work to further the dramatic development of the connection between reader and character.

No one is a better reader of literary language than James Wood. In his reviews, he conveys with grace and precision his uncanny feel for what authors set out to say, what they actually end up saying, and what any discrepancy might mean for their larger literary endeavor. He effortlessly and convincingly infers from the lurch of faulty lines the confusions and pretensions and lacunae in understanding of struggling writers. Some take steady aim at starkly circumscribed targets, his analysis suggests, while others, desperate to achieve some greater, more devastating impact, shoot wistfully into the clouds. He can even listen to the likes of Republican presidential candidate Rick Santorum and explain, with his seemingly eidetic knowledge of biblical history, what is really meant when the supposed Catholic uses the word steward.

As a critic, Wood’s ability to see character in narration and to find the author, with all his conceits and difficulties, in the character is often downright unsettling. For him there exists no divide between language and psychology—literature is the struggle of conflicted minds to capture the essence of experiences, their own and others’.

When Robert Browning describes the sound of a bird singing its song twice over, in order to ‘recapture/ The first fine careless rapture,’ he is being a poet, trying to find the best poetic image; but when Chekhov, in his story ‘Peasants,’ says that a bird’s cry sounded as if a cow had been locked up in a shed all night, he is being a fiction writer: he is thinking like one of his peasants. (24)

This is from Wood’s How Fiction Works. In the midst of a long paean to the power of free indirect style, the technique that allows the language of the narrator to bend toward and blend with the thoughts and linguistic style of characters—moving in and out of their minds—he deigns to mention, in a footnote, an actual literary theory, or rather Literary Theory. Wood likes Nabokov’s scene in the novel Pnin that has the eponymous professor trying to grasp a nutcracker in a sink full of dishes. The narrator awkwardly calls it a “leggy thing” as it slips through his grasp. “Leggy” conveys the image. “But ‘thing’ is even better, precisely because it is vague: Pnin is lunging at the implement, and what word in English better conveys a messy lunge, a swipe at verbal meaning, than ‘thing’?” (25) The vagueness makes of the psychological drama a contagion. There could be no symbol more immediately felt.

The Russian Formalists come into Wood’s discussion here. Their theory focused on metaphors that bring about an “estranging” or “defamiliarizing” effect. Wood would press them to acknowledge that this making strange of familiar objects and experiences works in the service of connecting the mind of the reader with the mind of the character—it’s anything but random:

But whereas the Russian Formalists see this metaphorical habit as emblematic of the way that fiction does not refer to reality, is a self-enclosed machine (such metaphors are the jewels of the author’s freakish, solipsistic art), I prefer the way that such metaphors, as in Pnin’s “leggy thing,” refer deeply to reality: because they emanate from the characters themselves, and are fruits of free indirect style. (26)

Language and words and metaphors, Wood points out, by their nature carry us toward something that is diametrically opposed to collapsing in on ourselves. Indeed, there is something perverse about the insistence of so many professional scholars devoted to the study of literature that the main thrust of language is toward some unacknowledged agenda of preserving an unjust status quo—with the implication that the only way to change the world is to torture our modes of expression, beginning with literature (even though only a tiny portion of most first world populations bother to read any).

For Wood, fiction is communion. This view has implications about what constitutes the best literature—all the elements from description to dialogue should work to further the dramatic development of the connection between reader and character. Wood even believes that the emphasis on “round” characters is overstated, pointing out that many of the most memorable—Jean Brodie, Mr. Biswas—are one-dimensional and unchanging. Nowhere in the table of contents of How Fiction Works, or even in the index, does the word plot appear. He does, however, discuss plot in his response to postmodernists’ complaints about realism. Wood quotes author Rick Moody:

It’s quaint to say so, but the realistic novel still needs a kick in the ass. The genre, with its epiphanies, its rising action, its predictable movement, its conventional humanisms, can still entertain and move us on occasion, but for me it’s politically and philosophically dubious and often dull. Therefore, it needs a kick in the ass.

Moody is known for a type of fiction that intentionally sabotages the sacred communion Wood sees as essential to the experience of reading fiction. He begins his response by unpacking some of the claims in Moody’s fussy pronouncement:

Moody’s three sentences efficiently compact the reigning assumptions. Realism is a “genre” (rather than, say, a central impulse in fiction-making); it is taken to be mere dead convention, and to be related to a certain kind of traditional plot, with predictable beginnings and endings; it deals in “round” characters, but softly and piously (“conventional humanisms”); it assumes that the world can be described, with a naively stable link between word and referent (“philosophically dubious”); and all this will tend toward a conservative or even oppressive politics (“politically… dubious”).

Wood begins the section following this analysis with a one-sentence paragraph: “This is all more or less nonsense” (224-5) (thus winning my devoted readership).

That “more or less” refers to Wood’s own frustrations with modern fiction. Conventions, he concedes, tend toward ossification, though a trope’s status as a trope, he maintains, doesn’t make it untrue. “I love you” is the most clichéd sentence in English. That doesn’t nullify the experience of falling in love (236). Wood does believe, however, that realistic fiction is too eventful to live up to the label.

Reviewing Ben Lerner’s exquisite short novel Leaving the Atocha Station, Wood lavishes praise on the postmodernist poet’s first work of fiction. He writes of the author and his main character Adam Gordon,

Lerner is attempting to capture something that most conventional novels, with their cumbersome caravans of plot and scene and "conflict," fail to do: the drift of thought, the unmomentous passage of undramatic life. Several times in the book, he describes this as "that other thing, the sound-absorbent screen, life’s white machine, shadows massing in the middle distance… the texture of et cetera itself." Reading Tolstoy, Adam reflects that even that great master of the texture of et cetera itself was too dramatic, too tidy, too momentous: "Not the little miracles and luminous branching injuries, but the other thing, whatever it was, was life, and was falsified by any way of talking or writing or thinking that emphasized sharply localized occurrences in time." (98)

Wood is suspicious of plot, and even of those epiphanies whereby characters are rendered dynamic or three-dimensional or “round,” because he seeks in fiction new ways of seeing the world he inhabits according to how it might be seen by lyrically gifted fellow inhabitants. Those “cumbersome caravans of plot and scene and ‘conflict’" tend to be implausible distractions, forcing the communion into narrow confessionals, breaking the spell.

As a critic who has garnered wide acclaim from august corners conferring a modicum of actual authority, and one who's achieved something quite rare for public intellectuals, a popular following, Wood is (too) often criticized for his narrow aestheticism. Once he closes the door on goofy postmodern gimcrack, it remains closed to other potentially relevant, potentially illuminating cultural considerations—or so his detractors maintain. That popular following of his is, however, composed of a small subset of fiction readers. And the disconnect between consumers of popular fiction and the more literary New Yorker subscribers speaks not just to the cultural issue of declining literacy or growing apathy toward fictional writing but to the more fundamental question of why people seek out narratives, along with the question Wood proposes to address in the title of his book, how does fiction work?

While Wood communes with synesthetic flaneurs, many readers are looking to have their curiosity piqued, their questing childhood adventurousness revived, their romantic and nightmare imaginings played out before them. “If you look at the best of literary fiction," Benjamin Percy said in an interview with Joe Fassler,

you see three-dimensional characters, you see exquisite sentences, you see glowing metaphors. But if you look at the worst of literary fiction, you see that nothing happens. Somebody takes a sip of tea, looks out the window at a bank of roiling clouds and has an epiphany.

The scene Percy describes is even more eventful than what Lerner describes as “life’s white machine”—it features one of those damn epiphanies. But Percy is frustrated with heavy-handed plots too.

In the worst of genre fiction, you see hollow characters, you see transparent prose, you see the same themes and archetypes occurring from book to book. If you look at the best of genre fiction, you see this incredible desire to discover what happens next.

The interview is part of Fassler’s post on the Atlantic website, “How Zombies and Superheroes Conquered Highbrow Fiction.” Percy is explaining the appeal of a new class of novel.

So what I'm trying to do is get back in touch with that time of my life when I was reading genre, and turning the pages so quickly they made a breeze on my face. I'm trying to take the best of what I've learned from literary fiction and apply it to the best of genre fiction, to make a kind of hybridized animal.

Is it possible to balance the two impulses: the urge to represent and defamiliarize, to commune, on the one hand, and the urge to create and experience suspense on the other? Obviously, if the theme you’re taking on is the struggle with boredom or the meaningless wash of time—white machine reminds me of a washer—then an incident-rich plot can only be ironic.

The solution to the conundrum is that no life is without incident. Fiction’s subject has always been births, deaths, comings-of-age, marriages, battles. I’d imagine Wood himself is often in the mood for something other than idle reflection. Ian McEwan, whose Atonement provides Wood an illustrative example of how narration brilliantly captures character, is often taken to task for overplotting his novels. Citing Henry James’s dictum that novels have an obligation to “be interesting,” McEwan admits, in a New Yorker interview with Daniel Zalewski, to finding “most novels incredibly boring. It’s amazing how the form endures. Not being boring is quite a challenge.” And if he thinks most novels are boring he should definitely stay away from the short fiction that gets published in the New Yorker nowadays.

A further implication of Wood’s observation about narration’s capacity for connecting reader to character is that characters who live eventful lives should inhabit eventful narratives. This shifts the issue of plot back to the issue of character, so the question is not what types of things should or shouldn’t happen in fiction but rather what type of characters do we want to read about? And there’s no question that literary fiction over the last century has been dominated by a bunch of passive losers, men and women flailing desperately about before succumbing to societal or biological forces. In commercial fiction, the protagonists beat the odds; in literature, the odds beat the protagonists.

There’s a philosophy at play in this dynamic. Heroes are thought to lend themselves to a certain view of the world, where overcoming sickness and poverty and cultural impoverishment is more of a rite of passage than a real gauge of how intractable those impediments are for nearly everyone who faces them. If audiences are exposed to too many tales of heroism, then hardship becomes a prop in personal development. Characters overcoming difficulties trivializes those difficulties. Winston Smith can’t escape O’Brien and Room 101 or readers won’t appreciate the true threat posed by Big Brother. The problem is that the ascent of the passive loser and the fiction of acquiescence don’t exactly inspire reform-minded action either.

Adam Gordon, the narrator of Leaving the Atocha Station, is definitely a loser. He worries all day that he’s some kind of impostor. He’s whiny and wracked with self-doubt. But even he doesn’t sit around doing nothing. The novel is about his trip to Spain. He pursues women with mixed success. He does readings of his poetry. He witnesses a terrorist attack. And these activities and events are interesting, as James insisted they must be. Capturing the feel of uneventful passages of time may be a worthy literary ambition, but most people seek out fiction to break up periods of nothingness. It’s never the case in real life that nothing is happening anyway—we’re at every instance getting older. I for one don’t find the prospect of spending time with people or characters who just sit passively by as that happens all that appealing.

In a remarkably lame failure of a lampoon in Harper's, Colson Whitehead targets Wood's enthusiasm for Saul Bellow. And Bellow was indeed one of those impossibly good writers who could describe eating Corn Flakes and make it profound and amusing. Still, I'm a little suspicious of anyone who claims to enjoy (though enjoyment shouldn't be the only measure of literary merit) reading about the Bellow characters who wander around Chicago as much as reading about Henderson wandering around Africa.

  Henderson: I'm actually looking forward to the next opportunity I get to hang out with that crazy bastard.


New Yorker's Talk of the Town Goes Sci-Fi

The “Talk of the Town” pieces in The New Yorker have a distinctive style. Here, I write a fictional one about a man who’s gradually replacing parts of himself to potentially become immortal.

Dept. of Neurotechnology

Undermin(d)ing Mortality

"Most people's first response," Michael Maytree tells me over lunch, "is, you know, of course I want to live forever." The topic of our conversation surprises me, as Maytree's fame hinges not on his longevity—as remarkable as his ninety-seven years makes him—but on his current status as record-holder for greatest proportion of manmade brain in any human. Maytree says according to his doctors his brain is around seventy percent prosthetic. (Most people with prosthetic brain parts bristle at the term "artificial," but Maytree enjoys the running joke of his wife's about any extraordinary aspect of his thinking apparatus being necessarily unreal.)

He goes on, "But then you have to ask yourself: Do I really want to live through the pain of grieving for people again and again? Is there enough to look forward to to make going on—and on and on—worthwhile?" He stops to take a long sip of his coffee while quickly scanning our fellow patrons in the diner on West 103rd. Only when his age is kept in mind does there seem anything unsettling about his sharp-eyed attunement. Within the spectrum of aging, Maytree could easily pass for a younger guy with poor skin resiliency.

"The question I find most troubling though is, will I, as I get really, really old, be able to experience things, particularly relationships, as…"—he rolls his right hand, still holding a forkful of couscous, as he searches for the mot juste—"as profoundly—or fulfillingly—as I did when I was younger." He smirks and adds, "Like when I was still in my eighties."

When we first sat down in the diner, I asked Maytree if he'd received much attention from anyone other than techies and fellow implantees. Aside from the never-ending cascade of questions posted on the MindFX website he helps run (www.mindfx.gov), which serves as something of a support group listserv for people with brain prostheses and their families, and the requisite visits to research labs, including the one he receives medical care from, he gets noticed very little. The question about his brain he finds most interesting, he says, comes up frequently at the labs.

"I'd thought about it before I got the last implant," he said. "It struck me when Dr. Branson"—Maytree's chief neurosurgeon—"told me when it was done I'd have something like seventy percent brain replacement. Well, if my brain is already mostly mechanical, it shouldn't be that much of a stretch to transfer the part that isn't into some sort of durable medium—and, viola, my mind would become immortal."

It turned out that the laboratory where Branson performed the surgery—the latest ("probably not the last," Maytree says) in a series of replacements and augmentations that began with a treatment for an injury he sustained in combat while serving in Iran and continued as he purchased shares in several biotech and neural implant businesses and watched their value soar—already had a division devoted to work on this very prospect. Though the work is being kept secret, it seems Maytree would be a likely subject if experimental procedures are in the offing. Hence my follow-up question: "Would you do it?"

"Think of a friend you've made recently," he enjoins me now, putting down his fork so he can gesticulate freely. "Now, is that friendship comparable—I mean emotion-wise—with friendships you began as a child? Sometimes I think there's no comparison; relationships in childhood are much deeper. Is it the same with every experience?" He rests his right elbow on the table next to his plate and leans in. "Or is the difference just a trick of memory? I honestly don't know."

(Another favorite question of Maytree's: Are you conscious? He says people usually add, or at least imply, "I mean, like me," to clarify. "I always ask then, 'Are you conscious—I mean like you were five years ago?' Naturally they can't remember.")

Finally, he leans back again and looks off into space, shaking his head. "It's hard to think about without getting lost in the philosophical…" He trails off a moment before continuing dreamily, with downcast eyes and absent expression. "But it's important because you kind of need to know if the new experiences are going to be worth the passing on of the old ones." And that's the crux of the problem.

"Of course," he says turning back to me with a fraught grin, "it all boils down to what's going on in the brain anyway."

Dennis Junk


The Adaptive Appeal of Bad Boys

From the intro to my master’s thesis where I explore the evolved psychological dynamics of storytelling and witnessing, with a special emphasis on the paradox that the most compelling characters are often less than perfect human beings. Why do audiences like Milton’s Satan, for instance? Why did we all fall in love with Tyler Durden from Fight Club? It turns out both of these characters give indications that they just may be more altruistic than they appear at first.

Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis

In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and a few treats are set in front of each of them. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.

Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s Nova series The Human Spark (jump to chapter 5), which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and Nova viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.

            The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments.  This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).
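The incentives in the three-player game are stark enough to put in a few lines of code (the endowments and the 1:3 fine ratio below are illustrative assumptions, not necessarily Fehr and Fischbacher’s parameters):

# Third-party punishment in the Dictator Game: the witness pays from
# her own stake to fine the dictator (all numbers are illustrative).
ENDOWMENT = 100     # given to the dictator to split as he pleases
WITNESS_STAKE = 50  # the third party's own money

def payoffs(offer, punishment_spent, fine_ratio=3):
    """Each unit the witness spends costs her 1 and the dictator fine_ratio."""
    dictator = ENDOWMENT - offer - fine_ratio * punishment_spent
    witness = WITNESS_STAKE - punishment_spent
    return dictator, witness

# A purely selfish witness maximizes her own payoff by never punishing,
# no matter how stingy the dictator's offer was:
best_spend = max(range(WITNESS_STAKE + 1), key=lambda s: payoffs(0, s)[1])
print(best_spend)  # 0 -- yet real participants reliably pay to punish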

It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.

The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to discern between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth–order reciprocation, so not only will individuals be considered culpable for offenses they commit but also for offenses they passively witness. Flesch explains,

Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)

Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional. 

        That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,

Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters. 
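The problem Flesch describes, and the way punishment solves it, can be compressed into a toy replicator model (every number is an assumption chosen only to make the dynamic visible): without punishment, free riders who share in the group benefit without paying the cost of altruism out-reproduce the altruists; add a fine that outweighs the cost, and altruism holds.

def step(p_altruist, benefit=0.5, cost=0.2, punishment=0.0):
    """One generation of replicator-style updating for the share of
    altruists in a group; everyone shares the benefit altruists create."""
    group_bonus = benefit * p_altruist
    w_altruist = 1 + group_bonus - cost          # altruists pay the cost
    w_freerider = 1 + group_bonus - punishment   # free riders may be fined
    mean = p_altruist * w_altruist + (1 - p_altruist) * w_freerider
    return p_altruist * w_altruist / mean

for fine in (0.0, 0.3):  # no punishment, then punishment > cost
    p = 0.9
    for _ in range(200):
        p = step(p, punishment=fine)
    print(fine, round(p, 3))  # altruism collapses at 0.0, holds at 0.3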

Game theory experiments that have been conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher is incurring a cost to him or herself in order to ensure that selfish actors don’t have a chance to get a foothold in the larger, cooperative group.
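The gap between the “rational” responder and the responders the experiments actually find fits in a few lines (the roughly one-quarter rejection threshold is an illustrative assumption, not a figure from Flesch):

def rational_responder(offer):
    """Accept any positive offer: something always beats the zero from a veto."""
    return offer > 0

def human_responder(offer, pot, threshold=0.25):
    """Veto offers below about a quarter of the pot, paying to punish."""
    return offer >= threshold * pot

pot = 100
for offer in (50, 30, 20, 5):
    print(offer, rational_responder(offer), human_responder(offer, pot))
# The human responder gives up the 20 and the 5 -- sure gains -- to
# punish the proposer's stinginess.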

The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:

Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)

Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to time and energy spent convincing skeptical peers a crime has indeed been committed.

Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.

If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice guy bona fides. On the other hand, Rochester was eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.

This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.

In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences which deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display.

Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:

If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)

Taken together, the evidence Flesch presents suggests the audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.

The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:

When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)

These characters’ bad behavior will also likely serve an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish to harm innocent characters—or that they’re irredeemable would severely undermine the theory. As the first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters their creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.

(Watch Hamlin discussing the research in an interview from earlier today.)

And check out this video of the experiments.


Campaigning Deities: Justifying the ways of Satan

Why do readers tend to admire Satan in Milton’s Paradise Lost? It’s one of the instances where a nominally bad character garners more attention and sympathy than the good guy, a conundrum I researched through an evolutionary lens as part of my master’s thesis.

[Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis]

Milton believed Christianity more than worthy of a poetic canon in the tradition of the classical poets, and Paradise Lost represents his effort at establishing one. What his Christian epic has offered for many readers over the centuries, however, is an invitation to weigh the actions and motivations of immortals in mortal terms. In the story, God becomes a human king, albeit one with superhuman powers, while Satan becomes an upstart subject. As Milton attempts to “justify the ways of God to men,” he is taking it upon himself simultaneously, and inadvertently, to justify the absolute dominion of a human dictator. One of the consequences of this shift in perspective is the transformation of a philosophical tradition devoted to parsing the logic of biblical teachings into something akin to a political campaign between two rival leaders, each laying out his respective platform alongside a case against his rival. What was hitherto recondite and academic becomes in Milton’s work immediate and visceral.

Keats famously penned the wonderfully self-proving postulate, “Axioms in philosophy are not axioms until they are proved upon our pulses,” which leaves open the question of how an axiom might be so proved. Milton’s God responds to Satan’s approach to Earth, and his foreknowledge of Satan’s success in tempting the original pair, with a preemptive defense of his preordained punishment of Man:

…Whose fault?

Whose but his own? Ingrate! He had of Me

All he could have. I made him just and right,

Sufficient to have stood though free to fall.

Such I created all th’ ethereal pow’rs

And spirits, both them who stood and who failed:

Freely they stood who stood and fell who fell.

Not free, what proof could they have giv’n sincere

Of true allegiance, constant faith or love

Where only what they needs must do appeared,

Not what they would? What praise could they receive?

What pleasure I from such obedience paid

When will and reason… had served necessity,

Not me? (3.96-111)

God is defending himself against the charge that his foreknowledge of the fall implies that Man’s decision to disobey was born of something other than his free will. What choice could there have been if the outcome of Satan’s temptation was predetermined? If it wasn’t predetermined, how could God know what the outcome would be in advance? God’s answer—of course I granted humans free will because otherwise their obedience would mean nothing—only introduces further doubt. Now we must wonder why God cherishes Man’s obedience so fervently. Is God hungry for political power? If we conclude he is—and that conclusion seems eminently warranted—then we find ourselves on the side of Satan. It’s not so much God’s foreknowledge of Man’s fall that undermines human freedom; it’s God’s insistence on our obedience, under threat of God’s terrible punishment.

Milton faces a still greater challenge in his attempt to justify God’s ways “upon our pulses” when it comes to the fallout of Man’s original act of disobedience. The Son argues on behalf of Man, pointing out that the original sin was brought about through temptation. If God responds by turning against Man, then Satan wins. The Son thus argues that God must do something to thwart Satan: “Or shall the Adversary thus obtain/ His end and frustrate Thine?” (3.156-7). Before laying out his plan for Man’s redemption, God explains why punishment is necessary:

…Man disobeying

Disloyal breaks his fealty and sins

Against the high supremacy of Heav’n,

Affecting godhead, and so, losing all,

To expiate his treason hath naught left

But to destruction sacred and devote

He with his whole posterity must die. (3.203-9)

The potential contradiction between foreknowledge and free choice may be abstruse enough for Milton’s character to convincingly discount: “If I foreknew/ Foreknowledge had no influence on their fault/ Which had no less proved certain unforeknown” (3.116-9). There is another contradiction, however, that Milton neglects to take on. If Man is “Sufficient to have stood though free to fall,” then God must justify his decision to punish the “whole posterity” as opposed to the individuals who choose to disobey. The Son agrees to redeem all of humanity for the offense committed by the original pair. God’s knowledge that every last human will disobey may not be logically incompatible with their freedom to choose; if every last human does disobey, however, the case for that freedom is severely undermined. The axiom of collective guilt precludes the axiom of freedom of choice both logically and upon our pulses.

In characterizing disobedience as a sin worthy of severe punishment—banishment from paradise, shame, toil, death—an offense he can generously expiate for Man by sacrificing the (his) Son, God seems to be justifying his dominion by pronouncing disobedience to him evil, allowing him to claim that Man’s evil made it necessary for him to suffer a profound loss, the death of his offspring. In place of a justification for his rule, then, God resorts to a simple guilt trip.

Man shall not quite be lost but saved who will,

Yet not of will in him but grace in me

Freely vouchsafed. Once more I will renew

His lapsed pow’rs though forfeit and enthralled

By sin to foul exorbitant desires.

Upheld by me, yet once more he shall stand

On even ground against his mortal foe,

By me upheld that he may know how frail

His fall’n condition is and to me owe

All his deliv’rance, and to none but me. (3.173-83)

Having decided to take on the burden of repairing the damage wrought by Man’s disobedience to him, God explains his plan:

Die he or justice must, unless for him

Some other as able and as willing pay

The rigid satisfaction, death for death. (3.210-3)

He then asks for a volunteer. In an echo of an earlier episode in the poem which has Satan asking for a volunteer to leave hell on a mission of exploration, there is a moment of hesitation before the Son offers himself up to die on Man’s behalf.

…On Me let thine anger fall.

Account Me Man. I for his sake will leave

Thy bosom and this glory next to Thee

Freely put off and for him lastly die

Well pleased. On Me let Death wreck all his rage! (3.237-42)

This great sacrifice, which is supposed to be the basis of the Son’s privileged status over the angels, is immediately undermined because he knows he won’t stay dead for long: “Yet that debt paid/ Thou wilt not leave me in the loathsome grave” (246-7). The Son will only die momentarily. This sacrifice doesn’t stack up well against the real risks and sacrifices made by Satan.

All the poetry about obedience and freedom and debt never takes on the central question Satan’s rebellion forces readers to ponder: Does God deserve our obedience? Or are the labels of good and evil applied arbitrarily? The original pair was forbidden from eating from the Tree of Knowledge—could they possibly have been right to contravene the interdiction? Since it is God being discussed, however, the assumption that his dominion requires no justification, that it is instead simply in the nature of things, might prevail among some readers, as it does for the angels who refuse to join Satan’s rebellion. The angels, after all, owe their very existence to God, as Abdiel insists to Satan. Who, then, are any of them to question his authority? This argument sets the stage for Satan’s remarkable rebuttal:

…Strange point and new!

Doctrine which we would know whence learnt: who saw

When this creation was? Remember’st thou

Thy making while the Maker gave thee being?

We know no time when we were not as now,

Know none before us, self-begot, self-raised

By our own quick’ning power…

Our puissance is our own. Our own right hand

Shall teach us highest deeds by proof to try

Who is our equal. (5.855-66)

Just as a pharaoh could claim credit for all the monuments and infrastructure whose construction he had merely commissioned, any king or dictator might try to convince his subjects that his deeds far exceed anything he could truly accomplish. If there’s no record and no witness, or if the records have been doctored and the witnesses silenced, the subjects have to take the king’s word for it.

That God’s dominion depends on some natural order, which he himself presumably put in place, makes his tendency to withhold knowledge deeply suspicious. Even the angels ultimately have to take God’s claims to have created the universe, and them along with it, solely on faith. Because that unquestioning faith is precisely what Satan and the readers of Paradise Lost are seeking a justification for, they could be forgiven for finding the answer tautological and unsatisfying. It is, after all, the Tree of Knowledge of Good and Evil that Adam and Eve are forbidden to eat fruit from. When Adam, after hearing Raphael’s recounting of the war in heaven, asks the angel how the earth was created, he does receive an answer, but only after a suspicious preamble:

…such commission from above

I have received to answer thy desire

Of knowledge with bounds. Beyond abstain

To ask nor let thine own inventions hope

Things not revealed which the invisible King

Only omniscient hath suppressed in night,

To none communicable in Earth or Heaven:

Enough is left besides to search and know. (7.118-125)

Raphael goes on to compare knowledge to food, suggesting that excessively indulging curiosity is unhealthy. This proscription of knowledge reminded Shelley of the Prometheus myth. It might remind modern readers of The Wizard of Oz (“Pay no attention to that man behind the curtain”) or of the space monkeys in Fight Club, who repeatedly remind us that “The first rule of Project Mayhem is, you do not ask questions.” It may also resonate with news about dictators in Asia or the Middle East trying desperately to keep social media outlets from spreading word of their atrocities.

Like the protesters of the Arab Spring, Satan is putting himself at great risk by challenging God’s authority. If God’s dominion over Man and the angels is evidence not of his benevolence but of his supreme selfishness, then Satan’s rebellion becomes an attempt at altruistic punishment. The extrapolation from economic experiments like the ultimatum and dictator games to efforts to topple dictators may seem like a stretch, especially if humans are predisposed to forming and accepting positions in hierarchies, as a casual survey of virtually any modern organization suggests is the case.

Organized institutions, however, are a recent development in terms of human evolution. The English missionary Lucas Bridges wrote about his experiences with the Ona foragers of Tierra del Fuego in his 1948 book Uttermost Part of the Earth, expressing his amusement at his fellow outsiders’ befuddlement when they learned about the Ona’s political dynamics:

A certain scientist visited our part of the world and, in answer to his inquiries on this matter, I told him that the Ona had no chieftains, as we understand the word. Seeing that he did not believe me, I summoned Kankoat, who by that time spoke some Spanish. When the visitor repeated his question, Kankoat, too polite to answer in the negative, said: “Yes, señor, we, the Ona, have many chiefs. The men are all captains and all the women are sailors” (quoted in Boehm 62).

At least among Ona men, it seems there was no clear hierarchy. The anthropologist Richard Lee discovered a similar dynamic operating among the !Kung foragers of the Kalahari. In order to ensure that no one in the group can attain an elevated status which would allow him to dominate the others, several leveling mechanisms are in place. Lee quotes one of his informants:

When a young man kills much meat, he comes to think of himself as a chief or a big man, and he thinks of the rest of us as his servants or inferiors. We can’t accept this. We refuse one who boasts, for someday his pride will make him kill somebody. So we always speak of his meat as worthless. In this way we cool his heart and make him gentle. (quoted in Boehm 45)

These examples of egalitarianism among nomadic foragers are part of anthropologist Christopher Boehm’s survey of every known group of hunter-gatherers. His central finding is that “A distinctively egalitarian political style is highly predictable wherever people live in small, locally autonomous social and economic groups” (35-36). This finding bears on any discussion of human evolution and human nature because small groups like these constituted the whole of humanity for all but what amounts to the final instants of geological time.

Also read:

THE ADAPTIVE APPEAL OF BAD BOYS

SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

Dennis Junk

A Lyrical Refutation of Thomas Kinkade

The story of human warmth defying the frigid, impersonal harshness of a colorless, lifeless cosmos—in trying desperately to please, just to please, those pictures offend—that’s only half the story.

FIRELIGHT ON A SNOWY NIGHT: A Lyrical Refutation

I was eager to get out when I saw the storm, the swarm of small shadowed blurs descending

in swerves to create

a limn of white, out into the soft glowing sky of a winter night, peering through the

dark as those blurs

become streaking dabs as they pass through spheres of yellow lamplight, countless, endlessly

falling, engulfing

those sad, drooping, fiery lenses depending on their stoic posts.

I think of those Thomas Kinkade pictures my mom loves so much—everybody’s mom

loves so much—

and I have to admit they almost manage to signal it, that feeling, that mood.

Cold, brutal, uncaring wind, and a blanketing blankness of white struggled through

by the yellow and orange

warm vibrant doings of unseen humans, those quaint stone bridges over unimaginably

frigid, deathly chilling water,

somehow in their quaintness, in their suggestion of, insistence on, human ingenuity,

human doggedness, those scenes

hold out the promise of an end to the coldness, an end to the white nothing that fails,

year after year, to blot out world.

Those pictures are lies though—in almost conveying the feeling, the mood, they

do it an injustice.

In willfully ignoring the barren, venous, upward clawing, fleshed branches that rake

the eerily luminescent wind-crowned sky,

and failing to find a symbol to adequately suggest the paradoxical pace of the flakes

falling, endlessly falling

through those yellow, orange spheres of light—hurried but hypnotically slow, frantic

but easily, gracefully falling,

adjusting their cant to invisible, unforeseen and unforeseeable forces.

The story of human warmth defying the frigid, impersonal harshness of a colorless,

lifeless cosmos—

in trying desperately to please, just to please, those pictures offend—that’s

only half the story.

The woman who lit the fire sending out its light through the windows, she’s aging—

every covering of snow

is another year in the ceaseless procession, and the man, who worked so doggedly

at building a scaffold

and laying the stones for that charming bridge, he’s beyond reach of the snow, two or three

generations gone since his generous feat.

The absence of heat is its own type of energy. The wet-lashing night air is charged with it,

like the pause after a breath

awaiting the inevitable inhale—but it holds off, and holds off. Inevitable? Meanwhile,

those charged particles

of shocking white, tiny, but with visible weight—they’d kiss your cheek if you

opened your coat

and you’d know you’d been kissed by someone not alive. The ceaseless falling

and steady accumulation,

hours and days and years—humans create watches and clocks to defy time, but

this relentless rolling over

of green to white, warm to cold, thrilling, rejuvenating spring to contemplative, resigned

autumn, this we watch helplessly,

hopefully, hurtling toward those homes so far beneath the snow.

The air is charged, every flake a tiny ghost—no tinier, though, than any of us merits—

haunting the slippery medium

of night we might glide through so slow, so effortless, so sealed up to keep in our warmth,

turned inward on ourselves.

The hush, the silent yawn, is haunted with humanity’s piled up heap of here and gone,

and haunted too with

our own homeless, friendless, impossibly frightening future.

The homes of neighbors friendly donning matching caps, alike in our mutual blanketing, our

mutual muting.

Those paintings of cozy lit houses in the winter harshness remind me of the juxtaposition

of fright and absence of true threat,

those opposites we feel when young, the trick, the gift of some masterful ghost story,

properly told in such a scene,

and this night, snow creaking underfoot like those ill-hinged doors opening all on

their own, raising chills,

this night is haunted too, but less with presence than with utter absence, here and gone,

all those troubled souls,

their existence of no more consequence than the intricacy suddenly annihilated as it

collides with the flesh

just beneath my eye, collides and instantly transforms into something more medium

than message and

no sooner lands than begins to evaporate.

Also read:

THE TREE CLIMBER: A STORY INSPIRED BY W.S. MERWIN

THE TRUTH ABOUT GROWNUPS

IN HONOR OF CHARLES DICKENS ON THE 200TH ANNIVERSARY OF HIS BIRTH

GRACIE - INVISIBLE FENCES

SECRET DANCERS

Dennis Junk

Intuition vs. Science: What's Wrong with Your Thinking, Fast and Slow

Kahneman has no faith in our ability to clean up our thinking. He’s an expert on all the ways thinking goes awry, and even he catches himself making all the common mistakes time and again. So he proposes a way around the impenetrable wall of cognitive illusion and self-justification. If all the people gossiping around the water cooler are well-versed in the language of biases and heuristics and errors of intuition, we may all benefit because anticipating gossip can have a profound effect on behavior. No one wants to be spoken of as the fool.

From Completely Useless to Moderately Useful

In 1955, a twenty-one-year-old Daniel Kahneman was assigned the formidable task of creating an interview procedure to assess the fitness of recruits for the Israeli army. Kahneman’s only qualification was his bachelor’s degree in psychology, but the state of Israel had only been around for seven years at the time, so the Defense Forces were forced to satisfice. In the course of his undergraduate studies, Kahneman had discovered the writings of a psychoanalyst named Paul Meehl, whose essays he would go on to “almost memorize” as a graduate student. Meehl’s work gave Kahneman a clear sense of how he should go about developing his interview technique.

If you polled psychologists today to get their predictions for how successful a young lieutenant inspired by a book written by a psychoanalyst would be in designing a personality assessment protocol—assuming you left out the names—you would probably get some dire forecasts. But Paul Meehl wasn’t just any psychoanalyst, and Daniel Kahneman has gone on to become one of the most influential psychologists in the world. The book whose findings Kahneman applied to his interview procedure was Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, which Meehl lovingly referred to as “my disturbing little book.” Kahneman explains,

Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule. In a typical study, trained counselors predicted the grades of freshmen at the end of the school year. The counselors interviewed each student for forty-five minutes. They also had access to high school grades, several aptitude tests, and a four-page personal statement. The statistical algorithm used only a fraction of this information: high school grades and one aptitude test. (222)

The findings for this prototypical study are consistent with those arrived at by researchers over the decades since Meehl released his book:

The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented. (223)       

            Kahneman designed the interview process by coming up with six traits he thought would have direct bearing on a soldier’s success or failure, and he instructed the interviewers to assess the recruits on each dimension in sequence. His goal was to make the process as systematic as possible, thus reducing the role of intuition. The response of the recruitment team will come as no surprise to anyone: “The interviewers came close to mutiny” (231). They complained that their knowledge and experience were being given short shrift, that they were being turned into robots. Eventually, Kahneman was forced to compromise, creating a final dimension that was holistic and subjective. The scores on this additional scale, however, seemed to be highly influenced by scores on the previous scales.

When commanding officers evaluated the new recruits a few months later, the team compared the evaluations with their predictions based on Kahneman’s six scales. “As Meehl’s book had suggested,” he writes, “the new interview procedure was a substantial improvement over the old one… We had progressed from ‘completely useless’ to ‘moderately useful’” (231).   
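To make the contrast concrete, here is a minimal sketch of the kind of mechanical prediction rule Meehl championed and Kahneman’s six-scale procedure approximated. Everything specific in it is hypothetical: the trait names, the 1-to-5 scale, and the equal weighting are invented for illustration, since the review doesn’t specify Kahneman’s actual dimensions. The point is only that the rule combines a few ratings the same way for every recruit, leaving intuition no room to intervene.

```python
# Hypothetical example of a mechanical ("statistical") prediction rule:
# a fixed set of ratings combined by a fixed formula, applied identically
# to every case. The six traits and the equal weights are invented for
# illustration; Kahneman's actual dimensions aren't given in the review.

TRAITS = ["duty", "sociability", "energy", "reliability", "punctuality", "stability"]

def mechanical_score(ratings: dict) -> float:
    """Average six 1-5 trait ratings into a single fitness prediction."""
    return sum(ratings[t] for t in TRAITS) / len(TRAITS)

recruit = {"duty": 4, "sociability": 2, "energy": 5,
           "reliability": 3, "punctuality": 4, "stability": 3}
print(mechanical_score(recruit))  # 3.5 -- same rule for every recruit, no intuition
```

The surprise in Meehl’s data, and in the two hundred studies since, is that rules this crude tend to match or beat the experts.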

Kahneman recalls this story at about the midpoint of his magnificent, encyclopedic book Thinking, Fast and Slow. This is just one in a long series of run-ins with people who don’t understand or can’t accept the research findings he presents to them, and it is neatly woven into his discussions of those findings. Each topic and each chapter feature a short test that allows you to see where you fall in relation to the experimental subjects. The remaining thread in the tapestry is the one most readers familiar with Kahneman’s work most anxiously anticipated: his friendship with Amos Tversky, his collaborator on the work that earned Kahneman the Nobel prize in economics in 2002.

Most of the ideas that led to the experiments that led to the theories that made the two famous, and that contributed to the founding of an entire new field, behavioral economics, were born of casual but thrilling conversations both men found intrinsically rewarding. Reading this book, as intimidating as it appears at a glance, you get glimmers of Kahneman’s wonder at the bizarre intricacies of his own and others’ minds, flashes of frustration at how obstinately or casually people avoid the implications of psychology and statistics, and intimations of the deep fondness and admiration he felt toward Tversky, who died in 1996 at the age of 59.

Pointless Punishments and Invisible Statistics

When Kahneman begins a chapter by saying, “I had one of the most satisfying eureka experiences of my career while teaching flight instructors in the Israeli Air Force about the psychology of effective training” (175), it’s hard to avoid imagining how he might have relayed the incident to Amos years later. It’s also hard to avoid speculating about what the book might’ve looked like, or whether it ever would have been written, if Tversky were still alive. The eureka experience Kahneman had in this chapter came about, as many of them apparently did, when one of the instructors objected to his assertion, in this case that “rewards for improved performance work better than punishment of mistakes.” The instructor insisted that over the long course of his career he’d routinely witnessed pilots perform worse after praise and better after being screamed at. “So please,” the instructor said with evident contempt, “don’t tell us that reward works and punishment does not, because the opposite is the case.” Kahneman, characteristically charming and disarming, calls this “a joyous moment of insight” (175).

            The epiphany came from connecting a familiar statistical observation with the perceptions of an observer, in this case the flight instructor. The problem is that we all have a tendency to discount the role of chance in success or failure. Kahneman explains that the instructor’s observations were correct, but his interpretation couldn’t have been more wrong.

What he observed is known as regression to the mean, which in that case was due to random fluctuations in the quality of performance. Naturally, he only praised a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and therefore likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into the cadet’s earphones only when the cadet’s performance was unusually bad and therefore likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process. (175-6)

The list of domains in which we fail to account for regression to the mean is disturbingly long. Even after you’ve learned about the phenomenon, it’s still difficult to recognize the situations it applies to. Kahneman quotes the statistician David Freedman to the effect that whenever regression becomes pertinent in a civil or criminal trial, the side that has to explain it will pretty much always lose the case. Not understanding regression, and not appreciating how it distorts our impressions, has implications for even the minutest details of our daily experiences. “Because we tend to be nice to other people when they please us,” Kahneman writes, “and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty” (176). Probability is a bitch.
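The flight-instructor illusion is easy to reproduce. Here is a minimal simulation with made-up numbers, not anything from the book: each cadet’s two attempts are independent draws from the same distribution, so skill never changes and feedback does nothing, yet performance still looks like it deteriorates after praise and improves after a scolding.

```python
import random

# Minimal regression-to-the-mean simulation (hypothetical numbers, not data
# from the book). Each cadet's two attempts are independent draws from the
# same standard normal distribution, so feedback has no effect by construction.
random.seed(1)
N = 100_000
first = [random.gauss(0, 1) for _ in range(N)]
second = [random.gauss(0, 1) for _ in range(N)]

# "Praise" follows an unusually good first attempt; "scolding" an unusually bad one.
after_praise = [s for f, s in zip(first, second) if f > 1.5]
after_scolding = [s for f, s in zip(first, second) if f < -1.5]

print(sum(after_praise) / len(after_praise))      # ~0: looks like deterioration
print(sum(after_scolding) / len(after_scolding))  # ~0: looks like improvement
```

Select the extremes on the first attempt and the second attempt regresses toward average by construction, exactly the pattern the instructor attributed to his screaming.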

The Illusion of Skill in Stock-Picking

Probability can be expensive too. Kahneman recalls being invited to give a lecture to advisers at an investment firm. To prepare for the lecture, he asked for some data on the advisers’ performances and was given a spreadsheet of investment outcomes covering eight consecutive years. When he compared the numbers statistically, he found that none of the investors was consistently more successful than the others: the correlation between outcomes from year to year was nil. When he attended a dinner the night before the lecture “with some of the top executives of the firm, the people who decide on the size of bonuses,” he knew from experience how tough a time he was going to have convincing them that “at least when it came to building portfolios, the firm was rewarding luck as if it were a skill.” Still, he was amazed by the execs’ lack of shock:

We all went on calmly with our dinner, and I have no doubt that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions—and thereby threaten people’s livelihood and self-esteem—are simply not absorbed. (216)
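The check Kahneman ran on the firm’s spreadsheet is simple enough to sketch. The data below are simulated pure noise standing in for the real outcomes, which of course aren’t available here; the point is what “the correlation between outcomes from year to year was nil” looks like when every adviser’s result is luck.

```python
import random
from statistics import correlation  # Python 3.10+

# Hypothetical data standing in for the firm's spreadsheet: 25 advisers'
# annual returns over 8 years, drawn as pure noise (no skill by construction).
random.seed(2)
ADVISERS, YEARS = 25, 8
results = [[random.gauss(0.05, 0.15) for _ in range(ADVISERS)] for _ in range(YEARS)]

# Correlate the advisers' outcomes in each year with their outcomes the next.
# If ranking reflected skill, these correlations would be consistently positive.
for y in range(YEARS - 1):
    r = correlation(results[y], results[y + 1])
    print(f"year {y + 1} vs year {y + 2}: r = {r:+.2f}")  # hovers around zero
```

With real skill in the mix, the year-to-year correlations would be consistently positive; Kahneman found they averaged out to nothing.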

The scene that follows echoes the first chapter of Carl Sagan’s classic paean to skepticism, The Demon-Haunted World, where Sagan recounts being bombarded with questions about science by a driver taking him from the airport to an auditorium where he was to give a lecture. He found himself explaining to the driver again and again that what the man thought was science (Atlantis, aliens, crystals) was, in fact, not. "As we drove through the rain," Sagan writes, "I could see him getting glummer and glummer. I was dismissing not just some errant doctrine, but a precious facet of his inner life" (4). In Kahneman’s recollection of his drive back to the airport after his lecture, he writes of a conversation he had with his own driver, one of the execs he’d dined with the night before.

He told me, with a trace of defensiveness, “I have done very well for the firm and no one can take that away from me.” I smiled and said nothing. But I thought, “Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?” (216)

Blinking at the Power of Intuitive Thinking

            It wouldn’t surprise Kahneman at all to discover how much stories like these resonate. Indeed, he must’ve considered it a daunting challenge to conceive of a sensible, cognitively easy way to get all of his vast knowledge of biases and heuristics and unconscious, automatic thinking into a book worthy of the science—and worthy too of his own reputation—while at the same time tying it all together with some intuitive overarching theme, something that would make it read more like a novel than an encyclopedia.

Malcolm Gladwell faced a similar challenge in writing Blink: The Power of Thinking Without Thinking, but he had the advantages of a less scholarly readership, no obligation to be comprehensive, and the freedom afforded to someone writing about a field he isn’t one of the acknowledged leaders and creators of. Ultimately, Gladwell’s book painted a pleasing if somewhat incoherent picture of intuitive thinking. The power he refers to in the title is power over the thoughts and actions of the thinker, not, as many must have presumed, the power to arrive at accurate conclusions.

It’s entirely possible that Gladwell’s misleading title came about deliberately, since there’s a considerable market for the message that intuition reigns supreme over science and critical thinking. But there are points in his book where it seems like Gladwell himself is confused. Robert Cialdini, Steve Martin, and Noah Goldstein cover some of the same research Kahneman and Gladwell do, but their book Yes!: 50 Scientifically Proven Ways to Be Persuasive is arranged in a list format, with each chapter serving as its own independent mini-essay.

Early in Thinking, Fast and Slow, Kahneman introduces us to two characters, System 1 and System 2, who pass the controls of our minds back and forth between themselves according to the expertise and competency demanded by the current exigency or enterprise. System 1 is the more intuitive, easygoing guy, the one who does what Gladwell refers to as “thin-slicing,” the fast thinking of the title. System 2 works deliberately and takes effort on the part of the thinker. Most people find having to engage their System 2 (multiply 17 by 24, say) unpleasant to one degree or another.

The middle part of the book introduces readers to two other characters, ones whose very names serve as a challenge to the field of economics. Econs are the beings market models and forecasts are based on. They are rational, selfish, and difficult to trick. Humans, the other category, show inconsistent preferences, changing their minds depending on how choices are worded or presented, are much more sensitive to the threat of loss than the promise of gain, are sometimes selfless, and not only can be tricked with ease but routinely trick themselves. Finally, Kahneman introduces us to our “Two Selves,” the two ways we have of thinking about our lives, either moment-to-moment (experiences he, along with Mihaly Csikszentmihalyi, author of Flow, pioneered the study of) or in abstract hindsight. It’s not surprising at this point that there are important ways in which the two selves tend to disagree.

Intuition and Cerebration

  The Econs versus Humans distinction, with its rhetorical purpose embedded in the terms, is plenty intuitive. The two selves idea, despite being a little too redolent of psychoanalysis, also works well. But the discussions about System 1 and System 2 are never anything but ethereal and abstruse. Kahneman’s stated goal was to discuss each of the systems as if they were characters in a plot, but he’s far too concerned with scientifically precise definitions to run with the metaphor. The term system is too bloodless and too suggestive of computer components; it’s too much of the realm of System 2 to be at all satisfying to System 1. The collection of characteristics Thinking links to the first system (see a list below) is lengthy and fascinating and not easily summed up or captured in any neat metaphor. But we all know what Kahneman is talking about. We could use mythological figures, perhaps Achilles or Orpheus for System 1 and Odysseus or Hephaestus for System 2, but each of those characters comes with his own narrative baggage. Not everyone’s System 1 is full of rage like Achilles, or musical like Orpheus. Maybe we could assign our System 1s idiosyncratic totem animals.

But I think the most familiar and the most versatile term we have for System 1 is intuition. It is a hairy and unpredictable beast, but we all recognize it. System 2 is actually the harder of the two to name, because people so often mistake their intuitions for logical thought. Kahneman explains why this is the case: because our cognitive resources are limited, our intuition often offers up simple questions as substitutes for more complicated ones. But we still need a term that doesn’t suggest complete independence from intuition and that doesn’t imply deliberate thinking operates flawlessly, like a calculator. I propose cerebration. The cerebral cortex rests on a substrate of other complex neurological structures. It’s more developed in humans than in any other animal. And the way it rolls trippingly off the tongue is as eminently appropriate as the swish of intuition. Both terms work well as verbs too. You can intuit, or you can cerebrate. And when your intuition is working in integrated harmony with your cerebration, you are likely in the state of flow Csikszentmihalyi pioneered the study of.

While Kahneman’s division of thought into two systems never really resolves into an intuitively manageable dynamic, something he does throughout the book, which I initially thought was silly, now seems like a clever stroke of brilliance. Kahneman has no faith in our ability to clean up our thinking. He’s an expert on all the ways thinking goes awry, and even he catches himself making all the common mistakes time and again. In the introduction, he proposes a way around the impenetrable wall of cognitive illusion and self-justification. If all the people gossiping around the water cooler are well-versed in the language describing biases and heuristics and errors of intuition, we may all benefit, because anticipating gossip can have a profound effect on behavior. No one wants to be spoken of as the fool.

Kahneman writes, “it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own.” It’s not easy to tell from his straightforward prose, but I imagine him writing lines like that with a wry grin on his face. He goes on,

Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and at home. (3)

So we encourage the education of others to trick ourselves into trying to be smarter in their eyes. Toward that end, Kahneman ends each chapter with a list of sentences in quotation marks, lines you might overhear passing that water cooler if everyone where you work read his book. I think he’s overly ambitious. At some point in the future, you may hear lines like “They’re counting on denominator neglect” (333) in a boardroom, where people are trying to impress colleagues and superiors, but I seriously doubt you’ll hear it in the break room. Really, what he’s hoping is that people will start talking more like behavioral economists. Though some undoubtedly will, Thinking, Fast and Slow probably won’t ever be as widely read as, say, Freud’s lurid, pseudoscientific The Interpretation of Dreams. That’s a tragedy.

Still, it’s pleasant to think about a group of friends and colleagues talking about something other than football and American Idol.

Characteristics of System 1 (105). Try to come up with a good metaphor:

- generates impressions, feelings, and inclinations; when endorsed by System 2 these become beliefs, attitudes, and intentions
- operates automatically and quickly, with little or no effort, and no sense of voluntary control
- can be programmed by System 2 to mobilize attention when particular patterns are detected (search)
- executes skilled responses and generates skilled intuitions, after adequate training
- creates a coherent pattern of activated ideas in associative memory
- links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance
- distinguishes the surprising from the normal
- infers and invents causes and intentions
- neglects ambiguity and suppresses doubt
- is biased to believe and confirm
- exaggerates emotional consistency (halo effect)
- focuses on existing evidence and ignores absent evidence (WYSIATI)
- generates a limited set of basic assessments
- represents sets by norms and prototypes, does not integrate
- matches intensities across scales (e.g., size and loudness)
- computes more than intended (mental shotgun)
- sometimes substitutes an easier question for a difficult one (heuristics)
- is more sensitive to changes than to states (prospect theory)
- overweights low probabilities
- shows diminishing sensitivity to quantity (psychophysics)
- responds more strongly to losses than to gains (loss aversion)
- frames decision problems narrowly, in isolation from one another

Also read:

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

“THE WORLD UNTIL YESTERDAY” AND THE GREAT ANTHROPOLOGY DIVIDE: WADE DAVIS’S AND JAMES C. SCOTT’S BIZARRE AND DISHONEST REVIEWS OF JARED DIAMOND’S WORK

Dennis Junk

In Honor of Charles Dickens on the 200th Anniversary of His Birth

A poem about the effects of reading fiction on one’s life and perspective, inspired by Charles Dickens.

DISTRACTION

He wakes up every day and reads

most days only for a few minutes

before he has to work the fields.

He always plans to read more

before he goes to sleep but

the candlelight and exhaustion

put the plan neatly away.

He hates the reading,

wonders if he should find

something other than Great Expectations.

But he doesn’t have

any other books,

and he thinks of reading

like he thinks of church.

And one Sunday after sleeping

through the sermon,

he comes home and picks up

his one book.

He finds his place

planning to read just

those few minutes

but goes on and on.

The line that gets him

is about how “our worst

weaknesses and meanness”

are “for the sake of” those

“we most despise.”

He reads it over and over

and then goes on intent

on making sense of the words

and finding that they make their own.

After a while he stops to consider

beginning the entire book again

feeling he’s missed too much

but he goes back to where he left off.

The next day in the field he puts

everything he sees into silent words

and that night he reads for the first time

before falling asleep.

The next day in the field he describes

to himself his feelings about his work

and later holds things in their places

with words as he moves around in time.

The words are the only constant,

as even their objects can shift

through his life, childhood,

senility, and through the life of the land.

He wants to write down his days on paper

because he believes if he does then he can

go anywhere, do anything, and yet still

there he’ll be.

It’s not that Dickens was right that got him,

but that he was wrong—

even Pip must’ve known his worst

wasn’t for anyone but Estella,

nor his best.

One day could stretch to a whole

book of bound pages like the one

in his hands, or it could start and finish

on just one.

He imagines writing right over

the grand typeset words of Dickens'

on page one, “Hard to believe,

I woke up, excited to read.

I wished I could keep reading all day.”

Sunday, June 22, 2008, 11:43 am.

Also read:

Secret Dancers

And:

Gracie - Invisible Fences
