READING SUBTLY

This was the domain of my Blogger site from 2009 to 2018, when I moved to this domain and started The Storytelling Ape. The search option should help you find any of the old posts you’re looking for.

Dennis Junk

Putting Down the Pen: How School Teaches Us the Worst Possible Way to Read Literature

Far too many literature teachers encourage us to treat great works like coded messages with potentially harmful effects on society. Thus, many of us were taught we needed to resist being absorbed by a story and enchanted by language—but aren’t those the two parts of reading we’re most excited about enjoying?

Storytelling comes naturally to humans. But there is a special category of narratives that we’re taught from an early age to approach in the most strained and unnatural of ways. The label we apply to this category is literature. While we demand of movies and television shows that they envelop us in the seamlessly imagined worlds of their creators’ visions, not only whisking us away from our own concerns, but rendering us oblivious as well, however fleetingly, to the artificiality of the dramas playing out before us, we split the spines of literary works expecting some real effort at heightened awareness to be demanded of us—which is why many of us seldom read this type of fiction at all.

Some of the difficulty is intrinsic to the literary endeavor, reflecting the authors’ intention to engage our intellect as well as our emotions. But many academics seem to believe that literature exists for the sole purpose of supporting a superstructure of scholarly discourse. Rather than treating it as an art form occupying a region where intuitive aesthetic experience overlaps with cerebral philosophical musing, these scholars take it as their duty to impress upon us the importance of approaching literature as a purely intellectual exercise. In other words, if you allow yourself to become absorbed in the story, especially to the point where you forget, however briefly, that it is just a story, then you’re breaking faith with the very institutions that support literary scholarship—and that to some degree support literature as an art form.    

The unremarked scandal of modern literary scholarship is that the tension between reading as an aesthetic experience and reading as a purely intellectual pursuit is never even acknowledged. Many students seeking a deeper and more indelible involvement with great works come away instead with instructions on how to take on a mindset and apply a set of methods designed specifically to preclude just the type of experience they’re hoping to achieve. For instance, when novelist and translator Tim Parks wrote an essay called “A Weapon for Readers” for The New York Review of Books, in which he opined on the critical importance of having a pen in hand while reading, he received several emails from disappointed readers who “even thus armed felt the text was passing them by.” In a response titled “How I Read,” Parks begins with an assurance that he will resist being “prescriptive” as he shares his own reading methods, and yet he goes on to profess, “I do believe reading is an active skill, an art even, certainly not a question of passive absorption.” But, we might ask, could there also be such a state as active absorption? And isn’t that what most of us are hoping for when we read a story?

For Parks, and nearly every academic literary scholar writing or teaching today, stories are vehicles for the transmission of culture and hence reducible to the propositional information contained within them. The task of the scholar and the responsible reader alike therefore is to penetrate the surface effects of the story—the characters, the drama, the music of the prose—so we can scrutinize the underlying assumptions that hold them all together and make them come to life. As author David Shields explains in his widely celebrated manifesto Reality Hunger, “I always read the book as an allegory, as a disguised philosophical argument.” Parks demonstrates more precisely what this style of reading entails, writing, “As I dive into the opening pages, the first question I’m asking is, what are the qualities or values that matter most to this author, or at least in this novel?” Instead of pausing to focus on character descriptions or to take any special note of the setting, he aims his pen at clues to the author’s unspoken preoccupations:

I start a novel by Hemingway and at once I find people taking risks, forcing themselves toward acts of courage, acts of independence, in a world described as dangerous and indifferent to human destiny. I wonder if being courageous is considered more important than being just or good, more important than coming out a winner, more important than comradeship. Is it the dominant value? I’m on the lookout for how each character positions himself in relation to courage.

We can forget for a moment that Parks’ claim is impossible—how could he start a novel with so much foreknowledge of what it contains? The important point revealed in this description is that from the opening pages Parks is searching for ways to leap from the particular to the abstract, from specific incidents of the plot to general propositions about the world and the people in it. He goes on,

After that the next step is to wonder what is the connection between these force fields—fear/courage, belonging/exclusion, domination/submission—and the style of the book, the way the plot unfolds. How is the writer trying to draw me into the mental world of his characters through his writing, through his conversation with me?

While this process of putting the characters in some relation to each other and the author in relation to the reader is going on, another crucial question is hammering away in my head. Is this a convincing vision of the world?

Like Shields, Parks is reducing stories to philosophical arguments. And he proceeds to weigh them according to how well they mesh with his own beliefs.

Parks addresses the objection that his brand of critical reading, which he refers to as “alert resistance,” will make us much less likely to experience “those wonderful moments when we might fall under a writer’s spell” by insisting that there will be time enough for that after we’ve thoroughly examined the text for dangerous hidden assumptions, and by further suggesting that many writers will have worked hard enough on their texts to survive our scrutiny. For Parks and other postmodern scholars, there’s simply too much at stake for us to allow ourselves to be taken in by a good story until it’s been properly scanned for contraband ideas. “Sometimes it seems the whole of society languishes in the stupor of the fictions it has swallowed,” he writes. Because it’s a central tenet of postmodernism, the ascendant philosophy in English departments across the country, Parks fails to appreciate just how extraordinary a claim he’s making when he suggests that writers of literary texts are responsible, at least to some degree, for all the worst ills of society.

The sickening irony is that postmodern scholars are guilty of the very crime they accuse literary authors of committing. Critics like Parks and Shields charge that writers dazzle us with stories so they can secretly inculcate us with their ideologies. Parks feels he needs to teach readers “to protect themselves from all those underlying messages that can shift one’s attitude without one’s being aware of it.” And yet when his own readers come to him looking for advice on how to experience literature more deeply he offers them his own ideology disguised as the only proper way to approach a text (politely, of course, since he wouldn’t want to be prescriptive). Consider the young booklover attending her first college lit courses and being taught the importance of putting literary works and their authors on trial for their complicity in societal evils: she comes believing she’s going to read more broadly and learn to experience more fully what she reads, only to be tricked into thinking what she loves most about books are the very things that must be resisted.

Parks is probably right in his belief that reading with a pen and looking for hidden messages makes us more attentive to the texts and increases our engagement with them. But at what cost? The majority of people in our society avoid literary fiction altogether once they’re out of school precisely because it’s too difficult to get caught up in the stories the way we all do when we’re reading commercial fiction or watching movies. Instead of seeing their role as helping students experience this absorption with more complex works, scholars like Parks instruct us on ways to avoid becoming absorbed at all. While at first the suspicion of hidden messages that underpins this oddly counterproductive approach to stories may seem like paranoia, the alleged crimes of authors often serve to justify an attitude toward texts that’s aggressively narcissistic—even sadistic. Here’s how Parks describes the outcome of his instructions to his students:

There is something predatory, cruel even, about a pen suspended over a text. Like a hawk over a field, it is on the lookout for something vulnerable. Then it is a pleasure to swoop and skewer the victim with the nib’s sharp point. The mere fact of holding the hand poised for action changes our attitude to the text. We are no longer passive consumers of a monologue but active participants in a dialogue. Students would report that their reading slowed down when they had a pen in their hand, but at the same time the text became more dense, more interesting, if only because a certain pleasure could now be taken in their own response to the writing when they didn’t feel it was up to scratch, or worthy only of being scratched.

It’s as if the author’s first crime, the original sin, as it were, was to attempt to communicate in a medium that doesn’t allow anyone to interject or participate. In essentially shouting writers down by marking up their works, Parks would have us believe we’re not simply being like the pompous idiot who annoys everyone by trying to point out all the holes in movie plots so he can appear smarter than the screenwriters—no, we’re actually making the world a better place. He even begins his essay on reading with a pen with this invitation: “Imagine you are asked what single alteration in people’s behavior might best improve the lot of mankind.”

            The question postmodern literary scholars never get around to answering is, given that they believe books and stories are so far-reaching in their insidious effects, and given that they believe the main task in reading is to resist the author’s secret agenda, why should we bother reading in the first place? Of course, we should probably first ask if it’s even true that stories have such profound powers of persuasion. Jonathan Gottschall, a scholar who seeks to understand storytelling in the context of human evolution, may seem like one of the last people you’d expect to endorse the notion that every cultural artifact emerging out of so-called Western civilization must be contaminated with hidden reinforcements of oppressive ideas. But in an essay that seemingly echoes Parks’ most paranoid pronouncements about literature, one that even relies on similarly martial metaphors, Gottschall reports,

Results repeatedly show that our attitudes, fears, hopes, and values are strongly influenced by story. In fact, fiction seems to be more effective at changing beliefs than writing that is specifically designed to persuade through argument and evidence.

What is going on here? Why are we putty in a storyteller’s hands? The psychologists Melanie Green and Tim Brock argue that entering fictional worlds “radically alters the way information is processed.” Green and Brock’s studies show that the more absorbed readers are in a story, the more the story changes them. Highly absorbed readers also detected significantly fewer “false notes” in stories—inaccuracies, missteps—than less transported readers. Importantly, it is not just that highly absorbed readers detected the false notes and didn’t care about them (as when we watch a pleasurably idiotic action film). They were unable to detect the false notes in the first place.

Gottschall’s essay is titled “Why Storytelling Is the Ultimate Weapon,” and one of his main conclusions seems to corroborate postmodern claims about the dangers lurking in literature. “Master storytellers,” he writes, “want us drunk on emotion so we will lose track of rational considerations, relax our skepticism, and yield to their agenda.”

            Should we just accept Shields’ point then that stories are no more than disguised attempts at persuasion? Should we take Parks’ advice and start scouring our books for potentially nefarious messages? It’s important to note that Gottschall isn’t writing about literature in his essay; rather, he’s discussing storytelling in the context of business and marketing. And this brings up another important point: as Gottschall writes, “story is a tool that can be used for good or ill.” Just because there’s a hidden message doesn’t mean it’s necessarily a bad one. Indeed, if literature really were some kind of engine driving the perpetuation of all the most oppressive aspects of our culture, then we would expect the most literate societies, and the most literate sectors within each society, to be the most oppressive. Instead, some scholars, from Lynn Hunt to Steven Pinker, have traced liberal ideas like universal human rights to the late eighteenth century, when novels were first being widely read. The nature of the relationship is nearly impossible to pin down with any precision, but it’s clear that our civilization’s thinking about human rights evolved right alongside its growing appreciation for literature.

            A growing body of research demonstrates that people who read literary fiction tend to be more empathetic—and less racist even. If literature has hidden messages, they seem to be nudging us in a direction not many would consider cause for alarm. It is empathy after all that allows us to enter into narratives in the first place, so it’s hardly a surprise that one of the effects of reading is a strengthening of this virtue. And that gets at the fundamental misconception at the heart of postmodern theories of narrative. For Shields and Parks, stories are just clever ways to package an argument, but their theories leave unanswered why we enjoy all those elements of narratives that so distract us from the author’s supposed agenda. What this means is that postmodern scholars are confused about what a story even is. They don’t understand that the whole reason narratives have such persuasive clout is that reading them brings us close to actual experiences, simulating what it would be like to go through the incidents of the plots alongside the characters. And, naturally, experiences tend to be more persuasive than arguments. When we’re absorbed in a story, we fail to notice incongruities or false notes because in a very real sense we see them work just fine right before our mind’s eye. Parks worries that readers will passively absorb arguments, so he fails to realize that the whole point of narratives is to help us become actively absorbed in their simulated experiences.

            So what is literature? Is it pure rhetoric, pure art, or something in between? Do novelists begin conceiving of their works when they have some philosophical point to make and realize they need a story to cloak it in? Or are any aspects of their stories that influence readers toward one position or another merely incidental to the true purpose of writing fiction? Consider these questions in the light of your own story consuming habits. Do you go to a movie to have your favorite beliefs reinforced? Or do you go to have a moving experience? Or we can think of it in relation to other art forms. Does the painter arrange colors on a canvas to convince us of some point? Are we likely to vote differently after attending a symphony? The best art really does impact the way we think and feel, but that’s because it creates a moving experience, and—perhaps the most important point here—that experience can seldom be reduced to a single articulable proposition. Think about your favorite novel and try to pare it down to a single philosophical statement, or even ten statements. Now compare that list of statements to the actual work.

Another fatal irony for postmodernism is that literary fiction, precisely because it requires special effort to appreciate, is a terribly ineffective medium for propaganda. And exploring why this is the case will start getting us into the types of lessons professors might be offering their students if they were less committed to their bizarre ideology than to celebrating literature as an art form. If we compare literary fiction to commercial fiction, we see that the former has at least two disadvantages when it comes to absorbing our attention. First, literary writers are usually committed to realism, so the events of the plot have to seem like they could plausibly occur in the real world, and the characters have to seem like people you could actually meet. Second, literary prose often relies on a technique known as estrangement, whereby writers describe scenes and experiences in a way that makes readers think about them differently than they ever have before, usually in the same way the character guiding the narration thinks of them. The effect of these two distinguishing qualities of literature is that you have less remarkable plots recounted in remarkably unfamiliar language, whereas with commercial fiction you have outrageous plots rendered in the plainest of terms.

            Since it’s already a challenge to get into literary stories, the notion that readers need to be taught how to resist their lures is simply perverse. And the notion that an art form that demands so much thought and empathy to be appreciated should be treated as some kind of delivery package for oppressive ideas is just plain silly—or rather it would be if nearly the entirety of American academia weren’t sold on it. I wonder if Parks sits in movie theaters violently scribbling in notebooks lest he succumb to the dangerous messages hidden in Pixar movies (like that friends are really great!). Our lives are pervaded by stories—why focus our paranoia on the least likely source of unacceptable opinions? Why assume our minds are pristinely in the right before being influenced? Of course, one of the biggest influences on our attitudes and beliefs, surely far bigger than any single reading of a book, is our choice of friends. Does Parks vet candidates for entrance into his social circle according to some standard of political correctness? For that matter, does he resist savoring his meals by jotting down notes about suspect ingredients, all the while remaining vigilant lest one of his dining partners slip in some indefensible opinion while he’s distracted with chewing?

            Probably the worst part of Parks’ advice to readers on how to get more out of literature is that he could hardly find a better way to ensure that their experiences will be blunted than by encouraging them to move as quickly as possible from the particular to the abstract and from the emotional to the intellectual. Emotionally charged experiences are the easiest to remember, dry abstractions the most difficult. If you want to get more out of literature, if you want to become actively absorbed in it, then you’ll need to forget about looking past the words on the page in search of confirmation for some pet theory. There’s enough ambiguity in good fiction to support just about any theory you’re determined to apply. But do you really want to go to literature intent on finding what you already think you know? Or would you rather go in search of undiscovered perspectives and new experiences?

Moonwalking with Einstein and Literature

I personally stopped reading fiction with a pen in my hand—and even stopped using bookmarks—after reading Moonwalking with Einstein, a book on memory and competitive mnemonics by science writer Joshua Foer. A classic of participatory journalism, the book recounts Foer’s preparation for the U.S. Memory Championship, and along the way it explores the implications of our culture’s continued shift toward more external forms of memory, from notes and books to recorders and smartphones. Since one of the major findings in the field of memory research is that you can increase your capacity with the right kind of training, Foer began looking for opportunities to memorize things. He writes,

I started trying to use my memory in everyday life, even when I wasn’t practicing for the handful of arcane events that would be featured in the championship. Strolls around the neighborhood became an excuse to memorize license plates. I began to pay a creepy amount of attention to name tags. I memorized my shopping lists. I kept a calendar on paper, and also in my mind. Whenever someone gave me a phone number, I installed it in a special memory palace. (163-4)

Foer even got rid of all the sticky notes around his computer monitor, except for one, which read, “Don’t forget to remember.”

The most basic technique in mnemonics is what cognitive scientists call “elaborative encoding,” which means you tag otherwise insignificant items like numbers or common names with more salient associations, usually some kind of emotionally provocative imagery. After reading Foer’s book, it occurred to me that while the mnemonics masters go about turning abstractions into solid objects and people, literary scholars are busy insisting that we treat fictional characters as abstractions. Authors, in applying the principle of estrangement to their descriptions, are already doing most of the work of elaboratively encoding pertinent information for us. We just have to accept their invitations and put the effort into imagining what they describe.

             A study I came across sometime after reading Foer’s book illustrates the tradeoff between external and internal memories. Psychologist Linda Henkel compared the memories of museum visitors who were instructed to take pictures to those of people who simply viewed the various displays, and she found that taking pictures had a deleterious effect on recall. What seems to be occurring here is that museum visitors who don’t take pictures are either more motivated to get the full experience by mentally taking in all the details or simply less distracted by the mechanics of picture-taking. People with photos know they can rely on them as external memories, so they’re quicker to shift their attention to other things. In other words, because they’re storing parts of the present moment for the future, they have less incentive to occupy the present moment—to fully experience it—with the result that they don’t remember it as well.

            If I’m reading nonfiction, or if I’m reading a work of fiction I’ve already read before in preparation for an essay or book discussion, I’ll still pull out a pen once in a while. But the first time I read a work of literature I opt to follow Foer’s dictum, “Don’t forget to remember,” instead of relying on external markers. I make an effort to cast myself into the story, doing my best to think of the events as though they were actually happening before my eyes and think of the characters as though they were real people—if an author is skilled enough and generous enough to give a character a heartbeat, who are we to drain them of blood? Another important principle of cognitive psychology is that “Memory is the residue of thought.” So when I’m reading I take time—usually at section breaks—to think over what’s already happened and wonder at what may happen next.

I do eventually get around to thinking about abstractions like the author’s treatment of various themes and what the broader societal implications might be of the particular worldview represented in the story, insofar as there is a discernible one. But I usually save those topics for the times when I don’t actually have the book in my hands. It’s much more important to focus on the particulars, on the individual words and lines, so you can make the most of the writer’s magic and transform the marks on the page into images in your mind. I personally think it’s difficult to do that when you’re busy making your own marks on the pages. And I also believe we ought to have the courage and openheartedness to give ourselves over to great authors—at least for a while—confident in our ability to return from where they take us if we deem it necessary. Once in a while, the best thing to do is just shut up and listen.

Also read: 

How Violent Fiction Works: Rohan Wilson’s “The Roving Party” and James Wood’s Sanguinary Sublime from Conrad to McCarthy

And: 

Rebecca Mead’s Middlemarch Pilgrimage and the 3 Wrong Ways to Read a Novel

And: 

Sabbath Says: Philip Roth and the Dilemmas of Ideological Castration

Dennis Junk

The Creepy King Effect: Why We Can't Help Confusing Writers with Their Characters

Authors often serve as mental models helping readers imagine protagonists. This can be disconcerting when those protagonists engage in unsavory behavior. Is Stephen King as scary as his characters? Probably not, as we all know, but the workings of his imagination are enough to make us wonder just a bit.

          Every writer faces this conundrum: your success hinges on your ability to create impressions that provoke emotions in the people who read your work, so you need feedback from a large sample of readers to gauge the effect of your writing. Without feedback, you have no way to calibrate the impact of your efforts and thus no way to hone your skills. This is why writers’ workshops are so popular; they bring a bunch of budding authors together to serve as one another’s practice audience. The major drawback to this solution is that a sample composed of fellow authorly aspirants may not be representative of the audience you ultimately hope your work will appeal to.

Whether or not they attend a workshop, all writers avail themselves of the ready-made trial audience composed of their family and friends, a method which inevitably presents them with yet another conundrum: anyone who knows the author won’t be able to help mixing her up with her protagonists. The danger isn’t just that the feedback you get will be contaminated with moral judgments and psychological assessments; you also risk offending people you care about, who will have a tough time not identifying with characters who bear even the most superficial resemblance to them. And of course you risk giving everyone the wrong idea about the type of person you are and the type of things you get up to.

My first experience of being mistaken for one of my characters occurred soon after I graduated from college. A classmate and fellow double-major in psychology and anthropology asked to read a story I’d mentioned I was working on. Desperate for feedback, I emailed it to her right away. The next day I opened my inbox to find a two-page response to the story which treated everything described in it as purely factual and attempted to account for the emotional emptiness I’d demonstrated in my behavior and commentary. I began typing my own response explaining I hadn’t meant the piece to be taken as a memoir—hence the fictional name—and pointing to sections she’d missed that were meant to explain why the character was emotionally empty (I had deliberately portrayed him that way), but as I composed the message I found myself getting angry. As a writer of fiction, you trust your readers to understand that what you’re writing is, well, fiction, regardless of whether real people and real events figure into it to some degree. I felt like that trust had been betrayed. I was being held personally responsible for behaviors and comments that for all she knew I had invented out of whole cloth for the sake of telling a good story.

To complicate matters, the events in the story my classmate was responding to were almost all true. And yet it still seemed tremendously unfair for anyone to have drawn conclusions about me based on it. The simplest way to explain this is to point out that you have an entirely different set of goals if you’re telling a story about yourself to a friend than you do if you’re telling a story about a fictional character to anyone who might read it—even if they’re essentially the same story. And your goals affect your choice of not only which events to include, but which aspects of the situation and which traits of the characters to focus on. Add in even a few purely fabricated elements and you can dramatically alter the readers’ impression of the characters.

Another way to think about this is to imagine how boring fiction would be if all authors knew they would be associated with and held accountable for everything their protagonists do or say. This is precisely why it’s so important to avoid mistaking writers for their characters, and why writers feel betrayed when that courtesy isn’t afforded to them. Unfortunately, that courtesy is almost never afforded to them. Indeed, if you call readers out for conflating you with your characters, many of them will double down on their mistake. As writers who feel our good names should be protected under the cover of the fiction label, we have to accept that human psychology is constantly operating to poke giant holes in that cover.

Let’s try an experiment: close your eyes for a moment and try to picture Jay Gatsby’s face in your mind’s eye. If you’re like me, you imagined one of two actors who played Gatsby in the movie versions, either Leonardo DiCaprio or Robert Redford. The reason these actors come so readily to mind is that imagining a character’s face from scratch is really difficult. What does Queequeg look like? Melville describes him in some detail; various illustrators have given us their renditions; a few actors have portrayed him, albeit never in a film you’d bother watching a second time. Since none of these movies is easily recallable, I personally have to struggle a bit to call an image of him to mind. What’s true of characters’ physical appearances is also true of nearly everything else about them. Going from words on a page to holistic mental representations of human beings takes effort, and even if you put forth that effort the product tends to be less than perfectly vivid and stable.

In lieu of a well-cast film, the easiest shortcut to a solid impression is to substitute the author you know for the character you don’t. Actors are also mistaken for their characters with disturbing frequency, or at least assumed to possess similar qualities. (“I’m not a doctor, but I play one on TV.”) To be fair, actors are chosen for roles they can convincingly pull off, and authors, wittingly or otherwise, infuse their characters with tinctures of their own personalities. So it’s not like you won’t ever find real correspondences.

You can nonetheless count on your perception of the similarities being skewed toward gross exaggeration. This is owing to a phenomenon social psychologists call the fundamental attribution error. The basic idea is that, at least in individualist cultures, people tend to attribute behavior to the regular inclinations of the person behaving as opposed to more ephemeral aspects of the situation: the driver who cut you off is inconsiderate and hasty, not rushing to work because her husband’s car broke down and she had to drop him off first. One of the classic experiments on this attribution bias had subjects estimate people’s support for Fidel Castro based on an essay they’d written about him. The study, conducted by Edward Jones and Victor Harris at the height of the Cold War, found that even when people were told that the author was assigned a position either for or against Castro based on a coin toss, they still assumed more often than not that the argument reflected the author’s true beliefs.

The implication of Jones and Harris’s findings is that even if an author tries to assure everyone that she was writing on behalf of a character for the purpose of telling a story, and not in any way trying to use that character as a mouthpiece to voice some argument or express some sentiment, readers are still going to assume she agrees with everything her character thinks and says. As readers, we can’t help giving too little weight to the demands of the story and too much weight to the personality of the author. And writers can’t even count on other writers not to be biased in this way. In 2001, Eric Hansen, Charles Kimble, and David Biers conducted a series of experiments that instructed people to treat a fellow study participant in either a friendly or unfriendly way and then asked them to rate each other on friendliness. Even though they all got the same type of instructions, and hence should have appreciated the nature of the situational influences, they still attributed unfriendliness in someone else to that person’s disposition. Of course, their own unfriendliness they attributed to the instructions.

One of the theories for why we Westerners fall prey to the fundamental attribution error is that creating dual impressions of someone’s character takes a great deal of effort. Against the immediate and compelling evidence of actual behavior, we have nothing but an abstract awareness of the possibility that the person may behave differently in different circumstances. Because it’s easy to imagine the person behaving the same way even without situational factors like explicit instructions, that possibility seems all the more plausible, creating the illusion that we can somehow tell whether someone following instructions, performing a scene, or writing on behalf of a fictional character is being sincere—and naturally enough we nearly always conclude they are.

The underlying principle here—that we’re all miserly with our cognition—is bad news for writers for yet another reason. Another classic study, this one conducted by Daniel Gilbert and his colleagues, was reported in an article titled “You Can’t Not Believe Everything You Read,” which for a fiction writer sounds rather heartening at first. The experiment asked participants to determine prison sentences for defendants in imaginary court cases based on statements that were color-coded to signal they were either true or false. Even though some of the statements were marked as false, they still affected the length of the sentences, and the effect grew even more pronounced when the participants were distracted or pressed for time.

The researchers interpret these findings to mean that believing a statement is true is crucial to comprehending it. To understand the injunction against thinking of a pink elephant, you have to imagine the very pink elephant you’re not supposed to think about. Only after comprehension is achieved can you then go back and tag a statement as false. In other words, we automatically believe what we hear or read and only afterward, with much cognitive effort, go back and revise any conclusions we arrived at based on the faulty information. That’s why sentences based on rushed assessments were more severe—participants didn’t have enough time to go back and discount the damning statements that were marked as false.

If those of us who write fiction assume that our readers rely on the cognitive shortcut of substituting us for our narrators or protagonists, Hansen et al.’s and Gilbert’s findings suggest yet another horrifying conundrum. The more the details of our stories immerse readers in the plot, the more difficulty they’re going to have taking into account the fictional nature of the behaviors being enacted in each of the scenes. So the more successful you are in writing your story, the harder it’s going to be to convince anyone you didn’t do the things you so expertly led them to envision you doing. And I suspect, even if readers know as a matter of fact you probably didn’t enact some of the behaviors described in the story, their impressions of you will still be influenced by a sort of abstract association between you and the character. When a reader seems to be confusing me with my characters, I like to pose the question, “Did you think Stephen King wanted to kill his family when you read The Shining?” A common answer I get is, “No, but he is definitely creepy.” (After reading King’s nonfiction book On Writing, I personally no longer believe he’s creepy.)

            When people talk to me about stories of mine they’ve read, they almost invariably use “you” as a shorthand for the protagonist. At least, that’s what I hope they’re doing—in many cases, though, they betray no awareness of the story as a story. To them, it’s just a straightforward description of some real events. Of course, when you press them they allow for some creative license; they’ll acknowledge that maybe it didn’t all happen exactly as it’s described. But that meager allowance still tends to leave me pretty mortified. Once, I even had a family member point to some aspects of a character that were recognizably me and suggest that they undermined the entire story because they made it impossible for her to imagine the character as anyone but me. In her mind, my goal in writing was to disguise myself behind the character, but I’d failed to suppress my true feelings. I tried to explain that I hadn’t tried to hide anything; I’d included elements of my own life deliberately because they served what were my actual goals. I don’t think she was convinced. At any rate, I never got any good feedback from her because she simply didn’t understand what I was really trying to do with the story. And ever since I’ve been taking a reader’s use of “you” to refer to the protagonist as an indication that I’ll need to go elsewhere for any useful commentary.

I’m pretty sure all fiction writers incorporate parts of their own life stories into their work. I’d even go so far as to posit that, at least for literary writers, creating plots and characters is more a matter of rearranging bits and pieces of real events and real people’s sayings and personalities into a coherent sequence with a beginning, middle, and end—a dilemma, resolution, and outcome—than it is of conjuring scenes and actors out of the void. But even a little of this type of rearranging is enough to make any judgments about the author seem pretty silly to anyone who can put the true details back together in their original order. The problem is the author is often the only one who knows what parts are true and how they actually happened, so you’re left having to simply ask her what she really thinks, what she really feels, and what she’s really done. For everyone else, the story only seems like it can tell them something when they already know whatever it is it might tell them. So they end up being tricked into making the leap from bits and pieces of recognizable realities to an assumption of general truthiness.

Even the greatest authors get mixed up in people’s minds with their characters. People think Rabbit Angstrom and John Updike are the same person—or at least that the character is some kind of distillation of the author. Philip Roth gets mistaken for both Nathan Zuckerman (though Roth seems to have wanted that to happen) and Mickey Sabbath, two very different characters. I even wonder if readers assume some kinship between Hilary Mantel and her fictional version of Thomas Cromwell. So I have to accept that my goal with this essay is ridiculously ambitious. As long as I write, people are going to associate me with my narrators and protagonists to one degree or another.

 ********

Nevertheless, I’m going to do something that most writers are loath to do. I’m going to retrace the steps that went into the development of my latest story so everyone can see what I mean when I say I’m responding to the demands of the story or making characters serve the goals of the story. By doing so, I hope to show how quickly real life character models and real life events get muddled, and why there could never be anything like a straightforward method for drawing conclusions about the author based on his or her characters.

The story is titled The Fire Hoarder and it follows a software engineer nearing forty who decides to swear off his family and friends for an indefinite amount of time because he’s impatient with their complacent mediocrity and feels beset by their criticisms, which he perceives as expressions of envy and insecurity. My main inspirations were a series of conversations with a recently divorced friend about the detrimental effects of marriage and parenthood on a man’s identity, a beautiful but somehow eerie nature preserve in my hometown where I fell into the habit of taking weekly runs, and the HBO series True Detective.

The newly divorced friend, whom I’ve known for almost twenty years, became a bit of a fitness maniac over this past summer. Mainly through grueling bike rides, he lost all the weight he’d put on since what he considered his physical prime, going from something like 235 to 190 pounds in the span of a few months. Once, in the midst of a night of drinking, he began apologizing for all the time he’d been locked away, gaining weight, doing nothing with his life. He said he felt like he’d let me down, but I have to say it hadn’t ever occurred to me to take it personally. Months later, in the process of writing my story, I realized I needed some kind of personal drama in the protagonist’s life, something he would already be struggling with when the instigating events of the plot occurred.

So my divorced friend, who turned 39 this summer (I’m just turning 37 myself), ended up serving as a partial model for two characters: the protagonist, who is determined to get in better shape, and the friend who betrays him by being too comfortable and lazy in his family life. He shows up again in the words of yet another character, a police detective and tattoo artist who tries to convince the protagonist that single life is better than married life. Though, as one of the other models for that character, an actual police detective and tattoo artist, was quick to notice, the cop in the story is based on a few other people as well.

My own true detective meets the protagonist at a bar after the initial scene. The problem I faced with this chapter was that the main character had already decided to forswear socializing. I handled this by describing the relationship between the characters as one that didn’t include any kind of intimate revelations or profound exchanges—except when it did (like in this particular scene). “Oh man,” read the text I got from the real detective, “I hope I am not as shallow of a friend as Ray is to Russell?” And this is a really good example of how responding to the demands of the story can give the wrong impression to anyone looking for clues about the author’s true thoughts and feelings. (He later assured me he was just busting my balls.)

Russell’s name was originally Steve; I changed it late in the writing process to serve as an homage to Rustin Cohle, one of the lead characters in True Detective. Before I even began watching the show, one of my brothers, the model for Russell’s brother Nick, compared me to Rust. He meant it as a compliment, but a complicated one. As with all brothers, our relationship is complimentary, but complicated. A few of the things my brother has said that have annoyed me over the past few years show up in the story, but whereas this type of commentary is subsumed in our continuing banter, which is almost invariably good-humored, it really gets under Russell’s skin. In a story, one of the goals is to give texture to the characters’ backgrounds, and another goal is often to crank up the tension. So I often include more serious versions of petty and not-so-memorable spats I have with friends, lovers, and family members in my plots and character bios. And when those same friends, lovers, and family members read the resulting story, I have to explain that it doesn’t mean what they think it means. (I haven’t gotten any texts from my brother about the story yet.) I won’t go into the details of my love life here; suffice it to say writers pretty much have to be prepared for their wives or girlfriends to flip out whenever they read one of their stories featuring fictional wives or girlfriends.

I was initially put off by True Detective for the same reasons I have a hard time stomaching any hardboiled fiction. The characters use the general foulness of the human race to justify their own appalling behavior. “The world needs bad men,” Rust says to his partner. “They keep the other bad men from the door.” The conceit is that life is so ugly and people are so evil that we should all just walk around taking ourselves far too seriously as we bemoan the tragedy of existence. At one point, Rust tells some fellow detectives about M-theory, an outgrowth of superstring theory. The show tries to make it sound tragic and horrifying. But the tone of the scene is perfectly nonsensical. Why should thinking about extra dimensions be like rubbing salt in our existential wounds? The view of the world that emerges is as embarrassingly adolescent as it is screwball.

But much of the dialogue in the show is magnificent, and the scene-by-scene progression of the story is virtuosic. When I first watched it, the conversations about marriage and family life resonated with the discussions I’d been having with my divorced friend over the summer. And Rust in particular forces us to ask what becomes of a man who finds the common rituals and diversions to be resoundingly devoid of meaning. The entire mystery of the secret cult at the center of the plot, with all its crude artifacts made of sticks, really only succeeds as a prop for Rust’s struggle with his own character. He needs something to obsess over. But the bad guy at the end, the monster at the end of the dream, is destined to disappoint. I included my own true detective in The Fire Hoarder so there would be someone who could explain why not finding such monsters is worse than finding them. And I went on to explore what a character like Rust, obsessive, brilliant, skeptical, curious, and haunted, would do in the absence of a bad guy to hunt down. But my guy wouldn’t take himself so seriously.

If you add in the free indirect style of narration I enjoy in the works of Saul Bellow, Philip Roth, Ian McEwan, Hilary Mantel, and others, along with some of the humor you find in their novels, you have the technique I used, the tone I tried to strike, and my approach to dealing with the themes. (The revelation at the end that one of the characters is acting as the narrator is a trick I’m most familiar with from McEwan’s Sweet Tooth.) The ideal reader response to my story would focus on these issues of style and technique, and insofar as the comments took on topics like the vividness of the characters or the feelings they inspired it would do so as if they were entities entirely separate from me and the people I know and care about.

But I know that would be asking a lot. The urge to read stories, the pleasure we take in them, is a product of the same instincts that make us fascinated with gossip. And we have a nasty tendency to try to find hidden messages in what we read, as though we were straining to hear the whispers we can’t help suspecting are about us, and not exactly flattering. So, as frustrated as I get with people who get the wrong idea, I usually come around to just being happy there are some people out there who are interested enough in what I’m doing to read my fiction.

Also read: 

The Fire Hoarder

And:

Sabbath Says: Philip Roth and the Dilemmas of Ideological Castration

Dennis Junk

The Rowling Effect: The Uses and Abuses of Storytelling in Literary Fiction

Donna Tartt’s Pulitzer-winning novel “The Goldfinch” prompted critic James Wood to lament the demise of the more weighty and serious novels of the past and the rise of fantastical stories in a world where adults go around reading Harry Potter. But is Wood confused about what storytelling really is?

It’s in school that almost everyone first experiences both the joys and the difficulties of reading stories. And almost everyone quickly learns to associate reading fiction with all the other abstract, impersonal, and cognitively demanding tasks forced on them by their teachers. One of the rewards of graduation, indeed of adulthood, is that you no longer have to read boring stories and novels you have to work hard to understand, all those lengthy texts that repay the effort with little else besides the bragging rights for having finished. (So, on top of being a waste of time, reading books makes normal people hate you.) One of the worst fates for an author, meanwhile, is to have her work assigned in one of those excruciating lit courses students treat as way stations on the way to gainful employment, an honor all but guaranteed to inspire lifelong loathing.

As a lonely endeavor, reading is most enticing—for many it’s only enticing—when viewed as an act of rebellion. (It’s no accident that the Harry Potter books begin with scenes of life with the Dursley family, a caricature of conformity and harsh, arbitrary discipline.) So, if students’ sole motivation to read comes from on high, with the promise of quizzes and essays to follow, the natural defiance of adolescence ensures a deep-seated suspicion of the true purpose of the assignment and a stubborn resistance to any emotional connection with the characters. This is why all but the tamest, most credulous of students get filtered out on the way to advanced literature courses at universities, leaving the kids neediest of praise from teachers and least capable of independent thought, which is in turn why so many cockamamie ideas proliferate in English departments. As arcane theories about the “indeterminacy of meaning” or “the author function” trickle down into high schools and grade schools, it becomes ever more difficult to imagine, let alone test, possible reforms to the methods teachers use to introduce kids to written stories.

Miraculously, reading persists at the margins of society, far removed from the bloodless academic exercises students are programmed to dread. The books you’re most likely to read after graduation are the type you read when you’re supposed to be reading something else, the comics tucked inside textbooks, the unassigned or outright banned books featuring characters struggling with sex, religious doubt, violence, abortion, or corrupt authorities. One of the reasons the market for books written for young adults is currently so vibrant and successful is that literature teachers haven’t gotten around to including any of the most recent novels in their syllabuses. And, if teachers take to heart the admonitions of critics like Ruth Graham, who insists that “the emotional and moral ambiguity of adult fiction—of the real world—is nowhere in evidence in YA fiction,” they never will. YA books' biggest success is making reading its own reward, not an exercise in the service of developing knowledge or character or maturity—whatever any of those are supposed to be. And what naysayers like Graham fear is that such enjoyment might be coming at the expense of those same budding virtues, and it may even forestall the reader’s graduation to the more refined gratifications that come from reading more ambiguous and complex—or more difficult, or less fantastical—fiction. 

Harry Potter became a cultural phenomenon at a time when authors, publishers, and critics were busy breaking the news of the dismal prognosis for the novel, beset as it was by the rise of the internet, the new golden age of television, and a growing impatience with texts extending more than a few paragraphs. The impact might not have been felt in the wider literary world if the popularity of Rowling’s books had been limited to children and young adults, but British and American grownups seem to have reasoned that if the youngsters think it’s cool it’s probably worth it for the rest of us young-at-hearts to take a look. Now not only are adults reading fiction written for teens, but authors—even renowned literary authors—are taking their cue from the YA world. Marquee writers like Donna Tartt and David Mitchell are spinning out elaborate yarns teeming with teen-tested genre tropes they hope to make respectable with a liberal heaping of highly polished literary prose. Predictably, the laments and jeremiads from old-school connoisseurs are beginning to show up in high-end periodicals. Here’s James Wood’s opening to a review of Mitchell’s latest novel:

As the novel’s cultural centrality dims, so storytelling—J.K. Rowling’s magical Owl of Minerva, equipped for a thousand tricks and turns—flies up and fills the air. Meaning is a bit of a bore, but storytelling is alive. The novel form can be difficult, cumbrously serious; storytelling is all pleasure, fantastical in its fertility, its ceaseless inventiveness. Easy to consume, too, because it excites hunger while simultaneously satisfying it: we continuously want more. The novel now aspires to the regality of the boxed DVD set: the throne is a game of them. And the purer the storytelling the better—where purity is the embrace of sheer occurrence, unburdened by deeper meaning. Publishers, readers, booksellers, even critics, acclaim the novel that one can deliciously sink into, forget oneself in, the novel that returns us to the innocence of childhood or the dream of the cartoon, the novel of a thousand confections and no unwanted significance. What becomes harder to find, and lonelier to defend, is the idea of the novel as—in Ford Madox Ford’s words—a “medium of profoundly serious investigation into the human case.”

As is customary for Wood, the bracingly eloquent clarifications in this passage serve to misdirect readers from its overall opacity, which is to say he raises more questions than he answers.

The most remarkable claim in Wood’s advance elegy (an idea right out of Tom Sawyer and reprised in The Fault in Our Stars) is that “the novel” is somehow at odds with storytelling. The charge that a given novel fails to rise above mere kitsch is often a legitimate one: a likable but attractively flawed character meets another likable character whose equally attractive flaws perfectly complement and compensate for those of the first, so that they can together transcend their foibles and live happily ever after. This is the formula for commercial fiction, designed to uplift and delight (and make money). But the best of YA novels are hardly guilty of this kind of pandering. And even if we acknowledge that aiming simply to be popular and pleasing is a good formula in its own right—for crappy novels—it doesn’t follow that quality writing precludes pleasurable reading. The questions critics like Graham and Wood fail to answer as they bemoan the decline of ambiguity on the one hand and meaning on the other are what role either one naturally plays, whether in storytelling or in literature, and what really distinguishes a story from a supposedly more serious and meaningful novel.

Donna Tartt’s Pulitzer-winning novel The Goldfinch has rekindled an old debate about the difference between genre fiction and serious literature. Evgenia Peretz chronicles some earlier iterations of the argument in Vanity Fair, and the popularity of Rowling’s wizards keeps coming up, both as an emblem of the wider problem and a point of evidence proving its existence. As Christopher Beha explains in the New Yorker,

The problem with “The Goldfinch,” its detractors said, was that it was essentially a Y.A. novel. Vanity Fair quoted Wood as saying that “the rapture with which this novel has been received is further proof of the infantilization of our literary culture: a world in which adults go around reading Harry Potter.”

For Wood—and he’s hardly alone—fantastical fiction lacks meaning for the very reason so many readers find it enjoyable: it takes place in a world that simply doesn’t exist, with characters like no one you’ll ever really encounter, and the plots resolve in ways that, while offering some modicum of reassurance and uplift, ultimately mislead everyone about what real, adult life is all about. Whatever meaning these simplified and fantastical fictions may have is thus hermetically sealed within the world of the story.

            The terms used in the debates over whether there’s a meaningful difference between commercial and literary fiction and whether adults should be embarrassed to be caught reading Harry Potter are so poorly defined, and the nature of stories so poorly understood, that it seems like nothing will ever be settled. But the fuzziness here is gratuitous. Graham’s cherishing of ambiguity is perfectly arbitrary. Wood is simply wrong in positing a natural tension between storytelling and meaning. And these critics’ gropings after some solid feature of good, serious, complex, adult literature they can hold up to justify their impatience with, and disappointment in, less ambitious readers are symptomatic of the profound vacuity of literary criticism as both a field of inquiry and an artistic, literary form of its own. Even a critic as erudite and perceptive as Wood—and as eminently worth reading, even when he’s completely wrong—relies on fundamental misconceptions about the nature of stories and the nature of art.

            For Wood, the terms story, genre, plot, and occurrence are all but interchangeable. That’s how he can condemn “the embrace of sheer occurrence, unburdened by deeper meaning.” But the type of meaning he seeks in literature sounds a lot like philosophy or science. How does he distinguish between novels and treatises? The problem here is that story is not reducible to sheer occurrence. Plots are not mere sequences of events. If I tell you I got in my car, went to the store, and came home, I’m recalling a series of actions—but it’s hardly a story. However, if I say I went to the store and while I was there I accidentally bumped shoulders with a guy who immediately flew into a rage, then I’ve begun to tell you a real story. Many critics and writing coaches characterize this crucial ingredient as conflict, but that’s only partly right. Conflicts can easily be reduced to a series of incidents. What makes a story a story is that it features some kind of dilemma, some situation in which the protagonist has to make a difficult decision. Do I risk humiliation and apologize profusely to the guy whose shoulder I bumped? Do I risk bodily harm and legal trouble by standing up for myself? There’s no easy answer. That’s why it has the makings of a good story.

            Meaning in stories is not declarative or propositional, just as the point of physical training doesn’t lie in any communicative aspect of the individual exercises. And you wouldn’t judge a training regimen based solely on the exercises’ resemblance to actions people perform in their daily lives. A workout is good if it’s both enjoyable and effective, that is, if going through it offers sufficient gratification to outweigh the difficulty—so you keep doing it—and if you see improvements in the way you look and feel. The pleasure humans get from stories is probably a result of the same evolutionary processes that make play fighting or play stalking fun for cats and dogs. We need to acquire skills for handling our complex social lives just as they need to acquire skills for fighting and hunting. Play is free-style practice made pleasurable by natural selection to ensure we’re rewarded for engaging in it. The form that play takes, as important as it is in preparing for real-life challenges, only needs to resemble real life enough for the skills it hones to be applicable. And there’s no reason reading about Harry Potter working through his suspicions and doubts about Dumbledore couldn’t help to prepare people of any age for a similar experience of having to question the wisdom or trustworthiness of someone they admire—even though they don’t know any wizards. (And isn’t this dilemma similar to the one so many of Saul Bellow’s characters face in dealing with their “reality instructors” in the novels Wood loves most?)

            The rather obvious principle that gets almost completely overlooked in debates about low versus high art is that the more refined and complex a work is, the more effort will be necessary to fully experience it, and the fewer people will be able to fully appreciate it. The exquisite pleasures of long-distance running, or classical music, or abstract art are reserved for those who have done adequate training and acquired sufficient background knowledge. Apart from this inescapable corollary of aesthetic refinement and sophistication, though, there’s a fetishizing of difficulty for the sake of difficulty apparent in many art forms. In literature, novels celebrated by the supposed authorities, books like Ulysses, Finnegans Wake, and Infinite Jest, offer none of the joys of good stories. Is it any wonder so many readers have stopped listening to the authorities? Wood is not so foolish as to equate difficulty with quality, as fans of Finnegans Wake must, but he does indeed make the opposite mistake—assuming that lack of difficulty proves lack of quality. There’s also an unmistakable hint of the puritanical, even the masochistic, in Wood’s separation of the novel from storytelling and its pleasures. He’s like the hulking power lifter overcome with disappointment at all the dilettantish fitness enthusiasts parading around the gym, smiling, giggling, not even exerting themselves enough to feel any real pain.

            What the Harry Potter books are telling us is that there still exists a real hunger for stories, not just as flippant and senseless contrivances, but as rigorously imagined moral dilemmas faced by characters who inspire strong feelings, whether positive, negative, or ambivalent. YA fiction isn't necessarily simpler, its characters aren't invariably bland baddies or goodies, and its endings aren't always neat and happy. The only things that reliably distinguish it are its predominantly young adult characters and its general accessibility. It's probably true that The Goldfinch's appeal to many people derives from its being both literary and accessible. More interestingly, it probably turns off just as many people, not because it's overplotted, but because the story is mediocre, the central dilemma of the plot too easily resolved, the main character too passive and pathetic. Call me an idealist, but I believe that literary language can be challenging while not being impenetrable, that plots can be both eventful and meaningful, and that there’s a reliable blend of ingredients for mixing this particular magic potion: characters who actually do things, whose actions get them mixed up in high-stakes dilemmas, who are described in language that both captures their personalities and conveys the urgency of their circumstances. This doesn’t mean every novel needs to have dragons and werewolves, but it does mean having them doesn’t necessarily make a novel unworthy of serious attention from adults. And we need not worry about the fate of less fantastical literature because there will always be a small percentage of the population who prefers, at least on occasion, a heavier lift.

Also read: 

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

And: 

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And: 

WHAT'S THE POINT OF DIFFICULT READING?


How Violent Fiction Works: Rohan Wilson’s “The Roving Party” and James Wood’s Sanguinary Sublime from Conrad to McCarthy

James Wood criticized Cormac McCarthy’s “No Country for Old Men” for being too trapped by its own genre tropes. Wood has a strikingly keen eye for literary registers, but he’s missing something crucial in his analysis of McCarthy’s work. Rohan Wilson’s “The Roving Party” operates on some of the same principles, and it shows that the authors’ visions extend far beyond the pages of any book.

            Any acclaimed novel of violence must be cause for alarm to anyone who believes stories encourage the behaviors depicted in them or contaminate the minds of readers with the attitudes of the characters. “I always read the book as an allegory, as a disguised philosophical argument,” writes David Shields in his widely celebrated manifesto Reality Hunger. Suspicious of any such disguised effort at persuasion, Shields bemoans the continuing popularity of traditional novels and agitates on behalf of a revolutionary new form of writing, a type of collage that is neither marked as fiction nor claimed as truth but functions rather as a happy hybrid—or, depending on your tastes, a careless mess—and is in any case completely lacking in narrative structure. This is because, to him, giving narrative structure to a piece of writing is itself a rhetorical move. “I always try to read form as content, style as meaning,” Shields writes. “The book is always, in some sense, stutteringly, about its own language” (197).

As arcane as Shields’s approach to reading may sound, his attempt to find some underlying message in every novel resonates with the preoccupations popular among academic literary critics. But what would it mean if novels really were primarily concerned with their own language, as so many students in college literature courses are taught? What if there really were some higher-order meaning we absorbed unconsciously through reading, even as we went about distracting ourselves with the details of description, character, and plot? Might a novel like Heart of Darkness, instead of being about Marlow’s growing awareness of Kurtz’s descent into inhuman barbarity, really be about something that at first seems merely contextual and incidental, like the darkness—the evil—of sub-Saharan Africa and its inhabitants? Might there be a subtle prompt to regard Kurtz’s transformation as some breed of enlightenment, a fatal lesson encapsulated and propagated by Conrad’s fussy and beautifully tantalizing prose, as if the author were wielding the English language like the fastenings of a yoke over the entire continent?

Novels like Cormac McCarthy’s Blood Meridian and, more recently, Rohan Wilson’s The Roving Party, take place amid a transition from tribal societies to industrial civilization similar to the one occurring in Conrad’s Congo. Is it in this seeming backdrop that we should seek the true meaning of these tales of violence? Both McCarthy’s and Wilson’s novels, it must be noted, represent conspicuous efforts at undermining the sanitized and Manichean myths that arose to justify the displacement and mass killing of indigenous peoples by Europeans as they spread over the far-flung regions of the globe. The white men hunting “Indians” for the bounties on their scalps in Blood Meridian are as beastly and bloodthirsty as the savages peopling the most lurid colonial propaganda, just as the Europeans making up Wilson’s roving party are only distinguishable by the relative degrees of their moral degradation, all of them, including the protagonist, moving in the shadow of their principal quarry, a native Tasmanian chief.

If these novels are about their own language, their form comprising their true content, all in the service of some allegory or argument, then what pleasure would anyone get from them, suggesting as they do that to partake of the fruit of civilization is to become complicit in the original sin of the massacre that made way for it? “There is no document of civilization,” Walter Benjamin wrote, “that is not at the same time a document of barbarism.” It could be that to read these novels is to undergo a sort of rite of expiation, similar to the ritual reenactment of the crucifixion performed by Christians in the lead up to Easter. Alternatively, the real argument hidden in these stories may be still more insidious; what if they’re making the case that violence is both eternal and unavoidable, that it is in our nature to relish it, so there’s no more point in resisting the urge personally than in trying to bring about reform politically?

            Shields intimates that the reason we enjoy stories is that they warrant our complacency when he writes, “To ‘tell a story well’ is to make what one writes resemble the schemes people are used to—in other words, their ready-made idea of reality” (200). Just as we take pleasure in arguments for what we already believe, Shields maintains (explicitly) that we delight in stories that depict familiar scenes and resolve in ways compatible with our convictions. And this equating of the pleasure we take in reading with the pleasure we take in having our beliefs reaffirmed is another practice nearly universal among literary critics. Sophisticated readers know better than to conflate the ideas professed by villainous characters like the judge in Blood Meridian with those of the author, but, as one prominent critic complains,

there is often the disquieting sense that McCarthy’s fiction puts certain fond American myths under pressure merely to replace them with one vaster myth—eternal violence, or [Harold] Bloom’s “universal tragedy of blood.” McCarthy’s fiction seems to say, repeatedly, that this is how it has been and how it always will be.

What’s interesting about this interpretation is that it doesn’t come from anyone normally associated with Shields’s school of thought on literature. Indeed, its author, James Wood, is something of a scourge to postmodern scholars of Shields’s ilk.

Wood takes McCarthy to task for his alleged narrative dissemination of the myth of eternal violence in a 2005 New Yorker piece, “Red Planet: The Sanguinary Sublime of Cormac McCarthy,” a review of McCarthy’s then-latest novel No Country for Old Men. Wood too, it turns out, hungers for reality in his novels, and he faults McCarthy’s book for substituting the pabulum of standard plot devices for psychological profundity. He insists that

the book gestures not toward any recognizable reality but merely toward the narrative codes already established by pulp thrillers and action films. The story is itself cinematically familiar. It is 1980, and a young man, Llewelyn Moss, is out antelope hunting in the Texas desert. He stumbles upon several bodies, three trucks, and a case full of money. He takes the money. We know that he is now a marked man; indeed, a killer named Anton Chigurh—it is he who opens the book by strangling the deputy—is on his trail.

Because McCarthy relies on the tropes of a familiar genre to convey his meaning, Wood suggests, that meaning can only apply to the hermetic universe imagined by that genre. In other words, any meaning conveyed in No Country for Old Men is rendered null in transit to the real world.

When Chigurh tells the blameless Carla Jean that “the shape of your path was visible from the beginning,” most readers, tutored in the rhetoric of pulp, will write it off as so much genre guff. But there is a way in which Chigurh is right: the thriller form knew all along that this was her end.

The acuity of Wood’s perception when it comes to the intricacies of literary language is often staggering, and his grasp of how diction and vocabulary provide clues to the narrator’s character and state of mind is equally prodigious. But, in this dismissal of Chigurh as a mere plot contrivance, as in his estimation of No Country for Old Men in general as a “morally empty book,” Wood is quite simply, quite startlingly, mistaken. And we might even say that the critical form knew all along that he would make this mistake.

           When Chigurh tells Carla Jean her path was visible, he’s not voicing any hardboiled fatalism, as Wood assumes; he’s pointing out that her predicament came about as a result of a decision her husband Llewelyn Moss made with full knowledge of the promised consequences. And we have to ask, could Wood really have known, before Chigurh showed up at the Moss residence, that Carla Jean would be made to pay for her husband’s defiance? It’s easy enough to point out superficial similarities to genre conventions in the novel (many of which it turns inside-out), but it doesn’t follow that anyone who notices them will be able to foretell how the book will end. Wood, despite his reservations, admits that No Country for Old Men is “very gripping.” But how could it be if the end were so predictable? And, if it were truly so morally empty, why would Wood care how it was going to end enough to be gripped? Indeed, it is in the realm of the characters’ moral natures that Wood is the most blinded by his reliance on critical convention. He argues,

Llewelyn Moss, the hunted, ought not to resemble Anton Chigurh, the hunter, but the flattening effect of the plot makes them essentially indistinguishable. The reader, of course, sides with the hunted. But both have been made unfree by the fake determinism of the thriller.

How could the two men’s fates be determined by the genre if in a good many thrillers the good guy, the hunted, prevails?

One glaring omission in Wood’s analysis is that Moss initially escapes undetected with the drug money he discovers at the scene of the shootout he happens upon while hunting, but he is then tormented by his conscience until he decides to return to the trucks with a jug of water for a dying man who begged him for a drink. “I’m fixin to go do somethin dumbern hell but I’m goin anyways,” he says to Carla Jean when she asks what he’s doing. “If I don’t come back tell Mother I love her” (24). Llewelyn, throughout the ensuing chase, is thus being punished for doing the right thing, an injustice that unsettles readers to the point where we can’t look away—we’re gripped—until we find out whether he ultimately defeats the agents of that injustice. While Moss risks his life to give a man a drink, Chigurh, as Wood points out, is first seen killing a cop. Moreover, it’s hard to imagine Moss showing up to murder an innocent woman to make good on an ultimatum he’d presented to a man who had already been killed in the interim—as Chigurh does in the scene when he explains to Carla Jean that she’s to be killed because Llewelyn made the wrong choice.

Chigurh is in fact strangely principled, in a morally inverted sort of way, but the claim that he’s indistinguishable from Moss bespeaks a failure of attention completely at odds with the uncannily keen-eyed reading we’ve come to expect from Wood. When he revisits McCarthy’s writing in a review of the 2006 post-apocalyptic novel The Road, collected in the book The Fun Stuff, Wood is once again impressed by McCarthy’s “remarkable effects” but thoroughly baffled by “the matter of his meanings” (61). The novel takes us on a journey south to the sea with a father and his son as they scrounge desperately for food in abandoned houses along the way. Wood credits McCarthy for not substituting allegory for the answer to “a simpler question, more taxing to the imagination and far closer to the primary business of fiction making: what would this world without people look like, feel like?” But then he unaccountably struggles to sift out the novel’s hidden philosophical message. He writes,

A post-apocalyptic vision cannot but provoke dilemmas of theodicy, of the justice of fate; and a lament for the Deus absconditus is both implicit in McCarthy’s imagery—the fine simile of the sun that circles the earth “like a grieving mother with a lamp”—and explicit in his dialogue. Early in the book, the father looks at his son and thinks: “If he is not the word of God God never spoke.” There are thieves and murderers and even cannibals on the loose, and the father and son encounter these fearsome envoys of evil every so often. The son needs to think of himself as “one of the good guys,” and his father assures him that this is the side they are indeed on. (62)

We’re left wondering, is there any way to answer the question of what a post-apocalypse would be like in a story that features starving people reduced to cannibalism without providing fodder for genre-leery critics on the lookout for characters they can reduce to mere “envoys of evil”?

As trenchant as Wood is regarding literary narration, and as erudite—or pedantic, depending on your tastes—as he is regarding theology, the author of the excellent book How Fiction Works can’t help but fall afoul of his own, and his discipline’s, thoroughgoing ignorance when it comes to how plots work, what keeps the moral heart of a story beating. The way Wood fails to account for the forest composed of the trees he takes such thorough inventory of calls to mind a line of his own from a chapter in The Fun Stuff about Edmund Wilson, describing an uncharacteristic failure on the part of this other preeminent critic:

Yet the lack of attention to detail, in a writer whose greatness rests supremely on his use of detail, the unwillingness to talk of fiction as if narrative were a special kind of aesthetic experience and not a reducible proposition… is rather scandalous. (72)

To his credit, though, Wood never writes about novels as if they were completely reducible to their propositions; he doesn’t share David Shields’s conviction that stories are nothing but allegories or disguised philosophical arguments. Indeed, few critics are as eloquent as Wood on the capacity of good narration to communicate the texture of experience in a way all literate people can recognize from their own lived existences.

            But Wood isn’t interested in plots. He just doesn’t seem to like them. (There’s no mention of plot in either the table of contents or the index to How Fiction Works.) Worse, he shares Shields’s and other postmodern critics’ impulse to decode plots and their resolutions—though he also searches for ways to reconcile whatever moral he manages to pry from the story with its other elements. This is in fact one of the habits that tends to derail his reviews. Even after lauding The Road’s eschewal of easy allegory in favor of the hard work of ground-up realism, Wood can’t help trying to decipher the end of the novel in the context of the religious struggle he sees taking place in it. He writes of the son’s survival,

The boy is indeed a kind of last God, who is “carrying the fire” of belief (the father and son used to speak of themselves, in a kind of familial shorthand, as people who were carrying the fire: it seems to be a version of being “the good guys”.) Since the breath of God passes from man to man, and God cannot die, this boy represents what will survive of humanity, and also points to how life will be rebuilt. (64)

This interpretation underlies Wood’s contemptuous attitude toward other reviewers who found the story uplifting, including Oprah, who used The Road as one of her book club selections. To Wood, the message rings false. He complains that

a paragraph of religious consolation at the end of such a novel is striking, and it throws the book off balance a little, precisely because theology has not seemed exactly central to the book’s inquiry. One has a persistent, uneasy sense that theodicy and the absent God have been merely exploited by the book, engaged with only lightly, without much pressure of interrogation. (64)

Inquiry? Interrogation? Whatever happened to “special kind of aesthetic experience”? Wood first places seemingly inconsequential aspects of the novel at the center of his efforts to read meaning into it, and then he faults the novel for not exploring these aspects at greater length. The more likely conclusion we might draw here is that Wood is simply and woefully mistaken in his interpretation of the book’s meaning. Indeed, Wood’s jump to theology, despite his insistence on its inescapability, is really quite arbitrary, one of countless themes a reader might possibly point to as indicative of the novel’s one true meaning.

Perhaps the problem here is the assumption that a story must have a meaning, some point that can be summed up in a single statement, for it to grip us. Getting beyond the issue of what statement the story is trying to make, we can ask what it is about the aesthetic experience of reading a novel that we find so compelling. For Wood, it’s clear the enjoyment comes from a sort of communion with the narrator, a felt connection forged by language, which effects an estrangement from the reader’s own mundane experiences by passing them through the lens of the character’s idiosyncratic vocabulary, phrasings, and metaphors. The sun dimly burning through an overcast sky looks much different after you’ve heard it compared to “a grieving mother with a lamp.” This pleasure in authorial communion and narrative immersion is commonly felt by the more sophisticated of literary readers. But what about less sophisticated readers? Many people who have a hard enough time simply understanding complex sentences, never mind discovering in them clues to the speaker’s personality, nevertheless become absorbed in narratives.

Developmental psychologists Karen Wynn and Paul Bloom, along with then graduate student Kiley Hamlin, serendipitously discovered a major clue to the mystery of why fictional stories engage humans’ intellectual and emotional faculties so powerfully while trying to determine at what age children begin to develop a moral sense. In a series of experiments conducted at the Yale Infant Cognition Center, Wynn and her team found that babies under a year old, even as young as three months, are easily induced to attribute agency to inanimate objects with nothing but a pair of crude eyes to suggest personhood. And, astonishingly, once agency is presumed, these infants begin attending to the behavior of the agents for evidence of their propensities toward being either helpfully social or selfishly aggressive—even when they themselves aren’t the ones to whom the behaviors are directed. 

            In one of the team’s most dramatic demonstrations, the infants watch puppet shows featuring what Bloom, in his book about the research program Just Babies, refers to as “morality plays” (30). Two rabbits respond to a tiger’s overture of rolling a ball toward them in different ways, one by rolling it back playfully, the other by snatching it up and running away with it. When the babies are offered a choice between the two rabbits after the play, they nearly always reach for the “good guy.” However, other versions of the experiment show that babies do favor aggressive rabbits over nice ones—provided that the victim is itself guilty of some previously witnessed act of selfishness or aggression. So the infants prefer cooperation over selfishness and punishment over complacency.

            Wynn and Hamlin didn’t intend to explore the nature of our fascination with fiction, but even the most casual assessment of our most popular stories suggests their appeal to audiences depends on a distinction similar to the one made by the infants in these studies. Indeed, the most basic formula for storytelling could be stated: good guy struggles against bad guy. Our interest is automatically piqued once such a struggle is convincingly presented, and it doesn’t depend on any proposition that can be gleaned from the outcome.

We favor the good guy because his (or her) altruism triggers an emotional response—we like him. And our interest in the ongoing developments of the story—the plot—is both emotional and dynamic. This is what the aesthetic experience of narrative consists of.

            The beauty in stories comes from the elevation we get from the experience of witnessing altruism, and the higher the cost to the altruist, the more elevating the story. The symmetry of plots is the balance of justice. Stories meant to disturb readers disrupt that balance. The crudest stories pit good guys against bad guys. The more sophisticated stories feature what we hope are good characters struggling against temptations or circumstances that make being good difficult, or downright dangerous. In other words, at the heart of any story is a moral dilemma, a situation in which characters must decide who deserves what fate and what they’re willing to pay to ensure they get it. The specific details of that dilemma are what we recognize as the plot.

The most basic moral, lesson, proposition, or philosophical argument inherent in the experience of attending to a story derives then not from some arbitrary decision on the part of the storyteller but from an injunction encoded in our genome. At some point in human evolution, our ancestors’ survival began to depend on mutual cooperation among all the members of the tribe, and so to this day, and from a startlingly young age, we’re on the lookout for anyone who might be given to exploiting the cooperative norms of our group. Literary critics could charge that the appeal of the altruist is merely another theme we might at this particular moment in history want to elevate to the status of most fundamental aspect of narrative. But I would challenge anyone who believes some other theme, message, or dynamic is more crucial to our engagement with stories to subject their theory to the kind of tests the interplay of selfish and altruistic impulses routinely passes in the Yale studies. Do babies care about theodicy? Are Wynn et al.’s morality plays about their own language?

This isn’t to say that other themes or allegories never play a role in our appreciation of novels. But whatever role they do play is in every case ancillary to the emotional involvement we have with the moral dilemmas of the plot. 1984 and Animal Farm are clear examples of allegories—but their greatness as stories is attributable to the appeal of their characters and the convincing difficulty of their dilemmas. Without a good plot, no one would stick around for the lesson. If we didn’t first believe Winston Smith deserved to escape Room 101 and that Boxer deserved a better fate than the knackery, we’d never subsequently be moved to contemplate the evils of totalitarianism. What makes these such powerful allegories is that, if you subtracted the political message, they’d still be great stories because they engage our moral emotions.

            What makes violence so compelling in fiction then is probably not that it sublimates our own violent urges, or that it justifies any civilization’s past crimes; violence simply ups the stakes for the moral dilemmas faced by the characters. The moment-by-moment drama in The Road, for instance, has nothing to do with whether anyone continues to believe in God. The drama comes from the father and son’s struggles to resist succumbing to theft and cannibalism in order to survive. That’s the most obvious theme recurring throughout the novel. And you get the sense that were it not for the boy’s constant pleas for reassurance that they would never kill and eat anyone—the ultimate act of selfish aggression—and that they would never resort to bullying and stealing, the father quite likely would have made use of such expedients. The fire that they’re carrying is not the light of God; it’s the spark of humanity, the refusal to forfeit their human decency. (Wood doesn't catch that the fire was handed off from Sheriff Bell's father at the end of No Country.) The boy may very well be a redeemer, in that he helps his father make it to the end of his life with a clear conscience, but unless you believe morality is exclusively the bailiwick of religion, God’s role in the story is marginal at best.

            What the critics given to dismissing plots as pointless fabrications fail to consider is that, just as idiosyncratic language and simile estrange readers from their mundane existence, so too can the high-stakes dilemmas that make up plots make us see our own choices in a different light, effecting their own breed of estrangement with regard to our moral perceptions and habits. In The Roving Party, set in the early nineteenth century, Black Bill, a native Tasmanian raised by a white family, joins a group of men led by a farmer named John Batman to hunt and kill other native Tasmanians and secure the territory for the colonials. The dilemmas Bill faces are like nothing most readers will ever encounter. But their difficulty is nonetheless universally understandable. In the following scene, Bill, who is also called the Vandemonian, along with a young boy and two native scouts, watches as Batman steps up to a wounded clansman in the aftermath of a raid on his people.

Batman considered the silent man secreted there in the hollow and thumbed back the hammers. He put one foot either side of the clansman’s outstretched legs and showed him the long void of those bores, standing thus prepared through a few creakings of the trees. The warrior was wide-eyed, looking to Bill and to the Dharugs.

The eruption raised the birds squealing from the branches. As the gunsmoke cleared the fellow slumped forward and spilled upon the soil a stream of arterial blood. The hollow behind was peppered with pieces of skull and other matter. John Batman snapped open the locks, cleaned out the pans with his cloth and mopped the blood off the barrels. He looked around at the rovers.

The boy was openmouthed, pale, and he stared at the ruination laid out there at his feet and stepped back as the blood ran near his rags. The Dharugs had by now turned away and did not look back. They began retracing their track through the rainforest, picking among the fallen trunks. But Black Bill alone among that party met Batman’s eye. He resettled his fowling piece across his back and spat on the ferns, watching Batman. Batman pulled out his rum, popped loose the cork, and drank. He held out the vessel to Bill. The Vandemonian looked at him. Then he turned to follow the Parramatta men out among the lemon myrtles and antique pines. (92)

If Rohan Wilson had wanted to expound on the evils of colonialism in Tasmania, he might have written about how Batman, a real figure from history, murdered several men he could easily have taken prisoner. But Wilson wanted to tell a story, and he knew that dilemmas like this one would grip our emotions. He likewise knew he didn’t have to explain that Bill, however much he disapproves of the murder, can’t afford to challenge his white benefactor in any less subtle manner than meeting his eyes and refusing his rum.

            Unfortunately, Batman registers the subtle rebuke all too readily. Instead of himself killing a native lawman wounded in a later raid, Batman leaves the task to Bill, who this time isn’t allowed the option of silently disapproving. But the way Wilson describes Bill’s actions leaves no doubt in the reader’s mind about his feelings, and those feelings have important consequences for how we feel about the character.

Black Bill removed his hat. He worked back the heavy cocks of both barrels and they settled with a dull clunk. Taralta clutched at his swaddled chest and looked Bill in the eyes, as wordless as ground stone. Bill brought up the massive gun and steadied the barrels across his forearm as his broken fingers could not take the weight. The sight of those octagonal bores levelled on him caused the lawman to huddle down behind his hands and cry out, and Bill steadied the gun but there was no clear shot he might take. He waited.

                        See now, he said. Move your hands.

            The lawman crabbed away over the dirt, still with his arms upraised, and Bill followed him and kicked him in the bandaged ribs and kicked at his arms.

                        menenger, Bill said, menenger.

The lawman curled up more tightly. Bill brought the heel of his boot to bear on the wounded man but he kicked in vain while Taralta folded his arms ever tighter around his head.

Black Bill lowered the gun. Wattlebirds made their yac-a-yac coughs in the bush behind and he gazed at the blue hills to the south and the snow clouds forming above them. When Bill looked again at the lawman he was watching through his hands, dirt and ash stuck in the cords of his ochred hair. Bill brought the gun up, balanced it across his arm again and tucked the butt into his shoulder. Then he fired into the lawman’s head.

The almighty concussion rattled the wind in his chest and the gun bucked from his grip and fell. He turned away, holding his shoulder. Blood had spattered his face, his arms, the front of his shirt. For a time he would not look at the body of the lawman where it lay near the fire. He rubbed at the bruising on his shoulder; watched storms amass around the southern peaks. After a while he turned to survey the slaughter he had wrought.

One of the lawman’s arms was gone at the elbow and the teeth seated in the jawbone could be seen through the cheek. There was flesh blown every place. He picked up the Manton gun. The locks were soiled and he fingered out the grime, and then with the corner of his coat cleaned the pan and blew into the latchworks. He brought the weapon up to eye level and peered along its sights for barrel warps or any misalignment then, content, slung the leather on his shoulder. Without a rearward glance he stalked off, his hat replaced, his boots slipping in the blood. Smoke from the fire blew around him in a snarl raised on the wind and dispersed again on the same. (102-4)

Depending on their particular ideological bent, critics may charge that a scene like this simply promotes the type of violence it depicts, or that it encourages a negative view of native Tasmanians—or indigenous peoples generally—as of such weak moral fiber that they can be made to turn against their own countrymen. And pointing out that the aspect of the scene that captures our attention is the process, the experience, of witnessing Bill’s struggle to resolve his dilemma would do little to ease their worries; after all, even if the message is ancillary, its influence could still be pernicious.

            The reason that critics applying their favored political theories to their analyses of fiction so often stray into the realm of the absurd is that the only readers who experience stories the same way as they do will be the ones who share the same ideological preoccupations. You can turn any novel into a Rorschach, pulling out disparate shapes and elements to blur into some devious message. But any reader approaching the writing without your political theories or your critical approach will likely come away with a much more basic and obvious lesson. Black Bill’s dilemma is that he has to kill many of his fellow Tasmanians if he wants to continue living as part of a community of whites. If readers take on his attitude toward killing as it’s demonstrated in the scene when he kills Taralta, they’ll be more reluctant to do it, not less. Bill clearly loathes what he’s forced to do. And if any race comes out looking bad it’s the whites, since they’re the ones whose culture forces Bill to choose between his family’s well-being and the dictates of his conscience.

Readers likely have little awareness of being influenced by the overarching themes in their favorite stories, but upon reflection the meaning of those themes is usually pretty obvious. Recent research into how reading the Harry Potter books has impacted young people’s political views, for instance, shows that fans of the series are more accepting of out-groups, more tolerant, less predisposed to authoritarianism, more supportive of equality, and more opposed to violence and torture. Anthony Gierzynski, the author of the study, points out, “As Harry Potter fans will have noted, these are major themes repeated throughout the series.” The messages that reach readers are the conspicuous ones, not the supposedly hidden ones critics pride themselves on being able to suss out.

            It’s an interesting question just how wicked stories could persuade us to be, relying as they do on our instinctual moral sense. Fans could perhaps be biased toward evil by themes about the threat posed by some out-group, or the debased nature of the lower orders, or nonbelievers in the accepted deities—since the salience of these concepts likewise seems to be inborn. But stories told from the perspective of someone belonging to the persecuted group could provide an antidote. At any rate, there’s a solid case to be made that novels have helped the moral arc of history bend toward greater justice and compassion.

            Even a novel with violence as pervasive and chaotic as it is in Blood Meridian sets up a moral gradient for the characters to occupy—though finding where the judge fits is a quite complicated endeavor—and the one with the most qualms about killing happens to be the protagonist, referred to simply as the kid. “You alone were mutinous,” the judge says to him. “You alone reserved in your soul some corner of clemency for the heathen” (299). The kid’s character is revealed much the way Black Bill’s is in The Roving Party, as readers witness him working through high-stakes dilemmas. After drawing arrows to determine who in the band of scalp hunters will stay behind to kill some of their wounded (to prevent a worse fate at the hands of the men pursuing them), the kid finds himself tasked with euthanizing a man who would otherwise survive.

                        You wont thank me if I let you off, he said.

                        Do it then you son of a bitch.

            The kid sat. A light wind was blowing out of the north and some doves had begun to call in the thicket of greasewood behind them.

                        If you want me just to leave you I will.

                        Shelby didnt answer.

                        He pushed a furrow in the sand with the heel of his boot. You’ll have to say.

                        Will you leave me a gun?

                        You know I cant leave you no gun.

                        You’re no better than him. Are you?

                        The kid didnt answer. (208)

That “him” is ambiguous; it could either be Glanton, the leader of the gang whose orders the kid is ignoring, or the judge, who engages him throughout the later parts of the novel in a debate about the necessity of violence in history. We know by now that the kid really is better than the judge—at least in the sense that Shelby means. And the kid handles the dilemma, as best he can, by hiding Shelby in some bushes and leaving him with a canteen of water.

These three passages from The Roving Party and Blood Meridian also reveal something about the language commonly used by authors of violent novels going back to Conrad (perhaps as far back as Tolstoy). Faced with the choice of killing a man—or of standing idly by and allowing him to be killed—the characters hesitate, and the space of their hesitation is filled with details like the type of birdsong that can be heard. This style of “dirty realism,” a turning away from abstraction, away even from thought, to focus intensely on physical objects and the natural world, frustrates critics like James Wood because they prefer their prose to register the characters’ meanderings of mind in the way that only written language can. Writing about No Country for Old Men, Wood complains about all the labeling and descriptions of weapons and vehicles to the exclusion of thought and emotion.

Here is Hemingway’s influence, so popular in male American fiction, of both the pulpy and the highbrow kind. It recalls the language of “A Farewell to Arms”: “He looked very dead. It was raining. I had liked him as well as anyone I ever knew.” What appears to be thought is in fact suppressed thought, the mere ratification of male taciturnity. The attempt to stifle sentimentality—“He looked very dead”—itself comes to seem a sentimental mannerism. McCarthy has never been much interested in consciousness and once declared that as far as he was concerned Henry James wasn’t literature. Alas, his new book, with its gleaming equipment of death, its mindless men and absent (but appropriately sentimentalized) women, its rigid, impacted prose, and its meaningless story, is perhaps the logical result of a literary hostility to Mind.

Here again Wood is relaxing his otherwise razor-keen capacity for gleaning insights from language and relying instead on the anemic conventions of literary criticism—a discipline obsessed with the enactment of gender roles. (I’m sure Suzanne Collins would be amused by this idea of masculine taciturnity.) But Wood is right to recognize the natural tension between a literature of action and a literature of mind. Imagine how much the impact of Black Bill’s struggle with the necessity of killing Taralta would be blunted if we were privy to his thoughts, all of which are implicit in the scene as Wilson has rendered it anyway.

            Fascinatingly, though, it seems that Wood eventually realized the actual purpose of this kind of evasive prose—and it was Cormac McCarthy he learned it from. As much as Wood lusts after some leap into theological lucubration as characters reflect on the lessons of the post-apocalypse or the meanings of violence, the psychological reality is that it is often in the midst of violence or when confronted with imminent death that people are least given to introspection. As Wood explains in writing about the prose style of The Road,

McCarthy writes at one point that the father could not tell the son about life before the apocalypse: “He could not construct for the child’s pleasure the world he’d lost without constructing the loss as well and thought perhaps the child had known this better than he.” It is the same for the book’s prose style: just as the father cannot construct a story for the boy without also constructing the loss, so the novelist cannot construct the loss without the ghost of the departed fullness, the world as it once was. (55)

The rituals of weapon reloading, car repair, and wound wrapping that Wood finds so off-puttingly affected in No Country for Old Men are precisely the kind of practicalities people would try to engage their minds with in the aftermath of violence to avoid facing the reality. But this linguistic and attentional coping strategy is not without moral implications of its own.

            In the opening of The Roving Party, Black Bill receives a visit from some of the very clansmen he’s been asked by John Batman to hunt. The headman of the group is a formidable warrior named Manalargena (another real historical figure), who is said to have magical powers. He has come to recruit Bill to help in fighting against the whites, unaware of Bill’s already settled loyalties. When Bill refuses to come fight with Manalargena, the headman’s response is to tell a story about two brothers who live near a river where they catch plenty of crayfish, make fires, and sing songs. Then someone new arrives on the scene:

Hunter come to the river. He is hungry hunter you see. He want crayfish. He see them brother eating crayfish, singing song. He want crayfish too. He bring up spear. Here the headman made as if to raise something. He bring up that spear and he call out: I hungry, you give me that crayfish. He hold that spear and he call out. But them brother they scared you see. They scared and they run. They run and they change. They change to wallaby and they jump. Now they jump and jump and the hunter he follow them.

So hunter he change too. He run and he change to that wallaby and he jump. Now three wallaby jump near river. They eat grass. They forget crayfish. They eat grass and they drink water and they forget crayfish. Three wallaby near the river. Very big river. (7-8)

Bill initially dismisses the story, saying it makes no sense. Indeed, as a story, it’s terrible. The characters have no substance and the transformation seems morally irrelevant. The story is pure allegory. Interestingly, though, by the end of the novel, its meaning is perfectly clear to Bill. Taking on the roles of hunter and hunted leaves no room for songs, no place for what began the hunt in the first place, creating a life closer to that of animals than of humans. There are no more fires.

            Wood counts three registers authors like Conrad and McCarthy—and we can add Wilson—use in their writing. The first is the dirty realism that conveys the characters’ unwillingness to reflect on their circumstances or on the state of their souls. The third is the lofty but oblique discourse on God’s presence or absence in a world of tragedy and carnage Wood finds so ineffectual. For most readers, though, it’s the second register that stands out. Here’s how Wood describes it:

Hard detail and a fine eye is combined with exquisite, gnarled, slightly antique (and even slightly clumsy or heavy) lyricism. It ought not to work, and sometimes it does not. But many of its effects are beautiful—and not only beautiful, but powerfully efficient as poetry. (59)

This description captures what’s both great and frustrating about the best and worst lines in these authors’ novels. But Wood takes the tradition for granted without asking why this haltingly graceful and heavy-handedly subtle language is so well-suited to these violent stories. The writers are compelled to use this kind of language by the very effects of the plot and setting that critics today so often fail to appreciate—though Wood does gesture toward it in the title of his essay on No Country for Old Men. The dream logic of song and simile that goes into the aesthetic experience of bearing witness to the characters sparsely peopling the starkly barren and darkly ominous landscapes of these novels carries within it the breath of the sublime.

            In coming to care about characters whose fates unfold in the aftermath of civilization, or in regions where civilization has yet to take hold, places where bloody aggression and violent death are daily concerns and witnessed realities, we’re forced to adjust emotionally to the worlds they inhabit. Experiencing a single death brings a sense of tragedy, but coming to grips with a thousand deaths has a more curious effect. And it is this effect that the strange tangles of metaphorical prose both gesture toward and help to induce. The sheer immensity of the loss, the casual brushing away of so many bodies and the blotting out of so much unique consciousness, overstresses the capacity of any individual to comprehend it. The result is paradoxical, a fixation on the material objects still remaining, and a sliding off of one’s mind onto a plane of mental existence where the material has scarcely any reality at all because it has scarcely any significance at all. The move toward the sublime is a lifting up toward infinite abstraction, the most distant perspective ever possible on the universe, where every image is a symbol for some essence, where every embrace is a symbol for human connectedness, where every individual human is a symbol for humanity. This isn’t the abstraction of logic, the working out of implications about God or cosmic origins. It’s the abstraction of the dream or the religious experience, an encounter with the sacred and the eternal, a falling and fading away of the world of the material and the particular and the mundane.

            The prevailing assumption among critics and readers alike is that fiction, especially literary fiction, attempts to represent some facet of life, so the nature of a given representation can be interpreted as a comment on whatever is being represented. But what if the representations, the correspondences between the fictional world and the nonfictional one, merely serve to make the story more convincing, more worthy of our precious attention? What if fiction isn’t meant to represent reality so much as to alter our perceptions of it? Critics can fault plots like the one in No Country for Old Men, and characters like Anton Chigurh, for having no counterparts outside the world of the story, mooting any comment about the real world the book may be trying to make. But what if the purpose of drawing readers into fictional worlds is to help them see their own worlds anew by giving them a taste of what it would be like to live a much different existence? Even the novels that hew more closely to the mundane, the unremarkable passage of time, are condensed versions of the characters’ lives, encouraging readers to take a broader perspective on their own. The criteria we should apply to our assessments of novels then would not be how well they represent reality and how accurate or laudable their commentaries are. We should instead judge novels by how effectively they pull us into the worlds they create for themselves and how differently we look at our own world in the wake of the experience. And since high-stakes moral dilemmas are the heart of stories we might wonder what effect the experience of witnessing them will have on our own lower-stakes lives.

Also read:

HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity

Life's White Machine: James Wood and What Doesn't Happen in Fiction

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME


Gone Girl and the Relationship Game: The Charms of Gillian Flynn's Amazingly Seductive Anti-Heroine

Gillian Flynn’s “Gone Girl” is a brilliantly witty and trenchant exploration of how identities shift and transform, both within relationships and as part of an ongoing struggle for each of us to master the plot of our own narratives.

The simple phrase “that guy,” as in the delightfully manipulative call for a man to check himself, “You don’t want to be that guy,” underscores something remarkable about what’s billed as our modern age of self-invention. No matter how hard we try, we’re nearly always on the verge of falling into some recognizable category of people—always in peril of becoming a cliché. Even in a mature society characterized by competitive originality, the corralling of what could be biographical chaos into a finite assemblage of themes, if not entire stories, seems as inescapable as ever. Which isn’t to say that true originality is nowhere to be found—outside of fiction anyway—but that resisting the pull of convention, or even (God forbid) tradition, demands sustained effort. And like any other endeavor requiring disciplined exertion, you need a ready store of motivation to draw on if you’re determined not to be that guy, or that couple, or one of those women.

            When Nick Dunne, a former writer of pop culture reviews for a men’s magazine in Gillian Flynn’s slickly conceived, subtly character-driven, and cleverly satirical novel Gone Girl, looks, over the bar he’s been reduced to tending, at the young beauty who’s taken a shine to him in his capacity as part-time journalism professor, the familiarity of his dilemma—the familiarity even of the jokes about his dilemma—makes for a type of bitingly bittersweet comedy that runs throughout the first half of the story. “Now surprise me with a drink,” Andie, his enamored student, enjoins him.

She leaned forward so her cleavage was leveraged against the bar, her breasts pushed upward. She wore a pendant on a thin gold chain; the pendant slid between her breasts down under her sweater. Don’t be that guy, I thought. The guy who pants over where the pendant ends. (261)

Nick, every reader knows, is not thinking about Andie’s heart. For many a man in this situation, all the condemnation infused into those two words abruptly transforms into an awareness of his own cheap sanctimony as he realizes that about the only thing separating any given guy from becoming that guy is the wrong set of circumstances. As Nick recounts,

You ask yourself, Why? I’d been faithful to Amy always. I was the guy who left the bar early if a woman was getting too flirty, if her touch was feeling too nice. I was not a cheater. I don’t (didn’t?) like cheaters: dishonest, disrespectful, petty, spoiled. I had never succumbed. But that was back when I was happy. I hate to think the answer is that easy, but I had been happy all my life, and now I was not, and Andie was there, lingering after class, asking me questions about myself that Amy never had, not lately. Making me feel like a worthwhile man, not the idiot who lost his job, the dope who forgot to put the toilet seat down, the blunderer who just could never quite get it right, whatever it was. (257)

More than most of us are comfortable acknowledging, the roles we play, or rather the ones we fall into, in our relationships are decided on collaboratively—if there’s ever any real deciding involved at all. Nick turns into that guy because the role, the identity, imposed on him by his wife Amy makes him miserable, while the one he gets to assume with Andie is a more complementary fit with what he feels is his true nature. As he muses while relishing the way Andie makes him feel, “Love makes you want to be a better man—right, right. But maybe love, real love, also gives you permission to just be the man you are” (264).

            Underlying and eventually overtaking the mystery plot set in motion by Amy’s disappearance in the opening pages of Gone Girl is an entertainingly overblown dramatization of the struggle nearly all modern couples go through as they negotiate the contours of their respective roles in the relationship. The questions of what we’ll allow and what we’ll demand of our partners seem ever-so-easy to answer while we’re still unattached; what we altogether fail to anticipate are the questions about who our partners will become and how much of ourselves they’ll allow us the space to actualize. Nick fell in love with Amy because of how much he enjoyed being around her, but, in keeping with so many clichés about married life, that would soon change. Early in the novel, it’s difficult to discern just how hyperbolic Nick is being when he describes the metamorphosis. “The Amy of today,” he tells us, “was abrasive enough to want to hurt, sometimes.” He goes on to explain,

I speak specifically of the Amy of today, who was only remotely like the woman I fell in love with. It had been an awful fairytale reverse transformation. Over just a few years, the old Amy, the girl of the big laugh and the easy ways, literally shed herself, a pile of skin and soul on the floor, and out stepped this brittle, bitter Amy. My wife was no longer my wife but a razor-wire knot daring me to unloop her, and I was not up to the job with my thick, numb, nervous fingers. Country fingers. Flyover fingers untrained in the intricate, dangerous work of solving Amy. When I’d hold up the bloody stumps, she’d sigh and turn to her secret mental notebook on which she tallied all my deficiencies, forever noting disappointments, frailties, shortcomings. My old Amy, damn, she was fun. She was funny. She made me laugh. I’d forgotten that. And she laughed. From the bottom of her throat, from right behind that small finger-shaped hollow, which is the best place to laugh from. She released her grievances like handfuls of birdseed: They are there, and they are gone. (89)

In this telling, Amy has gone from being lighthearted and fun to suffocatingly critical and humorless in the span of a few years. This sounds suspiciously like something that guy, the one who cheats on his wife, would say, rationalizing his betrayal by implying she was the one who unilaterally revised the terms of their arrangement. Still, many married men, whether they’ve personally strayed or not, probably find Nick’s description of his plight uncomfortably familiar.

            Amy’s own first-person account of their time together serves as a counterpoint to Nick’s narrative about her disappearance throughout the first half of the novel, and even as you’re on the lookout for clues as to whose descriptions of the other are the more reliable, you get the sense that, alongside whatever explicit lies or omissions are indicated by the contradictions and gaps in their respective versions, a pitched battle is taking place between them for the privilege of defining not just each other but themselves as well. At one point, Nick admits that Amy hadn’t always been as easy to be with as he suggested earlier; in fact, he’d fallen in love with her precisely because in trying to keep up with her exacting and boundlessly energetic mind he felt that he became “the ultimate Nick.” He says,

Amy made me believe I was exceptional, that I was up to her level of play. That was both our making and our undoing. Because I couldn’t handle the demands of greatness. I began craving ease and averageness, and I hated myself for it. I turned her into the brittle, prickly thing she became. I had pretended to be one kind of man and revealed myself to be quite another. Worse, I convinced myself our tragedy was entirely her making. I spent years working myself into the very thing I swore she was: a righteous ball of hate. (371-2)

At this point in the novel, Amy’s version of their story seems the more plausible by far. Nick is confessing to mischaracterizing her change, taking some responsibility for it. Now, he seems to accept that he was the one who didn’t live up to the terms of their original agreement.

            But Nick’s understanding of the narrative at this point isn’t as settled as his mea culpa implies. What becomes clear from all the mulling and vacillating is that arriving at a definitive account of who provoked whom, who changed first, or who is ultimately to blame is all but impossible. Was Amy increasingly difficult to please? Or did Nick run out of steam? Just a few pages prior to admitting he’d been the first to undergo a deal-breaking transformation, Nick was expressing his disgust at what his futile efforts to make Amy happy had reduced him to:

For two years I tried as my old wife slipped away, and I tried so hard—no anger, no arguments, the constant kowtowing, the capitulation, the sitcom-husband version of me: Yes, dear. Of course, sweetheart. The fucking energy leached from my body as my frantic-rabbit thoughts tried to figure out how to make her happy, and each action, each attempt, was met with a rolled eye or a sad little sigh. A you just don’t get it sigh.

            By the time we left for Missouri, I was just pissed. I was ashamed of the memory of me—the scuttling, scraping, hunchbacked toadie of a man I’d turned into. So I wasn’t romantic; I wasn’t even nice. (366)

The couple’s move from New York to Nick’s hometown in Missouri to help his sister care for their ailing mother is another point of contention between them. Nick had been laid off from the magazine he’d worked for. Amy had lost her job writing personality quizzes for a women’s magazine, and she’d also lost most of the trust fund her parents had set up for her when those same parents came asking for a loan to help bail them out after some of their investments went bad. So the conditions of the job market and the economy toward the end of the aughts were cranking up the pressure on both Nick and Amy as they strove to maintain some sense of themselves as exceptional human beings. But this added burden fell on each of them equally, so it doesn’t give us much help figuring out who was the first to succumb to bitterness.

Nonetheless, throughout the first half of Gone Girl Flynn works to gradually intensify our suspicion of Nick, making us wonder if he could possibly have flown into a blind rage and killed his wife before somehow dissociating himself from the memory of the crime. He gives the impression, for instance, that he’s trying to keep his affair with Andie secret from us, his readers and confessors, until he’s forced to come clean. Amy at one point reveals she was frightened enough of him to attempt to buy a gun. Nick also appears to have made a bunch of high-cost purchases with credit cards he denies ever having signed up for. Amy even bought extra life insurance a month or so before her disappearance—perhaps in response to some mysterious prompting from her husband. And there’s something weird about Nick’s relationship with his misogynist father, whose death from Alzheimer’s he’s eagerly awaiting. The second half of Gone Girl could have gone on to explore Nick’s psychosis and chronicle his efforts at escaping detection and capture. In that case, Flynn herself would have been surrendering to the pull of convention. But where she ends up going with her novel makes for a story that’s much more original—and, perhaps paradoxically, much more reflective of our modern societal values.

In one sense, the verdict about which character is truly to blame for the breakdown of the marriage is arbitrary. Marriages come apart at the seams because one or both partners no longer feel like they can be themselves all the time—affixing blame is little more than a postmortem exercise in recriminatory self-exculpation. If Gone Girl had been written as a literary instead of a genre work, it probably would have focused on the difficulty and ultimate pointlessness of figuring out whose subjective experiences were closest to reality, since our subjectivity is all we have to go on and our messy lives simply don’t lend themselves to clean narratives. But Flynn instead furnishes her thriller with an unmistakable and fantastically impressive villain, one whose aims and impulses so perfectly caricature the nastiest of our own that we can’t help elevating her to the ranks of our most beloved anti-heroes like Walter White and Frank Underwood (making me a little disappointed that David Fincher is making a movie out of the book instead of a cable or Netflix series).

Amy, we learn early in the novel, is not simply Amy but rather Amazing Amy, the inspiration for a series of wildly popular children’s books written by her parents, both of whom are child psychologists. The books are in fact the source of the money in the trust fund they had set up for her, and it is the lackluster sales of recent editions in the series, along with the spendthrift lifestyle they’d grown accustomed to, that drive them to ask for most of that money back. In the early chapters, Amy writes about how Amazing Amy often serves as a subtle rebuke from her parents, making good decisions in place of her bad ones, doing perfectly what she does ineptly. But later on we find out that the real Amy has nothing but contempt for her parents’ notions of what might constitute the ideal daughter. Far from worrying that she may not be living up to the standard set by her fictional counterpart, Amy feels the title of Amazing is part of her natural due. Indeed, when she first discovers Nick is cheating with Andie—a discovery she makes even before they sleep together the first time—what makes her the angriest about it is how mediocre it makes her seem. “I had a new persona,” she says, “not of my choosing. I was Average Dumb Woman Married to Average Shitty Man. He had single-handedly de-amazed Amazing Amy” (401).

Normally, Amy does choose which persona she wants to take on; that is in fact the crucial power that makes her worthy of her superhero sobriquet. Most of us no longer want to read stories about women who become the tragic victims of their complicatedly sympathetic but monstrously damaged husbands. With Amy, Flynn turns that convention inside-out. While real life seldom offers even remotely satisfying resolutions to the rival PR campaigns at the heart of so many dissolving marriages, Amy confesses—or rather boasts—to us, her readers and fans, that she was the one who had been acting like someone else at the beginning of the relationship.

Nick loved me. A six-O kind of love: He looooooved me. But he didn’t love me, me. Nick loved a girl who doesn’t exist. I was pretending, the way I often did, pretending to have a personality. I can’t help it, it’s what I’ve always done: The way some women change fashion regularly, I change personalities. What persona feels good, what’s coveted, what’s au courant? I think most people do this, they just don’t admit it, or else they settle on one persona because they’re too lazy or stupid to pull off a switch. (382)

This ability of Amy’s to take on whatever role she deems suitable for her at the moment is complemented by her adeptness at using people’s—her victims’—stereotype-based expectations of women against them. Taken together with her capacity for harboring long-term grudges in response to even the most seemingly insignificant of slights, these powers make Amazing Amy a heroic paragon of postmodern feminism. The catch is that to pull off her grand deception of Nick, the police, and the nattering public, she has to be a complete psychopath.

Amy assumes the first of the personas we encounter, Diary Amy, through writing, and the story she tells, equal parts fiction and lying, takes us in so effectively because it reprises some of our culture’s most common themes. Beyond the diary, the overarching story of Gone Girl so perfectly subverts the conventional abuse narrative that it’s hard not to admire Amy for refusing to be a character in it. Even after she’s confessed to framing Nick for her murder, the culmination of a plan so deviously vindictive her insanity is beyond question, it’s hard not to root for her when she pits herself against Desi, a former classmate she dated while attending a posh prep school called Wickshire Academy. Desi serves as the ideal anti-villain for our amazing anti-heroine. Amy writes,

It’s good to have at least one man you can use for anything. Desi is a white-knight type. He loves troubled women. Over the years, after Wickshire, when we’d talk, I’d ask after his latest girlfriend, and no matter the girl, he would always say: “Oh, she’s not doing very well, unfortunately.” But I know it is fortunate for Desi—the eating disorders, the painkiller addictions, the crippling depressions. He’s never happier than when he’s at a bedside. Not in bed, just perched nearby with broth and juice and a gently starched voice. Poor darling. (551-2)

Though an eagerness to save troubled women may not seem so damning at first blush, we soon learn that Desi’s ministrations are motivated more by the expectation of gratitude and the relinquishing of control than by any genuine proclivity toward altruism. After Amy shows up claiming she’s hiding from Nick, she quickly becomes a prisoner in Desi’s house. And the way he treats her, so solicitous, so patronizing, so creepy—you almost can’t wait for Amy to decide he’s outlived his usefulness.

The struggle between Nick and Amy takes place against the backdrop of a society obsessed with celebrity and scandal. One of the things Nick is forced to learn—not to compete with Amy, which he’d never be able to do, but merely to survive—is how to make himself sympathetic to people whose only contact with him is through the media. What Flynn conveys in depicting his efforts is the absurdity of trial by media. A daytime talk show host named Ellen Abbott—an obvious sendup of Nancy Grace—stirs her audience into a rage against Nick because he makes the mistake of forcing a smile in front of a camera. One of the effects of our media obsession is that we ceaselessly compare ourselves with characters in movies and TV. Nick finds himself at multiple points enacting scenes from cop shows, trying to act the familiar role of the innocent suspect. But he knows all the while that the story everyone is most familiar with, whether from fictional shows or those billed as nonfiction, is of the husband who tries to get away with murdering his wife.

  Being awash in stories featuring celebrities and, increasingly, real people who are supposed to be just like us wouldn’t have such a virulent impact if we weren’t driven to compete with all of them. But, whenever someone says you don’t want to be that guy, what they’re really saying is that you should be better than that guy. Most of the toggling between identities Amy does is for the purpose of outshining any would-be rivals. In one oddly revealing passage, she even claims that focusing on outcompeting other couples made her happier than trying to win her individual struggle against Nick:

I thought we would be the most perfect union: the happiest couple around. Not that love is a competition. But I don’t understand the point of being together if you’re not the happiest.

            I was probably happier for those few years—pretending to be someone else—than I ever have been before or after. I can’t decide what that means. (386-7)

The problem was that neither could maintain this focus on being the happiest couple because each found themselves competing against the other. It’s all well and good to look at how well your relationship works—how happy you both are in it—until some women you know start suggesting their husbands are more obedient than yours.

            The challenge that was unique to Amy—that wasn’t really at all unique to Amy—entailed transitioning from the persona she donned to attract Nick and persuade him to marry her to a persona that would allow her to be comfortably dominant in the marriage. The persona women use to first land their man Amy refers to as the Cool Girl.

Being the Cool Girl means I am a hot, brilliant, funny woman who adores football, poker, dirty jokes, and burping, who plays video games, drinks cheap beer, loves threesomes and anal sex, and jams hot dogs and hamburgers into her mouth like she’s hosting the world’s biggest culinary gang bang while somehow still maintaining a size 2, because Cool Girls are above all hot. Hot and understanding. Cool Girls never get angry; they only smile in a chagrined, loving manner and let their men do whatever they want. Go ahead, shit on me, I don’t mind, I’m the Cool Girl. (383)

Amy is convinced Andie too is only pretending to be Cool Girl—because the Cool Girl can’t possibly exist. And the ire she directs at Nick arises out of her disappointment that he could believe the role she’d taken on was genuine.

I hated Nick for being surprised when I became me. I hated him for not knowing it had to end, for truly believing he had married this creature, this figment of the imagination of a million masturbatory men, semen-fingered and self-satisfied. (387)

Her reason for being contemptuous of women who keep trying to be the Cool Girl, the same reason she can’t bring herself to continue playing the role, is even more revealing. “If you let a man cancel plans or decline to do things for you,” she insists, “you lose.” She goes on to explain,

You don’t get what you want. It’s pretty clear. Sure, he may be happy, he may say you’re the coolest girl ever, but he’s saying it because he got his way. He’s calling you a Cool Girl to fool you! That’s what men do: They try to make it sound like you are the Cool Girl so you will bow to their wishes. Like a car salesman saying, How much do you want to pay for this beauty? when you didn’t agree to buy it yet. That awful phrase men use: “I mean, I know you wouldn’t mind if I…” Yes, I do mind. Just say it. Don’t lose, you dumb little twat. (387-8)

So Amy doesn’t want to be the one who loses any more than Nick wants to be that “sitcom-husband version” of himself, the toadie, the blunderer. And all either of them manages to accomplish by resisting is to make the other miserable.

            Gen-Xers like Nick and Amy were taught to dream big, to grow up and change the world, to put their minds to it and become whatever they most want to be. Then they grew up and realized they had to find a way to live with each other and make some sort of living—even though whatever spouse, whatever job, whatever life they settled for inevitably became a symbol of the great cosmic joke that had been played on them. From thinking you’d be a hero and change the world to being cheated on by the husband who should have known he wasn’t good enough for you in the first place—it’s quite a distance to fall. And it’s easy to imagine how much more you could accomplish without the dead weight of a broken heart or the burden of a guilty conscience. All you have driving you on is your rage, even the worst of which flares up only for a few days or weeks at most before exhausting itself. Then you return to being the sad, wounded, abandoned, betrayed little critter you are. Not Amy, though. Amy was the butt of the same joke as the rest of us, though in her case it was even more sadistic. She was made to settle for a lesser life than she’d been encouraged to dream of just like the rest of us. She was betrayed just like the rest of us. But she suffers no pangs of guilt, no aching of a broken heart. And Amy’s rage is inexhaustible. She really can be whoever she wants—and she’s already busy becoming more than an amazing idea.

Also read:
HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

And:

WHAT MAKES "WOLF HALL" SO GREAT?

Dennis Junk

Rebecca Mead’s Middlemarch Pilgrimage and the 3 Wrong Ways to Read a Novel

Rebecca Mead’s “My Life in Middlemarch,” about her lifelong love of George Eliot’s masterpiece, discusses various ways to think of the relationship between readers and authors, as well as the relationship between readers and characters. To get at the meat of these questions, though, you have to begin with an understanding of what literature is and how a piece of fiction works.

  All artists are possessed of the urge to render through some artificial medium the experience of something real. Since artists are also possessed of a desire to share their work and have it appreciated, they face the conundrum of having to wrest the attention of their audience away from the very reality they hope to relay some piece of—a feat which can only be accomplished with the assurance of something extraordinary. Stories that are too real are seldom interesting, while the stories that are the most riveting usually feature situations audiences are unlikely to encounter in their real lives. This was the challenge George Eliot faced when she set out to convey something of the reality of a provincial English town in the early nineteenth century; she had to come up with a way to write a remarkable novel about unremarkable characters in an unremarkable setting. And she somehow managed to do just that. By almost any measure, Eliot’s efforts to chronicle the fates of everyday people in a massive work many everyday people would enjoy reading were wildly, albeit complicatedly, successful.

  Before Middlemarch, the convention for English novels was to blandish readers with the promise of stories replete with romance or adventure, culminating more often than not with a wedding. Eliot turned the marriage plot on its head, beginning her novel with a marriage whose romantic underpinnings were as one-sided as they were transparently delusional. So what, if not luridness or the whiff of wish-fulfillment, does Eliot use to lure us into the not-so-fantastic fictional world of Middlemarch? Part of the answer is that, like many other nineteenth century novelists, she interspersed her own observations and interpretations with the descriptions and events that make up the story. “But Eliot was doing something in addition with those moments of authorial interjection,” Rebecca Mead writes in her book My Life in Middlemarch before going on to explain,

She insists that the reader look at the characters in the book from her own elevated viewpoint. We are granted a wider perspective, and a greater insight, than is available to their neighbors down in the world of Middlemarch. By showing us the way each character is bound within his or her own narrow viewpoint, while providing us with a broader view, she nurtures what Virginia Woolf described as “the melancholy virtue of tolerance.” “If Art does not enlarge men’s sympathies, it does nothing morally,” Eliot once wrote. “The only effect I ardently long to produce by my writings, is that those who read them should be better able to imagine and to feel the pains and the joys of those who differ from themselves in everything but the broad fact of being struggling erring human creatures.” (55-6)

Eliot’s story about ordinary people, in other words, is made extraordinary by the insightful presence of its worldly-wise author throughout its pages.

  But this solution to the central dilemma of art—how to represent reality so as to be worthy of distraction from reality—runs into another seeming contradiction. Near the conclusion of Middlemarch, Dorothea, one of the many protagonists, assures her sister Celia that her second marriage will not be the disaster her first was, explaining that the experience of falling in love this time around was much more auspicious than it had been before. Celia, wanting to understand what was so different, presses her, asking “Can’t you tell me?” Dorothea responds, “No, dear, you would have to feel with me, else you would never know” (783). The question that arises is whether a reader can be made to “imagine and feel the pains and joys” of a character while at the same time being granted access to the author’s elevated perspective. Can we in the audience simultaneously occupy spaces both on the ground alongside the character and hovering above her with a vantage on her place in the scheme of history and the wider world?

  One of the main sources of tension for any storyteller is the conflict between the need to convey information and provide context on the one hand and the goal of representing, or even simulating, experiences on the other. A weakness Eliot shares with many other novelists who lived before the turn of the last century is her unchecked impulse to philosophize when she could be making a more profound impact with narrative description or moment-by-moment approximations of a character’s thoughts and feelings. For instance, Dorothea, or Miss Brooke as she’s called in the first pages of the novel, yearns to play some important role in the betterment of humankind. Mead’s feelings about Dorothea and what she represents are likely shared by most who love the book. She writes,

As Miss Brooke, Dorothea remains for me the embodiment of that unnameable, agonizing ache of adolescence, in which burgeoning hopes and ambitions and terrors and longings are all roiled together. When I spend time in her company, I remember what it was like to be eighteen, and at the beginning of things. (43)

Dorothea becomes convinced that by marrying a much older scholar named Casaubon and helping him to bring his life’s work to fruition she’ll be fulfilling her lofty aspirations. So imagine her crushing disappointment upon discovering that Casaubon is little more than an uninspired and bloodless drudge who finds her eagerness to aid in his research more of an annoying distraction than an earnest effort at being supportive. The aspects of this disappointment that are unique to Dorothea, however, are described only glancingly. After locating her character in a drawing room, overwhelmed by all she’s seen during her honeymoon in Rome—and her new husband’s cold indifference to it all, an indifference which encompasses her own physical presence—Eliot retreats into generality:

Not that this inward amazement of Dorothea’s was anything very exceptional: many souls in their young nudity are tumbled out among incongruities and left to “find their feet” among them, while their elders go about their business. Nor can I suppose that when Mrs. Casaubon is discovered in a fit of weeping six weeks after her wedding, the situation will be regarded as tragic. Some discouragement, some faintness of heart at the new real future which replaces the imaginary, is not unusual, and we do not expect people to be deeply moved by what is not unusual. That element of tragedy which lies in the very fact of frequency, has not yet wrought itself into the coarse emotion of mankind; and perhaps our frames could hardly bear much of it. If we had a keen vision and feeling of all ordinary human life, it would be like hearing the grass grow and the squirrel’s heart beat, and we should die of that roar which lies on the other side of silence. As it is, the quickest of us walk about well wadded with stupidity. (185)

The obvious truth that humans must be selective in their sympathies is an odd point for an author to focus on at such a critical juncture in her heroine’s life—especially an author whose driving imperative is to widen the scope of her readers’ sympathies.

  My Life in Middlemarch, a memoir devoted to Mead’s evolving relationship with the novel and its author, takes for granted that Eliot’s masterpiece stands at the pinnacle of English literature, the greatest accomplishment of one of the greatest novelists in history. Virginia Woolf famously described it as “the magnificent book which with all its imperfections is one of the few English novels written for grown-up people.” Martin Amis called it “the central English novel” and said that it was “without weaknesses,” except perhaps for Dorothea’s overly idealized second love Will Ladislaw.  Critics from F.R. Leavis to Harold Bloom have celebrated Eliot as one of the greatest novelists of all time. But Middlemarch did go through a period when it wasn’t as appreciated as it had been originally and has been again since the middle of the twentieth century. Mead quotes a couple of appraisals from the generation succeeding Eliot’s:

“It is doubtful whether they are novels disguised as treatises, or treatises disguised as novels,” one critic wrote of her works. Another delivered the verdict that her books “seem to have been dictated to a plain woman of genius by the ghost of David Hume.” (217)

And of course Woolf would have taken issue with Amis’s claim that Eliot’s novel has no weaknesses, since her oft-quoted line about Middlemarch being for grownups contains the phrase “with all its imperfections.” In the same essay, Woolf says of Eliot,

The more one examines the great emotional scenes the more nervously one anticipates the brewing and gathering and thickening of the cloud which will burst upon our heads at the moment of crisis in a shower of disillusionment and verbosity. It is partly that her hold upon dialogue, when it is not dialect, is slack; and partly that she seems to shrink with an elderly dread of fatigue from the effort of emotional concentration. She allows her heroines to talk too much.

She has little verbal felicity. She lacks the unerring taste which chooses one sentence and compresses the heart of the scene within that. ‘Whom are you going to dance with?’ asked Mr Knightley, at the Weston’s ball. ‘With you, if you will ask me,’ said Emma; and she has said enough. Mrs Casaubon would have talked for an hour and we should have looked out of the window.

Mead’s own description of the scene of Dorothea’s heartbreak in Rome is emblematic of both the best and the worst of Eliot’s handling of her characters. She writes,

For several pages, Eliot examines Dorothea’s emotions under a microscope, as if she were dissecting her heroine’s brain, the better to understand the course of its electrical flickers. But then she moves quickly and just as deeply into the inward movement of Casaubon’s emotions and sensations. (157)

Literary scholars like to justify the application of various ideologies to their analyses of novels by comparing the practice to looking through different lenses in search of new insights. But how much fellow feeling can we have for “electrical flickers” glimpsed through an eyepiece? And how are we to assess the degree to which critical lenses either clarify or distort what they’re held up to? What these metaphors of lenses and microscopes overlook is the near impossibility of sympathizing with specimens under a glass.

            The countless writers and critics who celebrate Middlemarch actually seem to appreciate all the moments when Eliot allows her storytelling to be submerged by her wry asides, sermons, and disquisitions. Indeed, Middlemarch may be best thought of as a kind of hybrid, and partly because of this multifariousness, partly too because of the diversity of ways admirers like Mead have approached and appreciated it over the generations, the novel makes an ideal test case for various modes of literary reading. Will Ladislaw, serendipitously in Rome at the same time as the Casaubons, converses with Dorothea about all the art she has seen in the city. “It is painful,” she says, “to be told that anything is very fine and not be able to feel that it is fine—something like being blind, while people talk of the sky.” Ladislaw assures her, “Oh, there is a great deal in the feeling for art which must be acquired,” and then goes on to explain,

Art is an old language with a great many artificial affected styles, and sometimes the chief pleasure one gets out of knowing them is the mere sense of knowing. I enjoy the art of all sorts here immensely; but I suppose if I could pick my enjoyment to pieces I should find it made up of many different threads. There is something in daubing a little one’s self, and having an idea of the process. (196)

Applied to literature, Ladislaw’s observation—or confession—suggests that simply being in the know with regard to a work of great renown offers a pleasure of its own, apart from the direct experience of reading. (Imagine how many classics would be abandoned midway were they not known as such.) Mead admits to succumbing to this sort of glamor when she was first beginning to love Eliot and her work:

I knew that some important critics considered Middlemarch to be the greatest novel in the English language, and I wanted to be among those who understood why. I loved Middlemarch, and loved being the kind of person who loved it. It gratified my aspirations to maturity and learnedness. To have read it, and to have appreciated it, seemed a step on the road to being one of the grown-ups for whom it was written. (6-7).

What Mead is describing here is an important, albeit seldom mentioned, element in our response to any book. And there’s no better word for it than branding. Mead writes,

Books gave us a way to shape ourselves—to form our thoughts and to signal to each other who we were and who we wanted to be. They were part of our self-fashioning, no less than our clothes. (6)

The time in her life she’s recalling here is her late adolescence, the time when finding and forging all the pieces of our identities is of such pressing importance to us—and, not coincidentally, a time when we are busy laying the foundations of what will be our lifelong tastes in music and books.

It should not be lost on anyone that when Mead first began reading Middlemarch she was close in age to the Dorothea of the early chapters, both of them “at the beginning of things.” Mead, perhaps out of nostalgia, tries to honor that early connection she felt even as she is compelled to challenge it as a legitimate approach to reading. Early in My Life in Middlemarch she writes,

Reading is sometimes thought of as a form of escapism, and it’s a common turn of phrase to speak of getting lost in a book. But a book can also be where one finds oneself; and when a reader is grasped and held by a book, reading does not feel like an escape from life so much as it feels like an urgent, crucial dimension of life itself. There are books that grow with the reader as the reader grows, like a graft to a tree. (16)

Much later in the book, though, after writing about how Eliot received stacks of letters from women insisting that they were the real-life version of Dorothea—and surmising that Eliot would have responded to such claims with contempt—Mead makes a gesture of deference toward critical theory, with its microscopes and illusory magnifications. She writes,

Such an approach to fiction—where do I see myself in there?—is not how a scholar reads, and it can be limiting in its solipsism. It’s hardly an enlarging experience to read a novel as if it were a mirror of oneself. One of the useful functions of literary criticism and scholarship is to suggest alternative lenses through which a book might be read. (172)

As Eliot’s multimodal novel compels us to wonder how we should go about reading it, Mead’s memoir brilliantly examines one stance after another we might take in relation to a novel. What becomes clear in the process is that literary scholars leave wide open the question of the proper way to read in part because they lack an understanding of what literary narratives really are. If a novel is a disguised treatise, then lenses that could peer through the disguise are called for. If a novel is a type of wish-fulfillment, then to appreciate it we really should imagine ourselves as the protagonist.

             One of the surprises of Middlemarch is that, for all the minute inspection the narrator subjects the handful of protagonists to, none of them is all that vividly rendered. While in the most immersive of novels the characters come to life in a way that makes it easy to imagine them stepping out of the pages into real life, Eliot’s characters invite the reader to step from real life into the pages of the book. We see this in the way Eliot moves away from Dorothea at her moment of crisis. We see it too in the character of Tertius Lydgate, a doctor with a passion for progress to match Dorothea’s. Eliot describes the birth of this passion with a kind of distant and universal perspective that reaches out to envelop readers in their own reminiscences:

Most of us who turn to any subject we love remember some morning or evening hour when we got on a high stool to reach down an untried volume, or sat with parted lips listening to a new talker, or for very lack of books began to listen to the voices within, as the first traceable beginning of our love. Something of that sort happened to Lydgate. He was a quick fellow, and when hot from play, would toss himself in a corner, and in five minutes be deep in any sort of book that he could lay his hands on. (135)

Who among the likely readers of Middlemarch would object to the sentiment expressed in the line about Lydgate’s first encounter with medical texts, after “it had already occurred to him that books were stuff, and that life was stupid”?

Mead herself seems to have been drawn into Middlemarch first by its brand and then by her ready ease in identifying with Dorothea. But, while her book is admirably free of ideological musings, she does get quite some distance beyond her original treatment of the characters as avatars. And she even suggests this is perhaps a natural progression in a serious reader’s relationship to a classic work.

Identification with character is one way in which most ordinary readers do engage with a book, even if it is not where a reader’s engagement ends. It is where part of the pleasure, and the urgency, of reading lies. It is one of the ways that a novel speaks to a reader, and becomes integrated into the reader’s own imaginative life. Even the most sophisticated readers read novels in the light of their own experience, and in such recognition, sympathy may begin. (172-3)

What, then, do those sophisticated readers graduate into once they’ve moved beyond naïve readings? The second mode of reading, the one that academics prefer, involves the holding up of those ideological lenses. Mead describes her experience of the bait-and-switch perpetrated against innumerable young literature aficionados throughout their university educations:

I was studying English literature because I loved books, a common enough motivation among students of literature, but I soon discovered that love didn’t have much purchase when it came to our studies. It was the mid-eighties, the era of critical theory—an approach to literature that had been developed at Yale, among other distant and exotic locales. I’d never heard of critical theory before I got to Oxford, but I soon discovered it was what the most sophisticated-seeming undergraduates were engaged by. Scholars applied the tools of psychoanalysis or feminism to reveal the ways in which the author was blind to his or her own desire or prejudice, or they used the discipline of deconstruction to dispense with the author altogether. (Thus, J. Hillis Miller on George Eliot: “This incoherent, heterogeneous, ‘unreadable,’ or nonsynthesizable quality of the text of Middlemarch jeopardizes the narrator’s effort of totalization.”) Books—or texts, as they were called by those versed in theory—weren’t supposed merely to be read, but to be interrogated, as if they had committed some criminal malfeasance. (145)

What identifying with the characters has to recommend it is that it makes of the plot a series of real-time experiences. The critical approaches used by academics take for granted that participating in the story like this puts you at risk of contracting the author’s neuroses or acquiring her prejudices. To critical theorists, fiction is a trick to make us all as repressed and as given to oppressing women and minorities as both the author and the culture to which she belongs. By applying their prophylactic theories, then, academic critics flatter themselves by implying that they are engaging in a type of political activism.

In her descriptions of what this second mode of reading tends to look like up close, Mead hints at a third mode that she never fully considers. The second mode works on the assumption that all fiction is allegory, representing either some unconscious psychological drama or some type of coded propaganda.  But, as Mead recounts, the process of decoding texts, which in the particular case she witnesses consists of an “application of Marxist theory to literature,” has many of the hallmarks of a sacred ritual:

I don’t recall which author was the object of that particular inquisition, but I do remember the way the room was crowded with the don’s acolytes. Monkish-looking young men with close-shaven heads wearing black turtlenecks huddled with their notebooks around the master, while others lounged on the rug at his feet. It felt very exclusive—and, with its clotted jargon, willfully difficult. (145)

The religious approach to reading favored by academics is decidedly Old Testament; it sees literature and the world as full of sin and evil, and makes of reading a kind of expiation or ritual purging. A more New Testament approach would entail reading a fictional story as a parable while taking the author as a sort of messianic figure—a guide and a savior. Here’s Mead describing a revelation she reached when she learned about Eliot’s relationship with her stepsons. It came to her after a period of lamenting how Middlemarch had nothing to say about her own similarly challenging family situation:

A book may not tell us exactly how to live our own lives, but our own lives can teach us how to read a book. Now when I read the novel in the light of Eliot’s life, and in the light of my own, I see her experience of unexpected family woven deep into the fabric of the novel—not as part of the book’s obvious pattern, but as part of its tensile strength. Middlemarch seems charged with the question of being a stepmother: of how one might do well by one’s stepchildren, or unwittingly fail them, and of all that might be gained from opening one’s heart wider. (110)

The obvious drawback to this approach—tensile strength?—is that it makes of the novel something of a Rorschach, an ambiguous message into which we read whatever meaning we most desire at a given moment. But, as Mead points out, this is to some degree inevitable. She writes that

…all readers make books over in their own image, and according to their own experience. My Middlemarch is not the same as anyone else’s Middlemarch; it is not even the same as my Middlemarch of twenty-five years ago. Sometimes, we find that a book we love has moved another person in the same ways as it has moved ourselves, and one definition of compatibility might be when two people have highlighted the same passages in their editions of a favorite novel. But we each have our own internal version of the book, with lines remembered and resonances felt. (172)

How many other readers, we may wonder, see in Middlemarch an allegory of stepparenthood? And we may also wonder how far this rather obvious point should be taken. If each reader’s experience of a novel is completely different from every other reader’s, then we’re stuck once more with solipsism. But if this were true, we wouldn’t be able to talk about a novel with other people in the first place, let alone discover whether it’s moved others in the same way it’s moved us.

           Mead doesn’t advocate any of the modes of reading she explores, and she seems to have taken up each of them at a particular point in her life without any conscious deliberation. But it’s the religious reading of Middlemarch, the one that treats the story as an extended parable, that she has ultimately alighted on—at least as of the time when she was writing My Life in Middlemarch. This is largely owing to how easily the novel lends itself to this type of reading. From the early chapters in which we’re invited to step into the shoes of these ardent, bookish, big-spirited characters to the pages detailing their inevitable frustrations and disappointments, Eliot really does usher her readers—who must be readerly and ambitious even to take up such a book—toward a more mature mindset. Some of the most moving scenes feature exchanges between Dorothea and Dr. Lydgate, and in their subtle but unmistakable sympathy toward one another they seem to be reaching out to us with the same sympathy. Here is Dorothea commiserating with Lydgate in the wake of a scandal which has tarnished his name and thwarted his efforts at reform:

And that all this should have come to you who had meant to lead a higher life than the common, and to find out better ways—I cannot bear to rest in this as unchangeable. I know you meant that. I remember what you said to me when you first spoke to me about the hospital. There is no sorrow I have thought more about than that—to love what is great, and try to reach it, and yet to fail. (727)

Dorothea helps Lydgate escape Middlemarch and establish himself in London, where he becomes a successful physician but never pushes through any reforms to the profession. And this is one of Dorothea’s many small accomplishments. The lesson of the parable is clear in the famous final lines of the novel:

Her finely-touched spirit had still its fine issues, though they were not widely visible. Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is partly owing to the number who lived faithfully a hidden life, and rest in unvisited tombs. (799)

Entire generations of writers, scholars, and book-lovers have taken this message to heart and found solace in its wisdom. But its impact depends not so much on readers’ readiness to sympathize with the characters as on their eagerness to identify with them. In other words, Eliot is helping her readers to sympathize with themselves, not “to imagine and to feel the pains and the joys of those who differ from themselves in everything but the broad fact of being struggling erring human creatures.”

            For all the maturity of its message, Middlemarch invites a rather naïve reading. But what’s wrong with reading this way? And how else might we read if we want to focus more on learning to sympathize with those who differ from us? My Life in Middlemarch chronicles the journeys Mead takes to several places of biographical significance hoping to make some connection or attain some greater understanding of Eliot and her writing. Mead also visits libraries in England and the US so she can get her hands on some of the sacred texts still bearing ink scribbled onto paper by the author’s own hand. In an early chapter of Middlemarch, Eliot shows readers Casaubon’s letter proposing marriage to Dorothea, and it’s simultaneously comic and painful for being both pedantic and devoid of feeling. As I read the letter, I had the discomfiting thought that it was only a slight exaggeration of Eliot’s usual prose. Mead likewise characterizes one of Eliot’s contemporary fans, Alexander Main, in a way uncomfortably close to the way she comes across herself. Though she writes at one point, “I recognize in his enthusiasm for her works enough of my own admiration for her to feel an awkward fellowship with him,” she doesn’t seem to appreciate the extent to which Main’s relationship to Eliot and her work resembles her own. But she also hints at something else in her descriptions of Main, something that may nod to an alternative mode of reading beyond the ones she’s already explored. She writes,

In his excessive, grandiose, desperately lonely letters, Main does something that most of us who love books do, to some extent or another. He talks about the characters as if they were real people—as vivid, or more so, than people in his own life. He makes demands and asks questions of an author that for most of us remain imaginary but which he transformed, by force of will and need, into an intense epistolary relationship. He turned his worship and admiration of George Eliot into a one-sided love affair of sorts, by which he seems to have felt sustained even as he felt still hungrier for engagement. (241-2)

Main, and to some extent Mead as well, make of Eliot a godly figure fit for worship, and who but God could bring characters into life—into our lives—who are as real and vivid as other actual living breathing human beings we love or hate or like or tolerate?

As Mead’s reading into Middlemarch an allegory of stepmotherhood illustrates, worshipping authors and treating their works as parables comes with the risk of overburdening what’s actually on the page with whatever meanings the faithful yearn to find there, allowing existential need to override honest appreciation. But the other modes are just as problematic. Naïve identification with the heroine, as Mead points out, limits the scope of our sympathy and makes it hard to get into a novel whose characters are strange or otherwise contrary to our individual tastes. It also makes a liability of characters with great weaknesses or flaws or any other traits that would make being in their shoes distasteful or unpleasant. Treating a story as an allegory, on the other hand, can potentially lead to an infinite assortment of interpretations, all of questionable validity. This mode of reading also implies that writers of fiction have the same goals to argue or persuade as writers of tracts and treatises. The further implication is that all the elements of storytelling are really little more than planks making up the Trojan horse conveying the true message of the story—and if this were the case why would anyone read fictional stories in the first place? It’s quite possible that this ideological approach to teaching literature has played some role in the declining number of people who read fiction.

What’s really amazing about the people who love any book is that, like Alexander Main, they all tend to love certain characters and despise certain others as if they were real people. There may be a certain level of simple identification with the protagonist, but we also tend to identify with real people who are similar to us—that’s the basis of many friendships and relationships. It’s hard to separate identification from fellow-feeling and sympathy even in what seem to be the most wish-fulfilling stories. Does anybody really want to be Jane Eyre or Elizabeth Bennet? Does anyone really want to be James Bond or Katniss Everdeen? Or do we just admire and try to emulate some of their qualities? Characters in fiction are designed to be more vivid than people in real life because, not even being real, they have to be extraordinary in some way to get anyone to pay attention to them. Their contours are more clearly delineated, their traits exaggerated, and their passions intensified. This doesn’t mean, however, that we’re falling for some kind of trick when we forget for a moment that they’re not like anyone we’ll ever meet.

Characters, to be worthy of attention, have to be caricatures—like real people but with a few identifying characteristics blown all out of realistic proportion. Dorothea is a caricature. Casaubon is for sure a caricature. But we understand them using the same emotional and cognitive processes we use to understand real people. And it is in exercising these very perspective-taking and empathizing abilities that our best hope for expanding our sympathies lies. What’s the best way to read a work of literature? First, realize that the author is not a god. In fact, forget the author as best you can. Eliot makes it difficult for us to overlook her presence in any scene, and for that reason it may be time to buck the convention and admit that Middlemarch, as brilliantly conceived as it was, as pioneering and revolutionary as it was, is not by any means the greatest novel ever written. What’s more important to concentrate on than the author is the narrator, who may either be herself an important character in the story, or who may stand as far out of the picture as she can so as not to occlude our view. What’s most important though is to listen to the story the narrator tells, to imagine it’s really happening, right before our eyes, right in the instant we experience it. And, at least for that moment, forget what anyone else has to say about what we're witnessing.

Also read: 

MUDDLING THROUGH "LIFE AFTER LIFE": A REFLECTION ON PLOT AND CHARACTER IN KATE ATKINSON’S NEW NOVEL

And: 

WHY SHAKESPEARE NAUSEATED DARWIN: A REVIEW OF KEITH OATLEY'S "SUCH STUFF AS DREAMS"

And: 

GONE GIRL AND THE RELATIONSHIP GAME: THE CHARMS OF GILLIAN FLYNN'S AMAZINGLY SEDUCTIVE ANTI-HEROINE

Dennis Junk

The Time for Tales and A Tale for the Time Being

The pages of Ruth Ozeki’s novel A Tale for the Time Being are brimful with the joys and heartache, not just of fiction in general, but of literary fiction in particular, with the bite of reality that sinks deeper than that of the commercial variety, which sacrifices lasting impact for immediate thrills—this despite the Gothic and science fiction elements Ozeki judiciously sprinkles throughout her chapters.

Storytelling in the twenty-first century is a tricky business. People are faced with too many real-world concerns to be genuinely open to the possibility of caring what happens to imaginary characters in a made-up universe. Some of us even feel a certain level of dread as we read reviews or lists of the best books of any given year, loath to add yet another modern classic to the towering mental stack we resolve to read. Yet we’re hardly ever as grateful to anyone as we are to an author or filmmaker who seduces us with a really good story. No sooner have we closed a novel whose characters captured our heart and whose plights succeeded in making us forget our own daily anxieties, however briefly, than we feel compelled to proselytize on behalf of that story to anyone whose politeness we can exploit long enough to deliver the message. The enchantment catches us so unawares, and strikes us as so difficult to account for according to any reckoning of daily utility, that it feels perfectly natural to relay our literary experiences in religious language. But, as long as we devote sufficient attention to an expertly wrought, authentically heartfelt narrative, even those of us with the most robust commitment to the secular will find ourselves succumbing to the magic.

A Tale for the Time Being is about a woman who becomes obsessed with a story despite herself because she can’t help worrying about the fate of the teenage girl who wrote it. This woman, a novelist who curiously bears the same first name as the novel’s author, only reluctantly brings a barnacle-covered freezer bag concealing a Hello Kitty lunchbox to her home after discovering it washed up on a beach off the coast of British Columbia. And it’s her husband Oliver—named after Ozeki’s real-life husband—who brings the bag into the kitchen so he can examine the contents, over her protests. Inside the lunchbox, Oliver finds a stack of letters written in Japanese and dating to the Second World War. With them, there is a diary written in French, and what at first appears to be a copy of Proust’s In Search of Lost Time but turns out to be another diary, with pages in the handwritten English of a teenage Japanese girl named Nao, pronounced much like the word “now.”

Ruth begins reading the diary, sometimes by herself, sometimes to Oliver before they go to bed. Nao addresses her reader very personally, and delights in the idea that she may be sharing her story with a single individual. As she poses one question after another to this lone reader—who we know has turned out to be Ruth—Nao’s loneliness, her desperate need to unburden herself, wraps itself around Ruth and readers of Ozeki’s novel alike. In the early passages, we learn that Nao’s original plan for the pages under Proust’s cover was to write a biography of her 104-year-old great-grandmother Jiko, a Buddhist nun living in a temple in the Miyagi Prefecture, just north of the Fukushima nuclear power plant, and then toss it into the waves of the Pacific for a single beachcomber to find. Contemplating the significance of her personal gesture toward some anonymous future reader, she writes,

If you ask me, it’s fantastically cool and beautiful. It’s like a message in a bottle, cast out onto the ocean of time and space. Totally personal, and real, too, right out of old Jiko’s and Marcel’s prewired world. It’s the opposite of a blog. It’s an antiblog, because it’s meant for only one special person, and that person is you. And if you’ve read this far, you probably understand what I mean. Do you understand? Do you feel special yet? (26)

Nao does manage to write a lot about her own experiences with her great-grandmother, and we learn a bit about Jiko’s life before Nao was born. But for the most part Nao’s present troubles override her intentions to write about someone else’s life. Meanwhile, the character Ruth is trying to write a memoir about nursing her mother, who has recently died after a long struggle with Alzheimer’s, but she’s also getting distracted by Nao’s tribulations, even to the point of conducting a search for the girl based on whatever telling details she comes across in the diary.

            All great stories pulse with resonances from great stories past, and A Tale for the Time Being, whether as a result of human commonalities, direct influence, or cultural osmosis, reverberates with the tones and themes of Catcher in the Rye, Donnie Darko, The Karate Kid, My Girl, and the first third of Roberto Bolaño’s 2666. The characters in Ozeki’s multiple embedded and individually shared narratives fascinate themselves with, even as they reluctantly succumb to, the mystery of communion between storyteller and audience. But, since the religion—or perhaps rather the philosophy—that suffuses the characters’ lives is not Christian but Buddhist, it’s only fitting that the focus is on how narrative time flows according to a type of eternal present, always waking us up to the present moment. The surprise in store for readers of the novel, though, is that narrative communion—the bond of compassion between narrators and audiences—is intricately tied to our experience of time. This connection comes through in the double meaning of the phrase “for the time being,” which conveys that whatever you’re discussing, though not ideal, is sufficient for now, until something more suitable comes along. But it can also refer to a being existing in time—hence to all beings. As Nao explains in the early pages of her diary,

A time being is someone who lives in time, and that means you, and me, and every one of us who is, or was, or ever will be. As for me, right now I am sitting in a French maid café in Akiba Electricity Town, listening to a sad chanson that is playing sometime in your past, which is also my present, writing this and wondering about you, somewhere in my future. And if you’re reading this, then maybe by now you’re wondering about me, too. (3)

Time is what connects every being—and, as Jiko insists, every object—that moves through it, moment by moment. Further, as Nao demonstrates, the act of narrating, in collusion with a separate act of attending, renders any separation in time somewhat moot.

            Revealing that the contrapuntal narration of A Tale for the Time Being represents the flowering of a relationship between two women who never even meet, whose destinies, it is tantalizingly suggested, may not even be unfolding in the same universe, will likely do nothing to blunt the impact of the story, for the same reasons the characters continually find themselves contemplating. Ruth, once she’s fully engaged in Nao’s story, becomes frustrated at how hard it is “to get a sense from the diary of the texture of time passing.” She wants so much to understand what Nao was going through—what she is going through in the story—but, as she explains, “No writer, even the most proficient, could re-enact in words the flow of a life lived, and Nao was hardly that skillful” (64). We may chuckle at how Ozeki is covering her tracks with this line about her prodigiously articulate sixteen-year-old protagonist. But she’s also showing just how powerful Ruth’s urge is to keep up her end of the connection. This paradox of urgent inconsequence is introduced by Nao on the first page of her diary. After posing a series of questions about the reader, she writes,  

Actually, it doesn’t matter very much, because by the time you read this, everything will be different, and you will be nowhere in particular, flipping idly through the pages of this book, which happens to be the diary of my last days on earth, wondering if you should keep on reading. (3)

In an attempt to mimic the flow of time as Nao experienced it, Ruth paces her reading of the diary to correspond with the spans between entries. All the while, though, she’s desperate to find out what happened—what happens—because she’s worried that Nao may have killed herself, or may still be on the verge of doing so.

            Nao wrote her diary in English because she spent most of her life in Sunnyvale, California, where her father Haruki worked for a tech firm. After he lost his job for what Nao assumes are economic reasons, he moved with her and her mother back to Tokyo. Nao starts having difficulties at school right away, but she’s reluctant to transfer to what she calls a stupid kids’ school because, as she explains,

It’s probably been a while since you were in junior high school, but if you can remember the poor loser foreign kid who entered your eighth-grade class halfway through the year, then maybe you will feel some sympathy for me. I was totally clueless about how you’re supposed to act in a Japanese classroom, and my Japanese sucked, and at the time I was almost fifteen and older than the other kids and big for my age, too, from eating so much American food. Also, we were broke so I didn’t have an allowance or any nice stuff, so basically I got tortured. In Japan they call it ijime, but that word doesn’t begin to describe what the kids used to do to me. I would probably already be dead if Jiko hadn’t taught me how to develop my superpower. Ijime is why it’s not an option for me to go to a stupid kids’ school, because in my experience, stupid kids can be even meaner than smart kids because they don’t have as much to lose. School just isn’t safe. (44)

The types of cruelty Nao is subjected to are insanely creative, and one of her teachers even participates so he can curry favor with the popular students. They pretend she’s invisible and only detectable by a terrible odor she exudes—and she perversely begins to believe she really does give off a tell-tale scent of poverty. They pinch her and pull out strands of her hair. At one point, they even stage an elaborate funeral for her and post a video of it online. Curiously, though, when Ruth searches for the title, nothing comes up.

            Unfortunately, the extent of Nao’s troubles stretches beyond the walls of the school. Her father, whom she and her mother believed to have been hired for a new job, tries to kill himself by lying down in front of a train. But the engineers see him in time to stop, and all he manages to do is incur a fine. Once Nao’s mother brings him home, he reveals the new job was a lie, that he’s been earning money gambling, and that now he’s lost nearly every penny. One of the few things Ruth finds in her internet searches is an essay by Nao’s father about why he and other Japanese men are attracted to the idea of suicide. She decides to email the professor who posted this essay, claiming that it’s “urgent” for her to find the author and his daughter. When Oliver reminds Ruth that all of this occurred some distance in the past, she’s dumbfounded. As the third-person narrator of the Ruth chapters explains,

It wasn't that she'd forgotten, exactly. The problem was more a kind of slippage. When she was writing a novel, living deep inside a fictional world, the days got jumbled together, and entire weeks or months or even years would yield to the ebb and flow of the dream. Bills went unpaid, emails unanswered, calls unreturned. Fiction had its own time and logic. That was its power. But the email she’d just written to the professor was not fiction. It was real, as real as the diary. (313-4)

Before ending up in that French maid café (the precise character of which is a bone of contention between Ruth and Oliver) trying to write Jiko’s life story, Nao had gone, at her parents’ prompting, to spend a summer with her great-grandmother at the temple. And it’s here she learns to develop what Jiko calls her superpower. Of course, as she describes the lessons in her diary, Nao is also teaching Ruth what Jiko taught her. So Ozeki’s readers get the curious sense that not only is Ruth trying to care for Nao but Nao is caring for Ruth in return.

During her stay at the temple, Nao learns that Jiko became a nun after her son, Nao’s great-uncle and her father’s namesake, died in World War II. It is the letters he wrote home as he underwent pilot training that Oliver and Ruth find with Nao’s diary, and they eventually learn that the other diary, the one written in French, is his as well. Even though Nao only learns about the man she comes to call Haruki #1 from her great-grandmother’s stories, along with the letters and a few ghostly visitations, she develops a type of bond with him, identifying with his struggles, admiring his courage. At the same time, she comes to sympathize with and admire Jiko all the more for how she overcame the grief of losing her son. “By the end of the summer, with Jiko’s help,” Nao writes,

I was getting stronger. Not just strong in my body, but strong in my mind. In my mind, I was becoming a superhero, like Jubei-chan, the Samurai Girl, only I was Nattchan, the Super Nun, with abilities bestowed upon me by Lord Buddha that included battling the waves, even if I always lost, and being able to withstand astonishing amounts of pain and hardship. Jiko was helping me cultivate my supapawa! by encouraging me to sit zazen for many hours without moving, and showing me how not to kill anything, not even the mosquitoes that buzzed around my face when I was sitting in the hondo at dusk or lying in bed at night. I learned not to swat them even when they bit me and also not to scratch the itch that followed. At first, when I woke up, my face and arms were swollen from the bites, but little by little, my blood and skin grew tough and immune to their poison and I didn’t break out in bumps no matter how much I’d been bitten. And soon there was no difference between me and the mosquitoes. My skin was no longer a wall that separated us, and my blood was their blood. I was pretty proud of myself, so I went and found Jiko and I told her. She smiled. (204)

Nao is learning to break through the walls that separate her from others by focusing her mind on simply being, but the theme Ozeki keeps bringing us back to in her novel is that it’s not only through meditation that we open our minds compassionately to others.

            Despite her new powers, though, Nao’s problems at home and at school only get worse when she returns from her sojourn at the temple. And, as the urgency of her story increases, a long-simmering tension between Ruth and Oliver boils over into spats and insults. The added strain stems not so much from their different responses to Nao’s story as it does from the way the narration resonates with many of Ruth’s own feelings and experiences, making her more aware of all that’s going on in her own mind. When Nao explains how she came to be writing in a blanked-out copy of In Search of Lost Time, for instance, she describes how the woman who makes the diaries “does it so authentically you don’t even notice the hack, and you almost think that the letters just slipped off the pages and fell to the floor like a pile of dead ants” (20). Later, Ruth reflects on how,

When she was little, she was always surprised to pick up a book in the morning, and open it, and find the letters aligned neatly in their places. Somehow she expected them to be all jumbled up, having fallen to the bottom when the covers were shut. Nao had described something similar, seeing the blank pages of Proust and wondering if the letters had fallen off like dead ants. When Ruth had read this, she’d felt a jolt of recognition. (63)

The real source of the trouble, though, is Nao’s description of her father as a hikikomori, which Ruth, in one of the footnotes that constantly remind readers of her presence in Nao’s story, defines as a “recluse, a person who refuses to leave the house” (70). Having agreed to retreat to sparsely populated Whaletown, on the tiny island of Cortes, to care for her mother and support Oliver as he undertook a series of eminently eccentric botanical projects, part art and part ecological experiment—the latest of which he calls the “Neo-Eocene”—Ruth is now beginning to long for the bustling crowds and hectic conveniences of urban civilization: “Engulfed by the thorny roses and massing bamboo, she stared out the window and felt like she’d stepped into a malevolent fairy tale” (61).

A Tale for the Time Being is so achingly alive the characters’ thoughts and words lift up off the pages as if borne aloft on clouds of their own breath. Ironically, though, if there’s one character who suffers from overscripting it’s the fictional counterpart to Ozeki’s real-life husband. Oliver’s role is to see what Ruth cannot, moving the plot forward with strained scientific explanations of topics ranging from Pacific gyres, to Linnaean nomenclature, to quantum computing, raising the suspicion that he’s serving as the personification of the author’s own meticulous research habits. Under the guise of the masculine tendency to launch into spontaneous lectures—which, to be fair, Ozeki has at least one female character exhibit as well—Oliver at times transforms from a living character into a scholarly mouthpiece, performing a parody of professorial pedantry. For the most part, though, the philosophical ruminations in the novel emanate so naturally from the characters and are woven so seamlessly into the narrative that you can’t help following along with the characters’ thoughts, as if taking part in a conversation. When Nao writes about discovering through a Google search that “À la recherche du temps perdu” means “In search of lost time,” she encourages her reader to contemplate what that means along with her:

Weird, right? I mean, there I was, sitting in a French maid café in Akiba, thinking about lost time, and old Marcel Proust was sitting in France a hundred years ago, writing a whole book about the exact same subject. So maybe his ghost was lingering between the covers and hacking into my mind, or maybe it was just a crazy coincidence, but either way, how cool is that? I think coincidences are cool, even if they don’t mean anything, and who knows? Maybe they do! I’m not saying everything happens for a reason. It was more just that it felt as if me and old Marcel were on the same wavelength. (23)

Waves and ghosts and crazy coincidences make up some of the central themes of the novel, but underlying them all is the spooky connectedness we humans so readily feel with one another, even against the backdrop of our capacity for the most savage cruelty.

A Tale for the Time Being balances its experimental devices with time-honored storytelling techniques. Its central conceits are emblematic of today’s reigning fictional aesthetic, which embodies a playful exploration of the infinite possibilities of all things meta—stories about stories, narratives pondering the nature of narrative, authors becoming characters, characters becoming authors, fiction that’s indistinguishable from nonfiction and vice versa. We might call this tradition jootsism, after cognitive scientist Douglas Hofstadter’s term jootsing, or jumping out of the system, which he theorizes is critical to the emergence of consciousness from physical substrates, whether biochemical or digital. Some works in this tradition fatally undermine the systems they jump out of, revealing that the story readers are emotionally invested in is a hoax. Indeed, one strain of literary theory that’s been highly influential over the past four decades equates taking pleasure from stories with indulging in the reaffirmation of prejudices against minorities. Authors in this school therefore endeavor to cast readers out of their own narratives as punishment for their complicity in societal oppression. Fortunately, the types of gimmickry common to this brand of postmodernism—deliberately obnoxious and opaquely nonsensical neon-lit syntactical acrobatics—appear to have drastically declined in popularity, though the urge toward guilt-tripping readers and the obsession with the most harebrained and pusillanimous forms of identity politics persist. There are hidden messages in all media, we’re taught to believe, and those messages are the lifeblood of all that’s evil in our civilization.

But even those of us who don’t subscribe to the ideology that sees all narratives as sinister political allegories are still looking to be challenged, and perhaps even enlightened, every time we pick up a new book. If there is a set of conventions realist literary fiction is trying to break itself free of, it’s got to be the standard lineup of what masquerade as epiphanies but are really little more than varieties of passive acceptance or world-weary acquiescence in the face of life’s inexorables: disappointment, death, the delimiting of opportunity, the dimming of beauty, the diminishing of faculties, the desperate need for love, the absence of any ideal candidate for love. The postmodern wing of the avant-garde offers nothing in place of these old standbys but meaningless antics and sadomasochistic withholdings of the natural pleasure humans derive from sharing stories. The theme that emerges from all the jootsing in A Tale for the Time Being, by contrast, a theme that paradoxically produces its own proof, is that the pleasure we get from stories comes from a kind of magic—the superpower of the storyteller, with the generous complicity of the reader—and that this is the same magic that forms the bonds between us and our loved ones. In a world where teenagers subject each other to unspeakable cruelty, where battling nations grind bodies by the boatload into lifeless, nameless mush, there is still solace to be had, and hope, in the daily miracle of how easily we’re made to feel the profoundest sympathy for people we never even meet, simply by experiencing their stories.

Also read:

WHAT MAKES "WOLF HALL" SO GREAT?

And:

MUDDLING THROUGH "LIFE AFTER LIFE": A REFLECTION ON PLOT AND CHARACTER IN KATE ATKINSON’S NEW NOVEL

And:

REBECCA MEAD’S MIDDLEMARCH PILGRIMAGE AND THE 3 WRONG WAYS TO READ A NOVEL

Dennis Junk

Lab Flies: Joshua Greene’s Moral Tribes and the Contamination of Walter White

Joshua Greene’s book “Moral Tribes” posits a dual-process theory of morality, where a quick, intuitive system 1 makes judgments based on deontological considerations (“it’s just wrong”), whereas the slower, more deliberative system 2 takes time to calculate the consequences of any given choice. Viewers can see these two systems on display in the series “Breaking Bad,” as well as in critics’ and fans’ responses.

Walter White’s Moral Math

In an episode near the end of Breaking Bad’s fourth season, the drug kingpin Gus Fring gives his meth cook Walter White an ultimatum. Walt’s brother-in-law Hank is a DEA agent who has been getting close to discovering the high-tech lab Gus has created for Walt and his partner Jesse, and Walt, despite his best efforts, hasn’t managed to put him off the trail. Gus decides that Walt himself has likewise become too big a liability, and he has found that Jesse can cook almost as well as his mentor. The only problem for Gus is that Jesse, even though he too is fed up with Walt, will refuse to cook if anything happens to his old partner. So Gus has Walt taken at gunpoint to the desert where he tells him to stay away from both the lab and Jesse. Walt, infuriated, goads Gus with the fact that he’s failed to turn Jesse against him completely, to which Gus responds, “For now,” before going on to say,

In the meantime, there’s the matter of your brother-in-law. He is a problem you promised to resolve. You have failed. Now it’s left to me to deal with him. If you try to interfere, this becomes a much simpler matter. I will kill your wife. I will kill your son. I will kill your infant daughter.

In other words, Gus tells Walt to stand by and let Hank be killed or else he will kill his wife and kids. Once he’s released, Walt immediately has his lawyer Saul Goodman place an anonymous call to the DEA to warn them that Hank is in danger. Afterward, Walt plans to pay a man to help his family escape to a new location with new, untraceable identities—but he soon discovers the money he was going to use to pay the man has already been spent (by his wife Skyler). Now it seems all five of them are doomed. This is when things get really interesting.

      Walt devises an elaborate plan to convince Jesse to help him kill Gus. Jesse knows that Gus would prefer for Walt to be dead, and both Walt and Gus know that Jesse would go berserk if anyone ever tried to hurt his girlfriend’s nine-year-old son Brock. Walt’s plan is to make it look like Gus is trying to frame him for poisoning Brock with ricin. The idea is that Jesse would suspect Walt of trying to kill Brock as punishment for Jesse’s betraying him and going to work with Gus. But Walt will convince Jesse that this is really just Gus’s ploy to trick Jesse into doing what up till now Jesse has forbidden Gus to do: kill Walt himself. Once Jesse concludes that it was Gus who poisoned Brock, he will understand that his new boss has to go, and he will accept Walt’s offer to help him perform the deed. Walt will then be able to get Jesse to give him the crucial information he needs about Gus to figure out a way to kill him.

It’s a brilliant plan. The one problem is that it involves poisoning a nine-year-old child. Walt comes up with an ingenious trick which allows him to use a less deadly poison while still making it look like Brock has ingested the ricin, but for the plan to work the boy has to be made deathly ill. So Walt is faced with a dilemma: if he goes through with his plan, he can save Hank, his wife, and his two kids, but to do so he has to deceive his old partner Jesse in just about the most underhanded way imaginable—and he has to make a young boy very sick by poisoning him, with the attendant risk that something will go wrong and the boy, or someone else, or everyone else, will die anyway. The math seems easy: either four people die, or one person gets sick. The option recommended by the math is greatly complicated, however, by the fact that it involves an act of violence against an innocent child.

In the end, Walt chooses to go through with his plan, and it works perfectly. In another ingenious move, though, this time on the part of the show’s writers, Walt’s deception isn’t revealed until after his plan has been successfully implemented, which makes for an unforgettable shock at the end of the season. Unfortunately, this revelation after the fact, at a time when Walt and his family are finally safe, makes it all too easy to forget what made the dilemma so difficult in the first place—and thus makes it all too easy to condemn Walt for resorting to such monstrous means to see his way through.

            Fans of Breaking Bad who read about the famous thought-experiment called the footbridge dilemma in Harvard psychologist Joshua Greene’s multidisciplinary and momentously important book Moral Tribes: Emotion, Reason, and the Gap between Us and Them will immediately recognize the conflicting feelings underlying our responses to questions about serving some greater good by committing an act of violence. Here is how Greene describes the dilemma:

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can’t jump yourself because you, without a backpack, are not big enough to stop the trolley, and there’s no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death? (113-4)

As was the case for Walter White when he faced his child-poisoning dilemma, the math is easy: you can save five people—strangers in this case—through a single act of violence. One of the fascinating things about common responses to the footbridge dilemma, though, is that the math is all but irrelevant to most of us; no matter how many people we might save, it’s hard for us to see past the murderous deed of pushing the man off the bridge. The answer for a large majority of people faced with this dilemma, even in the case of variations which put the number of people who would be saved much higher than five, is no, pushing the stranger to his death is not morally acceptable.

Another fascinating aspect of our responses is that they change drastically with the modification of a single detail in the hypothetical scenario. In the switch dilemma, a trolley is heading for five people again, but this time you can hit a switch to shift it onto another track where there happens to be a single person who would be killed. Though the math and the underlying logic are the same—you save five people by killing one—something about pushing a person off a bridge strikes us as far worse than pulling a switch. A large majority of people say killing the one person in the switch dilemma is acceptable. To figure out which specific factors account for the different responses, Greene and his colleagues tweak various minor details of the trolley scenario before posing the dilemma to test participants. By now, so many experiments have relied on these scenarios that Greene calls trolley dilemmas the fruit flies of the emerging field known as moral psychology.

The Automatic and Manual Modes of Moral Thinking

One hypothesis for why the footbridge case strikes us as unacceptable is that it involves using a human being as an instrument, a means to an end. So Greene and his fellow trolleyologists devised a variation called the loop dilemma, which still has participants pulling a hypothetical switch, but this time the lone victim on the alternate track must stop the trolley from looping back around onto the original track. In other words, you’re still hitting the switch to save the five people, but you’re also using a human being as a trolley stop. People nonetheless tend to respond to the loop dilemma in much the same way they do the switch dilemma. So there must be some factor other than the prospect of using a person as an instrument that makes the footbridge version so objectionable to us.

Greene’s own theory for why our intuitive responses to these dilemmas are so different begins with what Daniel Kahneman, one of the founders of behavioral economics, labeled the two-system model of the mind. The first system, a sort of autopilot, is the one we operate in most of the time. We only use the second system when doing things that require conscious effort, like multiplying 26 by 47. While system one is quick and intuitive, system two is slow and demanding. Greene proposes as an analogy the automatic and manual settings on a camera. System one is point-and-click; system two, though more flexible, requires several calibrations and adjustments. We usually only engage our manual systems when faced with circumstances that are either completely new or particularly challenging.

According to Greene’s model, our automatic settings have functions that go beyond the rapid processing of information to encompass our basic set of moral emotions, from indignation to gratitude, from guilt to outrage, which motivate us to behave in ways that over evolutionary history have helped our ancestors transcend their selfish impulses to live in cooperative societies. Greene writes,

According to the dual-process theory, there is an automatic setting that sounds the emotional alarm in response to certain kinds of harmful actions, such as the action in the footbridge case. Then there’s manual mode, which by its nature tends to think in terms of costs and benefits. Manual mode looks at all of these cases—switch, footbridge, and loop—and says “Five for one? Sounds like a good deal.” Because manual mode always reaches the same conclusion in these five-for-one cases (“Good deal!”), the trend in judgment for each of these cases is ultimately determined by the automatic setting—that is, by whether the myopic module sounds the alarm. (233)

What makes the dilemmas difficult then is that we experience them in two conflicting ways. Most of us, most of the time, follow the dictates of the automatic setting, which Greene describes as myopic because its speed and efficiency come at the cost of inflexibility and limited scope for extenuating considerations.

The reason our intuitive settings sound an alarm at the thought of pushing a man off a bridge but remain silent about hitting a switch, Greene suggests, is that our ancestors evolved to live in cooperative groups where some means of preventing violence between members had to be in place to avoid dissolution—or outright implosion. One of the dangers of living with a bunch of upright-walking apes who possess the gift of foresight is that any one of them could at any time be plotting revenge for some seemingly minor slight, or conspiring to get you killed so he can move in on your spouse or inherit your belongings. For a group composed of individuals with the capacity to hold grudges and calculate future gains to function cohesively, the members must have in place some mechanism that affords a modicum of assurance that no one will murder them in their sleep. Greene writes,

To keep one’s violent behavior in check, it would help to have some kind of internal monitor, an alarm system that says “Don’t do that!” when one is contemplating an act of violence. Such an action-plan inspector would not necessarily object to all forms of violence. It might shut down, for example, when it’s time to defend oneself or attack one’s enemies. But it would, in general, make individuals very reluctant to physically harm one another, thus protecting individuals from retaliation and, perhaps, supporting cooperation at the group level. My hypothesis is that the myopic module is precisely this action-plan inspecting system, a device for keeping us from being casually violent. (226)

Hitting a switch to transfer a train from one track to another seems acceptable, even though a person ends up being killed, because nothing our ancestors would have recognized as violence is involved.

Many philosophers cite our different responses to the various trolley dilemmas as support for deontological systems of morality—those based on the inherent rightness or wrongness of certain actions—since we intuitively know the choices suggested by a consequentialist approach are immoral. But Greene points out that this argument takes for granted the very thing at issue: just how reliable our intuitions really are. He writes,

I’ve called the footbridge dilemma a moral fruit fly, and that analogy is doubly appropriate because, if I’m right, this dilemma is also a moral pest. It’s a highly contrived situation in which a prototypically violent action is guaranteed (by stipulation) to promote the greater good. The lesson that philosophers have, for the most part, drawn from this dilemma is that it’s sometimes deeply wrong to promote the greater good. However, our understanding of the dual-process moral brain suggests a different lesson: Our moral intuitions are generally sensible, but not infallible. As a result, it’s all but guaranteed that we can dream up examples that exploit the cognitive inflexibility of our moral intuitions. It’s all but guaranteed that we can dream up a hypothetical action that is actually good but that seems terribly, horribly wrong because it pushes our moral buttons. I know of no better candidate for this honor than the footbridge dilemma. (251)

The obverse is that many of the things that seem morally acceptable to us actually do cause harm to people. Greene cites the example of a man who lets a child drown because he doesn’t want to ruin his expensive shoes, which most people agree is monstrous, even though we think nothing of spending money on things we don’t really need when we could be sending that money to save sick or starving children in some distant country. Then there are crimes against the environment, which always seem to rank low on our list of priorities even though their future impact on real human lives could be devastating. We have our justifications for such actions or omissions, to be sure, but how valid are they really? Is distance really a morally relevant factor when we let children die? Does the diffusion of responsibility among so many millions change the fact that we personally could have a measurable impact?

These black marks notwithstanding, cooperation, and even a certain degree of altruism, come naturally to us. To demonstrate this, Greene and his colleagues have devised some clever methods for separating test subjects’ intuitive responses from their more deliberate and effortful decisions. The experiments begin with a multi-participant exchange scenario developed by economic game theorists called the Public Goods Game, which has a number of anonymous players contribute to a common bank whose sum is then doubled and distributed evenly among them. Like the more famous game theory exchange known as the Prisoner’s Dilemma, the outcomes of the Public Goods Game reward cooperation, but only when a threshold number of fellow cooperators is reached. The flip side, however, is that any individual who decides to be stingy can get a free ride on everyone else’s contributions and make an even greater profit. What tends to happen is that, over multiple rounds, the number of players opting for stinginess increases until the game is ruined for everyone, a process analogous to a phenomenon in economics known as the Tragedy of the Commons. Everyone wants to graze a few more sheep on the commons than can be sustained fairly, so eventually the grounds are left barren.
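
For readers who want to see the arithmetic of the tragedy play out, here is a minimal sketch of the game in Python. The payoff rule is the standard one just described (contributions pooled, doubled, and split evenly), while the endowment, the player count, and the imitate-the-winner update rule are illustrative assumptions of mine, not the protocol from any experiment Greene reports.

    import random

    def play_round(strategies, endowment=10, multiplier=2.0):
        # Cooperators pay their whole endowment into the common pool;
        # the pool is doubled and split evenly among all players.
        pool = endowment * strategies.count("cooperate") * multiplier
        share = pool / len(strategies)
        return [share if s == "cooperate" else endowment + share
                for s in strategies]

    def simulate(n_players=20, n_rounds=10, seed=1):
        random.seed(seed)
        strategies = ["cooperate"] * (n_players - 1) + ["free_ride"]
        for r in range(1, n_rounds + 1):
            payoffs = play_round(strategies)
            print(f"round {r}: {strategies.count('cooperate')} cooperators, "
                  f"mean payoff {sum(payoffs) / len(payoffs):.1f}")
            # Free riders keep their endowment and still collect an equal
            # share of the pool, so they always out-earn cooperators; after
            # each round, one cooperator imitates the more profitable strategy.
            cooperators = [i for i, s in enumerate(strategies) if s == "cooperate"]
            if cooperators:
                strategies[random.choice(cooperators)] = "free_ride"

    simulate()

Run it and the cooperator count and the mean payoff dwindle together, round by round: the slide toward the barren commons described above, and exactly the slide that the punishment option taken up in the next section is designed to arrest.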

The Biological and Cultural Evolution of Morality

            Greene believes that humans evolved emotional mechanisms to prevent the various analogs of the Tragedy of the Commons from occurring so that we can live together harmoniously in tight-knit groups. The outcomes of multiple rounds of the Public Goods Game, for instance, tend to be far less dismal when players are given the opportunity to devote a portion of their own profits to punishing free riders. Most humans, it turns out, will be motivated by the emotion of anger to expend their own resources for the sake of enforcing fairness. Over several rounds, cooperation becomes the norm. Such an outcome has been replicated again and again, but researchers are always interested in factors that influence players’ strategies in the early rounds. Greene describes a series of experiments he conducted with David Rand and Martin Nowak, which were reported in an article in Nature in 2012. He writes,

…we conducted our own Public Goods Games, in which we forced some people to decide quickly (less than ten seconds) and forced others to decide slowly (more than ten seconds). As predicted, forcing people to decide faster made them more cooperative and forcing people to slow down made them less cooperative (more likely to free ride). In other experiments, we asked people, before playing the Public Goods Game, to write about a time in which their intuitions served them well, or about a time in which careful reasoning led them astray. Reflecting on the advantages of intuitive thinking (or the disadvantages of careful reflection) made people more cooperative. Likewise, reflecting on the advantages of careful reasoning (or the disadvantages of intuitive thinking) made people less cooperative. (62)

These results offer strong support for Greene’s dual-process theory of morality, and they even hint at the possibility that the manual mode is fundamentally selfish or amoral—in other words, that the philosophers have been right all along in deferring to human intuitions about right and wrong.

            As good as our intuitive moral sense is for preventing the Tragedy of the Commons, however, when given free rein in a society composed of large groups of people who are strangers to one another, each group with its own culture and priorities, our natural moral settings bring about an altogether different tragedy. Greene labels it the Tragedy of Commonsense Morality. He explains,

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups). (23)

Expanding on the story behind the Tragedy of the Commons, Greene describes what would happen if several groups, each having developed its own unique solution for making sure the commons were protected from overgrazing, were suddenly to come into contact with one another on a transformed landscape called the New Pastures. Each group would likely harbor suspicions against the others, and when it came time to negotiate a new set of rules to govern everybody the groups would all show a significant, though largely unconscious, bias in favor of their own members and their own ways.

The origins of moral psychology as a field can be traced to both developmental and evolutionary psychology. Seminal research conducted at Yale’s Infant Cognition Center, led by Karen Wynn, Kiley Hamlin, and Paul Bloom (research Bloom describes in a charming and highly accessible book called Just Babies), has demonstrated that children as young as six months possess what we can easily recognize as a rudimentary moral sense. These findings suggest that much of the behavior we might have previously ascribed to lessons learned from adults is actually innate. Experiments based on game theory scenarios and thought-experiments like the trolley dilemmas are likewise thought to tap into evolved patterns of human behavior. Yet when University of British Columbia psychologist Joseph Henrich teamed up with several anthropologists to see how people living in various small-scale societies responded to game theory scenarios like the Prisoner’s Dilemma and the Public Goods Game, they discovered a great deal of variation. On the one hand, then, human moral intuitions seem to be rooted in emotional responses present at, or at least close to, birth, but on the other hand cultures vary widely in their conventional responses to classic dilemmas. These differences between cultural conceptions of right and wrong are in large part responsible for the conflict Greene envisions in his Parable of the New Pastures.

But how can a moral sense be both innate and culturally variable? “As you might expect,” Greene explains, “the way people play these games reflects the way they live.” People in some cultures rely much more heavily on cooperation to procure their sustenance, as is the case with the Lamalera of Indonesia, who live off the meat of whales they hunt in groups. Cultures also vary in how much they rely on market economies as opposed to less abstract and less formal modes of exchange. Just as people adjust the way they play economic games in response to other players’ moves, people acquire habits of cooperation based on the circumstances they face in their particular societies. Regarding the differences between small-scale societies in common game theory strategies, Greene writes,

Henrich and colleagues found that payoffs to cooperation and market integration explain more than two thirds of the variation across these cultures. A more recent study shows that, across societies, market integration is an excellent predictor of altruism in the Dictator Game. At the same time, many factors that you might expect to be important predictors of cooperative behavior—things like an individual’s sex, age, and relative wealth, or the amount of money at stake—have little predictive power. (72)

In much the same way humans are programmed to learn a language and acquire a set of religious beliefs, they also come into the world with a suite of emotional mechanisms that make up the raw material for what will become a culturally calibrated set of moral intuitions. The specific language and religion we end up with is of course dependent on the social context of our upbringing, just as our specific moral settings will reflect those of other people in the societies we grow up in.

Jonathan Haidt and Tribal Righteousness

In our modern industrial society, we actually have some degree of choice when it comes to our cultural affiliations, and this freedom opens the way for heritable differences between individuals to play a larger role in our moral development. Such differences are nowhere more apparent than in the realm of politics, where nearly all citizens occupy some point on a continuum between conservative and liberal. According to Greene’s fellow moral psychologist Jonathan Haidt, we have precious little control over our moral responses because, in his view, reason only comes into play to justify actions and judgments we’ve already made. In his fascinating 2012 book The Righteous Mind, Haidt insists,

Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth. (50)

To explain the moral divide between right and left, Haidt points to the findings of his own research on what he calls Moral Foundations, six dimensions underlying our intuitions about moral and immoral actions. Conservatives tend to endorse judgments based on all six of the foundations, valuing loyalty, authority, and sanctity much more than liberals, who focus more exclusively on care for the disadvantaged, fairness, and freedom from oppression. Since our politics emerge from our moral intuitions and reason merely serves as a sort of PR agent to rationalize judgments after the fact, Haidt enjoins us to be more accepting of rival political groups—after all, you can’t reason with them.

            Greene objects both to Haidt’s Moral Foundations theory and to his prescription for a politics of complementarity. The responses to questions representing all the moral dimensions in Haidt’s studies form two clusters on a graph, Greene points out, not six, suggesting that the differences between conservatives and liberals are attributable to some single overarching variable as opposed to several individual tendencies. Furthermore, the specific content of the questions Haidt uses to flesh out the values of his respondents has a critical limitation. Greene writes,

According to Haidt, American social conservatives place greater value on respect for authority, and that’s true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. (339)

The same limitation applies to the loyalty and sanctity foundations. Conservatives feel little loyalty toward the atheists and Muslims among their fellow Americans. Nor do they recognize the sanctity of mosques or Hindu holy texts. Greene goes on,

American social conservatives are not best described as people who place special value on authority, sanctity, and loyalty, but rather as tribal loyalists—loyal to their own authorities, their own religion, and themselves. This doesn’t make them evil, but it does make them parochial, tribal. In this they’re akin to the world’s other socially conservative tribes, from the Taliban in Afghanistan to European nationalists. According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise may be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic. (340)

Greene believes such persuasion is possible, even with regard to emotionally and morally charged controversies, because he sees a potentially much greater role for manual-mode thinking than Haidt allows.

Metamorality on the New Pastures

            Throughout The Righteous Mind, Haidt argues that the moral philosophers who laid the foundations of modern liberal and academic conceptions of right and wrong gave short shrift to emotions and intuitions—that they gave far too much credit to our capacity for reason. To be fair, Haidt does honor the distinction between descriptive and prescriptive theories of morality, but he nonetheless gives the impression that he considers liberal morality to be somewhat impoverished. Greene sees this attitude as thoroughly wrongheaded. Responding to Haidt’s metaphor comparing his Moral Foundations to taste buds—with the implication that the liberal palate is more limited in the range of flavors it can appreciate—Greene writes,

The great philosophers of the Enlightenment wrote at a time when the world was rapidly shrinking, forcing them to wonder whether their own laws, their own traditions, and their own God(s) were any better than anyone else’s. They wrote at a time when technology (e.g., ships) and consequent economic productivity (e.g., global trade) put wealth and power into the hands of a rising educated class, with incentives to question the traditional authorities of king and church. Finally, at this time, natural science was making the world comprehensible in secular terms, revealing universal natural laws and overturning ancient religious doctrines. Philosophers wondered whether there might also be universal moral laws, ones that, like Newton’s law of gravitation, applied to members of all tribes, whether or not they knew it. Thus, the Enlightenment philosophers were not arbitrarily shedding moral taste buds. They were looking for deeper, universal moral truths, and for good reason. They were looking for moral truths beyond the teachings of any particular religion and beyond the will of any earthly king. They were looking for what I’ve called a metamorality: a pan-tribal, or post-tribal, philosophy to govern life on the new pastures. (338-9)

While Haidt insists we must recognize the centrality of intuitions even in this civilization nominally ruled by reason, Greene points out that it was skepticism of old, seemingly unassailable and intuitive truths that opened up the world and made modern industrial civilization possible in the first place.  

            As Haidt explains, though, conservative morality serves people well in certain regards. Christian churches, for instance, encourage charity and foster a sense of community few secular institutions can match. But these advantages at the level of parochial groups have to be weighed against the problems tribalism inevitably leads to at higher levels. This is, in fact, precisely the point Greene created his Parable of the New Pastures to make. He writes,

The Tragedy of the Commons is averted by a suite of automatic settings—moral emotions that motivate and stabilize cooperation within limited groups. But the Tragedy of Commonsense Morality arises because of automatic settings, because different tribes have different automatic settings, causing them to see the world through different moral lenses. The Tragedy of the Commons is a tragedy of selfishness, but the Tragedy of Commonsense Morality is a tragedy of moral inflexibility. There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives. How should they think? The answer is now obvious: They should shift into manual mode. (172)

Greene argues that whenever we, as a society, are faced with a moral controversy—as with issues like abortion, capital punishment, and tax policy—our intuitions will not suffice because our intuitions are the very basis of our disagreement.

            Watching the conclusion to season four of Breaking Bad, most viewers probably responded to finding out that Walt had poisoned Brock by thinking that he’d become a monster—at least at first. Indeed, the currently dominant academic approach to art criticism involves taking a stance, both moral and political, with regard to a work’s models and messages. In The New Yorker, for instance, Emily Nussbaum disparages viewers of Breaking Bad for failing to condemn Walt, writing,

When Brock was near death in the I.C.U., I spent hours arguing with friends about who was responsible. To my surprise, some of the most hard-nosed cynics thought it inconceivable that it could be Walt—that might make the show impossible to take, they said. But, of course, it did nothing of the sort. Once the truth came out, and Brock recovered, I read posts insisting that Walt was so discerning, so careful with the dosage, that Brock could never have died. The audience has been trained by cable television to react this way: to hate the nagging wives, the dumb civilians, who might sour the fun of masculine adventure. “Breaking Bad” increases that cognitive dissonance, turning some viewers into not merely fans but enablers. (83)

To arrive at such an assessment, Nussbaum must reduce the show to the impact she assumes it will have on less sophisticated fans’ attitudes and judgments. But the really troubling aspect of this type of criticism is that it encourages scholars and critics to indulge their impulse toward self-righteousness when faced with challenging moral dilemmas; in other words, it encourages them to give voice to their automatic modes precisely when they should be shifting to manual mode. Thus, Nussbaum neglects outright the very details that make Walt’s scenario compelling, completely forgetting that by making Brock sick—and, yes, risking his life—he was able to save Hank, Skyler, and his own two children.

But how should we go about arriving at a resolution to moral dilemmas and political controversies if we agree we can’t always trust our intuitions? Greene believes that, while our automatic modes recognize certain acts as wrong and certain others as a matter of duty to perform, in keeping with deontological ethics, whenever we switch to manual mode, the focus shifts to weighing the relative desirability of each option’s outcomes. In other words, manual-mode thinking is consequentialist. And, since we tend to assess outcomes according to their impact on other people, favoring those that improve the quality of their experiences the most, or detract from it the least, Greene argues that whenever we slow down and think through moral dilemmas deliberately we become utilitarians. He writes,

If I’m right, this convergence between what seems like the right moral philosophy (from a certain perspective) and what seems like the right moral psychology (from a certain perspective) is no accident. If I’m right, Bentham and Mill did something fundamentally different from all of their predecessors, both philosophically and psychologically. They transcended the limitations of commonsense morality by turning the problem of morality (almost) entirely over to manual mode. They put aside their inflexible automatic settings and instead asked two very abstract questions. First: What really matters? Second: What is the essence of morality? They concluded that experience is what ultimately matters, and that impartiality is the essence of morality. Combining these two ideas, we get utilitarianism: We should maximize the quality of our experience, giving equal weight to the experience of each person. (173)
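
Compressed into a formula (my own notation; Greene keeps the argument verbal in the book), the prescription is to choose whichever available action maximizes the unweighted sum of everyone’s welfare:

    a^{*} = \arg\max_{a} \sum_{i=1}^{n} u_{i}(a)

where u_i(a) stands for the quality of person i’s experience under action a. Summing the terms without weights is all that “giving equal weight to the experience of each person” amounts to; tribal morality, by contrast, quietly sets some people’s terms to zero.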

If you cite an authority recognized only by your own tribe—say, the Bible—in support of a moral argument, then members of other tribes will either simply discount your position or counter it with pronouncements by their own authorities. If, on the other hand, you argue for a law or a policy by citing evidence that implementing it would mean longer, healthier, happier lives for the citizens it affects, then only those seeking to establish the dominion of their own tribe can discount your position (which of course isn’t to say they can’t offer rival interpretations of your evidence).

            If we turn commonsense morality on its head and evaluate the consequences of giving our intuitions priority over utilitarian accounting, we can find countless circumstances in which being overly moral is to everyone’s detriment. Ideas of justice and fairness allow far too much space for selfish and tribal biases, whereas the math behind mutually optimal outcomes based on compromise tends to be harder to fudge. Greene reports, for instance, the findings of a series of experiments conducted by Fieke Harinck and colleagues at the University of Amsterdam in 2000. Lawyers negotiating on behalf of either the prosecution or the defense were told to focus either on serving justice or on getting the best outcome for their clients. The negotiations in the first condition almost always ended at loggerheads. Greene explains,

Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. (88)

Framing an argument as an effort to establish who was right and who was wrong is like drawing a line in the sand—it activates tribal attitudes pitting us against them, while treating negotiations more like an economic exchange circumvents these tribal biases.

Challenges to Utilitarianism

But do we really want to suppress our automatic moral reactions in favor of deliberative accountings of the greatest good for the greatest number? Deontologists have posed some effective challenges to utilitarianism in the form of thought-experiments that seem to show that efforts to improve the quality of experiences would lead to atrocities. For instance, Greene recounts how, in a high school debate, he was confronted with the hypothetical of a surgeon who could save five sick people by killing one healthy one. Then there’s the so-called Utility Monster, who experiences such happiness when eating humans that it quantitatively outweighs the suffering of those being eaten. More down-to-earth examples feature a scapegoat convicted of a crime to prevent rioting by people who are angry about police ineptitude, and the use of torture to extract information from a prisoner that could prevent a terrorist attack. The most influential challenge to utilitarianism, however, was leveled by the political philosopher John Rawls when he pointed out that it could be used to justify the enslavement of a minority by a majority.

Greene’s responses to these criticisms make up one of the most surprising, important, and fascinating parts of Moral Tribes. First, highly contrived thought-experiments about Utility Monsters and circumstances in which pushing a guy off a bridge is guaranteed to stop a trolley may indeed prove that utilitarianism is not true in any absolute sense. But whether or not such moral absolutes even exist is a contentious issue in its own right. Greene explains,

I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough. (275-6)

One source of confusion regarding the slavery issue is the equation of happiness with money; slave owners probably could make profits in excess of the losses sustained by the slaves. But money is often a poor index of happiness. Greene underscores this point by asking us to consider how much someone would have to pay us to sell ourselves into slavery. “In the real world,” he writes, “oppression offers only modest gains in happiness to the oppressors while heaping misery upon the oppressed” (284-5).

            Another failing of the thought-experiments meant to undermine utilitarianism is the shortsightedness of the supposedly obvious responses. The crimes of the murderous doctor and the scapegoating law officers may indeed produce short-term increases in happiness, but if the secret gets out, healthy and innocent people will live in fear, knowing they can’t trust doctors and law officers. The same logic applies to the objection that utilitarianism would force us to become infinitely charitable, since we can almost always afford to be more generous than we currently are. But how long could we serve as so-called happiness pumps before burning out, becoming miserable, and thus losing the capacity to make anyone else happier? Greene writes,

If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits. (258)

Greene seems to be endorsing what philosophers call "rule utilitarianism." Rather than approaching every choice by calculating the likely outcomes, we would, as a society, be better served by deciding on some rules for everyone to adhere to. It just may be possible for a doctor to increase happiness through murder in a particular set of circumstances—but most people would vociferously object to a rule legitimizing the practice.

The concept of human rights may present another challenge to Greene in his championing of consequentialism over deontology. It is our duty, after all, to recognize the rights of every human, and we ourselves have no right to disregard someone else’s rights no matter what benefit we believe might result from doing so. In his book The Better Angels of Our Nature, Steven Pinker, Greene’s colleague at Harvard, attributes much of the steep decline in rates of violent death over the past three centuries to a series of what he calls Rights Revolutions, the first of which began during the Enlightenment. But the problem with arguments that refer to rights, Greene explains, is that controversies arise for the very reason that people don’t agree on which rights we should recognize. He writes,

Thus, appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribespeople feel, you can always posit the existence of a right that corresponds to your feelings. If you feel that abortion is wrong, you can talk about a “right to life.” If you feel that outlawing abortion is wrong, you can talk about a “right to choose.” If you’re Iran, you can talk about your “nuclear rights,” and if you’re Israel you can talk about your “right to self-defense.” “Rights” are nothing short of brilliant. They allow us to rationalize our gut feelings without doing any additional work. (302)

The only way to resolve controversies over which rights we should actually recognize and which rights we should prioritize over others, Greene argues, is to apply utilitarian reasoning.

Ideology and Epistemology

In his discussion of the proper use of the language of rights, Greene comes closer than in any other section of Moral Tribes to explicitly articulating what strikes me as the most revolutionary idea that he and his fellow moral psychologists are suggesting—albeit as of yet only implicitly. In his advocacy for what he calls “deep pragmatism,” Greene isn’t merely applying evolutionary theories to an old philosophical debate; he’s actually posing a subtly different question. The numerous thought-experiments philosophers use to poke holes in utilitarianism may not have much relevance in the real world—but they do undermine any claim utilitarianism may have on absolute moral truth. Greene’s approach is therefore to eschew any effort to arrive at absolute truths, including truths pertaining to rights. Instead, in much the same way scientists accept that our knowledge of the natural world is mediated by theories, which only approach the truth asymptotically, never capturing it with any finality, Greene intimates that the important task in the realm of moral philosophy isn’t to arrive at a full accounting of moral truths but rather to establish a process for resolving moral and political dilemmas.

What’s needed, in other words, isn’t a rock-solid moral ideology but a workable moral epistemology. And, just as empiricism serves as the foundation of the epistemology of science, Greene makes a convincing case that we could use utilitarianism as the basis of an epistemology of morality. Pursuing the analogy between scientific and moral epistemologies even further, we can compare theories, which stand or fall according to their empirical support, to individual human rights, which we afford and affirm according to their impact on the collective happiness of every individual in the society. Greene writes,

If we are truly interested in persuading our opponents with reason, then we should eschew the language of rights. This is, once again, because we have no non-question-begging (and non-utilitarian) way of figuring out which rights really exist and which rights take precedence over others. But when it’s not worth arguing—either because the question has been settled or because our opponents can’t be reasoned with—then it’s time to start rallying the troops. It’s time to affirm our moral commitments, not with wonky estimates of probabilities but with words that stir our souls. (308-9)

Rights may be the closest thing we have to moral truths, just as theories serve as our stand-ins for truths about the natural world, but even more important than rights or theories are the processes we rely on to establish and revise them.

A New Theory of Narrative

As if a philosophical revolution weren’t enough, moral psychology is also putting in place what could be the foundation of a new understanding of the role of narratives in human lives. At the heart of every story is a conflict between competing moral ideals. In commercial fiction, there tends to be a character representing each side of the conflict, and audiences can be counted on to favor one side over the other—the good, altruistic guys over the bad, selfish guys. In more literary fiction, on the other hand, individual characters are faced with dilemmas pitting various modes of moral thinking against each other. In season one of Breaking Bad, for instance, Walter White famously writes a list on a notepad of the pros and cons of murdering the drug dealer restrained in Jesse’s basement. Everyone, including Walt, feels that killing the man is wrong, but if they let him go, Walt and his family will be at risk of retaliation. This dilemma is in fact quite similar to ones he faces in each of the following seasons, right up until he has to decide whether or not to poison Brock. Trying to work out what Walt should do, and anxiously anticipating what he will do, are mental exercises few can help engaging in as they watch the show.

            The current reigning conception of narrative in academia explains the appeal of stories by suggesting it derives from their tendency to fulfill conscious and unconscious desires, most troublesomely our desires to have our prejudices reinforced. We like to see men behaving in ways stereotypically male, women stereotypically female, minorities stereotypically black or Hispanic, and so on. Cultural products like works of art, and even scientific findings, are embraced, the reasoning goes, because they cement the hegemony of various dominant categories of people within the society. This tradition in arts scholarship and criticism can in large part be traced back to psychoanalysis, but it has developed over the last century to incorporate both the predominant view of language in the humanities and the cultural determinism espoused by many scholars in the service of various forms of identity politics. 

The postmodern ideology that emerged from the convergence of these schools is marked by a suspicion that science is often little more than a veiled effort at buttressing the political status quo, and its preeminent thinkers deliberately set themselves apart from the humanist and Enlightenment traditions that held sway in academia until the middle of the last century by writing in byzantine, incoherent prose. Even though there could be no rational way either to support or to challenge postmodern ideas, scholars still take them as grounds for accusing both scientists and storytellers of using their work to further reactionary agendas.

For anyone who recognizes the unparalleled power of science both to advance our understanding of the natural world and to improve the conditions of human lives, postmodernism stands out as a catastrophic wrong turn, not just in academic history but in the moral evolution of our civilization. The equation of narrative with fantasy is a bizarre fantasy in its own right. Attempting to explain the appeal of a show like Breaking Bad by suggesting that viewers have an unconscious urge to be diagnosed with cancer and to subsequently become drug manufacturers is symptomatic of intractable institutional delusion. And, as Pinker recounts in Better Angels, literature, and novels in particular, were likely instrumental in bringing about the shift in consciousness toward greater compassion for greater numbers of people that resulted in the unprecedented decline in violence beginning in the second half of the nineteenth century.

Yet, when it comes to arts scholarship, postmodernism is just about the only game in town. Granted, the writing in this tradition has progressed a great deal toward greater clarity, but the focus on identity politics has intensified to the point of hysteria: you’d be hard-pressed to find a major literary figure who hasn’t been accused of misogyny at one point or another, and any scientist who dares study something like gender differences can count on having her motives questioned and her work lampooned by well-intentioned, well-indoctrinated squads of well-poisoning liberal wags.

            When Emily Nussbaum complains about viewers of Breaking Bad being lulled by the “masculine adventure” and the digs against “nagging wives” into becoming enablers of Walt’s bad behavior, she’s doing exactly what so many of us were taught to do in academic courses on literary and film criticism, applying a postmodern feminist ideology to the show—and completely missing the point. As the series opens, Walt is deliberately portrayed as downtrodden and frustrated, and Skyler’s bullying is an important part of that dynamic. But the pleasure of watching the show doesn’t come so much from seeing Walt get out from under Skyler’s thumb—he never really does—as it does from anticipating and fretting over how far Walt will go in his criminality, goaded on by all that pent-up frustration. Walt shows a similar concern for himself, worrying over what effect his exploits will have on who he is and how he will be remembered. We see this in season three when he becomes obsessed by the “contamination” of his lab—which turns out to be no more than a house fly—and at several other points as well. Viewers are not concerned with Walt because he serves as an avatar acting out their fantasies (or else the show would have a lot more nude scenes with the principal of the school he teaches in). They’re concerned because, at least at the beginning of the show, he seems to be a good person and they can sympathize with his tribulations.

The much more solidly grounded view of narrative inspired by moral psychology suggests that common themes in fiction are not reflections or reinforcements of some dominant culture, but rather emerge from aspects of our universal human psychology. Our feelings about characters, according to this view, aren’t determined by how well they coincide with our abstract prejudices; on the contrary, we favor the types of people in fiction we would favor in real life. Indeed, if the story is any good, we will have to remind ourselves that the people whose lives we’re tracking really are fictional. Greene doesn’t explore the intersection between moral psychology and narrative in Moral Tribes, but he does give a nod to what we can only hope will be a burgeoning field when he writes, 

Nowhere is our concern for how others treat others more apparent than in our intense engagement with fiction. Were we purely selfish, we wouldn’t pay good money to hear a made-up story about a ragtag group of orphans who use their street smarts and quirky talents to outfox a criminal gang. We find stories about imaginary heroes and villains engrossing because they engage our social emotions, the ones that guide our reactions to real-life cooperators and rogues. We are not disinterested parties. (59)

Many people ask why we care so much about people who aren’t even real, but we only ever reflect on the fact that what we’re reading or viewing is a simulation when we’re not sufficiently engrossed by it.

            Was Nussbaum completely wrong to insist that Walt went beyond the pale when he poisoned Brock? She certainly showed the type of myopia Greene attributes to the automatic mode by forgetting Walt saved at least four lives by endangering the one. But most viewers probably had a similar reaction. The trouble wasn’t that she was appalled; it was that her postmodernism encouraged her to unquestioningly embrace and give voice to her initial feelings. Greene writes, 

It’s good that we’re alarmed by acts of violence. But the automatic emotional gizmos in our brains are not infinitely wise. Our moral alarm systems think that the difference between pushing and hitting a switch is of great moral significance. More important, our moral alarm systems see no difference between self-serving murder and saving a million lives at the cost of one. It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy. (253)

Greene doesn’t reveal whether he’s a Breaking Bad fan or not, but his discussion of the footbridge dilemma gives readers a good indication of how he’d respond to Walt’s actions.

If you don’t feel that it’s wrong to push the man off the footbridge, there’s something wrong with you. I, too, feel that it’s wrong, and I doubt that I could actually bring myself to push, and I’m glad that I’m like this. What’s more, in the real world, not pushing would almost certainly be the right decision. But if someone with the best of intentions were to muster the will to push the man off the footbridge, knowing for sure that it would save five lives, and knowing for sure that there was no better alternative, I would approve of this action, although I might be suspicious of the person who chose to perform it. (251)

The overarching theme of Breaking Bad, which Nussbaum fails utterly to comprehend, is the transformation of a good man into a bad one. When he poisons Brock, we’re glad he succeeded in saving his family, and some of us are even okay with his methods, but we’re worried—suspicious even—about what his ability to go through with it says about him. Over the course of the series, we’ve found ourselves rooting for Walt, and we’ve come to really like him. We don’t want to see him break too bad. And however bad he does break, we can’t help hoping for his redemption. Since he’s a valuable member of our tribe, we’re loath even to consider that it might be time for him to go.

Also read:

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

Let’s Play Kill Your Brother: Fiction as a Moral Dilemma Game

And

How Violent Fiction Works: Rohan Wilson’s “The Roving Party” and James Wood’s Sanguinary Sublime from Conrad to McCarthy

Dennis Junk

The World Perspective in War and Peace: Tolstoy’s Genius for Integrating Multiple Perspectives

As disappointing as the second half of “War and Peace” is, Tolstoy’s genius when it comes to perspective makes the first half one of the truly sublime reading experiences on offer to lovers of literature.

Sometime around the age of twenty, probably as I was reading James M. Cain’s novel Mildred Pierce, I settled on the narrative strategy I have preferred ever since. At the time, I would have called it third-person limited omniscient, but I would learn later, in a section of a class on nineteenth-century literature devoted to Jane Austen’s Emma, that the narrative style I always felt so compelled by was referred to more specifically by literary scholars as free indirect discourse. Regardless of the label, I had already been unconsciously emulating the style for some time by then in my own short stories. Some years later, I became quite fond of the reviews and essays of the literary critic James Wood, partly because he eschewed all the idiotic and downright fraudulent nonsense associated with postmodern pseudo-theories, but partly too because in his book How Fiction Works he both celebrated and expounded at length upon that same storytelling strategy that I found to be the most effective in pulling me into the dramas of fictional characters.

Free indirect discourse (or free indirect style, as it’s sometimes called) blends first-person with third-person narration, so that even when descriptions aren’t tagged by the author as belonging to the central character we readers can still assume what is being attended to and how it’s being rendered in words are revealing something of that character’s mind. In other words, the author takes the liberty of moving in and out of the character’s mind, detailing thoughts, actions, and outside conditions or events in whatever way most effectively represents—and even simulates—the drama of the story. It’s a tricky thing to master, demanding a sense of proportion and timing, a precise feeling for the key intersecting points of character and plot. And it has a limitation: you really can’t follow more than one character at a time, because doing so would upset the tone and pacing of the story, or else it would expose the shallowness of the author’s penetration. Jumping from one mind to another makes the details seem not so much like a manifestation of the characters’ psyches as a simple byproduct of the author’s writing habits.

            Fiction writers get around this limitation in a number of ways. Some break their stories into sections or chapters and give each one over to a different character. You have to be really good to pull this off successfully; it usually still ends up lending an air of shallowness to the story. Most really great works rendered in free indirect discourse—Herzog, Sabbath’s Theater, Mantel’s Cromwell novels—stick to just one character throughout, and, since the strategy calls for an intensely thorough imagining of the character, the authors tend to stick to protagonists who are somewhat similar to themselves. John Updike, whose linguistic talents were prodigious enough to set him apart even in an era of great literary masters, barely even attempted to bend his language to his characters, and so his best works, like those in the Rabbit series, featured characters who are at least a bit like Updike himself.  

            But what if an author could so thoroughly imagine an entire cast of characters and have such a keen sense of every scene’s key dramatic points that she could incorporate their several perspectives without turning every page into a noisy and chaotic muddle? What if the trick could be pulled off with such perfect timing and proportion that readers’ attention would wash over the scene, from character to character spanning all the objects and accidents in between, without being thrown into confusion and without any attention being drawn to the presence of the author? Not many authors try it—it’s usually a mark of inexperience or lack of talent—but Leo Tolstoy somehow managed to master it.

War and Peace is the quintessentially huge and intimidating novel—more of a punch line to jokes about pretentious literature geeks than a great masterwork everyone feels obliged to read at some point in her life. But, as often occurs when I begin reading one of the classics, I was surprised to discover not just how unimposing it is page-by-page but how immersed in the story I became by the end of the first few chapters. My general complaint about novels from the nineteenth century is that the authors wrote from too great a distance from their characters, in prose that’s too formal and wooden. It’s impossible to tell if the lightness of touch in War and Peace, as I’m reading it, is more Tolstoy’s or more the translators Richard Pevear and Larissa Volokhonsky’s, but the original author’s handling of perspective is what shines through most spectacularly.

            I’m only as far into the novel as the beginning of volume II (a little past page 300 of over 1200 pages), but much of Tolstoy’s mastery is already on fine display. The following long paragraph features the tragically plain Princess Marya, who for financial reasons is being presented to the handsome Prince Anatole as a candidate for a mutually advantageous marriage. Marya’s pregnant sister-in-law, Liza, referred to as “the little princess” and described as having a tiny mustache on her too-short upper lip, has just been trying, with the help of the pretty French servant Mademoiselle Bourienne, to make her look as comely as possible for her meeting with the young prince and his father Vassily. But Marya has become frustrated with her own appearance, and, aside from her done-up hair, has decided to present herself as she normally is. The scene begins after the two men have arrived and Marya enters the room.

When Princess Marya came in, Prince Vassily and his son were already in the drawing room, talking with the little princess and Mlle Bourienne. When she came in with her heavy step, planting her heels, the men and Mlle Bourienne rose, and the little princess, pointing to her said, “Voila Marie!” Princess Marya saw them all, and saw them in detail. She saw the face of Prince Vassily, momentarily freezing in a serious expression at the sight of the princess, and the face of the little princess, curiously reading on the faces of the guests the impression Marie made. She also saw Mlle Bourienne with her ribbon, and her beautiful face, and her gaze—lively as never before—directed at him; but she could not see him, she saw only something big, bright, and beautiful, which moved towards her as she came into the room. Prince Vassily went up to her first, and she kissed the bald head that bowed over her hand, and to his words replied that, on the contrary, she remembered him very well. Then Anatole came up to her. She still did not see him. She only felt a gentle hand firmly take hold of her hand, and barely touched the white forehead with beautiful, pomaded blond hair above it. When she looked at him, his beauty struck her. Anatole, the thumb of his right hand placed behind a fastened button of his uniform, chest thrust out, shoulders back, swinging his free leg slightly, and inclining his head a little, gazed silently and cheerfully at the princess, obviously without thinking of her at all. Anatole was not resourceful, not quick and eloquent in conversation, but he had instead a capacity, precious in society, for composure and unalterable assurance. When an insecure man is silent at first acquaintance and shows an awareness of the impropriety of this silence and a wish to find something to say, it comes out badly; but Anatole was silent, swung his leg, and cheerfully observed the princess’s hairstyle. It was clear that he could calmly remain silent like that for a very long time. “If anyone feels awkward because of this silence, speak up, but I don’t care to,” his look seemed to say. Besides that, in Anatole’s behavior with women there was a manner which more than any other awakens women’s curiosity, fear, and even love—a manner of contemptuous awareness of his own superiority. As if he were saying to them with his look: “I know you, I know, but why should I bother with you? And you’d be glad if I did!” Perhaps he did not think that when he met women (and it is even probable that he did not, because he generally thought little), but such was his look and manner. The princess felt it, and, as if wishing to show him that she dared not even think of interesting him, turned to the old prince. The conversation was general and lively, thanks to the little princess’s voice and the lip with its little mustache which kept rising up over her white teeth. She met Prince Vassily in that jocular mode often made use of by garrulously merry people, which consists in the fact that, between the person thus addressed and oneself, there are supposed to exist some long-established jokes and merry, amusing reminiscences, not known to everyone, when in fact there are no such reminiscences, as there were none between the little princess and Prince Vassily. Prince Vassily readily yielded to this tone; the little princess also involved Anatole, whom she barely knew, in this reminiscence of never-existing funny incidents. 
Mlle Bourienne also shared in these common reminiscences, and even Princess Marya enjoyed feeling herself drawn into this merry reminiscence. (222-3)

In this pre-film era, Tolstoy takes an all-seeing perspective that’s at once cinematic and lovingly close up to his characters, suggesting the possibility that much of the deep focus on individual minds in contemporary fiction is owing to an urge for the one narrative art form to occupy a space left untapped by the other. Still, as simple as Tolstoy’s incorporation of so many minds into the scope of his story may seem as it lies neatly inscribed and eternally memorialized on the page, a fait accompli, his uncanny sense of where to point the camera, as it were, to achieve the most evocative and forwardly propulsive impact in the scene is one not many writers can be counted on to possess. Again, the pitfall lesser talents fall prey to when trying to integrate multiple perspectives like this arises out of an inability to avoid advertising their own presence, which entails a commensurate detraction from the naturalness and verisimilitude of the characters. The way Tolstoy maintains his own invisibility in those perilously well-lit spaces between his characters begins with the graceful directness and precision of his prose but relies a great deal as well on his customary method of characterization.

For Tolstoy, each character’s experience is a particular instance of a much larger trend. So, when the lens of his descriptions focuses in on a character in a particular situation, the zooming doesn’t occur merely in the three-dimensional space of what a camera would record but in the landscape of recognizable human experience as well. You see this in the lines above about how "in Anatole’s behavior with women there was a manner which more than any other awakens women’s curiosity, fear, and even love," and the "jocular mode often made use of by garrulously merry people." 

Here is a still more illustrative example from when the Countess Rostov is reflecting on a letter from her son Nikolai informing her that he was wounded in battle but also that he’s been promoted to a higher rank.

How strange, extraordinary, joyful it was that her son—that son who twenty years ago had moved his tiny limbs barely perceptibly inside her, that son over whom she had quarreled with the too-indulgent count, that son who had first learned to say “brush,” and then “mamma,” that this son was now there, in a foreign land, in foreign surroundings, a manly warrior, alone, with no help or guidance, and doing there some manly business of his own. All the worldwide, age-old experience showing that children grow in an imperceptible way from the cradle to manhood, did not exist for the countess. Her son’s maturing had been at every point as extraordinary for her as if there had not been millions upon millions of men who had matured in just the same way. As it was hard to believe twenty years ago that the little being who lived somewhere under her heart would start crying, and suck her breast, and begin to talk, so now it was hard to believe that this same being could be the strong, brave man, an example to sons and people, that he was now, judging by his letter. (237)

There’s only a single person in the history of the world who would have these particular feelings in response to this particular letter, but at the same time these same feelings will be familiar—or at least recognizable—to every last person who reads the book.  

While reading War and Peace, you have the sense, not so much that you’re being told a grand and intricate story by an engagingly descriptive author, but that you’re witnessing snippets of countless interconnected lives, selections from a vast historical multitude that are both arbitrary and yet, owing to that very connectedness, significant. Tolstoy shifts breezily between the sociological and the psychological with such finesse that it’s only in retrospect that you realize what he’s just done. As an epigraph to his introduction, Pevear quotes Isaac Babel: “If the world could write by itself, it would write like Tolstoy.”  

The biggest drawback to this approach (if you don’t count its reliance on ideas about universals in human existence, which are a bit unfashionable of late) is that since there’s no way to know how long the camera will continue to follow any given character, or who it will be pointed at next, emotional investments in any one person have little chance to accrue any interest. For all the forward momentum of looming marriages and battle deaths, there’s little urgency attached to the fate of any single individual. Indeed, there’s a pervasive air of comic inconsequence, sometimes bordering on slapstick, in all the glorious strivings and abrupt pratfalls. (Another pleasant surprise in store for those who tackle this daunting book is how funny it is.) Of course, with a novel that stretches beyond the thousand-page mark, an author has plenty of time to train readers which characters they can expect to hear more about. Once that process begins, it’s difficult to laugh at their disappointments and tragedies. 

Also read:

How Violent Fiction Works: Rohan Wilson’s “The Roving Party” and James Wood’s Sanguinary Sublime from Conrad to McCarthy

And:

What’s the Point of Difficult Reading?

And:

Who Needs Complex Narratives?: Tim Parks’ Enlightened Cynicism

Dennis Junk

Muddling through "Life after Life": A Reflection on Plot and Character in Kate Atkinson’s New Novel

Kate Atkinson’s “Life after Life” is absorbing and thought-provoking. But it also leaves the reader feeling wrung out. The issue is that if you’re going to tinker with one element of storytelling, the other elements must be rock solid to hold the entire structure together.

Every novelist wants to be the one who rewrites the rules of fiction. But it’s remarkable how for all the experimentations over the past couple of centuries most of the basic elements of storytelling have yet to be supplanted. To be sure, a few writers have won over relatively small and likely ephemeral audiences with their scofflaw writerly antics. But guys like D.F. Wallace and Don DeLillo (and even post-Portrait Joyce) only succeeded by appealing to readers’ desire to fit in with the reigning cohort of sophisticates. If telling stories can be thought of as akin to performing magic, with the chief sleight-of-hand being to make the audience forget for a moment that what they’re witnessing is, after all, just a story, then the meager success of experimental fiction over the past few decades can be ascribed to the way it panders to a subset of readers who like to think of themselves as too cool to believe in magic. In the same way we momentarily marvel, not at a magician’s skillfulness at legerdemain, but at the real magic we’ve just borne witness to, the feat of story magic is accomplished by misdirecting attention away from the mechanics of narrative toward the more compelling verisimilitude of the characters and the concrete immediacy of their dilemmas. The authors of experimental works pointedly eschew misdirection and instead go out of their way to call attention to the inner workings of narrative, making for some painfully, purposefully bad stories which may nonetheless garner a modicum of popularity because each nudge and wink to the reader serves as a sort of secret hipster handshake.

That the citadel of realism has withstood innumerable full-on assaults suggests that the greats who first codified the rules of story writing—the Homers, the Shakespeares, the Austens, the Flauberts, the pre-Ulysses Joyces—weren’t merely making them up whole-cloth and hoping they would catch on, but rather discovering them as entry points to universal facets of the human imagination. Accordingly, the value of any given attempt at fashioning a new narrative mode isn’t exclusively determined by its popularity or staying power. Negative results in fiction, just as in science, can be as fascinating and as fruitful as positive findings because designs with built-in flaws can foster appreciation for more finely tuned and fully functional works. Aspiring novelists might even view the myriad frustrations of experimental fiction as comprising a trail of clues to follow along the path to achievements more faithful to the natural aims of the art form. Such an approach may strike aficionados of the avant-garde as narrow-minded or overly constraining. But writing must operate within a limited set of parameters to be recognized and appreciated as belonging to the category of literary art. And within that category, both societies and individuals find the experience of reading some stories to be more fulfilling, more impactful, more valuable than others. Tastes, societal and individual, along with other factors extrinsic to the story, cannot be discounted. But, though the firmness with which gradations of quality can be established is disputable, the notion that no reliable basis at all could exist for distinguishing the best of stories from the worst resides in a rather remote region on the plausibility scale.

As an attempt at innovation, Kate Atkinson’s latest novel is uniquely instructive because it relies on a combination of traditional and experimental storytelling techniques. Life after Life has two design flaws, one built in deliberately, and the other, more damaging one born either of a misconception or a miscalculation. The deliberate flaw is the central conceit of the plot. Ursula Todd, whose birth in an English house called Fox Corner on a day of heavy snow in February of 1910 we witness again and again, meets with as many untimely demises, only to be granted a new beginning in the next chapter according to the author’s whimsy. Ursula isn’t ever fully aware of what occurred in the previous iterations of her expanding personal multiverse, but she has glimmerings, akin to intense déjà vu, that are at several points vivid enough to influence her decisions. A few tragic occurrences even leave traces on what Ursula describes as the “palimpsest” (506) of time pronounced enough to goad her into drastic measures. One of these instances, when the child Ursula pushes a maid named Bridget down the stairs at Fox Corner to prevent her from attending an Armistice celebration where she’ll contract the influenza that dooms them both, ends up being the point where Ursula as a character comes closest to transcending the abortive contrivances of the plot. But another one, her trying to prevent World War II by killing Hitler before he comes to power, only brings the novel’s second design flaw into sharper focus. Wouldn’t keeping Hitler from playing his now historical role be the first revision that occurred to just about anyone?

But for all the authorial manipulations Life after Life is remarkably readable. Atkinson’s prose and her mastery of scene place her among the best novelists working today. The narration rolls along with a cool precision and a casual sophistication that effortlessly takes on perspective after perspective without ever straying too far from Ursula. And the construction of the scenes as overlapping vignettes, each with interleaved time-travels of its own, often has the effect of engrossing your attention enough to distract you from any concern that the current timeline will be unceremoniously abandoned while also obviating, for the most part, any tedium of repetition. Some of the most devastating scenes occur in the chapters devoted to the Blitz, during which Ursula finds herself in the basement of a collapsed apartment building, once as a resident, and later as a volunteer for a rescue service. The first time through, Ursula is knocked unconscious by the explosion that topples the building. What she sees as she comes to slides with disturbing ease from the mundane to the macabre.

Looking up through the fractured floorboards and the shattered beams she could see a dress hanging limply on a coat hanger, hooked to a picture rail. It was the picture rail in the Miller’s lounge on the ground floor, Ursula recognized the wallpaper of sallow, overblown roses. She had seen Lavinia Nesbit on the stairs wearing the dress only this evening, when it had been the color of pea soup (and equally limp). Now it was a gray bomb-dust shade and had migrated down a floor. A few yards from her head she could see her own kettle, a big brown thing, surplus to requirements in Fox Corner. She recognized it from the thick twine wound around the handle one day long ago by Mrs. Glover. Everything was in the wrong place now, including herself. (272)

The narration then moves backward in time to detail how she ended up amid the rubble of the building, including an encounter with Lavinia on the stairs, before returning to that one hitherto innocuous item. Her neighbor had been wearing a brooch in the shape of a cat with a rhinestone for an eye.

Her attention was caught again by Lavinia Nesbit’s dress hanging from the Miller’s picture rail. But it wasn’t Lavinia Nesbit’s dress. A dress didn’t have arms in it. Not sleeves, but arms. With hands. Something on the dress winked at Ursula, a little cat’s eye caught by the crescent moon. The headless, legless body of Lavinia Nesbit herself was hanging from the Miller’s picture rail. It was so absurd that a laugh began to boil up inside Ursula. It never broke because something shifted—a beam, or part of the wall—and she was sprinkled with a shower of talcum-like dust. Her heart thumped uncontrollably in her chest. It was sore, a time-delay bomb waiting to go off. (286)

It’s hard not to imagine yourself in that basement as you read, right down to the absurd laugh that never makes it into existence. This is Atkinson achieving with élan one of the goals of genre fiction—and where else would we expect to find a line about the heroine’s heart thumping uncontrollably in her chest? But in inviting readers to occupy Ursula’s perspective Atkinson has had to empty some space.

            The seamlessness of the narration and the vivid, often lurid episodes captured in the unfailingly well-crafted scenes of Ursula’s many lives effect a degree of immersion in the story that successfully counterbalances the ejective effects of Atkinson’s experimentations with the plot. The experience of these opposing forces—being simultaneously pulled into and cast out of the story—is what makes Life after Life both so intriguing and so instructive. One of the qualities that make stories good is that their various elements operate outside the audience’s awareness. Just as the best performances in cinema are the ones that embody a broad range of emotion while allowing viewers to forget, at least for the moment, that what they’re witnessing is in fact a performance—you’re not watching Daniel Day-Lewis, for instance, but Abraham Lincoln—the best stories immerse readers to the point where they’re no longer considering the story as a story but anxious to discover what lies in store for the characters. True virtuosos in both cinema and fiction, like magicians, want you to have a direct encounter with what never happens and only marvel afterward at the virtuosity that must’ve gone into arranging the illusion. The trick for an author who wants to risk calling attention to the authored nature of the story is to find a way to enfold her manipulations into the reader’s experiences with the characters. Ursula’s many lives must be accepted and understood as an element of the universe in which the plot of Life after Life unfolds and as part of the struggles we hope to see her through by the end of the novel. Unfortunately, the second design flaw, the weakness of Ursula as a character, sabotages the endeavor.

           The most obvious comparison to the repetitious plot of Life after Life is to the 1993 movie Groundhog Day, in which Bill Murray plays a character, Phil Connors, who keeps waking up to re-live the same day. What makes audiences accept this blatantly unrealistic premise is that Phil responds to his circumstances in such a convincing way, co-opting our own disbelief. As the movie progresses, Phil adjusts to the new nature of reality by adopting a new set of goals, and by this point our attention is focused much more on his evolving values than on the potential distraction of the plot’s impossibility. Eventually, the liberties screenwriters Danny Rubin and Harold Ramis have taken with the plot become so intermingled with the character and his development that witnessing his transformations is as close to undergoing them ourselves as the medium can hope to bring us. While at first we might’ve resisted the contrivance, just as Phil does, by the end its implausibility couldn’t be any more perfectly beside the point. In other words, the character’s struggles and transformation are compelling enough to misdirect our attention away from the author’s manipulations. That’s the magic of the film.

In calling attention to the authoredness of the story within the confines of the story itself, Life after Life is also similar to Ian McEwan’s 2001 novel Atonement. But McEwan doesn’t drop the veil until near the end of the story; only then do we discover that one of the characters, Briony Tallis, is actually the author of everything we’ve been reading and that she has altered the events to provide a happier and more hopeful ending for two other characters whose lives she had, in her youthful naiveté, set on a tragic course. Giving them the ending they deserve is the only way she knows of now to atone for all the pain she caused them in the past. Just as Phil’s transformation misdirects our attention from the manipulations of the plot in Groundhog Day, the revelation of how terrible the tragedy was that occurred to the characters in Atonement covers McEwan’s tracks, as we overlook the fact that he’s tricked us as to the true purpose of the narrative because we’re too busy sympathizing with Briony’s futile urge to set things right. In both cases, the experimentation with plot is thoroughly integrated with the development of a strong, unforgettable character, and any expulsive distraction is subsumed by more engrossing revelations. In both cases, the result is pure magic.

            Ursula Todd on the other hand may have been deliberately conceived of as, if not an entirely empty vessel, then a sparsely furnished one. Atkinson may have intended for her to serve as a type of everywoman to make it easy for readers to take on her perspective as she experiences events like the bombing of her apartment building. While we come to know and sympathize with characters like Phil and Briony, we go some distance toward actually becoming Ursula, letting her serve as our avatar in the various historical moments the story allows us to inhabit. By not filling in the outline of Ursula’s character, Atkinson may have been attempting to make our experience of all the scenes more direct and immediate. But the actual effect is to make them less impactful. We have to care about someone in the scene, someone trying to deal with the dilemma it depicts, before we can invest any emotion in it. Atkinson’s description of Lavinia Nesbit’s body makes it easy to imagine, and dismembered bodies are always disturbing to encounter. But her relationship to Ursula is casual, and in the context of the mulligan-calling plot her death is without consequence.

           Another possible explanation for the weakness of Ursula as a character is that Atkinson created her based on the assumption arising out of folk psychology that personality is reducible to personal history, that what happens to you determines who you become. Many authors and screenwriters fall into this trap of thinking they’re exploring characters when all they’re really doing is recounting a series of tragedies that have befallen them. But things happen to everyone. Character is what you do. Ursula is provisioned with a temperament—introverted, agreeable, conscientious—and she has a couple of habits—she’s a stickler for literary quotation—but she’s apathetic about the myriad revisions her life undergoes, and curiously unconcerned about the plot of her own personal story. For all her references to her shifting past, she has no plans or schemes or ambitions for the future. She exists within an intricate network of relationships, but what loves she has are tepid or taken for granted. And throughout the novel what we take at first to be her private thoughts nearly invariably end up being interrupted by memories of how other characters responded when she voiced them. At many points, especially at the beginning of the novel, she’s little more than a bookish girl waiting around for the next really bad thing to happen to her.

After she pushes the maid Bridget down the stairs to prevent her from bringing home the contagion that killed them both in previous lives, Ursula’s mother, Sylvie, sends her to a psychiatrist named Dr. Kellet who introduces her to Nietzsche’s concept of amor fati, which he defines as, “A simple acceptance of what comes to us, regarding it as neither bad nor good.” He then traces the idea back to Pindar, whose take he translates as, “become such as you are, having learned what that is” (164). What does Ursula become? After the incident with the maid, there are a couple more instances of her taking action to avoid the tragedies of earlier iterations, and as the novel progresses it does seem like she might be becoming a little less passively stoic, a little less inert and defeated. But as a character she seems to be responding to the serial do-overs of the plot by taking on the attitude that it doesn’t matter what she does or what she becomes. In one of the timelines she most proactively shapes for herself, she travels to the continent to study Modern Languages so she can be a teacher, but even in this life she does little but idly wait for something to happen. Before returning to England,

She had deferred for a year, saying she wanted an opportunity to see a little of the world before “settling down” to a lifetime at the blackboard. That was her rationale anyway, the one that she paraded for parental scrutiny, whereas her true hope was that something would happen in the course of her time abroad that would mean she need never take up the place. What that “something” was she had no idea (“Love perhaps,” Millie said wistfully). Anything really would mean she didn’t end up as an embittered spinster in a girls’ grammar school, spooling her way through the conjugation of foreign verbs, chalk dust falling from her clothes like dandruff. (She based this portrait on her own schoolmistresses.) It wasn’t a profession that had garnered much enthusiasm in her immediate circle either. (333-4)

Again, the scenes and the mindset are easy to imagine (or recall), but just as Ursula’s plan fails to garner much enthusiasm, her plight—her fate—fails to arouse much concern, nowhere near enough, at any rate, to misdirect our attention from the authoredness of the plot.

There’s a scene late in the novel that has Ursula’s father, Hugh, pondering his children’s personalities. “Ursula, of course, was different to all of them,” he thinks. “She was watchful, as if she were trying to drink in the whole world through those little green eyes that were both his and hers.” He can’t help adding, “She was rather unnerving” (486). But to be unnerving she would have to at least threaten to do something; she would have to be nosy or meddlesome, like Briony, instead of just watchful. What Hugh seems to be picking up on is that Ursula simply knows more than she should, a precocity born of her wanderings on the palimpsest of time. But whereas a character like Phil quickly learns to exploit his foreknowledge, it never occurs to Ursula to make any adjustments unless it’s to save her life or the life of a family member. Tellingly, however, there are a couple of characters in Life after Life for whom amor fati amounts to something other than an argument for impassivity.

Most people muddled through events and only in retrospect realized their significance. The Führer was different, he was consciously making history for the future. Only a true narcissist could do that. And Speer was designing buildings for Berlin so that they would look good when they were ruins a thousand years from now, his gift to the Führer. (To think on such a scale! Ursula lived hour by hour, another consequence of motherhood, the future as much a mystery as the past.) (351)

Of course, we know Ursula’s living hour by hour isn’t just a consequence of her being a mother, since this is the only timeline on which she becomes one. The moral shading to the issue of whether one should actually participate in history is cast over Ursula’s Aunt Izzie as well. Both Ursula and the rest of her family express a vague—or for Sylvie not so vague—disapproval of Izzie, which is ironic because she’s the most memorable—really the only memorable—character in the novel. Aunt Izzie actually does things. She elopes to Paris with a married man. She writes a series of children’s books. She moves to California with a playwright. And she’s always there to help Ursula when she gets in trouble.

Whatever the reason was behind Atkinson’s decision to make her protagonist a mere silent watcher, the consequences for the novel as a whole are to render it devoid of any sense of progression or momentum. Imagine Groundhog Day without a character whose incandescent sarcasm and unchanneled charisma gradually give way to profound fellow-feeling, replaced by one who re-lives the same day over and over without ever seeming to learn or adjust, who never even comes close to pulling off that one perfect day that proves she’s worthy to wake up to a real tomorrow. Imagine Atonement without Briony’s fierce interiority and simmering loneliness. Most stories are going to seem dull compared to these two, but they demonstrate that however fleeting a story’s impact on audiences may be, it begins and ends with the central character’s active engagement with the world and the transformations they undergo as a result of it. Maybe Atkinson wanted to give her readers an experience of life’s preciousness, the contingent nature of everything we hold dear, an antidote to all the rushing desperation to shape an ideal life for ourselves and the wistful worry that we’re at every moment falling short. Unfortunately, those themes make for a story that, as vivid as it can be at points, is as eminently forgettable as its dreamless protagonist. “You may as well have another tot of rum,” a bartender says, in the book’s closing lines, to the midwife who is being kept by a snowstorm from attending Ursula’s umpteenth birth. “You won’t be going anywhere in a hurry tonight” (529). In other words, you’d better find a way to make the most of it.

Also read:

What Makes “Wolf Hall” So Great?

And:

Sabbath Says: Philip Roth and the Dilemmas of Ideological Castration

Dennis Junk

Sabbath Says: Philip Roth and the Dilemmas of Ideological Castration

With “Sabbath’s Theater,” Philip Roth has called down the thunder. The story does away with the concept of a likable character while delivering a wildly absorbing experience. And it satirizes all the woeful facets of how literature is taught today.

Sabbath’s Theater is the type of book you lose friends over. Mickey Sabbath, the adulterous title character who follows in the long literary line of defiantly self-destructive, excruciatingly vulnerable, and off-puttingly but eloquently lustful leading males like Holden Caulfield and Humbert Humbert, strains the moral bounds of fiction and compels us to contemplate the nature of our own voyeuristic impulse to see him through to the end of the story—and not only contemplate it but defend it, as if in admitting we enjoy the book, find its irreverences amusing, and think that in spite of how repulsive he often is there still might be something to be said for poor old Sabbath, we’re confessing to no minor offense of our own. Fans and admiring critics alike can’t resist rushing to qualify their acclaim by insisting they don’t condone his cheating on both of his wives, the seduction of a handful of his students, his habit of casually violating others’ privacy, his theft, his betrayal of his lone friend, his manipulations, his racism, his caustic, often cruelly precise provocations—but by the time they get to the end of Sabbath’s debt column it’s a near certainty any list of mitigating considerations will fall short of getting him out of the red. Sabbath, once a puppeteer who now suffers crippling arthritis, doesn’t seem like a very sympathetic character, and yet we sympathize with him nonetheless. In his wanton disregard for his own reputation and his embrace, principled in a way, of his own appetites, intuitions, and human nastiness, he inspires a fascination none of the literary nice guys can compete with. So much for the argument that the novel is a morally edifying art form.

            Thus, in Sabbath, Philip Roth has created a character both convincing and compelling who challenges a fundamental—we may even say natural—assumption about readers’ (or viewers’) role in relation to fictional protagonists, one made by everyone from the snarky authors of even the least sophisticated Amazon.com reviews to the theoreticians behind the most highfalutin academic criticism—the assumption that characters in fiction serve as vehicles for some message the author created them to convey, or which some chimerical mechanism within the “dominant culture” created to serve as agents of its own proliferation. The corollary is that the task of audience members is to try to decipher what the author is trying to say with the work, or what element of the culture is striving to perpetuate itself through it. If you happen to like the message the story conveys, or agree with it at some level, then you recommend the book and thus endorse the statement. Only rarely does a reviewer realize or acknowledge that the purpose of fiction is not simply to encourage readers to behave as the protagonists behave or, if the tale is a cautionary one, to expect the same undesirable consequences should they choose to behave similarly. Sabbath does in fact suffer quite a bit over the course of the novel, and much of that suffering comes as a result of his multifarious offenses, so a case can be made on behalf of Roth’s morality. Still, we must wonder if he really needed to write a story in which the cheating husband is abandoned by both of his wives to make the message sink in that adultery is wrong—especially since Sabbath doesn’t come anywhere near to learning that lesson himself. “All the great thoughts he had not reached,” Sabbath muses in the final pages, “were beyond enumeration; there was no bottom to what he did not have to say about the meaning of his life” (779).

           Part of the reason we can’t help falling back on the notions that fiction serves a straightforward didactic purpose and that characters should be taken as models, positive or negative, for moral behavior is that our moral emotions are invariably and automatically engaged by stories; indeed, what we usually mean when we say we got into a story is that we were in suspense as we anticipated whether the characters ultimately met with the fates we felt they deserved. We reflexively size up any character the author introduces the same way we assess the character of a person we’re meeting for the first time in real life. For many readers, the question of whether a novel is any good is interchangeable with the question of whether they liked the main characters, assuming they fare reasonably well in the culmination of the plot. If an author like Roth evinces an attitude drastically different from ours toward a character of his own creation like Sabbath, then we feel that in failing to condemn him, in holding him up as a model, the author is just as culpable as his character. In a recent edition of PBS’s American Masters devoted to Roth, for example, Jonathan Franzen, a novelist himself, describes how even he couldn’t resist responding to his great forebear’s work in just this way. “As a young writer,” Franzen recalls, “I had this kind of moralistic response of ‘Oh, you bad person, Philip Roth’” (54:56).

            That fiction’s charge is to strengthen our preset convictions through a process of narrative tempering, thus catering to our desire for an orderly calculus of just deserts, serves as the basis for a contract between storytellers and audiences, a kind of promise on which most commercial fiction delivers with a bang. And how many of us have wanted to throw a book out of the window when we felt that promise had been broken? The goal of professional and academic critics, we may imagine, might be to ease their charges into an appreciation of more complex narrative scenarios enacted by characters who escape easy categorization. But since scholarship in the humanities, and in literary criticism especially, has been in a century-long sulk over the greater success of science and the greater renown of scientists, professors of literature have scarcely even begun to ponder what anything resembling a valid answer to the questions of how fiction works and what the best strategies for experiencing it might look like. Those who aren’t pouting in a corner about the ascendancy of science—but the Holocaust!—are stuck in the muck of the century-old pseudoscience of psychoanalysis. But the real travesty is that the most popular, politically inspired schools of literary criticism—feminism, Marxism, postcolonialism—actively preach the need to ignore, neglect, and deny the very existence of moral complexity in literature, violently displacing any appreciation of difficult dilemmas with crudely tribal formulations of good and evil.

            For those inculcated with a need to take a political stance with regard to fiction, the only important dynamics in stories involve the interplay of society’s privileged oppressors and their marginalized victims. In 1976, nearly twenty years before the publication of Sabbath’s Theater, the feminist critic Vivian Gornick lumped Roth together with Saul Bellow and Norman Mailer in an essay asking “Why Do These Men Hate Women?” because she took issue with the way women are portrayed in their novels. Gornick, following the methods standard to academic criticism, doesn’t bother devoting any space in her essay to inconvenient questions about how much we can glean about these authors from their fictional works or what it means that the case for her prosecution rests by necessity on a highly selective approach to quoting from those works. And this slapdash approach to scholarship is supposedly justified because she and her fellow feminist critics believe women are in desperate need of protection from the incalculable harm they assume must follow from such allegedly negative portrayals. In this concern for how women, or minorities, or some other victims are portrayed and how they’re treated by their notional oppressors—rich white guys—Gornick and other critics who make of literature a battleground for their political activism are making the same assumption about fiction’s straightforward didacticism as the most unschooled consumers of commercial pulp. The only difference is that the academics believe the message received by audiences is all that’s important, not the message intended by the author. The basis of this belief probably boils down to its obvious convenience.

            In Sabbath’s Theater, the idea that literature, or art of any kind, is reducible to so many simple messages, and that these messages must be measured against political agendas, is dashed in the most spectacularly gratifying fashion. Unfortunately, the idea is so seldom scrutinized, and the political agendas are insisted on so implacably, clung to and broadcast with such indignant and prosecutorial zeal, that it seems not one of the critics, nor any of the authors, who were seduced by Sabbath was able to fully reckon with the implications of that seduction. Franzen, for instance, in a New Yorker article about fictional anti-heroes, dodges the issue as he puzzles over the phenomenon that “Mickey Sabbath may be a disgustingly self-involved old goat,” but he’s somehow still sympathetic. The explanation Franzen lights on is that

the alchemical agent by which fiction transmutes my secret envy or my ordinary dislike of “bad” people into sympathy is desire. Apparently, all a novelist has to do is give a character a powerful desire (to rise socially, to get away with murder) and I, as a reader, become helpless not to make that desire my own. (63)

If Franzen is right—and this chestnut is a staple of fiction workshops—then the political activists are justified in their urgency. For if we’re powerless to resist adopting the protagonist’s desires as our own, however fleetingly, then any impulse to victimize women or minorities must invade readers’ psyches at some level, conscious or otherwise. The simple fact, however, is that Sabbath has not one powerful desire but many competing desires, ones that shift as the novel progresses, and it’s seldom clear even to Sabbath himself what those desires are. (And is he really as self-involved as Franzen suggests? It seems to me rather that he compulsively tries to get into other people’s heads, reflexively imagining elaborate stories for them.)

            While we undeniably respond to virtuous characters in fiction by feeling anxiety on their behalf as we read about or watch them undergo the ordeals of the plot, and we just as undeniably enjoy seeing virtue rewarded alongside cruelty being punished—the goodies prevailing over the baddies—these natural responses do not necessarily imply that stories compel our interest and engage our emotions by providing us with models and messages of virtue. Stories aren’t sermons. In his interview for American Masters, Roth explained what a writer’s role is vis-à-vis social issues.

My job isn’t to be enraged. My job is what Chekhov said the job of an artist was, which is the proper presentation of the problem. The obligation of the writer is not to provide the solution to a problem. That’s the obligation of a legislator, a leader, a crusader, a revolutionary, a warrior, and so on. That’s not the goal or aim of a writer. You’re not selling it, and you’re not inviting condemnation. You’re inviting understanding. (59:41)

The crucial but overlooked distinction that characters like Sabbath—but none so well as Sabbath—bring into stark relief is the one between declarative knowledge on the one hand and moment-by-moment experience on the other. Consider for a moment how many books and movies we’ve all been thoroughly engrossed in for however long it took to read or watch them, only to discover a month or so later that we can’t remember even the broadest strokes of how their plots resolved themselves—much less what their morals might have been.

            The answer to the question of what the author is trying to say is that he or she is trying to give readers a sense of what it would be like to go through what the characters are going through—or what it would be like to go through it with them. In other words, authors are not trying to say anything; they’re offering us an experience, once-removed and simulated though it may be. This isn’t to say that these simulated experiences don’t engage our moral emotions; indeed, we’re usually only as engaged in a story as our moral emotions are engaged by it. The problem is that in real time, in real life, political ideologies, psychoanalytic theories, and rigid ethical principles are too often the farthest thing from helpful. “Fuck the laudable ideologies,” Sabbath helpfully insists: “Shallow, shallow, shallow!” Living in a complicated society with other living, breathing, sick, cruel, saintly, conniving, venal, altruistic, deceitful, noble, horny humans demands not so much a knowledge of the rules as a finely honed body of skills—and our need to develop and hone these skills is precisely why we evolved to find the simulated experiences of fictional narratives both irresistibly fascinating and endlessly pleasurable. Franzen was right that desires are important: the desire to be a good person, the desire to do things others may condemn, the desire to get along with our families and friends and coworkers, the desire to tell them all to fuck off so we can be free, even if just for an hour, to breathe… or to fuck an intern, as the case may be. Grand principles offer little guidance when it comes to balancing these competing desires. This is because, as Sabbath explains, “The law of living: fluctuation. For every thought a counterthought, for every urge a counterurge” (518).

            Fiction then is not a conveyance for coded messages—how tedious that would be (how tedious it really is when writers make this mistake); it is rather a simulated experience of moral dilemmas arising from scenarios which pit desire against desire, conviction against reality, desire against conviction, reality against desire, in any and all permutations. Because these experiences are once-removed and, after all, merely fictional, and because they require our sustained attention, the dilemmas tend to play out in the vicinity of life’s extremes. Here’s how Sabbath’s Theater opens:

            Either forswear fucking others or the affair is over.

            This was the ultimatum, the maddeningly improbable, wholly unforeseen ultimatum, that the mistress of fifty-two delivered in tears to her lover of sixty-four on the anniversary of an attachment that had persisted with an amazing licentiousness—and that, no less amazingly, had stayed their secret—for thirteen years. But now with hormonal infusions ebbing, with the prostate enlarging, with probably no more than another few years of semi-dependable potency still his—with perhaps not that much more life remaining—here at the approach of the end of everything, he was being charged, on pain of losing her, to turn himself inside out. (373)

The ethical proposition that normally applies in situations like this is that adultery is wrong, so don’t commit adultery. But these two have been committing adultery with each other for thirteen years already—do we just stop reading? And if we keep reading, maybe nodding once in a while as we proceed, cracking a few wicked grins along the way, does that mean we too must be guilty?

*****

            Much of the fiction written by male literary figures of the past generation, guys like Roth, Mailer, Bellow, and Updike, focuses on the morally charged dilemmas occasioned by infidelity, while their Gen-X and millennial successors, led by guys like Franzen and David Foster Wallace, have responded to shifting mores—and a greater exposure to academic literary theorizing—by completely overhauling how these dilemmas are framed. Whereas the older generation framed the question as how to balance the intense physical and spiritual—even existential—gratification of sexual adventure on the one hand with our family obligations on the other, for their successors the question has become how we males can curb our disgusting, immoral, intrinsically oppressive lusting after young women inequitably blessed with time-stamped and overwhelmingly alluring physical attributes. “The younger writers are so self-conscious,” Katie Roiphe writes in a 2009 New York Times essay, “so steeped in a certain kind of liberal education, that their characters can’t condone even their own sexual impulses; they are, in short, too cool for sex.” Roiphe’s essay, “The Naked and the Confused,” stands alongside a 2012 essay in The New York Review of Books by Elaine Blair, “Great American Losers,” as the best descriptions of the new literary trend toward sexually repressed and pathetically timid male leads. The typical character in this vein, Blair writes, “is the opposite of entitled: he approaches women cringingly, bracing for a slap.”

            The writers in the new hipster cohort create characters who bury their longings layers-deep in irony because they’ve been assured the failure on the part of men of previous generations to properly check these same impulses played some unspecified role in the abysmal standing of women in society. College students can’t make it past their first semester without hearing about the evils of so-called objectification, but it’s nearly impossible to get a straight answer from anyone, anywhere, to the question of how objectification can be distinguished from normal, non-oppressive male attraction and arousal. Even Roiphe, in her essay lamenting the demise of male sexual virility in literature, relies on a definition of male oppression so broad that it encompasses even the most innocuous space-filling lines in the books of even the most pathetically diffident authors, writing that “the sexism in the work of the heirs apparent” of writers like Roth and Updike,

is simply wilier and shrewder and harder to smoke out. What comes to mind is Franzen’s description of one of his female characters in “The Corrections”: “Denise at 32 was still beautiful.” To the esteemed ladies of the movement I would suggest this is not how our great male novelists would write in the feminist utopia.

How, we may ask, did it get to the point where acknowledging that age influences how attractive a woman is qualifies a man for designation as a sexist? Blair, in her otherwise remarkably trenchant essay, lays the blame for our oversensitivity—though paranoia is probably a better word—at the feet of none other than those great male novelists themselves, or, as David Foster Wallace calls them, the Great Male Narcissists. She writes,

Because of the GMNs, these two tendencies—heroic virility and sexist condescension—have lingered in our minds as somehow yoked together, and the succeeding generations of American male novelists have to some degree accepted the dyad as truth. Behind their skittishness is a fearful suspicion that if a man gets what he wants, sexually speaking, he is probably exploiting someone.

That Roth et al. were sexist, condescending, disgusting, narcissistic—these are articles of faith for feminist critics. Yet when we consider how expansive the definitions of terms like sexism and misogyny have become—in practical terms, they both translate to: not as radically feminist as me—and the laughably low standard of evidence required to convince scholars of the accusations, female empowerment starts to look like little more than a reserved right to stand in self-righteous judgment of men for giving voice to and acting on desires anyone but the most hardened ideologue will agree are only natural.

             The effect on writers of this ever-looming threat of condemnation is that they either allow themselves to be silenced or they opt to participate in the most undignified of spectacles, peevishly sniping at their colleagues, falling all over themselves to be granted recognition as champions for the cause. Franzen, at least early in his career, was more the silenced type. Discussing Roth, he wistfully endeavors to give the appearance of having moved beyond his initial moralistic responses. “Eventually,” he says, “I came to feel as if that was coming out of an envy: like, wow, I wish I could be as liberated of worry about other people’s opinion of me as Roth is” (55:18). We have to wonder if his espousal of the reductive theory that sympathy for fictional characters is based solely on the strength of their desires derives from this same longing for freedom to express his own. David Foster Wallace, on the other hand, wasn’t quite as enlightened or forgiving when it came to his predecessors. Here’s how he explains his distaste for a character in one of Updike’s novels, openly intimating the author’s complicity:

It’s that he persists in the bizarre adolescent idea that getting to have sex with whomever one wants whenever one wants is a cure for ontological despair. And so, it appears, does Mr. Updike—he makes it plain that he views the narrator’s impotence as catastrophic, as the ultimate symbol of death itself, and he clearly wants us to mourn it as much as Turnbull does. I’m not especially offended by this attitude; I mostly just don’t get it. Erect or flaccid, Ben Turnbull’s unhappiness is obvious right from the book’s first page. But it never once occurs to him that the reason he’s so unhappy is that he’s an asshole.

So the character is an asshole because he wants to have sex outside of marriage, and he’s unhappy because he’s an asshole, and it all traces back to the idea that having sex with whomever one wants is a source of happiness? Sounds like quite the dilemma—and one that pronouncing the main player an asshole does nothing to solve. This passage is the conclusion to a review in which Wallace tries to square his admiration for Updike’s writing with his desire to please a cohort of women readers infuriated by the way Updike writes about—portrays—women (which raises the question of why they’d read so many of his books). The troubling implication of his compromise is that if Wallace were himself to freely express his sexual feelings, he’d be open to the charge of sexism too—he’d be an asshole. Better to insist he simply doesn’t “get” why indulging his sexual desires might alleviate his “ontological despair.” What would Mickey Sabbath make of the fact that Wallace hanged himself when he was only forty-six, eleven years after publishing that review? (This isn’t just a nasty rhetorical point; Sabbath has a fascination with artists who commit suicide.)

The inadequacy of moral codes and dehumanizing ideologies when it comes to guiding real humans through life’s dilemmas, along with their corrosive effects on art, is the abiding theme of Sabbath’s Theater. One of the pivotal moments in Sabbath’s life is when a twenty-year-old student he’s in the process of seducing leaves a tape recorder out to be discovered in a ladies’ room at the university. The student, Kathy Goolsbee, has recorded a phone sex session between her and Sabbath, and when the tape finds its way into the hands of the dean, it becomes grounds for the formation of a committee of activists against the abuse of women. At first, Kathy doesn’t realize how bad things are about to get for Sabbath. She even offers to give him a blow job as he berates her for her carelessness. Trying to impress on her the situation’s seriousness, he says,

Your people have on tape my voice giving reality to all the worst things they want the world to know about men. They have a hundred times more proof of my criminality than could be required by even the most lenient of deans to drive me out of every decent antiphallic educational institution in America. (586)

The committee against Sabbath proceeds to make the full recorded conversation available through a call-in line (the nineties equivalent of posting a podcast online). But the conversation itself isn’t enough; one of the activists gives a long introduction, which concludes,

The listener will quickly recognize how by this point in his psychological assault on an inexperienced young woman, Professor Sabbath has been able to manipulate her into thinking that she is a willing participant. (567-8)

Sabbath knows full well that even consensual phone sex can be construed as a crime if doing so furthers the agenda of those “esteemed ladies of the movement” Roiphe addresses. 

Reading through the lens of a tribal ideology ineluctably leads to the refraction of reality beyond recognition, and any aspiring male writer quickly learns in all his courses in literary theory that the criteria for designation as an enemy to the cause of women are pretty much whatever the feminist critics fucking say they are. Wallace wasn’t alone in acquiescing to feminist rage by denying his own boorish instincts. Roiphe describes the havoc this opportunistic antipathy toward male sexuality wreaks in the minds of male writers and their literary creations:

Rather than an interest in conquest or consummation, there is an obsessive fascination with trepidation, and with a convoluted, postfeminist second-guessing. Compare [Benjamin] Kunkel’s tentative and guilt-ridden masturbation scene in “Indecision” with Roth’s famous onanistic exuberance with apple cores, liver and candy wrappers in “Portnoy’s Complaint.” Kunkel: “Feeling extremely uncouth, I put my penis away. I might have thrown it away if I could.” Roth also writes about guilt, of course, but a guilt overridden and swept away, joyously subsumed in the sheer energy of taboo smashing: “How insane whipping out my joint like that! Imagine what would have been had I been caught red-handed! Imagine if I had gone ahead.” In other words, one rarely gets the sense in Roth that he would throw away his penis if he could.

And what good comes of an ideology that encourages the psychological torture of bookish young men? It’s hard to distinguish the effects of these so-called literary theories from the hellfire scoldings delivered from the pulpits of the most draconian and anti-humanist religious patriarchs. Do we really need to ideologically castrate all our male scholars to protect women from abuse and further the cause of equality?

*****

The experience of sexual relations between older teacher and younger student in Sabbath’s Theater is described much differently when the gender activists have yet to get involved—and not just by Sabbath but by Kathy as well. “I’m of age!” she protests as he chastises her for endangering his job and opening him up to public scorn; “I do what I want” (586). Absent the committee against him, Sabbath’s impression of how his affairs with his students affect them reflects the nuance of feeling inspired by these experimental entanglements, the kind of nuance that the “laudable ideologies” can’t even begin to capture.

There was a kind of art in his providing an illicit adventure not with a boy of their own age but with someone three times their age—the very repugnance that his aging body inspired in them had to make their adventure with him feel a little like a crime and thereby give free play to their budding perversity and to the confused exhilaration that comes of flirting with disgrace. Yes, despite everything, he had the artistry still to open up to them the lurid interstices of life, often for the first time since they’d given their debut “b.j.” in junior high. As Kathy told him in that language which they all used and which made him want to cut their heads off, through coming to know him she felt “empowered.” (566)

Opening up “the lurid interstices of life” is precisely what Roth and the other great male writers—all great writers—are about. If there are easy answers to the questions of what characters should do, or if the plot entails no more than a simple conflict between a blandly good character and a blandly bad one, then the story, however virtuous its message, will go unattended.

            But might there be too much at stake for us impressionable readers to be allowed free rein to play around in imaginary spheres peopled by morally dubious specters? After all, if denouncing the dreamworlds of privileged white men, however unfairly, redounds to the benefit of women and children and minorities, then perhaps it’s to the greater good. In fact, though, right alongside the trend toward ever more available and ever more graphic media portrayals of sex and violence, there have been marked decreases in actual violence and in the abuse of women. And does anyone really believe it’s the least literate, least media-saturated societies that are the kindest to women? The simple fact is that the theory of literature subtly encouraging oppression can’t be valid. But the problem is that once ideologies are institutionalized, once a threshold number of people depend on their perpetuation for their livelihoods, people whose scholarly work and reputations are staked on them, then victims of oppression will be found, their existence insisted on, regardless of whether they truly exist or not.

In another scandal Sabbath was embroiled in long before his flirtation with Kathy Goolsbee, he was brought up on charges of indecency because in the course of a street performance he’d exposed a woman’s nipple. The woman herself, Helen Trumbull, maintains from the outset of the imbroglio that whatever Sabbath had done, he’d done it with her consent—just as will be the case with his “psychological assault” on Kathy. But even as Sabbath sits assured that the case against him will collapse once the jury hears the supposed victim testify on his behalf, the prosecution takes a bizarre turn:

In fact, the victim, if there even is one, is coming this way, but the prosecutor says no, the victim is the public. The poor public, getting the shaft from this fucking drifter, this artist. If this guy can walk along a street, he says, and do this, then little kids think it’s permissible to do this, and if little kids think it’s permissible to do this, then they think it’s permissible to blah blah banks, rape women, use knives. If seven-year-old kids—the seven nonexistent kids are now seven seven-year-old kids—are going to see that this is fun and permissible with strange women… (663-4)

Here we have Roth’s dramatization of the fundamental conflict between artists and moralists. Even if no one is directly hurt by playful scenarios, that they carry a message, one that threatens to corrupt susceptible minds, is so seemingly obvious it’s all but impossible to refute. Since the audience for art is “the public,” the acts of depravity and degradation it depicts are, if anything, even more fraught with moral and political peril than any offense against an individual victim, real or imagined.  

            This theme of the oppressive nature of ideologies devised to combat oppression, the victimizing proclivity of movements originally fomented to protect and empower victims, is most directly articulated by a young man named Donald, dressed in all black and sitting atop a file cabinet in a nurse’s station when Sabbath happens across him at a rehab clinic. Donald “vaguely resembled the Sabbath of some thirty years ago,” and Sabbath will go on to apologize for interrupting him, referring to him as “a man whose aversions I wholeheartedly endorse.” What he was saying before the interruption:

“Ideological idiots!” proclaimed the young man in black. “The third great ideological failure of the twentieth century. The same stuff. Fascism. Communism. Feminism. All designed to turn one group of people against another group of people. The good Aryans against the bad others who oppress them. The good poor against the bad rich who oppress them. The good women against the bad men who oppress them. The holder of ideology is pure and good and clean and the other wicked. But do you know who is wicked? Whoever imagines himself to be pure is wicked! I am pure, you are wicked… There is no human purity! It does not exist! It cannot exist!” he said, kicking the file cabinet for emphasis. “It must not and should not exist! Because it’s a lie. … Ideological tyranny. It’s the disease of the century. The ideology institutionalizes the pathology. In twenty years there will be a new ideology. People against dogs. The dogs are to blame for our lives as people. Then after dogs there will be what? Who will be to blame for corrupting our purity?” (620-1)

It’s noteworthy that this rant is made by a character other than Sabbath. By this point in the novel, we know Sabbath wouldn’t speak so artlessly—unless he was really frightened or angry. As effective and entertaining an indictment of “Ideological tyranny” as Sabbath’s Theater is, we shouldn’t expect to encounter anywhere in a novel by a storyteller as masterful as Roth a character operating as a mere mouthpiece for some argument. Even Donald himself, Sabbath quickly gleans, isn’t simply spouting off; he’s trying to impress one of the nurses.

            And it’s not just the political ideologies that conscript complicated human beings into simple roles as oppressors and victims. The pseudoscientific psychological theories that both inform literary scholarship and guide many non-scholars through life crises and relationship difficulties function according to the same fundamental dynamic of tribalism; they simply substitute abusive family members for more generalized societal oppression and distorted or fabricated crimes committed in the victim’s childhood for broader social injustices. Sabbath is forced to contend with this particular brand of depersonalizing ideology because his second wife, Roseanna, picks it up through her AA meetings, and then becomes further enmeshed in it through individual treatment with a therapist named Barbara. Sabbath, who considers himself a failure, and who is carrying on an affair with the woman we meet in the opening lines of the novel, is baffled as to why Roseanna would stay with him. Her therapist provides an answer of sorts.

But then her problem with Sabbath, the “enslavement,” stemmed, according to Barbara, from her disastrous history with an emotionally irresponsible mother and a violent alcoholic father for both of whom Sabbath was the sadistic doppelganger. (454)

Roseanna’s father was a geology professor who hanged himself when she was a young teenager. Sabbath is a former puppeteer with crippling arthritis. Naturally, he’s confused by the purported identity of roles.

These connections—between the mother, the father, and him—were far clearer to Barbara than they were to Sabbath; if there was, as she liked to put it, a “pattern” in it all, the pattern eluded him. In the midst of a shouting match, Sabbath tells his wife, “As for the ‘pattern’ governing a life, tell Barbara it’s commonly called chaos” (455).

When she protests, “You are shouting at me like my father,” Sabbath asserts his individuality: “The fuck that’s who I’m shouting at you like! I’m shouting at you like myself!” (459). Whether you see his resistance as heroic or not probably depends on how much credence you give to those psychological theories.

            From the opening lines of Sabbath’s Theater, when we’re presented with the dilemma of the teary-eyed mistress demanding monogamy in their adulterous relationship, the simple response would be to stand in easy judgment of Sabbath and, as Wallace did with Updike’s character, declare him an asshole. It’s clear that he loves this woman, a Croatian immigrant named Drenka, a character who at points steals the show even from the larger-than-life protagonist. And it’s clear his fidelity would mean a lot to her. Is his freedom to fuck other women really so important? Isn’t he just being selfish? But only a few pages later our easy judgment suddenly gets more complicated:

As it happened, since picking up Christa several years back Sabbath had not really been the adventurous libertine Drenka claimed she could no longer endure, and consequently she already had the monogamous man she wanted, even if she didn’t know it. To women other than her, Sabbath was by now quite unalluring, not just because he was absurdly bearded and obstinately peculiar and overweight and aging in every obvious way but because, in the aftermath of the scandal four years earlier with Kathy Goolsbee, he’d become more dedicated than ever to marshaling the antipathy of just about everyone as though he were, in fact, battling for his rights. (394)

Christa was a young woman who participated in a threesome with Sabbath and Drenka, an encounter to which Sabbath’s only tangible contribution was to hand the younger woman a dildo.

            One of the central dilemmas for a character who loves the thrill of sex, who seeks in it a rekindling of youthful vigor—“the word’s rejuvenation,” Sabbath muses at one point (517)—is that the adrenaline boost born of being in the wrong and the threat of getting caught, what Roiphe calls “the sheer energy of taboo smashing,” becomes ever more indispensable as libido wanes with age. Even before Sabbath ever had to contend with the ravages of aging, he reveled in this added exhilaration that attends any expedition into forbidden realms. What makes Drenka so perfect for him is that she has not just a similarly voracious appetite but a similar fondness for outrageous sex and the smashing of taboo. And it’s this mutual celebration of the verboten that Sabbath is so reluctant to relinquish. Of Drenka, he thinks,

The secret realm of thrills and concealment, this was the poetry of her existence. Her crudeness was the most distinguishing force in her life, lent her life its distinction. What was she otherwise? What was he otherwise? She was his last link with another world, she and her great taste for the impermissible. As a teacher of estrangement from the ordinary, he had never trained a more gifted pupil; instead of being joined by the contractual they were interconnected by the instinctual and together could eroticize anything (except their spouses). Each of their marriages cried out for a countermarriage in which the adulterers attack their feelings of captivity. (395)

Those feelings of captivity, the yearnings to experience the flow of the old juices, are anything but adolescent, as Wallace suggests of them; adolescents have a few decades before they have to worry about dwindling arousal. Most of them have the opposite problem.

            The question of how readers are supposed to feel about a character like Sabbath doesn’t have any simple answers. He’s an asshole at several points in the novel, but at several points he’s not. One of the reasons he’s so compelling is that working out what our response to him should be poses a moral dilemma of its own. Whether or not we ultimately decide that adultery is always and everywhere wrong, the experience of being privy to Sabbath’s perspective can help us prepare ourselves for our own feelings of captivity, lusting nostalgia, and sexual temptation. Most of us will never find ourselves in a dilemma like the one Sabbath gets himself tangled in with his friend Norman’s wife, for instance, but it would be to our detriment to automatically discount the old hornball’s insights.

He could discern in her, whenever her husband spoke, the desire to be just a little cruel to Norman, saw her sneering at the best of him, at the very best things in him. If you don’t go crazy because of your husband’s vices, you go crazy because of his virtues. He’s on Prozac because he can’t win. Everything is leaving her except for her behind, which her wardrobe informs her is broadening by the season—and except for this steadfast prince of a man marked by reasonableness and ethical obligation the way others are marked by insanity or illness. Sabbath understood her state of mind, her state of life, her state of suffering: dusk is descending, and sex, our greatest luxury, is racing away at a tremendous speed, everything is racing off at a tremendous speed and you wonder at your folly in having ever turned down a single squalid fuck. You’d give your right arm for one if you are a babe like this. It’s not unlike the Great Depression, not unlike going broke overnight after years of raking it in. “Nothing unforeseen that happens,” the hot flashes inform her, “is likely ever again going to be good.” Hot flashes mockingly mimicking the sexual ecstasies. Dipped, she is, in the very fire of fleeting time. (651)

Welcome to messy, chaotic, complicated life.

Sabbath’s Theater is, in part, Philip Roth’s raised middle finger to the academic moralists whose idiotic and dehumanizing ideologies have spread like a cancer into all the venues where literature is discussed and all the avenues through which it’s produced. Unfortunately, the unrecognized need for culture-wide chemotherapy hasn’t gotten any less dire in the nearly two decades since the novel was published. With literature now drowning in the devouring tide of new media, the tragic course set by the academic custodians of art toward bloodless prudery and impotent sterility in the name of misguided political activism promises to do nothing but ensure the ever greater obsolescence of epistemologically doomed and resoundingly pointless theorizing, making of college courses the places where you go to become, at best, profoundly confused about where you should stand in relation to fiction and fictional characters, and, at worst, a self-righteous demagogue denouncing the chimerical evils allegedly encoded into every text or cultural artifact. All the conspiracy theorizing about the latent evil urgings of literature has amounted to little more than another reason not to read, another reason to tune in to Breaking Bad or Mad Men instead. But the only reason Roth’s novel makes such a successful case is that it at no point allows itself to be reducible to a mere case, just as Sabbath at no point allows himself to be conscripted as a mere argument. We don’t love or hate him; we love and hate him. But we sort of just love him because he leaves us free to do both as we experience his antics, once removed and simulated, but still just as complicatedly eloquent in their message of “Fuck the laudable ideologies”—or not, as the case may be.

Dennis Junk

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education

Maria Konnikova’s book “Mastermind: How to Think Like Sherlock Holmes” got me really excited because, when the science of psychology is brought up in discussions of literature at all, it’s usually the pseudoscience of Sigmund Freud. Konnikova, whose blog went a long way toward remedying that tragedy, wanted to offer up an alternative approach. However, though the book shows great promise, it’s ultimately disappointing.

Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.

            In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knows the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.

            What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.

There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,

the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.

The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s Razor rather than Freud’s at once insanely elaborate and absurdly reductionist ideology, we might guess that what prompted Tarantino to let the camera linger discomfitingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”

The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.

I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible.

Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.

The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether or not a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.

Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.

Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.

In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,

learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)

This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.

It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. One of the most widely known demonstrations of the same general principle is the finding that chess experts have much better recall for the positions of pieces on a board than novices do. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?

The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement for curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.

If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes,

How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)

So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980s. People who read a description of a man named Bill suggesting he lacks imagination tended to feel it was less likely that Bill was an accountant than that he was an accountant who plays jazz for a hobby—even though the two points of information in that second description make it inherently less likely than the one point of information in the first. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People mistook the two-item choice as more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
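
To spell out the probability logic underlying the mistake (a minimal sketch of the standard conjunction-rule argument, not anything drawn from Konnikova’s or Kahneman’s texts): a conjunction can never be more probable than either of its conjuncts, because every accountant who plays jazz is, necessarily, an accountant.

$$P(\text{accountant} \wedge \text{jazz player}) = P(\text{accountant}) \cdot P(\text{jazz player} \mid \text{accountant}) \leq P(\text{accountant})$$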

The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (referring to Tom instead of Bill) can seem more likely, despite the actual probabilities, than shorter ones. He writes,

The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)

So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:

Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)

But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.

As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged based on the likelihood that readers will come away with a greater ability to recognize and avoid cognitive biases, as well as the ability to stay motivated and remain alert. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.

Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings in that domain, and these same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly, organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,

How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?

The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates to creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.

Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,

Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)

The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).

That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character cast a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:

Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.

But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:

You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)

Hard to disagree, isn’t it?

To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says that he got tricked because he didn’t do what he knew how to do, that it can happen to any of us, and that we should be careful not to let it happen to ourselves. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.

Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. Her tragic blunder was to eschew nuanced examinations of how stories work, how people relate to characters, and how authors create them, all for a shallow and one-dimensional attempt at suggesting that a 100-year-old fictional character somehow divined groundbreaking research findings from the end of the Twentieth and the beginning of the Twenty-First Centuries. That blunder calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.

You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.

Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.

A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”

I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.

Also read

FROM DARWIN TO DR. SEUSS: DOUBLING DOWN ON THE DUMBEST APPROACH TO COMBATTING RACISM

And

THE STORYTELLING ANIMAL: A LIGHT READ WITH WEIGHTY IMPLICATIONS

And

LAB FLIES: JOSHUA GREENE’S MORAL TRIBES AND THE CONTAMINATION OF WALTER WHITE

Also apropos is

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

Sweet Tooth is a Strange Loop: An Aid to Some of the Dimmer Reviewers of Ian McEwan's New Novel

Ian McEwan is one of my literary heroes. “Atonement” and “Saturday” are among my favorite books. But a lot of readers trip over the more experimental aspects of his work. With “Sweet Tooth,” he once again offers up a gem of a story, one that a disconcerting number of reviewers missed the point of.

(I've done my best to avoid spoilers.)

Anytime a character in Ian McEwan’s new novel Sweet Tooth enthuses about a work of literature, another character can be counted on to come along and pronounce that same work dreadful. So there’s a delightful irony in the declaration at the end of a silly review in The Irish Independent, which begins by begrudging McEwan his “reputation as the pulse-taker of the social and political Zeitgeist,” that the book’s ending “might be enough to send McEwan acolytes scurrying back through the novel to see how he did it, but it made me want to throw the book out the window.” Citing McEwan’s renown among the reading public before gleefully launching into critiques that are as difficult to credit as they are withering seems to be the pattern. The notice in The Economist, for instance, begins,

At 64, with a Hollywood film, a Man Booker prize and a gong from the queen, Ian McEwan has become a grand old man of British letters. Publication of his latest novel, “Sweet Tooth”, was announced on the evening news. A reading at the Edinburgh book festival was introduced by none other than the first minister, Alex Salmond.

But, warns the unnamed reviewer, “For all the attendant publicity, ‘Sweet Tooth’ is not Mr. McEwan’s finest book.” My own take on the novel—after seeking out all the most negative reviews I could find (most of them are positive)—is that the only readers who won’t appreciate it, aside from the reviewers who can’t stand how much the reading public celebrates McEwan’s offerings, are the ones whose prior convictions about what literature is and what it should do blind them to even the possibility that a novel can successfully operate on as many levels as McEwan folds into his narrative. For these readers, the mere fact of an author’s moving from one level to the next somehow invalidates whatever gratification they got from the most straightforward delivery of the story.

At the most basic level, Sweet Tooth is the first-person account of how Serena Frome is hired by MI5 and assigned to pose as a representative for an arts foundation offering the writer Thomas Haley a pension that will allow him to quit his teaching job so he can work on a novel. The book’s title refers to the name of a Cold War propaganda initiative to support authors whose themes Serena’s agency superiors expect will bolster the cause of the Non-Communist Left. Though Sweet Tooth is fictional, there actually were programs like it that supported authors like George Orwell. Serena is an oldest-sibling type, with an appreciation for the warm security of established traditions and longstanding institutions, along with an attraction to, and an eagerness to please, authority figures. These are exactly the traits that lead to her getting involved in the project of approaching Tom under false pretenses, an arrangement which becomes a serious dilemma for her as the two begin a relationship and she falls deeper and deeper in love with him. Looking back on the affair at the peak of the tension, she admits,

For all the mess I was in, I didn’t know how I could have done things differently. If I hadn’t joined MI5, I wouldn’t have met Tom. If I’d told him who I worked for at our very first meeting—and why would I tell a stranger that?—he would’ve shown me the door. At every point along the way, as I grew fonder of him, then loved him, it became harder, riskier to tell him the truth even as it became more important to do so. (266)

This plot has many of the markings of genre fiction: the secret-burdened romance, the spy thriller. But even on this basic level there’s a crucial element separating the novel from its pulpier cousins: the stakes are actually quite low. The nation isn’t under threat. No one’s life is in danger. The risks are only to jobs and relationships.

James Lasdun, in an otherwise favorable review for The Guardian, laments these low stakes, suggesting that the novel’s early references to big political issues of the 1970s lead readers to the thwarted expectation of more momentous themes. He writes,

I couldn't help feeling like Echo in the myth when Narcissus catches sight of himself in the pool. “What about the IRA?” I heard myself bleating inwardly as the book began fixating on its own reflection. What about the PLO? The cold war? Civilisation and barbarity? You promised!

But McEwan really doesn’t make any such promise in the book’s opening. Lasdun simply makes the mistake of anticipating Sweet Tooth will be more like McEwan’s earlier novel Saturday. In fact, the very first lines of the book reveal what the main focus of the story will be:

My name is Serena Frome (rhymes with plume) and almost forty years ago I was sent on a secret mission for the British Security Service. I didn’t return safely. Within eighteen months of joining I was sacked, having disgraced myself and ruined my lover, though he certainly had a hand in his own undoing. (1)

That “I didn’t return safely” sets the tone—overly dramatic, mock-heroic, but with a smidgen of self-awareness that suggests she’s having some fun at her own expense. Indeed, all the book’s promotional material referring to her as a spy notwithstanding, Serena is more of a secretary or a clerk than a secret agent. Her only field mission is to offer funds to a struggling writer, not exactly worthy of Ian Fleming.

When Lasdun finally begins to pick up on the lighthearted irony and the overall impish tone of the novel, his disappointment has him admitting that all the playfulness is enjoyable but longing nonetheless for it to serve some greater end. Such longing betrays a remarkable degree of obliviousness to the fact that the final revelation of the plot actually does serve an end, a quite obvious one. Lasdun misses it, apparently because the point is moral as opposed to political. A large portion of the novel’s charm stems from the realization, which I’m confident most readers will come to early on, that Sweet Tooth, for all the big talk about global crises and intrigue, is an intimately personal story about a moral dilemma and its outcomes—at least at the most basic level. The novel’s scope expands beyond this little drama by taking on themes that present various riddles and paradoxes. But whereas countless novels in the postmodern tradition have us taking for granted that literary riddles won’t have answers and plot paradoxes won’t have points, McEwan is going for an effect that’s much more profound.

The most serious criticism I came across was at the end of the Economist review. The unnamed critic doesn’t appreciate the surprise revelation that comes near the end of the book, insisting that afterward, “it is hard to feel much of anything for these heroes, who are all notions and no depth.” What’s interesting is that the author presents this not as an observation but as a logical conclusion. I’m aware of how idiosyncratic responses to fictional characters are, and I accept that my own writing here won’t settle the issue, but I suspect most readers will find the assertion that Sweet Tooth’s characters are “all notion” absurd. I even have a feeling that the critic him- or herself sympathized with Serena right up until the final chapter—as the critic from The Irish Independent must have. Why else would they be so frustrated as to want to throw the book out of the window? Several instances of Serena jumping into life from the page suggest themselves for citation, but here’s one I found particularly endearing. It comes as she’s returning to her parents’ house for Christmas after a long absence and is greeted by her father, an Anglican Bishop, at the door:

“Serena!” He said my name with a kindly, falling tone, with just a hint of mock surprise, and put his arms about me. I dropped my bag at my feet and let myself be enfolded, and as I pressed my face into his shirt and caught the familiar scent of Imperial Leather soap, and of church—of lavender wax—I started to cry. I don’t know why, it just came from nowhere and I turned to water. I don’t cry easily and I was as surprised as he was. But there was nothing I could do about it. This was the copious hopeless sort of crying you might hear from a tired child. I think it was his voice, the way he said my name, that set me off. (217)

This scene reminds me of when I heard my dad had suffered a heart attack several years ago: even though at the time I was so pissed off at the man I’d been telling myself I’d be better off never seeing him again, I barely managed two steps after hanging up the phone before my knees buckled and I broke down sobbing—so deep are these bonds we carry on into adulthood even when we barely see our parents, so shocking when their strength is made suddenly apparent. (Fortunately, my dad recovered after a quintuple bypass.)

But, if the critic for the Economist concluded that McEwan’s characters must logically be mere notions despite having encountered them as real people until the end of the novel, what led to that clearly mistaken deduction? I would be willing to wager that McEwan shares with me a fondness for the writing of the cognitive scientist Douglas Hofstadter, in particular Gödel, Escher, Bach and I Am a Strange Loop, both of which set about arriving at an intuitive understanding of the mystery of how consciousness arises from the electrochemical mechanisms of our brains, offering as analogies several varieties of paradoxical, self-referencing feedback loops, like cameras pointing at the TV screens they feed into. What McEwan has engineered—there’s no better word for it—with his novel is a multilevel, self-referential structure that transforms and transcends its own processes and premises as it folds back on itself.

One of the strange loops Hofstadter explores, M.C. Escher’s 1960 lithograph Ascending and Descending, can give us some helpful guidance in understanding what McEwan has done. If you look at the square staircase in Escher’s lithograph a section at a time, you see that each step continues either ascending or descending, depending on the point of view you take. And, according to Hofstadter in Strange Loop,

A person is a point of view—not only a physical point of view (looking out of certain eyes in a certain physical space in the universe), but more importantly a psyche’s point of view: a set of hair-trigger associations rooted in a huge bank of memories. (234)

Importantly, many of those associations are made salient with emotions, so that certain thoughts affect us in powerful ways we might not be able to anticipate, as when Serena cries at the sound of her father’s voice, or when I collapsed at the news of my father’s heart attack. These emotionally tagged thoughts form a strange loop when they turn back on the object, now a subject, doing the thinking. The neuron forms the brain that produces the mind that imagines the neuron, in much the same way as each stair in the picture takes a figure both up and down the staircase. What happened for the negative reviewers of Sweet Tooth is that they completed a circuit of the stairs and realized they couldn’t possibly have been going up (or down), even though at each step along the way they were probably convinced.

McEwan, interviewed by Daniel Zalewski for the New Yorker in 2009, said, “When I’m writing I don’t really think about themes,” admitting that instead he follows Nabokov’s dictum to “Fondle details.”

Writing is a bottom-up process, to borrow a term from the cognitive world. One thing that’s missing from the discussion of literature in the academy is the pleasure principle. Not only the pleasure of the reader but also of the writer. Writing is a self-pleasuring act.

The immediate source of pleasure for McEwan, then, and he probably assumes for his readers as well, comes at the level of the observations and experiences he renders through prose.

Sweet Tooth is full of great lines like, “Late October brought the annual rite of putting back the clocks, tightening the lid of darkness over our afternoons, lowering the nation’s mood further” (179). But McEwan would know quite well that writing is also a top-down process; at some point themes and larger conceptual patterns come into play. In his novel Saturday, the protagonist, a neurosurgeon named Henry Perowne, listens to Angela Hewitt’s performance of Bach’s strangely loopy “Goldberg” Variations. McEwan writes,

Well over an hour has passed, and Hewitt is already at the final Variation, the Quodlibet—uproarious and jokey, raunchy even, with its echoes of peasant songs of food and sex. The last exultant chords fade away, a few seconds’ silence, then the Aria returns, identical on the page, but changed by all the variations that have come before. (261-2)

Just as an identical Aria or the direction of ascent or descent in an image of stairs can be transformed by a shift in perspective, details about a character, though they may be identical on the page, can have radically different meanings and serve radically different purposes, depending on your point of view.

Though in the novel Serena is genuinely enthusiastic about Tom’s fiction, the two express their disagreements about what constitutes good literature at several points. “I thought his lot were too dry,” Serena writes, “he thought mine were wet” (183). She likes sentimental endings and sympathetic characters; he admires technical élan. Even when they agree that a particular work is good, it’s for different reasons: “He thought it was beautifully formed,” she says of a book they both love, “I thought it was wise and sad” (183). Responding to one of Tom’s stories that features a talking ape who turns out never to have been real, Serena says,

I instinctively distrusted this kind of fictional trick. I wanted to feel the ground beneath my feet. There was, in my view, an unwritten contract with the reader that the writer must honor. No single element of an imagined world or any of its characters should be allowed to dissolve on authorial whim. The invented had to be as solid and as self-consistent as the actual. This was a contract founded on mutual trust. (183)

A couple of the reviewers suggested that the last chapter of Sweet Tooth revealed that Serena had been made to inhabit precisely the kind of story that she’d been saying all along she hated. But a moment’s careful reflection would have made them realize this isn’t true at all. What’s brilliant about McEwan’s narrative engineering is that it would satisfy the tastes of both Tom and Serena. Despite the surprise revelation at the end—the trick—not one of the terms of Serena’s contract is broken. The plot works as a trick, but it also works as an intimate story about real people in a real relationship. To get a taste of how this can work, consider the following passage:

Tom promised to read me a Kingsley Amis poem, “A Bookshop Idyll,” about men’s and women’s divergent tastes. It went a bit soppy at the end, he said, but it was funny and true. I said I’d probably hate it, except for the end. (175)

The self-referentiality of the idea makes of it a strange loop, so it can be thought of at several levels, each of which is consistent and solid, but none of which captures the whole meaning.

Sweet Tooth is a fun novel to read, engrossing and thought-provoking, combining the pleasures of genre fiction with the mind-expanding thought experiments of the best science writing. The plot centers on a troubling but compelling moral dilemma, and, astonishingly, the surprise revelation at the end actually represents a solution to this dilemma. I do have to admit, however, that I agree with the Economist that it’s not McEwan’s best novel. The conceptual plot devices bear several similarities to those in his earlier novel Atonement, and that novel is much more serious, its stakes much higher.

Sweet Tooth is nowhere near as haunting as Atonement. But it doesn’t need to be.

Also read:

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And:

MUDDLING THROUGH "LIFE AFTER LIFE": A REFLECTION ON PLOT AND CHARACTER IN KATE ATKINSON’S NEW NOVEL

Dennis Junk

Let's Play Kill Your Brother: Fiction as a Moral Dilemma Game

Anthropologist Jean Briggs discovered one of the keys to Inuit peacekeeping in the style of play adults use to engage children. She describes the games in her famous essay, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” and in so doing, probably unknowingly, lays the groundwork for an understanding of how our love of fiction evolved, along with our moral sensibilities.

Season 3 of Breaking Bad opens with two expressionless Mexican men in expensive suits stepping out of a Mercedes, taking a look around the peasant village they’ve just arrived in, and then dropping to the ground to crawl on their knees and elbows to a candlelit shrine where they leave an offering to Santa Muerte, along with a crude drawing of the meth cook known as Heisenberg, marking him for execution. We later learn that the two men, Leonel and Marco, who look almost identical, are in fact twins (played by Daniel and Luis Moncada), and that they are the cousins of Tuco Salamanca, a meth dealer and cartel affiliate they believe Heisenberg betrayed and killed. We also learn that they kill people themselves as a matter of course, without registering the slightest emotion and without uttering a word to each other to mark the occasion. An episode later in the season, after we’ve been made amply aware of how coldblooded these men are, begins with a flashback to a time when they were just boys fighting over an action figure as their uncle talks cartel business on the phone nearby. After Marco gets tired of playing keep-away, he tries to provoke Leonel further by pulling off the doll’s head, at which point Leonel runs to his Uncle Hector, crying, “He broke my toy!”

“He’s just having fun,” Hector says, trying to calm him. “You’ll get over it.”

“No! I hate him!” Leonel replies. “I wish he was dead!”

Hector’s expression turns grave. After a moment, he calls Marco over and tells him to reach into the tub of melting ice beside his chair to get him a beer. When the boy leans over the tub, Hector shoves his head into the water and holds it there. “This is what you wanted,” he says to Leonel. “Your brother dead, right?” As the boy frantically pulls on his uncle’s arm trying to free his brother, Hector taunts him: “How much longer do you think he has down there? One minute? Maybe more? Maybe less? You’re going to have to try harder than that if you want to save him.” Leonel starts punching his uncle’s arm but to no avail. Finally, he rears back and punches Hector in the face, prompting him to release Marco and rise from his chair to stand over the two boys, who are now kneeling beside each other. Looking down at them, he says, “Family is all.”

The scene serves several dramatic functions. By showing the ruthless and violent nature of the boys’ upbringing, it intensifies our fear on behalf of Heisenberg, who we know is actually Walter White, a former chemistry teacher and family man from a New Mexico suburb who only turned to crime to make some money for his family before his lung cancer kills him. It also goes some distance toward humanizing the brothers by giving us insight into how they became the mute, mechanical murderers they are when we’re first introduced to them. The bond between the two men and their uncle will be important in upcoming episodes as well. But the most interesting thing about the scene is that it represents in microcosm the single most important moral dilemma of the whole series.

Marco and Leonel are taught to do violence if need be to protect their family. Walter, the show’s central character, gets involved in the meth business for the sake of his own family, and as he continues getting more deeply enmeshed in the world of crime he justifies his decisions at each juncture by saying he’s providing for his wife and kids. But how much violence can really be justified, we’re forced to wonder, with the claim that you’re simply protecting or providing for your family? The entire show we know as Breaking Bad can actually be conceived of as a type of moral exercise like the one Hector puts his nephews through, designed to impart or reinforce a lesson, though the lesson of the show is much more complicated. It may even be the case that our fondness for fictional narratives more generally, like the ones we encounter in novels and movies and TV shows, originated in our need as a species to develop and hone complex social skills involving powerful emotions and difficult cognitive calculations.

Most of us watching Breaking Bad probably feel Hector went way too far with his little lesson, and indeed I’d like to think not too many parents or aunts and uncles would be willing to risk drowning a kid to reinforce the bond between him and his brother. But presenting children with frightening and stressful moral dilemmas to guide them through major lifecycle transitions—weaning, the birth of siblings, adoptions—which tend to arouse severe ambivalence can be an effective way to encourage moral development and instill traditional values. The ethnographer Jean Briggs has found that among the Inuit peoples whose cultures she studies, adults frequently engage children in what she calls “playful dramas” (173), which entail hypothetical moral dilemmas that put the children on the hot seat as they struggle to come up with a solution. She writes about these lessons, which strike many outsiders as a cruel form of teasing by the adults, in “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps,” a chapter she contributed to a 1994 anthology of anthropological essays on peace and conflict. In one example Briggs recounts,

A mother put a strange baby to her breast and said to her own nursling: “Shall I nurse him instead of you?” The mother of the other baby offered her breast to the rejected child and said: “Do you want to nurse from me? Shall I be your mother?” The child shrieked a protest shriek. Both mothers laughed. (176)

This may seem like sadism on the part of the mothers, but it probably functioned to soothe the bitterness arising from the child’s jealousy of a younger nursling. It would also help to settle some of the ambivalence toward the child’s mother, which comes about inevitably as a response to disciplining and other unavoidable frustrations.

Another example Briggs describes seems even more pointlessly sadistic at first glance. A little girl’s aunt takes her hand and puts it on a little boy’s head, saying, “Pull his hair.” The girl doesn’t respond, so her aunt yanks on the boy’s hair herself, making him think the girl had done it. They quickly become embroiled in a “battle royal,” urged on by several adults who find it uproarious. These adults do, however, end up stopping the fight before any serious harm can be done. As horrible as this trick may seem, Briggs believes it serves to instill in the children a strong distaste for fighting because the experience is so unpleasant for them. They also learn “that it is better not to be noticed than to be playfully made the center of attention and laughed at” (177). What became clear to Briggs over time was that the teasing she kept witnessing wasn’t just designed to teach specific lessons but that it was also tailored to the child’s specific stage of development. She writes,

Indeed, since the games were consciously conceived of partly as tests of a child’s ability to cope with his or her situation, the tendency was to focus on a child’s known or expected difficulties. If a child had just acquired a sibling, the game might revolve around the question: “Do you love your new baby sibling? Why don’t you kill him or her?” If it was a new piece of clothing that the child had acquired, the question might be: “Why don’t you die so I can have it?” And if the child had been recently adopted, the question might be: “Who’s your daddy?” (172)

As unpleasant as these tests can be for the children, they never entail any actual danger—Inuit adults would probably agree Hector Salamanca went a bit too far—and they always take place in circumstances and settings where the only threats and anxieties come from the hypothetical, playful dilemmas and conflicts. Briggs explains,

A central idea of Inuit socialization is to “cause thought”: isumaqsayuq. According to [Arlene] Stairs, isumaqsayuq, in North Baffin, characterizes Inuit-style education as opposed to the Western variety. Warm and tender interactions with children help create an atmosphere in which thought can be safely caused, and the questions and dramas are well designed to elicit it. More than that, and as an integral part of thought, the dramas stimulate emotion. (173)

Part of the exercise then seems to be to introduce the children to their own feelings. Prior to having their sibling’s life threatened, the children may not have any idea how they’d feel in the event of that sibling’s death. After the test, however, it becomes much more difficult for them to entertain thoughts of harming their brother or sister—the thought alone will probably be unpleasant.

Briggs also points out that the games send the implicit message to the children that they can be trusted to arrive at the moral solution. Hector knows Leonel won’t let his brother drown—and Leonel learns that his uncle knows this about him. The Inuit adults who tease and tempt children are letting them know they have faith in the children’s ability to resist their selfish or aggressive impulses. Discussing Briggs’s work in his book Moral Origins: The Evolution of Virtue, Altruism, and Shame, anthropologist Christopher Boehm suggests that evolution has endowed children with the social and moral emotions we refer to collectively as consciences, but these inborn moral sentiments need to be activated and shaped through socialization. He writes,

On the one side there will always be our usefully egoistic selfish tendencies, and on the other there will be our altruistic or generous impulses, which also can advance our fitness because altruism and sympathy are valued by our peers. The conscience helps us to resolve such dilemmas in ways that are socially acceptable, and these Inuit parents seem to be deliberately “exercising” the consciences of their children to make morally socialized adults out of them. (226)

The Inuit-style moral dilemma games seem strange, even shocking, to people from industrialized societies, and so it’s clear they’re not a normal part of children’s upbringing in every culture. They don’t even seem to be all that common among hunter-gatherers outside the region of the Arctic. Boehm writes, however,

Deliberately and stressfully subjecting children to nasty hypothetical dilemmas is not universal among foraging nomads, but as we’ll see with Nisa, everyday life also creates real moral dilemmas that can involve Kalahari children similarly. (226)

Boehm goes on to recount an episode from anthropologist Marjorie Shostak’s famous biography Nisa: The Life and Words of a !Kung Woman to show that parents all the way on the opposite side of the world from where Briggs did her fieldwork sometimes light on similar methods for stimulating their children’s moral development.

Nisa seems to have been a greedy and impulsive child. When her pregnant mother tried to wean her, she would have none of it. At one point, she even went so far as to sneak into the hut while her mother was asleep and try to suckle without waking her up. Throughout the pregnancy, Nisa continually expressed ambivalence toward the upcoming birth of her sibling, so much so that her parents anticipated there might be some problems. The !Kung resort to infanticide in certain dire circumstances, and Nisa’s parents probably reasoned she was at least somewhat familiar with the coping mechanism many other parents used when killing a newborn was necessary. What they’d do is treat the baby as an object, not naming it or in any other way recognizing its identity as a family member. Nisa explained to Shostak how her parents used this knowledge to impart a lesson about her baby brother.

After he was born, he lay there, crying. I greeted him, “Ho, ho, my baby brother! Ho, ho, I have a little brother! Some day we’ll play together.” But my mother said, “What do you think this thing is? Why are you talking to it like that? Now, get up and go back to the village and bring me my digging stick.” I said, “What are you going to dig?” She said, “A hole. I’m going to dig a hole so I can bury the baby. Then you, Nisa, will be able to nurse again.” I refused. “My baby brother? My little brother? Mommy, he’s my brother! Pick him up and carry him back to the village. I don’t want to nurse!” Then I said, “I’ll tell Daddy when he comes home!” She said, “You won’t tell him. Now, run back and bring me my digging stick. I’ll bury him so you can nurse again. You’re much too thin.” I didn’t want to go and started to cry. I sat there, my tears falling, crying and crying. But she told me to go, saying she wanted my bones to be strong. So, I left and went back to the village, crying as I walked. (The weaning episode occurs on pgs. 46-57)

Again, this may strike us as cruel, but by threatening her brother’s life, Nisa’s mother succeeded in triggering her natural affection for him, thus tipping the scales of her ambivalence to ensure the protective and loving feelings won out over the bitter and jealous ones. This example was extreme enough that Nisa remembered it well into adulthood, but Boehm sees it as evidence that real life reliably offers up dilemmas parents all over the world can use to instill morals in their children. He writes,

I believe that all hunter-gatherer societies offer such learning experiences, not only in the real-life situations children are involved with, but also in those they merely observe. What the Inuit whom Briggs studied in Cumberland Sound have done is to not leave this up to chance. And the practice would appear to be widespread in the Arctic. Children are systematically exposed to life’s typical stressful moral dilemmas, and often hypothetically, as a training ground that helps to turn them into adults who have internalized the values of their groups. (234)

One of the reasons such dilemmas, whether real or hypothetical or merely observed, are effective as teaching tools is that they bypass the threat to personal autonomy that tends to accompany direct instruction. Imagine Tío Salamanca simply scolding Leonel for wishing his brother dead—it would have only aggravated his resentment and sparked defiance. Leonel would probably also harbor some bitterness toward his uncle for unjustly defending Marco. In any case, he would have been stubbornly resistant to the lesson.

Winston Churchill nailed the sentiment when he said, “Personally, I am always ready to learn, although I don’t always like being taught.” The Inuit-style moral dilemmas force the children to come up with the right answer on their own, a task that requires the integration and balancing of short- and long-term desires, individual and group interests, and powerful albeit contradictory emotions. The skills that go into solving such dilemmas are indistinguishable from the qualities we recognize as maturity, self-knowledge, generosity, poise, and wisdom.

For the children Briggs witnessed being subjected to these moral tests, the understanding that the dilemmas were in fact only hypothetical developed gradually as they matured. For the youngest ones, the stakes were real and the solutions were never clear at the outset. Briggs explains that

while the interaction between small children and adults was consistently good-humored, benign, and playful on the part of the adults, it taxed the children to—or beyond—the limits of their ability to understand, pushing them to expand their horizons, and testing them to see how much they had grown since the last encounter. (173)

What this suggests is that there isn’t always a simple declarative lesson—a moral to the story, as it were—imparted in these games. Instead, the solutions to the dilemmas can often be open-ended, and the skills the children practice can thus be more general and abstract than some basic law or principle. Briggs goes on,

Adult players did not make it easy for children to thread their way through the labyrinth of tricky proposals, questions, and actions, and they did not give answers to the children or directly confirm the conclusions the children came to. On the contrary, questioning a child’s first facile answers, they turned situations round and round, presenting first one aspect then another, to view. They made children realize their emotional investment in all possible outcomes, and then allowed them to find their own way out of the dilemmas that had been created—or perhaps, to find ways of living with unresolved dilemmas. Since children were unaware that the adults were “only playing,” they could believe that their own decisions would determine their fate. And since the emotions aroused in them might be highly conflicted and contradictory—love as well as jealousy, attraction as well as fear—they did not always know what they wanted to decide. (174-5)

As the children mature, they become more adept at distinguishing between real and hypothetical problems. Indeed, Briggs suggests one of the ways adults recognize children’s budding maturity is that they begin to treat the dilemmas as a game, ceasing to take them seriously, and ceasing to take themselves as seriously as they did when they were younger.

In his book On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd theorizes that the fictional narratives that humans engage one another with in every culture all over the world, be they in the form of religious myths, folklore, or plays and novels, can be thought of as a type of cognitive play—similar to the hypothetical moral dilemmas of the Inuit. He sees storytelling as an adaptation that encourages us to train the mental faculties we need to function in complex societies. The idea is that evolution ensures that adaptive behaviors tend to be pleasurable, and thus many animals playfully and joyously engage in activities in low-stakes, relatively safe circumstances that will prepare them to engage in similar activities that have much higher stakes and are much more dangerous. Boyd explains,

The more pleasure that creatures have in play in safe contexts, the more they will happily expend energy in mastering skills needed in urgent or volatile situations, in attack, defense, and social competition and cooperation. This explains why in the human case we particularly enjoy play that develops skills needed in flight (chase, tag, running) and fight (rough-and-tumble, throwing as a form of attack at a distance), in recovery of balance (skiing, surfing, skateboarding), and individual and team games. (92)

The skills most necessary to survive and thrive in human societies are the same ones Inuit adults help children develop with the hypothetical dilemmas Briggs describes. We should expect fiction then to feature similar types of moral dilemmas. Some stories may be designed to convey simple messages—“Don’t hurt your brother,” “Don’t stray from the path”—but others might be much more complicated; they may not even have any viable solutions at all. “Art prepares minds for open-ended learning and creativity,” Boyd writes; “fiction specifically improves our social cognition and our thinking beyond the here and now” (209).

One of the ways the cognitive play we call novels or TV shows differs from Inuit dilemma games is that the fictional characters take over center stage from the individual audience members. Instead of being forced to decide on a course of action ourselves, we watch characters we’ve become emotionally invested in try to come up with solutions to the dilemmas. When these characters are first introduced to us, our feelings toward them will be based on the same criteria we’d apply to real people who could potentially become a part of our social circles. Boyd explains,

Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch. (130)

We favor characters who are good team players—who communicate honestly, who show concern for others, and who direct aggression toward enemies and cheats—for obvious reasons, but we also assess them in terms of what they might contribute to the group. Characters with exceptional strength, beauty, intelligence, or artistic ability are always especially attention-worthy. Of course, characters with qualities that make them sometimes an asset and sometimes a liability represent a moral dilemma all on their own—it’s no wonder such characters tend to be so compelling.

The most common fictional dilemma pits a character we like against one or more characters we hate—the good team player versus the power- or money-hungry egoist. We can think of the most straightforward plot as an encroachment of chaos on the providential moral order we might otherwise take for granted. When the bad guy is finally defeated, it’s like a toy that was snatched away from us has just been returned. We embrace the moral order all the more vigorously. But of course our stories aren’t limited to this one basic formula. Around the turn of the last century, the French writer Georges Polti, following up on the work of Italian playwright Carlo Gozzi, set out to compile a comprehensive list of all the basic plots in plays and novels. Flipping through his book The Thirty-Six Dramatic Situations, you find that with few exceptions (“Daring Enterprise,” “The Enigma,” “Recovery of a Lost One”) the situations aren’t simply encounters between characters with conflicting goals, or characters who run into obstacles in chasing after their desires. The conflicts are nearly all moral, either between a virtuous character and a less virtuous one or between selfish or greedy impulses and more altruistic ones. Polti’s book could be called The Thirty-Odd Moral Dilemmas in Fiction. Hector Salamanca would be happy (not really) to see the thirteenth situation: “Enmity of Kinsmen,” the first example of which is “Hatred of Brothers” (49).

One type of fictional dilemma that seems to be particularly salient in American society today pits our impulse to punish wrongdoers against our admiration for people with exceptional abilities. Characters like Walter White in Breaking Bad win us over with qualities like altruism, resourcefulness, and ingenuity—but then they go on to behave in strikingly, though somehow not obviously, immoral ways. Variations on Conan Doyle’s Sherlock Holmes abound; he’s the supergenius who’s also a dick (get the double-entendre?): the BBC’s Sherlock (by far the best), the movies starring Robert Downey Jr., the upcoming series featuring an Asian female Watson (Lucy Liu)—plus all the minor variations like The Mentalist and House.

Though the idea that fiction is a type of low-stakes training simulation to prepare people cognitively and emotionally to take on difficult social problems in real life may not seem all that earth-shattering, conceiving of stories as analogous to Inuit moral dilemmas designed to exercise children’s moral reasoning faculties can nonetheless help us understand why worries about the examples set by fictional characters are so often misguided. Many parents and teachers noisily complain about sex or violence or drug use in media. Academic literary critics condemn the way this or that author portrays women or minorities. Underlying these concerns is the crude assumption that stories simply encourage audiences to imitate the characters, that those audiences are passive receptacles for the messages—implicit or explicit—conveyed through the narrative. To be fair, these worries may be well placed when it comes to children so young they lack the cognitive sophistication necessary for separating their thoughts and feelings about protagonists from those they have about themselves, and are thus prone to take the hero for a simple model of emulation-worthy behavior. But, while Inuit adults communicate to children that they can be trusted to arrive at a right or moral solution, the moralizers in our culture betray their utter lack of faith in the intelligence and conscience of the people they try to protect from the corrupting influence of stories with imperfect or unsavory characters.

This type of self-righteous and overbearing attitude toward readers and viewers strikes me as orders of magnitude more likely to provoke defiant resistance to moral lessons than the North Baffin’s isumaqsayuq approach. In other words, a good story is worth a thousand sermons. But if the moral dilemma at the core of the plot has an easy solution—if you can say precisely what the moral of the story is—it’s probably not a very good story.

Also read

The Criminal Sublime: Walter White's Brutally Plausible Journey to the Heart of Darkness in Breaking Bad

And

SYMPATHIZING WITH PSYCHOS: WHY WE WANT TO SEE ALEX ESCAPE HIS FATE AS A CLOCKWORK ORANGE

And

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

Dennis Junk

The Imp of the Underground and the Literature of Low Status

A famous scene in “Notes from the Underground” echoes a famous study comparing people’s responses to an offense. What are the implications for behavior and personality of having low social status, and how does that play out in fiction? Is Poe’s “Imp of the Perverse” really just an example of our inborn defiance, our raging against the machine?

The one overarching theme in literature, and I mean all literature since there’s been any to speak of, is injustice. Does the girl get the guy she deserves? If so, the work is probably commercial, as opposed to literary, fiction. If not, then the reason begs pondering. Maybe she isn’t pretty enough, despite her wit and aesthetic sophistication, so we’re left lamenting the shallowness of our society’s males. Maybe she’s of a lower caste, despite her unassailable virtue, in which case we’re forced to question our complacency before morally arbitrary class distinctions. Or maybe the timing was just off—cursed fate in all her fickleness. Another literary work might be about the woman who ends up without the fulfilling career she longed for and worked hard to get, in which case we may blame society’s narrow conception of femininity, as evidenced by all those damn does-the-girl-get-the-guy stories.

The prevailing theory of what arouses our interest in narratives focuses on the characters’ goals, which magically, by some as yet undiscovered cognitive mechanism, become our own. But plots often catch us up before any clear goals are presented to us, and our partisanship on behalf of a character easily endures shifting purposes. We as readers and viewers are not swept into stories through the transubstantiation of someone else’s striving into our own, with the protagonist serving as our avatar as we traverse the virtual setting and experience the pre-orchestrated plot. Rather, we reflexively monitor the character for signs of virtue and for a capacity to contribute something of value to his or her community, the same way we, in our nonvirtual existence, would monitor and assess a new coworker, classmate, or potential date. While suspense in commercial fiction hinges on high-stakes struggles between characters easily recognizable as good and those easily recognizable as bad, and comfortably condemnable as such, forward momentum in literary fiction—such as it is—depends on scenes in which the protagonist is faced with temptations, tests of virtue, moral dilemmas.

The strain and complexity of coming to some sort of resolution to these dilemmas often serves as a theme in itself, a comment on the mad world we live in, where it’s all but impossible to discern between right and wrong. Indeed, the most common emotional struggle depicted in literature is that between the informal, even intimate handling of moral evaluation—which comes naturally to us owing to our evolutionary heritage as a group-living species—and the official, systematized, legal or institutional channels for determining merit and culpability that became unavoidable as societies scaled up exponentially after the advent of agriculture. These burgeoning impersonal bureaucracies are all too often ill-equipped to properly weigh messy mitigating factors, and they’re all too vulnerable to subversion by unscrupulous individuals who know how to game them. Psychopaths who ought to be in prison instead become CEOs of multinational investment firms, while sensitive and compassionate artists and humanitarians wind up taking lowly day jobs at schools or used book stores. But the feature of institutions and bureaucracies—and of complex societies more generally—that takes the biggest toll on our Pleistocene psyches, the one that strikes us as the most glaring injustice, is their stratification, their arrangement into steeply graded hierarchies.

Unlike our hierarchical ape cousins, the present-day peoples who still live in small groups as nomadic foragers, like those our ancestors lived in throughout the epoch that gave rise to the suite of traits we recognize as uniquely human, all collectively enforce an ethos of egalitarianism. As anthropologist Christopher Boehm explains in his book Hierarchy in the Forest: The Evolution of Egalitarianism,

Even though individuals may be attracted personally to a dominant role, they make a common pact which says that each main political actor will give up his modest chances of becoming alpha in order to be certain that no one will ever be alpha over him. (105)

Since humans evolved from a species that was ancestral to both chimpanzees and gorillas, we carry in us many of the emotional and behavioral capacities that support hierarchies. But, during all those millennia of egalitarianism, we also developed an instinctive distaste for behaviors that undermine an individual’s personal sovereignty. “On their list of serious moral transgressions,” Boehm explains,

hunter-gatherers regularly proscribe the enactment of behavior that is politically overbearing. They are aiming at upstarts who threaten the autonomy of other group members, and upstartism takes various forms. An upstart may act the bully simply because he is disposed to dominate others, or he may become selfishly greedy when it is time to share meat, or he may want to make off with another man’s wife by threat or by force. He (or sometimes she) may also be a respected leader who suddenly begins to issue direct orders… An upstart may simply take on airs of superiority, or may aggressively put others down and thereby violate the group’s idea of how its main political actors should be treating one another. (43)

In a band of thirty people, it’s possible to keep a vigilant eye on everyone and head off potential problems. But, as populations grow, encounters with strangers in settings where no one knows one another open the way for threats to individual autonomy and casual insults to personal dignity. And, as professional specialization and institutional complexity increase in pace with technological advancement, power structures become necessary for efficient decision-making. Economic inequality then takes hold as a corollary of professional inequality.

None of this is to suggest that the advance of civilization inevitably leads to increasing injustice. In fact, per capita murder rates are much higher in hunter-gatherer societies. Nevertheless, the impersonal nature of our dealings with others in the modern world often strikes us as overly conducive to perverse incentives and unfair outcomes. And even the most mundane signals of superior status or the most subtle expressions of power, though officially sanctioned, can be maddening. Compare this famous moment in literary history to Boehm’s account of hunter-gatherer political philosophy:

I was standing beside the billiard table, blocking the way unwittingly, and he wanted to pass; he took me by the shoulders and silently—with no warning or explanation—moved me from where I stood to another place, and then passed by as if without noticing. I could have forgiven a beating, but I simply could not forgive his moving me and in the end just not noticing me. (49)

The billiard player's failure to acknowledge the narrator's autonomy outrages him, and he considers attacking the man who has treated him with such disrespect. But he can’t bring himself to do it. He explains,

I turned coward not from cowardice, but from the most boundless vanity. I was afraid, not of six-foot-tallness, nor of being badly beaten and chucked out the window; I really would have had physical courage enough; what I lacked was sufficient moral courage. I was afraid that none of those present—from the insolent marker to the last putrid and blackhead-covered clerk with a collar of lard who was hanging about there—would understand, and that they would all deride me if I started protesting and talking to them in literary language. Because among us to this day it is impossible to speak of a point of honor—that is, not honor, but a point of honor (point d’honneur) otherwise than in literary language. (50)

The languages of law and practicality are the only ones whose legitimacy is recognized in modern societies. The language of morality used to describe sentiments like honor has been consigned to literature. This man wants to exact his revenge for the slight he suffered, but that would require his revenge to be understood by witnesses as such. The derision he can count on from all the bystanders would just compound the slight. In place of a close-knit moral community, there is only a loose assortment of strangers. And so he has no recourse.

            The character in this scene could be anyone. Males may be more keyed into the physical dimension of domination and more prone to react with physical violence, but females likewise suffer from slights and belittlements, and react aggressively, often by attacking their tormentor's reputation through gossip. Treating a person of either gender as an insensate obstacle is easier when that person is a stranger you’re unlikely ever to encounter again. But another dynamic is at play in the scene, one that makes such treatment still easier—almost inevitable. After being unceremoniously moved aside, the narrator becomes obsessed with the man who treated him so dismissively. Desperate to even the score, he ends up stalking the man, stewing resentfully, trying to come up with a plan. He writes,

And suddenly… suddenly I got my revenge in the simplest, the most brilliant way! The brightest idea suddenly dawned on me. Sometimes on holidays I would go to Nevsky Prospect between three and four, and stroll along the sunny side. That is, I by no means went strolling there, but experienced countless torments, humiliations and risings of bile: that must have been just what I needed. I darted like an eel among the passers-by, in a most uncomely fashion, ceaselessly giving way now to generals, now to cavalry officers and hussars, now to ladies; in those moments I felt convulsive pains in my heart and a hotness in my spine at the mere thought of the measliness of my attire and the measliness and triteness of my darting little figure. This was a torment of torments, a ceaseless, unbearable humiliation from the thought, which would turn into a ceaseless and immediate sensation, of my being a fly before that whole world, a foul, obscene fly—more intelligent, more developed, more noble than everyone else—that went without saying—but a fly, ceaselessly giving way to everyone, humiliated by everyone, insulted by everyone. (52)

So the indignity, it seems, was not born of being moved aside like a piece of furniture so much as of being afforded absolutely no status. That’s why being beaten would have been preferable; a beating implies a modicum of worthiness in that it demands recognition, effort, even risk, no matter how slight.

            The idea that occurs to the narrator for the perfect revenge requires that he first remedy the outward signals of his lower social status, “the measliness of my attire and the measliness… of my darting little figure,” as he calls them. The catch is that to don the proper attire for leveling a challenge, he has to borrow money from a man he works with—which only adds to his daily feelings of humiliation. Psychologists Derek Rucker and Adam Galinsky have conducted experiments demonstrating that people display a disturbing readiness to compensate for feelings of powerlessness and low status by making pricey purchases, even though in the long run such expenditures only serve to perpetuate their lowly economic and social straits. The irony is heightened in the story when the actual revenge itself, the trappings for which were so dearly purchased, turns out to be so bathetic.

Suddenly, within three steps of my enemy, I unexpectedly decided, closed my eyes, and—we bumped solidly shoulder against shoulder! I did not yield an inch and passed by on perfectly equal footing! He did not even look back and pretended not to notice: but he only pretended, I’m sure of that. To this day I’m sure of it! Of course, I got the worst of it; he was stronger, but that was not the point. The point was that I had achieved my purpose, preserved my dignity, yielded not a step, and placed myself publicly on an equal social footing with him. I returned home perfectly avenged for everything. (55)

But this perfect vengeance has cost him not only the price of a new coat and hat; it has cost him a full two years of obsession, anguish, and insomnia as well. The implication is that being of lowly status is a constant psychological burden, one that makes people so crazy they become incapable of making rational decisions.

            Literature buffs will have recognized these scenes from Dostoevsky’s Notes from Underground (as translated by Richard Pevear and Larissa Volokhonsky), which satirizes the idea of a society based on the principle of “rational egoism,” as symbolized by N.G. Chernyshevsky’s image of a “crystal palace” (25), a well-ordered utopia in which every citizen pursues his or her own rational self-interests. Dostoevsky’s underground man hates the idea because, regardless of how effectively such a society might satisfy people’s individual needs, the rigid conformity it would demand would be intolerable. The supposed utopia, then, could never satisfy people’s true interests. He argues,

That’s just the thing, gentlemen, that there may well exist something that is dearer for almost every man than his very best profit, or (so as not to violate logic) that there is this one most profitable profit (precisely the omitted one, the one we were just talking about), which is chiefer and more profitable than all other profits, and for which a man is ready, if need be, to go against all laws, that is, against reason, honor, peace, prosperity—in short, against all these beautiful and useful things—only so as to attain this primary, most profitable profit which is dearer to him than anything else. (22)

The underground man cites examples of people behaving against their own best interests in this section, which serves as a preface to the story of his revenge against the billiard player who so blithely moves him aside. The way he explains this “very best profit” which makes people like himself behave in counterproductive, even self-destructive ways is to suggest that nothing else matters unless everyone’s freedom to choose how to behave is held inviolate. He writes,

One’s own free and voluntary wanting, one’s own caprice, however wild, one’s own fancy, though chafed sometimes to the point of madness—all this is that same most profitable profit, the omitted one, which does not fit into any classification, and because of which all systems and theories are constantly blown to the devil… Man needs only independent wanting, whatever this independence may cost and wherever it may lead. (25-6)

Notes from Underground was originally published in 1864. But the underground man echoes, wittingly or not, the narrator of Edgar Allan Poe’s story from almost twenty years earlier, "The Imp of the Perverse," who posits an innate drive to perversity, explaining,

Through its promptings we act without comprehensible object. Or if this shall be understood as a contradiction in terms, we may so far modify the proposition as to say that through its promptings we act for the reason that we should not. In theory, no reason can be more unreasonable, but in reality there is none so strong. With certain minds, under certain circumstances, it becomes absolutely irresistible. I am not more sure that I breathe, than that the conviction of the wrong or impolicy of an action is often the one unconquerable force which impels us, and alone impels us, to its prosecution. Nor will this overwhelming tendency to do wrong for the wrong’s sake, admit of analysis, or resolution to ulterior elements. (403)

This narrator’s suggestion of the irreducibility of the impulse notwithstanding, it’s noteworthy how often the circumstances that induce its expression include the presence of an individual of higher status.

            The famous shoulder bump in Notes from Underground has an uncanny parallel in experimental psychology. In 1996, Dov Cohen, Richard Nisbett, and their colleagues published the research article “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” in which they report the results of a comparison of the cognitive and physiological responses of southern males to being bumped in a hallway and casually called an asshole with those of northern males. The study showed that whereas men from northern regions were usually amused by the run-in, southern males were much more likely to see it as an insult and a threat to their manhood, and they were much more likely to respond violently. The cortisol and testosterone levels of southern males spiked—the clever experimental setup allowed measures before and after—and these men reported believing physical confrontation was the appropriate way to redress the insult. The way Cohen and Nisbett explain the difference is that the “culture of honor” found in southern regions originally developed as a safeguard for men who lived as herders. Cultures that arise in farming regions place less emphasis on manly honor because farmland is difficult to steal. But if word gets out that a herder is soft, then his livelihood is at risk. Cohen and Nisbett write,

Such concerns might appear outdated for southern participants now that the South is no longer a lawless frontier based on a herding economy. However, we believe these experiments may also hint at how the culture of honor has sustained itself in the South. It is possible that the culture-of-honor stance has become “functionally autonomous” from the material circumstances that created it. Culture of honor norms are now socially enforced and perpetuated because they have become embedded in social roles, expectations, and shared definitions of manhood. (958)

            More recently, in a 2009 article titled “Low-Status Compensation: A Theory for Understanding the Role of Status in Cultures of Honor,” psychologist P.J. Henry takes another look at Cohen and Nisbett’s findings and offers another interpretation based on his own further experimentation. Henry’s key insight is that herding peoples are often considered to be of lower status than people with other professions and lifestyles. After establishing that the southern communities with a culture of honor are often stigmatized with negative stereotypes—drawling accents signaling low intelligence, high incidence of incest and drug use, etc.—both in the minds of outsiders and those of the people themselves, Henry suggests that a readiness to resort to violence probably isn’t now and may not ever have been adaptive in terms of material benefits.

An important perspective of low-status compensation theory is that low status is a stigma that brings with it lower psychological worth and value. While it is true that stigma also often accompanies lower economic worth and, as in the studies presented here, is sometimes defined by it (i.e., those who have lower incomes in a society have more of a social stigma compared with those who have higher incomes), low-status compensation theory assumes that it is psychological worth that is being protected, not economic or financial worth. In other words, the compensation strategies used by members of low-status groups are used in the service of psychological self-protection, not as a means of gaining higher status, higher income, more resources, etc. (453)

And this conception of honor brings us closer to the observations of the underground man and Poe’s boastful murderer. If psychological worth is what’s being defended, then economic considerations fall by the wayside. Unfortunately, since our financial standing tends to be so closely tied to our social standing, our efforts to protect our sense of psychological worth have a nasty tendency to backfire in the long run.

            Henry found evidence for the importance of psychological reactance, as opposed to cultural norms, in causing violence when he divided the participants in his study into high and low status categories and then had them respond to questions about how likely they would be to respond to insults with physical aggression. But before being asked about the propriety of violent reprisals, half of the members of each group were asked to recall as vividly as they could a time in their lives when they felt valued by their community. Henry describes the findings thus:

When lower status participants were given the opportunity to validate their worth, they were less likely to endorse lashing out aggressively when insulted or disrespected. Higher status participants were unaffected by the manipulation. (463)

The implication is that people who feel less valuable than others, a condition that tends to be associated with low socioeconomic status, are quicker to retaliate because they are almost constantly on edge, preoccupied at almost every moment with assessments of their standing in relation to others. Aside from a readiness to engage in violence, this type of obsessive vigilance for possible slights, and the feeling of powerlessness that attends it, can be counted on to keep people in a constant state of stress. The massive longitudinal study of British Civil Service employees called the Whitehall Study, which tracks the health outcomes of people at the various levels of the bureaucratic hierarchy, has found that the stress associated with low status also has profound effects on our physical well-being.

            Though it may seem that violence-prone poor people occupying lowly positions on societal and professional totem poles are responsible for aggravating and prolonging their own misery because they tend to spend extravagantly and lash out at their perceived overlords with nary a concern for the consequences, the regularity with which low status leads to self-defeating behavior suggests the impulses are much more deeply rooted than some lazily executed weighing of pros and cons. If the type of wealth or status inequality the underground man finds himself on the short end of had begun to take root in societies like the ones Christopher Boehm describes, a high-risk attempt at leveling the playing field would not only have been understandable—it would have been morally imperative. And in a group of nomadic foragers, a man endeavoring to knock a would-be alpha down a few pegs could count on the endorsement of most of the other group members; the success rate for re-establishing and maintaining egalitarianism would have been heartening. Today, we are forced to live with inequality, even though beyond a certain point most people (regardless of political affiliation) see it as an injustice.

            Some of the functions of literature, then, are to help us imagine just how intolerable life on the bottom can be, sympathize with those who get trapped in downward spirals of self-defeat, and begin to imagine what a more just and equitable society might look like. The catch is that we will be put off by characters who mistreat others or simply show a dearth of redeeming qualities.

Also read

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

and

CAN’T WIN FOR LOSING: WHY THERE ARE SO MANY LOSERS IN LITERATURE AND WHY IT HAS TO CHANGE


Sympathizing with Psychos: Why We Want to See Alex Escape His Fate as A Clockwork Orange

Especially in this age when everything, from novels to social media profiles, is scrutinized for political wrongthink, it’s important to ask how so many people can enjoy stories with truly reprehensible protagonists. Anthony Burgess’s “A Clockwork Orange” provides a perfect test case. How can readers possibly sympathize with Alex?

            Phil Connors, the narcissistic weatherman played by Bill Murray in Groundhog Day, is, in the words of Larry, the cameraman played by Chris Elliott, a “prima donna,” at least at the beginning of the movie. He’s selfish, uncharitable, and condescending. As the plot progresses, however, Phil undergoes what is probably the most plausible transformation in all of cinema—having witnessed what he’s gone through over the course of the movie, we’re more than willing to grant the possibility that even the most narcissistic of people might be redeemed through such an ordeal. The odd thing, though, is that when you watch Groundhog Day you don’t exactly hate Phil at the beginning of the movie. Somehow, even as we take note of his most off-putting characteristics, we’re never completely put off. As horrible as he is, he’s not really even unpleasant. The pleasure of watching the movie must to some degree stem from our desire to see Phil redeemed. We want him to learn his lesson so we don’t have to condemn him or write him off. 

            In a recent article for the New Yorker, Jonathan Franzen explores what he calls “the problem of sympathy” by considering his own responses to the novels of Edith Wharton, who herself strikes him as difficult to sympathize with. Lily Bart, the protagonist of The House of Mirth, is similar to Wharton in many respects, the main difference being that Lily is beautiful (and of course Franzen was immediately accused of misogyny for pointing this out). Of Lily, Franzen writes,

She is, basically, the worst sort of party girl, and Wharton, in much the same way that she didn’t even try to be soft or charming in her personal life, eschews the standard novelistic tricks for warming or softening Lily’s image—the book is devoid of pet-the-dog moments. So why is it so hard to stop reading Lily’s story? (63)

Franzen weighs several hypotheses: her beauty, her freedom to act on impulses we would never act on, her financial woes, her aging. But ultimately he settles on the conclusion that all of these factors are incidental.

What determines whether we sympathize with a fictional character, according to Franzen, is the strength and immediacy of his or her desire. What propels us through the story then is our curiosity about whether or not the character will succeed in satisfying that desire. He explains,

One of the great perplexities of fiction—and the quality that makes the novel the quintessentially liberal art form—is that we experience sympathy so readily for characters we wouldn’t like in real life. Becky Sharp may be a soulless social climber, Tom Ripley may be a sociopath, the Jackal may want to assassinate the French President, Mickey Sabbath may be a disgustingly self-involved old goat, and Raskolnikov may want to get away with murder, but I find myself rooting for each of them. This is sometimes, no doubt, a function of the lure of the forbidden, the guilty pleasure of imagining what it would be like to be unburdened by scruples. In every case, though, the alchemical agent by which fiction transmutes my secret envy or my ordinary dislike of “bad” people into sympathy is desire. Apparently, all a novelist has to do is give a character a powerful desire (to rise socially, to get away with murder) and I, as a reader, become helpless not to make that desire my own. (63)

While I think Franzen here highlights a crucial point about the intersection between character and plot, namely that it is easier to assess how well characters fare at the story’s end if we know precisely what they want—and also what they dread—it’s clear nonetheless that he’s being flip in his dismissal of possible redeeming qualities. Emily Gould, writing for The Awl, reports a response to Lily quite different from Franzen’s, expostulating in a parenthetical, “she was so trapped! There were no right choices! How could anyone find watching that ‘delicious!’ I cry every time!”

            Focusing on any single character in a story the way Franzen does leaves out important contextual cues about personality. In a story peopled with horrible characters, protagonists need only send out the most modest of cues signaling their altruism or redeemability for readers to begin to want to see them prevail. For Milton’s Satan to be sympathetic, readers have to see God as significantly less so. In Groundhog Day, you have creepy and annoying characters like Larry and Ned Ryerson to make Phil look slightly better. And here is Franzen on the denouement of The House of Mirth, describing his response to Lily’s reflections on the time limit placed on her youthful beauty:

But only at the book’s very end, when Lily finds herself holding another woman’s baby and experiencing a host of unfamiliar emotions, does a more powerful sort of urgency crash into view. The financial potential of her looks is revealed to have been an artificial value, in contrast to their authentic value in the natural scheme of human reproduction. What has been simply a series of private misfortunes for Lily suddenly becomes something larger: the tragedy of a New York City social world whose priorities are so divorced from nature that they kill the emblematically attractive female who ought, by natural right, to thrive. The reader is driven to search for an explanation of the tragedy in Lily’s appallingly deforming social upbringing—the kind of upbringing that Wharton herself felt deformed by—and to pity her for it, as, per Aristotle, a tragic protagonist must be pitied. (63)

As Gould points out, though, Franzen is really late in coming to an appreciation of the tragedy, even though his absorption with Lily’s predicament suggests he feels sympathy for her all along. Launching into a list of all the qualities that supposedly make the character unsympathetic, he writes, “The social height that she’s bent on securing is one that she herself acknowledges is dull and sterile” (62), a signal of ambivalence that readers like Gould take as a hopeful sign that she might eventually be redeemed. In any case, few of the other characters seem willing to acknowledge anything of the sort.

            Perhaps the most extreme instance in which a bad character wins the sympathy of readers and viewers by being cast with a character or two who are even worse is that of Alex in Anthony Burgess’s novella A Clockwork Orange and the Stanley Kubrick film based on it. (Patricia Highsmith’s Mr. Ripley is another clear contender.) How could we possibly like Alex? He’s a true sadist who matter-of-factly describes the joyous thrill he gets from committing acts of “ultraviolence” against his victims, and he’s a definite candidate for a clinical diagnosis of antisocial personality disorder.

He’s also probably the best evidence for Franzen’s theory that sympathy is reducible to desire. It should be noted, however, that, in keeping with William Flesch’s theory of narrative interest, A Clockwork Orange is nothing if not a story of punishment. In his book Comeuppance, Flesch suggests that when we become emotionally enmeshed with stories we’re monitoring the characters for evidence of either altruism or selfishness and then attending to the plot, anxious to see the altruists rewarded and the selfish get their comeuppance. Alex seems to strain the theory, though, because all he seems to want to do is hurt people, and yet audiences tend to be more disturbed than gratified by his drawn-out, torturous punishment. For many, there’s even some relief at the end of the movie and the original American version of the book when Alex makes it through all of his ordeals with his taste for ultraviolence restored.

            Many obvious factors mitigate the case against Alex, perhaps foremost among them the whimsical tone of his narration, along with the fictional dialect that lends the story a dream-like quality, one the film conveys just as brilliantly. There’s something cartoonish about all the characters who suffer at the hands of Alex and his droogs, and almost all of them return to the story later to exact their revenge. You might even say there’s a Groundhogesque element of repetition in the plot. The audience quickly learns too that all the characters who should be looking out for Alex—he’s only fifteen, we find out after almost eighty pages—are either feckless zombies like his parents, who have been sapped of all vitality by their clockwork occupations, or only see him as a means to furthering their own ambitions. “If you have no consideration for your own horrible self you at least might have some for me, who have sweated over you,” his Post-Corrective Advisor P.R. Deltoid says to him. “A big black mark, I tell you in confidence, for every one we don’t reclaim, a confession of failure for every one of you that ends up in the stripy hole” (42). Even the prison charlie (he’s a chaplain, get it?) who serves as a mouthpiece to deliver Burgess’s message treats him as a means to an end. Alex explains,

The idea was, I knew, that this charlie was after becoming a very great holy chelloveck in the world of Prison Religion, and he wanted a real horrorshow testimonial from the Governor, so he would go and govoreet quietly to the Governor now and then about what dark plots were brewing among the plennies, and he would get a lot of this cal from me. (91)

Alex ends up receiving his worst punishment at the hands of the man against whom he’s committed his worst crime. F. Alexander is the author of the metafictionally titled A Clockwork Orange, a treatise against the repressive government, and in the first part of the story Alex and his droogs, wearing masks, beat him mercilessly before forcing him to watch them gang rape his wife, who ends up dying from wounds she sustains in the attack. Later, when Alex gets beaten up himself and inadvertently stumbles back to the house that was the scene of the crime, F. Alexander recognizes him only as the guinea pig for a government experiment in brainwashing criminals he’s read about in newspapers. He takes Alex in and helps him, saying, “I think you can be used, poor boy. I think you can help dislodge this overbearing Government” (175). After he recognizes Alex from his nadsat dialect as the ringleader of the gang who killed his wife, he decides the boy will serve as a better propaganda tool if he commits suicide. Locking him in a room and blasting the Beethoven music he once loved but was conditioned in prison to find nauseating to the point of wishing for death, F. Alexander leaves Alex no escape but to jump out of a high window.

The desire for revenge is understandable, but before realizing who it is he’s dealing with, F. Alexander reveals himself to be conniving and manipulative, like almost every other adult Alex knows. When Alex wakes up in the hospital after his suicide attempt, he discovers that the Minister of the Inferior, as he calls him, has had the conditioning procedure he originally ordered carried out on Alex reversed, and is now eager for Alex to tell everyone how F. Alexander and his fellow conspirators tried to kill him. Alex is nothing but a pawn to any of them. That’s why it’s possible to be relieved when his capacity for violent behavior has been restored.

Of course, the real villain of A Clockwork Orange is the Ludovico Technique, the treatment used to cure Alex of his violent impulses. Strapped into a chair with his head locked in place and his glazzies braced open, Alex is forced to watch recorded scenes of torture, murder, violence, and rape, the types of things he used to enjoy. Only now he’s been given a shot that makes him feel so horrible he wants to snuff it (kill himself), and over the course of the treatment sessions he becomes conditioned to associate his precious ultraviolence with this dreadful feeling. Next to the desecration of a man’s soul—the mechanistic control obviating his free will—the antisocial depredations of a young delinquent are somewhat less horrifying. As the charlie says to Alex, addressing him by his prison ID number,

It may not be nice to be good, little 6655321. It may be horrible to be good. And when I say that to you I realize how self-contradictory that sounds. I know I shall have many sleepless nights about this. What does God want? Does God want goodness or the choice of goodness? Is a man who chooses the bad perhaps in some way better than a man who has the good imposed upon him? Deep and hard questions, little 6655321. (107)

At the same time, though, one of the consequences of the treatment is that Alex becomes not just incapable of preying on others but also of defending himself. Immediately upon his release from prison, he finds himself at the mercy of everyone he’s wronged and everyone who feels justified in abusing or exploiting him owing to his past crimes. Before realizing who Alex is, F. Alexander says to him,

You’ve sinned, I suppose, but your punishment has been out of all proportion. They have turned you into something other than a human being. You have no power of choice any longer. You’re committed to socially acceptable acts, a little machine capable only of good. (175)

            To tally the mitigating factors: Alex is young (though the actor in the movie was twenty-eight), he’s surrounded by other bizarre and unlikable characters, and he undergoes dehumanizing torture. But does this really make up for his participating in gang rape and murder? Personally, as strange and unsavory as F. Alexander seems, I have to say I can’t fault him in the least for taking revenge on Alex. As someone who believes all behaviors are ultimately determined by countless factors outside the individual’s control, from genes to education to social norms, I don’t have that much of a problem with the Ludovico Technique either. Psychopathy is a primarily genetic condition that makes people incapable of experiencing the moral emotions that would prevent them from harming others. If aversion therapy worked to endow psychopaths with negative emotions similar to those the rest of us feel in response to Alex’s brand of ultraviolence, then it doesn’t seem like any more of a desecration than, say, a brain operation to remove a tumor with deleterious effects on moral reasoning. True, the prospect of a corrupt government administering the treatment is unsettling, but this kid was going around beating, raping, and killing people.

            And yet, I also have to admit (confess?), my own response to Alex, even at the height of his delinquency, before his capture and punishment, was to like him and root for him—this despite the fact that, contra Franzen, I couldn’t really point to any one thing he desires more than anything else.

            For those of us who sympathize with Alex, every instance in which he does something unconscionable induces real discomfort, like when he takes two young ptitsas back to his room after revealing they “couldn’t have been more than ten” (47) (though he earlier says the girl Billyboy's gang is "getting ready to perform on" is "not more than ten" [18]—is he serious?). We like him, in other words, not because he does bad things but in spite of them. At some point near the beginning of the story, Alex must give some convincing indications that by the end he will have learned the error of his ways. He must provide readers with some evidence that he is at least capable of learning to empathize with other people’s suffering and willing to behave in such a way as to avoid it, so when we see him doing something horrible we view it as an anxiety-inducing setback rather than a deal-breaking harbinger of his true life trajectory. But what is it exactly that makes us believe this psychopath is redeemable?

            Phil Connors in Groundhog Day has one obvious saving grace. When viewers are first introduced to him, he’s doing his weather report—and he has a uniquely funny way of doing it. “Uh-oh, look out. It’s one of these big blue things!” he jokes when the graphic of a storm front appears on the screen. “Out in California they're going to have some warm weather tomorrow, gang wars, and some very overpriced real estate,” he says drolly. You could argue he’s only being funny in an attempt to further his career, but he continues trying to make people laugh, usually at the expense of weird or annoying characters, even when the cameras are off (not those cameras). Successful humor requires some degree of social acuity, and the effort that goes into it suggests at least a modicum of generosity. You could say, in effect, Phil goes out of his way to give the other characters, and us, a few laughs. Alex, likewise, offers us a laugh before the end of the first page, as he describes how the Korova Milkbar, where he and his droogs hang out, doesn’t have a liquor license but can sell moloko with drugs added to it “which would give you a nice quiet horrorshow fifteen minutes admiring Bog And All His Holy Angels And Saints in your left shoe with lights bursting all over your mozg” (3-4). Even as he’s assaulting people, Alex keeps up his witty banter and dazzling repartee. He’s being cruel, but he’s having fun. Moreover, he seems to be inviting us to have fun with him.

            Probably the single most important factor behind our desire (and I understand “our” here doesn’t include everyone in the audience) to see Alex redeemed is the fact that he’s being kind enough to tell us his story, to invite us into his life, as it were. This is the magic of first-person narration. And like most magic it’s based on a psychological principle, one describing a mental process most of us remain completely oblivious to as we go about our lives. The Jewish psychologist Henri Tajfel was living in France at the beginning of World War II, and he spent most of its duration in a German prisoner-of-war camp. Afterward, he went to college in England, where in the 1960s and 70s he would conduct a series of experiments that are today considered classics in social psychology. Many other scientists at the time were trying to understand how an atrocity like the Holocaust could have happened. One theory was that the worst barbarism was committed by a certain type of individual who had what was called an authoritarian personality. Others, like Muzafer Sherif, pointed to a universal human tendency to form groups and discriminate on their behalf.

            Tajfel knew about Sherif’s Robbers Cave Experiment, in which groups of young boys were made to compete with each other in sports and over territory. Under those conditions, the groups of boys quickly became antagonistic toward one another, so much so that the experiment had to be moved into its reconciliation phase earlier than planned to prevent violence. But Tajfel suspected that group rivalries could be sparked even without such an elaborate setup. To test his theory, he developed what is known as the minimal group paradigm, in which test subjects engage in some task or test of their preferences and are subsequently arranged into groups based on the outcome. In the original experiments, none of the participants knew anything about their groupmates aside from the fact that they’d been assigned to the same group. And yet, even when the group assignments were based on nothing but a coin toss, subjects asked how much money other people in the experiment deserved as a reward for their participation suggested much lower dollar amounts for people in rival groups. “Apparently,” Tajfel writes in a 1970 Scientific American article about his experiments, “the mere fact of division into groups is enough to trigger discriminatory behavior” (96).

            Once divisions into us and them have been established, considerations of fairness are reserved for members of the ingroup. While the subjects in Tajfel’s tests aren’t displaying fully developed tribal animosity, they do demonstrate that the seeds of tribalism are disturbingly easy to sow. As he explains,

Unfortunately it is only too easy to think of examples in real life where fairness would go out the window, since groupness is often based on criteria much more weighty than either preferring a painter one has never heard of before or resembling someone else in one's way of counting dots. (102)

I’m unaware of any studies on the effects of various styles of narration on perceptions of group membership, but I hypothesize that we can extrapolate the minimal group paradigm into the realm of first-person narrative accounts of violence. The reason some of us like Alex despite his horrendous behavior is that he somehow manages to make us think of him as a member of our tribe—or rather of ourselves as members of his—while everyone he wrongs belongs to a rival group. Even as he’s telling us about all the horrible things he’s done to other people, he takes time out to introduce us to his friends, describe places like the Korova Milkbar and the Duke of York, even the flat at Municipal Flatblock 18A where he and his parents live. He tells us jokes. He shares with us his enthusiasm for classical music. Oh yeah, he also addresses us as “Oh my brothers,” beginning seven lines down on the first page and again at intervals throughout the book, making us what anthropologists call his fictive kin.

            There’s something altruistic, or at least generous, about telling jokes or stories. Alex really is our humble narrator, as he frequently refers to himself. Beyond that, though, most stories turn on some moral point, so when we encounter a narrator who immediately begins recounting his crimes we can’t help but anticipate the juncture in the story at which he experiences some moral enlightenment. In the twenty-first and last chapter of A Clockwork Orange, Alex does indeed undergo just this sort of transformation. But American publishers, along with Stanley Kubrick, cut this part of the book because it struck them as a somewhat cowardly turning away from the reality of human evil. Burgess defends the original version in an introduction to the 1986 edition,

The twenty-first chapter gives the novel the quality of genuine fiction, an art founded on the principle that human beings change. There is, in fact, not much point in writing a novel unless you can show the possibility of moral transformation, or an increase in wisdom, operating in your chief character or characters. Even trashy bestsellers show people changing. When a fictional work fails to show change, when it merely indicates that human character is set, stony, unregenerable, then you are out of the field of the novel and into that of the fable or the allegory. (xii)

Indeed, it’s probably this sense of the story being somehow unfinished or cut off in the middle that makes the film so disturbing and so nightmarishly memorable. With regard to the novel, readers could be forgiven for wondering what the hell Alex’s motivation was in telling his story in the first place if there was no lesson or no intuitive understanding he thought he could convey with it.

            But is the twenty-first chapter believable? Would it have been possible for Alex to transform into a good man? The Nobel Prize-winning psychologist Daniel Kahneman, in his book Thinking, Fast and Slow, shares with his own readers an important lesson from his student days that bears on Alex’s case:

As a graduate student I attended some courses on the art and science of psychotherapy. During one of these lectures our teacher imparted a morsel of clinical wisdom. This is what he told us: “You will from time to time meet a patient who shares a disturbing tale of multiple mistakes in his previous treatment. He has been seen by several clinicians, and all failed him. The patient can lucidly describe how his therapists misunderstood him, but he has quickly perceived that you are different. You share the same feeling, are convinced that you understand him, and will be able to help.” At this point my teacher raised his voice as he said, “Do not even think of taking on this patient! Throw him out of the office! He is most likely a psychopath and you will not be able to help him.” (27-28)

Also read

SABBATH SAYS: PHILIP ROTH AND THE DILEMMAS OF IDEOLOGICAL CASTRATION

And

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY


Why Shakespeare Nauseated Darwin: A Review of Keith Oatley's "Such Stuff as Dreams"

Does practicing science rob one of humanity? Why is it that, if reading fiction trains us to take the perspective of others, English departments are rife with pettiness and selfishness? Keith Oatley is trying to make the study of literature more scientific, and he provides hints to these riddles and many others in his book “Such Stuff as Dreams.”

Late in his life, Charles Darwin lost his taste for music and poetry. “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he laments in his autobiography, and for many of us the temptation to place all men and women of science into a category of individuals whose minds resemble machines more than living and emotionally attuned organs of feeling and perceiving is overwhelming. In the 21st century, we even have a convenient psychiatric diagnosis for people of this sort. Don’t we just assume Sheldon in The Big Bang Theory has autism, or at least the milder version of it known as Asperger’s? It’s probably even safe to assume the show’s writers had the diagnostic criteria for the disorder in mind when they first developed his character. Likewise, Dr. Watson in the BBC’s new and obscenely entertaining Sherlock series can’t resist a reference to the quintessential evidence-crunching genius’s own supposed Asperger’s.

In Darwin’s case, however, the move away from the arts couldn’t have been due to any congenital deficiency in his finer human sentiments because it occurred only in adulthood. He writes,

I have said that in one respect my mind has changed during the last twenty or thirty years. Up to the age of thirty, or beyond it, poetry of many kinds, such as the works of Milton, Gray, Byron, Wordsworth, Coleridge, and Shelley, gave me great pleasure, and even as a schoolboy I took intense delight in Shakespeare, especially in the historical plays. I have also said that formerly pictures gave me considerable, and music very great delight. But now for many years I cannot endure to read a line of poetry: I have tried lately to read Shakespeare, and found it so intolerably dull that it nauseated me. I have also almost lost my taste for pictures or music. Music generally sets me thinking too energetically on what I have been at work on, instead of giving me pleasure.

We could interpret Darwin here as suggesting that casting his mind too doggedly into his scientific work somehow ruined his capacity to appreciate Shakespeare. But, like all thinkers and writers of great nuance and sophistication, his ideas are easy to mischaracterize through selective quotation (or, if you’re Ben Stein or any of the other unscrupulous writers behind creationist propaganda like the pseudo-documentary Expelled, you can just lie about what he actually wrote).

One of the most charming things about Darwin is that his writing is often more exploratory than merely informative. He writes in search of answers he has yet to discover. In a wider context, the quote about his mind becoming a machine, for instance, reads,

This curious and lamentable loss of the higher aesthetic tastes is all the odder, as books on history, biographies, and travels (independently of any scientific facts which they may contain), and essays on all sorts of subjects interest me as much as ever they did. My mind seems to have become a kind of machine for grinding general laws out of large collections of facts, but why this should have caused the atrophy of that part of the brain alone, on which the higher tastes depend, I cannot conceive. A man with a mind more highly organised or better constituted than mine, would not, I suppose, have thus suffered; and if I had to live my life again, I would have made a rule to read some poetry and listen to some music at least once every week; for perhaps the parts of my brain now atrophied would thus have been kept active through use. The loss of these tastes is a loss of happiness, and may possibly be injurious to the intellect, and more probably to the moral character, by enfeebling the emotional part of our nature.

His concern for his lost aestheticism notwithstanding, Darwin’s humanism, his humanity, radiates in his writing with a warmth that belies any claim about thinking like a machine, just as the intelligence that shows through it gainsays his humble deprecations about the organization of his mind.

           In this excerpt, Darwin, perhaps inadvertently, even manages to put forth a theory of the function of art. Somehow, poetry and music not only give us pleasure and make us happy—enjoying them actually constitutes a type of mental exercise that strengthens our intellect, our emotional awareness, and even our moral character. Novelist and cognitive psychologist Keith Oatley explores this idea of human betterment through aesthetic experience in his book Such Stuff as Dreams: The Psychology of Fiction. The subtitle is notably underwhelming given the long history of psychoanalytic theorizing about the meaning and role of literature. However, whereas psychoanalysis has fallen into disrepute among scientists because of its multiple empirical failures and a general methodological hubris common among its practitioners, the work of Oatley and his team at the University of Toronto relies on much more modest, and at the same time much more sophisticated, scientific protocols. One of the tools these researchers use, the Reading the Mind in the Eyes Test, was in fact first developed to research our new category of people with machine-like minds. What the researchers find bolsters Darwin’s impression that art, at least literary art, functions as a kind of exercise for our faculty of understanding and relating to others.

           Reasoning that “fiction is a kind of simulation of selves and their vicissitudes in the social world” (159), Oatley and his colleague Raymond Mar hypothesized that people who spent more time trying to understand fictional characters would be better at recognizing and reasoning about other, real-world people’s states of mind. So they devised a test to assess how much fiction participants in their study read based on how well they could categorize a long list of names according to which ones belonged to authors of fiction, which to authors of nonfiction, and which to non-authors. They then had participants take the Mind-in-the-Eyes Test, which consists of matching close-up pictures of peoples’ eyes with terms describing their emotional state at the time they were taken. The researchers also had participants take the Interpersonal Perception Test, which has them answer questions about the relationships of people in short video clips featuring social interactions. An example question might be “Which of the two children, or both, or neither, are offspring of the two adults in the clip?”  (Imagine Sherlock Holmes taking this test.) As hypothesized, Oatley writes, “We found that the more fiction people read, the better they were at the Mind-in-the-Eyes Test. A similar relationship held, though less strongly, for reading fiction and the Interpersonal Perception Test” (159).

            One major shortcoming of this study is that it fails to establish causality; people who are naturally better at reading emotions and making sound inferences about social interactions may gravitate to fiction for some reason. So Mar set up an experiment in which he had participants read either a nonfiction article from an issue of the New Yorker or a work of short fiction chosen to be the same length and require the same level of reading skills. When the two groups then took a test of social reasoning, the ones who had read the short story outperformed the control group. Both groups also took a test of analytic reasoning as a further control; on this variable there was no difference in performance between the groups. The outcome of this experiment, Oatley stresses, shouldn’t be interpreted as evidence that reading one story will increase your social skills in any meaningful and lasting way. But reading habits established over long periods likely explain the more significant differences between individuals found in the earlier study. As Oatley explains,

Readers of fiction tend to become more expert at making models of others and themselves, and at navigating the social world, and readers of non-fiction are likely to become more expert at genetics, or cookery, or environmental studies, or whatever they spend their time reading. Raymond Mar’s experimental study on reading pieces from the New Yorker is probably best explained by priming. Reading a fictional piece puts people into a frame of mind of thinking about the social world, and this is probably why they did better at the test of social reasoning. (160)

Connecting these findings to real-world outcomes, Oatley and his team also found that “reading fiction was not associated with loneliness,” as the stereotype suggests, “but was associated with what psychologists call high social support, being in a circle of people whom participants saw a lot, and who were available to them practically and emotionally” (160).

            These studies by the University of Toronto team have received wide publicity, but the people who should be the most interested in them have little or no idea how to go about making sense of them. Most people simply either read fiction or they don’t. If you happen to be of the tribe who studies fiction, then you were probably educated in a way that engendered mixed feelings—profound confusion really—about science and how it works. In his review of The Storytelling Animal, a book in which Jonathan Gottschall incorporates the Toronto team’s findings into the theory that narrative serves the adaptive function of making human social groups more cooperative and cohesive, Adam Gopnik sneers,

Surely if there were any truth in the notion that reading fiction greatly increased our capacity for empathy then college English departments, which have by far the densest concentration of fiction readers in human history, would be legendary for their absence of back-stabbing, competitive ill-will, factional rage, and egocentric self-promoters; they’d be the one place where disputes are most often quickly and amiably resolved by mutual empathetic engagement. It is rare to see a thesis actually falsified as it is being articulated.

Oatley himself is well aware of the strange case of university English departments. He cites a report by Willie van Peer on a small study he did comparing students in the natural sciences to students in the humanities. Oatley explains,

There was considerable scatter, but on average the science students had higher emotional intelligence than the humanities students, the opposite of what was expected; van Peer indicts teaching in the humanities for often turning people away from human understanding towards technical analyses of details. (160)

Oatley suggests in a footnote that an earlier study corroborates van Peer’s indictment. It found that high school students who showed more emotional involvement with short stories—the type of connection that would engender greater empathy—did proportionally worse on standard academic assessments of English proficiency. The clear implication of these findings is that the way literature is taught in universities and high schools is long overdue for an in-depth critical analysis.

            The idea that literature has the power to make us better people is not new; indeed, it was the very idea on which the humanities were originally founded. We have to wonder what people like Gopnik believe the point of celebrating literature is if not to foster greater understanding and empathy. If you either enjoy it or you don’t, and it has no beneficial effects on individuals or on society in general, why bother encouraging anyone to read? Why bother writing essays about it in the New Yorker? Tellingly, many scholars in the humanities began doubting the power of art to inspire greater humanity around the same time they began questioning the value and promise of scientific progress. Oatley writes,

Part of the devastation of World War II was the failure of German citizens, one of the world’s most highly educated populations, to prevent their nation’s slide into Nazism. George Steiner has famously asserted: “We know that a man can read Goethe or Rilke in the evening, that he can play Bach and Schubert, and go to his day’s work at Auschwitz in the morning.” (164)

Postwar literary theory and criticism have, perversely, tended toward the view that literature and language in general serve as vessels for passing on all the evils inherent in our western, patriarchal, racist, imperialist culture. The purpose of literary analysis then becomes to sift out these elements and resist them. Unfortunately, such accusatory theories leave unanswered the question of why, if literature inculcates oppressive ideologies, we should bother reading it at all. As van Peer muses in the report Oatley cites, “The Inhumanity of the Humanities,”

Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)

           Oatley and van Peer point out, moreover, that the evidence for concentration camp workers having any degree of literary or aesthetic sophistication is nonexistent. According to the best available evidence, most of the greatest atrocities were committed by soldiers who never graduated high school. The suggestion that some type of cozy relationship existed between Nazism and an enthusiasm for Goethe runs afoul of recorded history. As Oatley points out,

Apart from propensity to violence, nationalism, and anti-Semitism, Nazism was marked by hostility to humanitarian values in education. From 1933 onwards, the Nazis replaced the idea of self-betterment through education and reading by practices designed to induce as many as possible into willing conformity, and to coerce the unwilling remainder by justified fear. (165)

Oatley also cites the work of historian Lynn Hunt, whose book Inventing Human Rights traces the original social movement for the recognition of universal human rights to the mid-1700s, when what we recognize today as novels were first being written. Other scholars like Steven Pinker have pointed out too that, while it’s hard not to dwell on tragedies like the Holocaust, even atrocities of that magnitude are resoundingly overmatched by the much larger post-Enlightenment trend toward peace, freedom, and the wider recognition of human rights. It’s sad that one of the lasting legacies of all the great catastrophes of the 20th Century is a tradition in humanities scholarship that has the people who are supposed to be the custodians of our literary heritage hell-bent on teaching us all the ways that literature makes us evil.

            Because Oatley is a central figure in what we can only hope is a movement to end the current reign of self-righteous insanity in literary studies, it pains me not to be able to recommend Such Stuff as Dreams to anyone but dedicated specialists. Oatley writes in the preface that he has “imagined the book as having some of the qualities of fiction. That is to say I have designed it to have a narrative flow” (x), and it may simply be that this suggestion set my expectations too high. But the book is poorly edited, the prose is bland and often rolls over itself into graceless tangles, and a couple of the chapters seem like little more than haphazardly collated reports of studies and theories, none exactly off-topic, none completely without interest, but all lacking any central progression or theme. The book often reads more like an annotated bibliography than a story. Oatley’s scholarly range is impressive, however, bearing not just on cognitive science and literature through the centuries but extending as well to the work of important literary theorists. The book is never unreadable, never opaque, but it’s not exactly a work of art in its own right.

            Insofar as Such Stuff as Dreams is organized around a central idea, it is that fiction ought to be thought of not as “a direct impression of life,” as Henry James suggests in his famous essay “The Art of Fiction,” and as many contemporary critics—notably James Wood—seem to think of it. Rather, Oatley agrees with Robert Louis Stevenson’s response to James’s essay, “A Humble Remonstrance,” in which he writes that

Life is monstrous, infinite, illogical, abrupt and poignant; a work of art in comparison is neat, finite, self-contained, rational, flowing, and emasculate. Life imposes by brute energy, like inarticulate thunder; art catches the ear, among the far louder noises of experience, like an air artificially made by a discreet musician. (qtd. on pg. 8)

Oatley theorizes that stories are simulations, much like dreams, that go beyond mere reflections of life to highlight through defamiliarization particular aspects of life, to cast them in a new light so as to deepen our understanding and experience of them. He writes,

Every true artistic expression, I think, is not just about the surface of things. It always has some aspect of the abstract. The issue is whether, by a change of perspective or by a making the familiar strange, by means of an artistically depicted world, we can see our everyday world in a deeper way. (15)

Critics of high-brow literature like Wood appreciate defamiliarization at the level of description; Oatley is suggesting here though that the story as a whole functions as a “metaphor-in-the-large” (17), a way of not just making us experience as strange some object or isolated feeling, but of reconceptualizing entire relationships, careers, encounters, biographies—what we recognize in fiction as plots. This is an important insight, and it topples verisimilitude from its ascendant position atop the hierarchy of literary values while rendering complaints about clichéd plots potentially moot. Didn’t Shakespeare recycle plots after all?

            The theory of fiction as a type of simulation to improve social skills and possibly to facilitate group cooperation is emerging as the frontrunner in attempts to explain narrative interest in the context of human evolution. It is to date, however, impossible to rule out the possibility that our interest in stories is not directly adaptive but instead emerges as a byproduct of other traits that confer more immediate biological advantages. The finding that readers track actions in stories with the same brain regions that activate when they witness similar actions in reality, or when they engage in them themselves, is important support for the simulation theory. But the function of mirror neurons isn’t well enough understood yet for us to determine from this study how much engagement with fictional stories depends on the reader's identifying with the protagonist. Oatley’s theory is more consonant with direct and straightforward identification. He writes,

A very basic emotional process engages the reader with plans and fortunes of a protagonist. This is what often drives the plot and, perhaps, keeps us turning the pages, or keeps us in our seat at the movies or at the theater. It can be enjoyable. In art we experience the emotion, but with it the possibility of something else, too. The way we see the world can change, and we ourselves can change. Art is not simply taking a ride on preoccupations and prejudices, using a schema that runs as usual. Art enables us to experience some emotions in contexts that we would not ordinarily encounter, and to think of ourselves in ways that usually we do not. (118)

Much of this change, Oatley suggests, comes from realizing that we too are capable of behaving in ways that we might not like. “I am capable of this too: selfishness, lack of sympathy” (193), is what he believes we think in response to witnessing good characters behave badly.

            Oatley’s theory has a lot to recommend it, but William Flesch’s theory of narrative interest, which suggests we don’t identify with fictional characters directly but rather track them and anxiously hope for them to get whatever we feel they deserve, seems much more plausible in the context of our response to protagonists behaving in surprisingly selfish or antisocial ways. When I see Ed Norton as Tyler Durden beating Angel Face half to death in Fight Club, for instance, I don’t think, hey, that’s me smashing that poor guy’s face with my fists. Instead, I think, what the hell are you doing? I had you pegged as a good guy. I know you’re trying not to be as much of a pushover as you used to be but this is getting scary. I’m anxious that Angel Face doesn’t get too damaged—partly because I imagine that would be devastating to Tyler. And I’m anxious lest this incident be a harbinger of worse behavior to come.

            The issue of identification is just one of several interesting questions that lend themselves to further research. Oatley and Mar’s studies are not enormous in terms of sample size, and their subjects were mostly young college students. What types of fiction work best to foster empathy? What reading strategies might we encourage students to apply to literature—apart from trying to remove obstacles to emotional connections with characters? But, aside from the Big-Bad-Western Empire myth that currently has humanities scholars grooming successive generations of deluded ideologues to be little more than culture vultures presiding over the creation and celebration of Loser Lit, the other main challenge to moving literary theory onto firmer empirical ground is the assumption that the arts in general, and literature in particular, demand a wholly different type of thinking to create and appreciate than the type that goes into the intricate mechanics and intensely disciplined practices of science.

As Oatley and the Toronto team have shown, people who enjoy fiction tend to have the opposite of autism. And people who do science are, well, Sheldon. Interestingly, though, the writers of The Big Bang Theory, for whatever reason, included some contraindications for a diagnosis of autism or Asperger’s in Sheldon’s character. Like the other scientists in the show, he’s obsessed with comic books, which require at least some understanding of facial expression and body language to follow. As Simon Baron-Cohen, the autism researcher who designed the Mind-in-the-Eyes test, explains, “Autism is an empathy disorder: those with autism have major difficulties in 'mindreading' or putting themselves into someone else’s shoes, imagining the world through someone else’s feelings” (137). Baron-Cohen has coined the term “mindblindness” to describe the central feature of the disorder, and many have posited that the underlying cause is abnormal development of the brain regions devoted to perspective taking and understanding others, what cognitive psychologists refer to as our Theory of Mind.

            To follow comic book plotlines, Sheldon would have to make ample use of his own Theory of Mind. He’s also given to absorption in various science fiction shows on TV. If he were only interested in futuristic gadgets, as an autistic would be, he could just as easily get more scientifically plausible versions of them in any number of nonfiction venues. By Baron-Cohen’s definition, Sherlock Holmes can’t possibly have Asperger’s either because his ability to get into other people’s heads is vastly superior to pretty much everyone else’s. As he explains in “The Musgrave Ritual,”

You know my methods in such cases, Watson: I put myself in the man’s place, and having first gauged his intelligence, I try to imagine how I should myself have proceeded under the same circumstances.

            What about Darwin, though, that demigod of science who openly professed to being nauseated by Shakespeare? Isn’t he a prime candidate for entry into the surprisingly unpopulated ranks of heartless, data-crunching scientists whose thinking lends itself so conveniently to cooptation by oppressors and committers of wartime atrocities? It turns out that though Darwin held many of the same racist views as nearly all educated men of his time, his ability to empathize across racial and class divides was extraordinary. Darwin was no Social Darwinist; that theory was devised by Herbert Spencer to justify inequality (and it still has currency among political conservatives). And Darwin was also a passionate abolitionist, as is clear in the following excerpts from The Voyage of the Beagle:

On the 19th of August we finally left the shores of Brazil. I thank God, I shall never again visit a slave-country. To this day, if I hear a distant scream, it recalls with painful vividness my feelings, when passing a house near Pernambuco, I heard the most pitiable moans, and could not but suspect that some poor slave was being tortured, yet knew that I was as powerless as a child even to remonstrate.

Darwin is responding to cruelty in a way no one around him at the time would have. And note how deeply it pains him, how profound and keenly felt his sympathy is.

I was present when a kind-hearted man was on the point of separating forever the men, women, and little children of a large number of families who had long lived together. I will not even allude to the many heart-sickening atrocities which I authentically heard of;—nor would I have mentioned the above revolting details, had I not met with several people, so blinded by the constitutional gaiety of the negro as to speak of slavery as a tolerable evil.

            The question arises, not whether Darwin had sacrificed his humanity to science, but why he had so much more humanity than many other intellectuals of his day.

It is often attempted to palliate slavery by comparing the state of slaves with our poorer countrymen: if the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin; but how this bears on slavery, I cannot see; as well might the use of the thumb-screw be defended in one land, by showing that men in another land suffered from some dreadful disease.

And finally we come to the matter of Darwin’s Theory of Mind, which was quite clearly in no way deficient.

Those who look tenderly at the slave owner, and with a cold heart at the slave, never seem to put themselves into the position of the latter;—what a cheerless prospect, with not even a hope of change! picture to yourself the chance, ever hanging over you, of your wife and your little children—those objects which nature urges even the slave to call his own—being torn from you and sold like beasts to the first bidder! And these deeds are done and palliated by men who profess to love their neighbours as themselves, who believe in God, and pray that His Will be done on earth! It makes one's blood boil, yet heart tremble, to think that we Englishmen and our American descendants, with their boastful cry of liberty, have been and are so guilty; but it is a consolation to reflect, that we at least have made a greater sacrifice than ever made by any nation, to expiate our sin. (530-31)

            I suspect that Darwin’s distaste for Shakespeare was born of oversensitivity. He doesn’t say music failed to move him; he disliked it because it set him thinking “too energetically.” And as aesthetically pleasing as Shakespeare is, existentially speaking, his plays tend to be pretty harsh, even the comedies. When Prospero says, “We are such stuff / as dreams are made on” in Act 4 of The Tempest, he’s actually talking not about characters in stories, but about how ephemeral and insignificant real human lives are. But why, beyond some likely nudge from his inherited temperament, was Darwin so sensitive? Why was he so empathetic even to those so vastly different from him? After admitting he’d lost his taste for Shakespeare, paintings, and music, he goes on to say,

On the other hand, novels which are works of the imagination, though not of a very high order, have been for years a wonderful relief and pleasure to me, and I often bless all novelists. A surprising number have been read aloud to me, and I like all if moderately good, and if they do not end unhappily—against which a law ought to be passed. A novel, according to my taste, does not come into the first class unless it contains some person whom one can thoroughly love, and if a pretty woman all the better.

Also read:

STORIES, SOCIAL PROOF, & OUR TWO SELVES

And:

LET'S PLAY KILL YOUR BROTHER: FICTION AS A MORAL DILEMMA GAME

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

[Check out the Toronto group's blog at onfiction.ca]

Dennis Junk

The Storytelling Animal: a Light Read with Weighty Implications

The Storytelling Animal is not groundbreaking. But the style of the book contributes something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams, through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be. The effect is that we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe.

A review of Jonathan Gottschall's The Storytelling Animal: How Stories Make Us Human

Vivian Paley, like many other preschool and kindergarten teachers in the 1970s, was disturbed by how her young charges always separated themselves by gender at playtime. She was further disturbed by how closely the play of each gender group hewed to the old stereotypes about girls and boys. Unlike most other teachers, though, Paley tried to do something about it. Her 1984 book Boys and Girls: Superheroes in the Doll Corner demonstrates in microcosm how quixotic social reforms inspired by the assumption that all behaviors are shaped solely by upbringing and culture can be. Eventually, Paley realized that it wasn’t the children who needed to learn new ways of thinking and behaving, but herself. What happened in her classrooms in the late 70s, developmental psychologists have reliably determined, is the same thing that happens when you put kids together anywhere in the world. As Jonathan Gottschall explains,

Dozens of studies across five decades and a multitude of cultures have found essentially what Paley found in her Midwestern classroom: boys and girls spontaneously segregate themselves by sex; boys engage in more rough-and-tumble play; fantasy play is more frequent in girls, more sophisticated, and more focused on pretend parenting; boys are generally more aggressive and less nurturing than girls, with the differences being present and measurable by the seventeenth month of life. (39)

Paley’s study is one of several you probably wouldn’t expect to find discussed in a book about our human fascination with storytelling. But, as Gottschall makes clear in The Storytelling Animal: How Stories Make Us Human, there really aren’t many areas of human existence that aren’t relevant to a discussion of the role stories play in our lives. Those rowdy boys in Paley’s classes were playing recognizable characters from current action and sci-fi movies, and the fantasies of the girls were right out of Grimm’s fairy tales (it’s easy to see why people might assume these cultural staples were to blame for the sex differences). And the play itself was structured around one of the key ingredients—really the key ingredient—of any compelling story, trouble, whether in the form of invading pirates or people trying to poison babies.

The Storytelling Animal is the book to start with if you have yet to cut your teeth on any of the other recent efforts to bring the study of narrative into the realm of cognitive and evolutionary psychology. Gottschall covers many of the central themes of this burgeoning field without getting into the weedier territories of game theory or selection at multiple levels. While readers accustomed to more technical works may balk at wading through all the author’s anecdotes about his daughters, Gottschall’s keen sense of measure and the light touch of his prose keep the book from getting bogged down in frivolousness. This applies as well to the sections in which he succumbs to the temptation any writer faces when trying to explain one or another aspect of storytelling by making a few forays into penning abortive, experimental plots of his own.

None of the central theses of The Storytelling Animal is groundbreaking. But the style and layout of the book contribute something both surprising and important. Gottschall could simply tell his readers that stories almost invariably feature some kind of conflict or trouble and then present evidence to support the assertion, the way most science books do. Instead, he takes us on a tour from children’s highly gendered, highly trouble-laden play scenarios, through an examination of the most common themes enacted in dreams—which contra Freud are seldom centered on wish-fulfillment—through some thought experiments on how intensely boring so-called hyperrealism, or the rendering of real life as it actually occurs, in fiction would be (or actually is, if you’ve read D. F. Wallace’s unfinished last novel about an IRS clerk). The effect is that instead of simply having a new idea to toss around we actually feel how odd it is to devote so much of our lives to obsessing over anxiety-inducing fantasies fraught with looming catastrophe. And we appreciate just how integral story is to almost everything we do.

This gloss of Gottschall’s approach gives a sense of what is truly original about The Storytelling Animal—it doesn’t seal off narrative as discrete from other features of human existence but rather shows how stories permeate every aspect of our lives, from our dreams to our plans for the future, even our sense of our own identity. In a chapter titled “Life Stories,” Gottschall writes,

This need to see ourselves as the striving heroes of our own epics warps our sense of self. After all, it’s not easy to be a plausible protagonist. Fiction protagonists tend to be young, attractive, smart, and brave—all of the things that most of us aren’t. Fiction protagonists usually live interesting lives that are marked by intense conflict and drama. We don’t. Average Americans work retail or cubicle jobs and spend their nights watching protagonists do interesting things on television, while they eat pork rinds dipped in Miracle Whip. (171)

If you find this observation a tad unsettling, imagine it situated on a page underneath a mug shot of John Wayne Gacy with a caption explaining how he thought of himself “more as a victim than as a perpetrator.” For the most part, though, stories follow an easily identifiable moral logic, which Gottschall demonstrates with a short plot of his own based on the hypothetical situations Jonathan Haidt designed to induce moral dumbfounding. This almost inviolable moral underpinning of narratives suggests to Gottschall that one of the functions of stories is to encourage a sense of shared values and concern for the wider community, a role similar to the one D.S. Wilson sees religion as having played, and continuing to play, in human evolution.

Though Gottschall stays away from the inside baseball stuff for the most part, he does come down firmly on one issue in opposition to at least one of the leading lights of the field. Gottschall imagines a future “exodus” from the real world into virtual story realms that are much closer to the holodecks of Star Trek than to current World of Warcraft interfaces. The assumption here is that people’s emotional involvement with stories results from audience members imagining themselves to be the protagonist. But interactive videogames are probably much closer to actual wish-fulfillment than the more passive approaches to attending to a story—hence the god-like powers and grandiose speechifying.

William Flesch challenges the identification theory in his own (much more technical) book Comeuppance. He points out that films that have experimented with a first-person approach to camera work have failed to capture audiences (think of the complicated contraption that filmed Will Smith’s face as he was running from the zombies in I Am Legend). Flesch writes, “If I imagined I were a character, I could not see her face; thus seeing her face means I must have a perspective on her that prevents perfect (naïve) identification” (16). One of the ways we sympathize with others, though, is to mirror them—to feel, at least to some degree, their pain. That makes the issue a complicated one. Flesch believes our emotional involvement comes not from identification but from a desire to see virtuous characters come through the troubles of the plot unharmed, vindicated, maybe even rewarded. Attending to a story therefore entails tracking characters' interactions to see if they are in fact virtuous, then hoping desperately to see their virtue rewarded.

Gottschall does his best to avoid dismissing the typical obsessive LARPer (live-action role player) as the “stereotypical Dungeons and Dragons player” who “is a pimply, introverted boy who isn’t cool and can’t play sports or attract girls” (190). And he does his best to end his book on an optimistic note. But the exodus he writes about may be an example of another phenomenon he discusses. First the optimism:

Humans evolved to crave story. This craving has, on the whole, been a good thing for us. Stories give us pleasure and instruction. They simulate worlds so we can live better in this one. They help bind us into communities and define us as cultures. Stories have been a great boon to our species. (197)

But he then makes an analogy with food cravings, which likewise evolved to serve a beneficial function yet in the modern world are wreaking havoc with our health. Just as there is junk food, so there is such a thing as “junk story,” possibly leading to what Brian Boyd, another luminary in evolutionary criticism, calls a “mental diabetes epidemic” (198). In the context of America’s current education woes, and with how easy it is to conjure images of glassy-eyed zombie students, the idea that video games and shows like Jersey Shore are “the story equivalent of deep-fried Twinkies” (197) makes an unnerving amount of sense.

Here, as in the section on how our personal histories are more fictionalized rewritings than accurate recordings, Gottschall manages to achieve something the playful tone and off-handed experimentation don't prepare you for. The surprising accomplishment of this unassuming little book (200 pages) is that it never stops being a light read even as it takes on discoveries with extremely weighty implications. The temptation to eat deep-fried Twinkies is only going to get more powerful as story-delivery systems become more technologically advanced. Might we have already begun the zombie apocalypse without anyone noticing—and, if so, are there already heroes working to save us we won’t recognize until long after the struggle has ended and we’ve begun weaving its history into a workable narrative, a legend?

Also read:

WHAT IS A STORY? AND WHAT ARE YOU SUPPOSED TO DO WITH ONE?

And:

HOW TO GET KIDS TO READ LITERATURE WITHOUT MAKING THEM HATE IT

Dennis Junk

HUNGER GAME THEORY: Post-Apocalyptic Fiction and the Rebirth of Humanity

We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds?

The appeal of post-apocalyptic stories stems from the joy of experiencing anew the birth of humanity. The renaissance never occurs in M.T. Anderson’s Feed, in which the main character is rendered hopelessly complacent by the entertainment and advertising beamed directly into his brain. And it is that very complacency, the product of our modern civilization's unfathomable complexity, that most threatens our sense of our own humanity. There was likely a time, though, when small groups composed of members of our species were beset by outside groups composed of individuals of a different nature, a nature that when juxtaposed with ours left no doubt as to who the humans were. 

      In Suzanne Collins’ The Hunger Games, Katniss Everdeen reflects on how the life-or-death stakes of the contest she and her fellow “tributes” are made to participate in can transform teenage boys and girls into crazed killers. She’s been brought to a high-tech mega-city from District 12, a mining town as quaint as the so-called Capitol is futuristic. Peeta Mellark, who was chosen by lottery as the other half of the boy-girl pair of tributes from the district, has just said to her, “I want to die as myself…I don’t want them to change me in there. Turn me into some kind of monster that I’m not.” Peeta also wants “to show the Capitol they don’t own me. That I’m more than just a piece in their Games.” The idea startles Katniss, who at this point is thinking of nothing but surviving the games—knowing full well that there are twenty-two more tributes and only one will be allowed to leave the arena alive. Annoyed by Peeta’s pronouncement of a higher purpose, she thinks,

We will see how high and mighty he is when he’s faced with life and death. He’ll probably turn into one of those raging beast tributes, the kind who tries to eat someone’s heart after they’ve killed them. There was a guy like that a few years ago from District 6 called Titus. He went completely savage and the Gamemakers had to have him stunned with electric guns to collect the bodies of the players he’d killed before he ate them. There are no rules in the arena, but cannibalism doesn’t play well with the Capitol audience, so they tried to head it off. (141-3)

Cannibalism is the ultimate relinquishing of the mantle of humanity because it entails denying the humanity of those being hunted for food. It’s the most basic form of selfishness: I kill you so I can live.

The threat posed to humanity by hunger is also the main theme of Cormac McCarthy’s The Road, the story of a father and son wandering around the ruins of a collapsed civilization. The two routinely search abandoned houses for food and supplies, and in one they discover a bunch of people locked in a cellar. The gruesome clue to the mystery of why they’re being kept is that some have limbs amputated. The men keeping them are devouring the living bodies a piece at a time. After a harrowing escape, the boy, understandably disturbed, asks, “They’re going to kill those people, arent they?” His father, trying to protect him from the harsh reality, answers yes, but tries to be evasive, leading to this exchange:

Why do they have to do that?

I dont know.

Are they going to eat them?

I dont know.

They’re going to eat them, arent they?

Yes.

And we couldnt help them because then they’d eat us too.

Yes.

And that’s why we couldnt help them.

Yes.

Okay.

But of course it’s not okay. After they’ve put some more distance between them and the human abattoir, the boy starts to cry. His father presses him to explain what’s wrong:

Just tell me.

We wouldnt ever eat anybody, would we?

No. Of course not.

Even if we were starving?

We’re starving now.

You said we werent.

I said we werent dying. I didnt say we werent starving.

But we wouldnt.

No. We wouldnt.

No matter what.

No. No matter what.

Because we’re the good guys.

Yes.

And we’re carrying the fire.

And we’re carrying the fire. Yes.

Okay. (127-9)

And this time it actually is okay because the boy, like Peeta Mellark, has made it clear that if the choice is between dying and becoming a monster he wants to die.

This preference for death over depredation of others is one of the hallmarks of humanity, and it poses a major difficulty for economists and evolutionary biologists alike. How could this type of selflessness possibly evolve?

John von Neumann, one of the founders of game theory, served an important role in developing the policies that have so far prevented the real life apocalypse from taking place. He is credited with the strategy of Mutually Assured Destruction, or MAD (he liked amusing acronyms), that prevailed during the Cold War. As the name implies, the goal was to assure the Soviets that if they attacked us everyone would die. Since the U.S. knew the same was true of any of our own plans to attack the Soviets, a tense peace, or Cold War, was the inevitable result. But von Neumann was not at all content with this peace. He devoted his twilight years to pushing for the development of Intercontinental Ballistic Missiles (ICBMs) that would allow the U.S. to bomb Russia without giving the Soviets a chance to respond. In 1950, he made the infamous remark that inspired Dr. Strangelove: “If you say why not bomb them tomorrow, I say, why not today. If you say today at five o’clock, I say why not one o’clock?”

           Von Neumann’s eagerness to hit the Russians first was based on the logic of game theory, and that same logic is at play in The Hunger Games and other post-apocalyptic fiction. The problem with cooperation, whether between rival nations or between individual competitors in a game of life-or-death, is that it requires trust—and once one player begins to trust the other, he or she becomes vulnerable to exploitation, the proverbial stab in the back from the person who’s supposed to be watching it. Game theorists model this dynamic with a thought experiment called the Prisoner’s Dilemma. Imagine two criminals are captured and taken to separate interrogation rooms. Each criminal has the option of either cooperating with the other criminal by remaining silent or betraying him or her by confessing. Here’s a graph of the possible outcomes:
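The graph itself was an image in the original post, but the payoff structure it depicted is the standard one. Here’s a minimal sketch in Python, with the prison sentences assumed purely for illustration:

```python
# Payoffs for the classic Prisoner's Dilemma: years in prison (lower is
# better) for each combination of (my choice, the other player's choice).
# The specific sentence lengths are assumptions, not from the original post.
payoffs = {
    ("silent",  "silent"):  1,   # mutual cooperation: light sentences
    ("silent",  "confess"): 10,  # I stay silent and am betrayed: worst case
    ("confess", "silent"):  0,   # I betray a silent partner: I walk free
    ("confess", "confess"): 5,   # mutual betrayal: heavy sentences
}

# Confessing is the dominant strategy: whatever the other player does,
# my own sentence is shorter if I confess.
for other in ("silent", "confess"):
    assert payoffs[("confess", other)] < payoffs[("silent", other)]
```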

No matter what the other player does, each of them achieves a better outcome by confessing. Von Neumann saw the standoff between the U.S. and the Soviets as a Prisoner’s Dilemma; by not launching nukes, each side was cooperating with the other. Eventually, though, one of them had to realize that the only rational thing to do was to be the first to defect.

But the way humans play games is a bit different. As it turned out, von Neumann was wrong about the game theory implications of the Cold War—neither side ever did pull the trigger; both prisoners kept their mouths shut. In Collins' novel, Katniss faces a Prisoner's Dilemma every time she encounters another tribute who may be willing to team up with her in the hunger game. The graph for her and Peeta looks like this:
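That graph too was an image in the original post; the sketch below reconstructs its plausible shape, with survival odds invented for illustration rather than taken from the novel:

```python
# Rough odds of surviving the next encounter for (Katniss, Peeta) while
# other tributes remain alive. The numbers are assumptions meant only to
# show why the arena is not yet a true Prisoner's Dilemma.
arena_payoffs = {
    ("team up", "team up"): (0.8, 0.8),  # pooled skills, watchful allies
    ("team up", "defect"):  (0.2, 0.6),  # Katniss betrayed; Peeta loses an ally
    ("defect",  "team up"): (0.6, 0.2),  # the reverse betrayal
    ("defect",  "defect"):  (0.4, 0.4),  # each faces the arena alone
}

# Mutual teaming is self-enforcing: if your teammate is useful and is
# cooperating, defecting on them only lowers your own odds.
k_if_loyal = arena_payoffs[("team up", "team up")][0]
k_if_backstab = arena_payoffs[("defect", "team up")][0]
assert k_if_loyal > k_if_backstab
```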

In the context of the hunger games, then, it makes sense to team up with rivals as long as they have useful skills, knowledge, or strength. Each tribute knows, furthermore, that as long as he or she is useful to a teammate, it would be irrational for that teammate to defect.

The Prisoner’s Dilemma logic gets much more complicated when you start having players try to solve it over multiple rounds of play. Game theorists refer to each time a player has to make a choice as an iteration. And to model human cooperative behavior you have to not only have multiple iterations but also find a way to factor in each player’s awareness of how rivals have responded to the dilemma in the past. Humans have reputations. Katniss, for instance, doesn’t trust the Career tributes because they have a reputation for being ruthless. She even begins to suspect Peeta when she sees that he’s teamed up with the Careers. (His knowledge of Katniss is a resource to them, but he’s using that knowledge in an irrational way—to protect her instead of himself.) On the other hand, Katniss trusts Rue because she's young and dependent—and because she comes from an adjacent district not known for sending tributes who are cold-blooded.
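To make the role of reputation concrete, here’s a minimal sketch of iterated play in which each player cooperates only with opponents whose track record earns their trust; the 0.5 trust threshold and the optimism toward strangers are assumptions for illustration:

```python
def reputation(history):
    """Fraction of past moves that were cooperative; an unknown player
    gets the benefit of the doubt (an assumption of this sketch)."""
    return sum(history) / len(history) if history else 1.0

def choose(opponent_history, threshold=0.5):
    """Cooperate (True) only with players whose reputation clears the threshold."""
    return reputation(opponent_history) >= threshold

def play(rounds=10):
    a_history, b_history = [], []
    for _ in range(rounds):
        a_move = choose(b_history)
        b_move = choose(a_history)
        a_history.append(a_move)
        b_history.append(b_move)
    return a_history, b_history

# Two reputation-watchers settle into stable mutual cooperation:
print(play())  # ([True, True, ...], [True, True, ...])
```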

When you have multiple iterations and reputations, you also open the door for punishments and rewards. At the most basic level, people reward those who they witness cooperating by being more willing to cooperate with them. As we read or watch The Hunger Games, we can actually experience the emotional shift that occurs in ourselves as we witness Katniss’s cooperative behavior.

People punish those who defect by being especially reluctant to trust them. At this point, the analysis is still within the realm of the purely selfish and rational. But you can’t stay in that realm for very long when you’re talking about the ways humans respond to one another.

            Each time Katniss encounters another tribute in the games she faces a Prisoner’s Dilemma. Until the final round, the hunger games are not a zero-sum contest—which means that a gain for one doesn’t necessarily mean a loss for the other. Ultimately, of course, Katniss and Peeta are playing a zero-sum game; since only one tribute can win, one of any two surviving players at the end will have to kill the other (or let him die). Every time one tribute kills another, the math of the Prisoner’s Dilemma has to be adjusted. Peeta, for instance, wouldn’t want to betray Katniss early on, while there are still several tributes trying to kill them, but he would want to balance the benefits of her resources with whatever advantage he could gain from her unsuspecting trust—so as they approach the last few tributes, his temptation to betray her gets stronger. Of course, Katniss knows this too, and so the same logic applies for her.

            As everyone who’s read the novel or seen the movie knows, however, this isn’t how either Peeta or Katniss plays in the hunger games. And we already have an idea of why that is: Peeta has said he doesn’t want to let the games turn him into a monster. Figuring out the calculus of the most rational decisions is well and good, but humans are often moved by their emotions—fear, affection, guilt, indebtedness, love, rage—to behave in ways that are completely irrational, at least in the near term. Peeta is in love with Katniss, and though she doesn’t quite trust him at first, she proves willing to sacrifice herself in order to help him survive. This goes well beyond cooperation to serve purely selfish interests.

Many evolutionary theorists believe that at some point in our evolutionary history, humans began competing with each other to see who could be the most cooperative. This paradoxical idea emerges out of a type of interaction between and among individuals called costly signaling. Many social creatures must decide who among their conspecifics would make the best allies. And all sexually reproducing animals have to have some way to decide with whom to mate. Determining who would make the best ally or who would be the fittest mate is so important that only the most reliable signals are given any heed. What makes the signals reliable is their cost—only the fittest can afford to engage in costly signaling. Some animals have elaborate feathers that are conspicuous to predators; others have massive antlers. This is known as the handicap principle. In humans, the theory goes, altruism somehow emerged as a costly signal, so that the fittest demonstrate their fitness by engaging in behaviors that benefit others to their own detriment. The boy in The Road, for instance, isn’t just upset by the prospect of having to turn to cannibalism himself; he’s sad that he and his father weren’t able to help the other people they found locked in the cellar.
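The logic of the handicap principle can be made concrete with a toy calculation (all numbers assumed): the same display drains more from the weak than from the strong, so only the genuinely fit come out ahead by signaling.

```python
def net_benefit(fitness, signal_cost=5.0, payoff_for_being_chosen=8.0):
    # The same handicap weighs more heavily on a weaker individual;
    # all values here are invented for illustration.
    effective_cost = signal_cost / fitness
    return payoff_for_being_chosen - effective_cost

for fitness in (0.5, 1.0, 2.0):
    print(f"fitness {fitness}: net benefit of signaling = {net_benefit(fitness):+.1f}")
# fitness 0.5: -2.0  (faking the display is a losing proposition)
# fitness 1.0: +3.0
# fitness 2.0: +5.5  (the strong can easily afford it)
```

Because displaying never pays for the weak, observers can take the signal at face value.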

We can’t help feeling strong positive emotions toward altruists. Katniss wins over readers and viewers the moment she volunteers to serve as tribute in place of her younger sister, whose name was picked in the lottery. What’s interesting, though, is that at several points in the story Katniss actually does engage in purely rational strategizing. She doesn’t attempt to help Peeta for a long time after she finds out he’s been wounded trying to protect her—why would she when they’re only going to have to fight each other in later rounds? But when it really comes down to it, when it really matters most, both Katniss and Peeta demonstrate that they’re willing to protect one another even at a cost to themselves.

The birth of humanity occurred, somewhat figuratively, when people refused to play the game of me versus you and determined instead to play us versus them. Humans don’t like zero-sum games, and whenever possible they try to change the rules so there can be more than one winner. To do that, though, they have to make it clear that they would rather die than betray their teammates. In The Road, the father and his son continue to carry the fire, and in The Hunger Games Peeta gets his chance to show he’d rather die than be turned into a monster. By the end of the story, it’s really no surprise what Katniss chooses to do either. Saving her sister may not have been purely altruistic from a genetic standpoint. But Peeta isn’t related to her, nor is he her only—or even her most eligible—suitor. Still, her moments of cold strategizing notwithstanding, we've had her picked as an altruist all along.

Of course, humanity may have begun with the sense that it’s us versus them, but as it’s matured the us has grown to encompass an ever wider assortment of people and the them has receded to include more and more circumscribed groups of evil-doers. Unfortunately, there are still all too many people who are overly eager to treat unfamiliar groups as rival tribes, and all too many people who believe that the best governing principle for society is competition—the war of all against all. Altruism is one of the main hallmarks of humanity, and yet some people are simply more altruistic than others. Let’s just hope that it doesn’t come down to us versus them…again.

Also read:

THE PEOPLE WHO EVOLVED OUR GENES FOR US: CHRISTOPHER BOEHM ON MORAL ORIGINS – PART 3 OF A CRASH COURSE IN MULTILEVEL SELECTION THEORY

And:

HOW VIOLENT FICTION WORKS: ROHAN WILSON’S “THE ROVING PARTY” AND JAMES WOOD’S SANGUINARY SUBLIME FROM CONRAD TO MCCARTHY

Dennis Junk

The Adaptive Appeal of Bad Boys

From the intro to my master’s thesis where I explore the evolved psychological dynamics of storytelling and witnessing, with a special emphasis on the paradox that the most compelling characters are often less than perfect human beings. Why do audiences like Milton’s Satan, for instance? Why did we all fall in love with Tyler Durden from Fight Club? It turns out both of these characters give indications that they just may be more altruistic than they appear at first.

Excerpt from Hierarchies in Hell and Leaderless Fight Clubs: Altruism, Narrative Interest, and the Adaptive Appeal of Bad Boys, my master’s thesis

            In a New York Times article published in the spring of 2010, psychologist Paul Bloom tells the story of a one-year-old boy’s remarkable response to a puppet show. The drama the puppets enacted began with a central character’s demonstration of a desire to play with a ball. After revealing that intention, the character rolls the ball to a second character who likewise wants to play and so rolls the ball back to the first. When the first character rolls the ball to a third, however, this puppet snatches it up and quickly absconds. The second, nice puppet and the third, mean one are then placed before the boy, who’s been keenly attentive to their doings, and a few treats are placed before each of them. The boy is now instructed by one of the adults in the room to take a treat away from one of the puppets. Most children respond to the instructions by taking the treat away from the mean puppet, and this particular boy is no different. He’s not content with such a meager punishment, though, and after removing the treat he proceeds to reach out and smack the mean puppet on the head.

            Brief stage shows like the one featuring the nice and naughty puppets are part of an ongoing research program led by Karen Wynn, Bloom’s wife and colleague, and graduate student Kiley Hamlin at Yale University’s Infant Cognition Center. An earlier permutation of the study was featured on PBS’s Nova series The Human Spark (jump to chapter 5), which shows host Alan Alda looking on as an infant named Jessica attends to a puppet show with the same script as the one that riled the boy Bloom describes. Jessica is so tiny that her ability to track and interpret the puppets’ behavior on any level is impressive, but when she demonstrates a rudimentary capacity for moral judgment by reaching with unchecked joy for the nice puppet while barely glancing at the mean one, Alda—and Nova viewers along with him—can’t help but demonstrate his own delight. Jessica shows unmistakable signs of positive emotion in response to the nice puppet’s behaviors, and Alda in turn feels positive emotions toward Jessica. Bloom attests that “if you watch the older babies during the experiments, they don’t act like impassive judges—they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events” (6). Any adult witnessing the children’s reactions can be counted on to mirror these expressions and to feel delight at the babies’ incredible precocity.

            The setup for these experiments with children is very similar to experiments with adult participants that assess responses to anonymously witnessed exchanges. In their research report, “Third-Party Punishment and Social Norms,” Ernst Fehr and Urs Fischbacher describe a scenario inspired by economic game theory called the Dictator Game. It begins with an experimenter giving a first participant, or player, a sum of money. The experimenter then explains to the first player that he or she is to propose a cut of the money to the second player. In the Dictator Game—as opposed to other similar game theory scenarios—the second player has no choice but to accept the cut from the first player, the dictator. The catch is that the exchange is being witnessed by a third party, the analogue of little Jessica or the head-slapping avenger in the Yale experiments.  This third player is then given the opportunity to reward or punish the dictator. As Fehr and Fischbacher explain, “Punishment is, however, costly for the third party so a selfish third party will never punish” (3).
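A minimal sketch of the payoffs in such a three-player game follows; the endowments and the one-to-three punishment ratio are common in this experimental literature but are assumptions here, not details from Fehr and Fischbacher’s report:

```python
def third_party_dictator_game(dictator_keeps, punishment_spent,
                              endowment=100, witness_endowment=50, ratio=3):
    """Returns final payoffs for (dictator, receiver, third-party witness).
    Every unit the witness spends strips `ratio` units from the dictator.
    All monetary values are illustrative assumptions."""
    receiver = endowment - dictator_keeps
    dictator = dictator_keeps - ratio * punishment_spent
    witness = witness_endowment - punishment_spent
    return dictator, receiver, witness

# A purely selfish witness never pays, so a selfish dictator keeps it all:
print(third_party_dictator_game(dictator_keeps=100, punishment_spent=0))
# (100, 0, 50)
# Yet real participants routinely pay to punish a stingy dictator:
print(third_party_dictator_game(dictator_keeps=100, punishment_spent=10))
# (70, 0, 40)
```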

It turns out, though, that adults, just like the infants in the Yale studies, are not selfish—at least not entirely. Instead, they readily engage in indirect, or strong, reciprocity. Evolutionary literary theorist William Flesch explains that “the strong reciprocator punishes and rewards others for their behavior toward any member of the social group, and not just or primarily for their interactions with the reciprocator” (21-2). According to Flesch, strong reciprocity is the key to solving what he calls “the puzzle of narrative interest,” the mystery of why humans so readily and eagerly feel “anxiety on behalf of and about the motives, actions, and experiences of fictional characters” (7). The human tendency toward strong reciprocity reaches beyond any third party witnessing an exchange between two others; as Alda, viewers of Nova, and even readers of Bloom’s article in the Times watch or read about Wynn and Hamlin’s experiments, they have no choice but to become participants in the experiments themselves, because their own tendency to reward good behavior with positive emotion and to punish bad behavior with negative emotion is automatically engaged. Audiences’ concern, however, is much less with the puppets’ behavior than with the infants’ responses to it.

The studies of social and moral development conducted at the Infant Cognition Center pull at people’s heartstrings because they demonstrate babies’ capacity to behave in a way that is expected of adults. If Jessica had failed to distinguish between the nice and the mean puppets, viewers probably would have readily forgiven her. When older people fail to make moral distinctions, however, those in a position to witness and appreciate that failure can be counted on to withdraw their favor—and may even engage in some type of sanctioning, beginning with unflattering gossip and becoming more severe if the immorality or moral complacency persists. Strong reciprocity opens the way for endlessly branching nth-order reciprocation, so individuals will be considered culpable not only for offenses they commit but also for offenses they passively witness. Flesch explains,

Among the kinds of behavior that we monitor through tracking or through report, and that we have a tendency to punish or reward, is the way others monitor behavior through tracking or through report, and the way they manifest a tendency to punish and reward. (50)

Failing to signal disapproval makes witnesses complicit. On the other hand, signaling favor toward individuals who behave altruistically simultaneously signals to others the altruism of the signaler. What’s important to note about this sort of indirect signaling is that it does not necessarily require the original offense or benevolent act to have actually occurred. People take a proclivity to favor the altruistic as evidence of altruism—even if the altruistic character is fictional. 

        That infants less than a year old respond to unfair or selfish behavior with negative emotions—and a readiness to punish—suggests that strong reciprocity has deep evolutionary roots in the human lineage. Humans’ profound emotional engagement with fictional characters and fictional exchanges probably derives from a long history of adapting to challenges whose Darwinian ramifications were far more serious than any attempt to while away some idle afternoons. Game theorists and evolutionary anthropologists have a good idea what those challenges might have been: for cooperativeness or altruism to be established and maintained as a norm within a group of conspecifics, some mechanism must be in place to prevent the exploitation of cooperative or altruistic individuals by selfish and devious ones. Flesch explains,

Darwin himself had proposed a way for altruism to evolve through the mechanism of group selection. Groups with altruists do better as a group than groups without. But it was shown in the 1960s that, in fact, such groups would be too easily infiltrated or invaded by nonaltruists—that is, that group boundaries are too porous—to make group selection strong enough to overcome competition at the level of the individual or the gene. (5)

If, however, individuals given to trying to take advantage of cooperative norms were reliably met with slaps on the head—or with ostracism in the wake of spreading gossip—any benefits they (or their genes) might otherwise count on to redound from their selfish behavior would be much diminished. Flesch’s theory is “that we have explicitly evolved the ability and desire to track others and to learn their stories precisely in order to punish the guilty (and somewhat secondarily to reward the virtuous)” (21). Before strong reciprocity was driving humans to bookstores, amphitheaters, and cinemas, then, it was serving the life-and-death cause of ensuring group cohesion and sealing group boundaries against neighboring exploiters. 

Game theory experiments that have been conducted since the early 1980s have consistently shown that people are willing, even eager, to punish others whose behavior strikes them as unfair or exploitative, even when administering that punishment involves incurring some cost for the punisher. Like the Dictator Game, the Ultimatum Game involves two people, one of whom is given a sum of money and told to offer the other participant a cut. The catch in this scenario is that the second player must accept the cut or neither player gets to keep any money. “It is irrational for the responder not to accept any proposed split from the proposer,” Flesch writes. “The responder will always come out better by accepting than vetoing” (31). What the researchers discovered, though, was that a line exists beneath which responders will almost always refuse the cut. “This means they are paying to punish,” Flesch explains. “They are giving up a sure gain in order to punish the selfishness of the proposer” (31). Game theorists call this behavior altruistic punishment because “the punisher’s willingness to pay this cost may be an important part in enforcing norms of fairness” (31). In other words, the punisher is incurring a cost to him or herself in order to ensure that selfish actors don’t have a chance to get a foothold in the larger, cooperative group.
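The responder’s “irrational” veto is easy to sketch; the 30 percent line below is an assumption standing in for whatever threshold a given responder actually enforces:

```python
def ultimatum(total, offer, rejection_threshold=0.3):
    """Returns (proposer's take, responder's take). Rejecting an insulting
    offer costs the responder the offer itself: paying to punish.
    The threshold is an illustrative assumption."""
    if offer < rejection_threshold * total:
        return 0, 0
    return total - offer, offer

print(ultimatum(100, 50))  # fair split, accepted: (50, 50)
print(ultimatum(100, 10))  # insulting offer, vetoed: (0, 0) — both walk away empty
```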

The economic logic notwithstanding, it seems natural to most people that second players in Ultimatum Game experiments should signal their disapproval—or stand up for themselves, as it were—by refusing to accept insultingly meager proposed cuts. The cost of the punishment, moreover, can be seen as a symbol of various other types of considerations that might prevent a participant or a witness from stepping up or stepping in to protest. Discussing the Three-Player Dictator Game experiments conducted by Fehr and Fischbacher, Flesch points out that strong reciprocity is even more starkly contrary to any selfish accounting:

Note that the third player gets nothing out of paying to reward or punish except the power or agency to do just that. It is highly irrational for this player to pay to reward or punish, but again considerations of fairness trump rational self-interest. People do pay, and pay a substantial amount, when they think that someone has been treated notably unfairly, or when they think someone has evinced marked generosity, to affect what they have observed. (33)

Neuroscientists have even zeroed in on the brain regions that correspond to our suppression of immediate self-interest in the service of altruistic punishment, as well as those responsible for the pleasure we take in anticipating—though not in actually witnessing—free riders meeting with their just deserts (Knoch et al. 829; de Quervain et al. 1254). Outside of laboratories, though, the cost punishers incur can range from the risks associated with a physical confrontation to time and energy spent convincing skeptical peers a crime has indeed been committed.

Flesch lays out his theory of narrative interest in a book aptly titled Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. A cursory survey of mainstream fiction, in both blockbuster movies and best-selling novels, reveals the good guys versus bad guys dynamic as preeminent in nearly every plot, and much of the pleasure people get from the most popular narratives can quite plausibly be said to derive from the goodie prevailing—after a long, harrowing series of close calls and setbacks—while the baddie simultaneously gets his or her comeuppance. Audiences love to see characters get their just deserts. When the plot fails to deliver on this score, they walk away severely disturbed. That disturbance can, however, serve the author’s purposes, particularly when the goal is to bring some danger or injustice to readers’ or viewers’ attention, as in the case of novels like Orwell’s 1984. Plots, of course, seldom feature simple exchanges with meager stakes on the scale of game theory experiments, and heroes can by no means count on making it to the final scene both vindicated and rewarded—even in stories designed to give audiences exactly what they want. The ultimate act of altruistic punishment, and hence the most emotionally poignant behavior a character can engage in, is martyrdom. It’s no coincidence that the hero dies in the act of vanquishing the villain in so many of the most memorable books and movies.

            If narrative interest really does emerge out of a propensity to monitor each other’s behaviors for signs of a capacity for cooperation and to volunteer affect on behalf of altruistic individuals and against selfish ones they want to see get their comeuppance, the strong appeal of certain seemingly bad characters emerges as a mystery calling for explanation. From England’s tradition of Byronic heroes like Rochester to America’s fascination with bad boys like Tom Sawyer, these characters win over audiences and stand out as perennial favorites even though at first blush they seem anything but eager to establish their nice guy bona fides. On the other hand, Rochester was eventually redeemed in Jane Eyre, and Tom Sawyer, though naughty to be sure, shows no sign whatsoever of being malicious. Tellingly, though, these characters, and a long list of others like them, also demonstrate a remarkable degree of cleverness: Rochester passing for a gypsy woman, for instance, or Tom Sawyer making fence painting out to be a privilege. One hypothesis that could account for the appeal of bad boys is that their badness demonstrates undeniably their ability to escape the negative consequences most people expect to result from their own bad behavior.

This type of demonstration likely functions in a way similar to another mechanism that many evolutionary biologists theorize must have been operating for cooperation to have become established in human societies, a process referred to as the handicap principle, or costly signaling. A lone altruist in any group is unlikely to fare well in terms of survival and reproduction. So the question arises as to how the minimum threshold of cooperators in a population was first surmounted. Flesch’s fellow evolutionary critic, Brian Boyd, in his book On the Origin of Stories, traces the process along a path from mutualism, or coincidental mutual benefits, to inclusive fitness, whereby organisms help others who are likely to share their genes—primarily family members—to reciprocal altruism, a quid pro quo arrangement in which one organism will aid another in anticipation of some future repayment (54-57). However, a few individuals in our human ancestry must have benefited from altruism that went beyond familial favoritism and tit-for-tat bartering.

In their classic book The Handicap Principle, Amotz and Avishag Zahavi suggest that altruism serves a function in cooperative species similar to the one served by a peacock’s feathers. The principle could also help account for the appeal of human individuals who routinely risk suffering consequences which deter most others. The idea is that conspecifics have much to gain from accurate assessments of each other’s fitness when choosing mates or allies. Many species have thus evolved methods for honestly signaling their fitness, and as the Zahavis explain, “in order to be effective, signals have to be reliable; in order to be reliable, signals have to be costly” (xiv). Peacocks, the iconic examples of the principle in action, signal their fitness with cumbersome plumage because their ability to survive in spite of the handicap serves as a guarantee of their strength and resourcefulness. Flesch and Boyd, inspired by evolutionary anthropologists, find in this theory of costly signaling the solution to the mystery of how altruism first became established; human altruism is, if anything, even more elaborate than the peacock’s display.

Humans display their fitness in many ways. Not everyone can be expected to have the wherewithal to punish free-riders, especially when doing so involves physical conflict. The paradoxical result is that humans compete for the status of best cooperator. Altruism is a costly signal of fitness. Flesch explains how this competition could have emerged in human populations:

If there is a lot of between-group competition, then those groups whose modes of costly signaling take the form of strong reciprocity, especially altruistic punishment, will outcompete those whose modes yield less secondary gain, especially less secondary gain for the group as a whole. (57)

Taken together, the evidence Flesch presents suggests the audiences of narratives volunteer affect on behalf of fictional characters who show themselves to be altruists and against those who show themselves to be selfish actors or exploiters, experiencing both frustration and delight in the unfolding of the plot as they hope to see the altruists prevail and the free-riders get their comeuppance. This capacity for emotional engagement with fiction likely evolved because it serves as a signal to anyone monitoring individuals as they read or view the story, or as they discuss it later, that they are disposed either toward altruistic punishment or toward third-order free-riding themselves—and altruism is a costly signal of fitness.

The hypothesis emerging from this theory of social monitoring and volunteered affect to explain the appeal of bad boy characters is that their bad behavior will tend to redound to the detriment of still worse characters. Bloom describes the results of another series of experiments with eight-month-old participants:

When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior. (5)

These characters’ bad behavior will also likely serve an obvious function as costly signaling; they’re bad because they’re good at getting away with it. Evidence that the bad boy characters are somehow truly malicious—for instance, clear signals of a wish to harm innocent characters—or that they’re irredeemable would severely undermine the theory. As the first step toward a preliminary survey, the following sections examine two infamous instances in which literary characters their creators intended audiences to recognize as bad nonetheless managed to steal the show from the supposed good guys.

(Watch Hamlin discussing the research in an interview from earlier today.)

And check out this video of the experiments.
