Teaching

Too Psyched for Sherlock: A Review of Maria Konnikova’s “Mastermind: How to Think like Sherlock Holmes”—with Some Thoughts on Science Education

            Whenever he gets really drunk, my brother has the peculiar habit of reciting the plot of one or another of his favorite shows or books. His friends and I like to tease him about it—“Watch out, Dan’s drunk, nobody mention The Wire!”—and the quirk can certainly be annoying, especially if you’ve yet to experience the story first-hand. But I have to admit, given how blotto he usually is when he first sets out on one of his grand retellings, his ability to recall intricate plotlines right down to their minutest shifts and turns is extraordinary. One recent night, during a timeout in an epic shellacking of Notre Dame’s football team, he took up the tale of Django Unchained, which incidentally I’d sat next to him watching just the week before. Tuning him out, I let my thoughts shift to a post I’d read on The New Yorker’s cinema blog The Front Row.

            In “The Riddle of Tarantino,” film critic Richard Brody analyzes the director-screenwriter’s latest work in an attempt to tease out the secrets behind the popular appeal of his creations and to derive insights into the inner workings of his mind. The post is agonizingly—though also at points, I must admit, exquisitely—overwritten, almost a parody of the grandiose type of writing one expects to find within the pages of the august weekly. Bemused by the lavish application of psychoanalytic jargon, I finished the essay pitying Brody for, in all his writerly panache, having nothing of real substance to say about the movie or the mind behind it. I wondered if he knows the scientific consensus on Freud is that his influence is less in the line of, say, a Darwin or an Einstein than of an L. Ron Hubbard.
            
            What Brody and my brother have in common is that they were both moved enough by their cinematic experience to feel an urge to share their enthusiasm, complicated though that enthusiasm may have been. Yet they both ended up doing the story a disservice, succeeding less in celebrating the work than in blunting its impact. Listening to my brother’s rehearsal of the plot with Brody’s essay in mind, I wondered what better field there could be than psychology for affording enthusiasts discussion-worthy insights to help them move beyond simple plot references. How tragic, then, that the only versions of psychology on offer in educational institutions catering to those who would be custodians of art, whether in academia or on the mastheads of magazines like The New Yorker, are those in thrall to Freud’s cultish legacy.
There’s just something irresistibly seductive about the promise of a scientific paradigm that allows us to know more about another person than he knows about himself. In this spirit of privileged knowingness, Brody faults Django for its lack of moral complexity before going on to make a silly accusation. Watching the movie, you know who the good guys are, who the bad guys are, and who you want to see prevail in the inevitably epic climax. “And yet,” Brody writes,
the cinematic unconscious shines through in moments where Tarantino just can’t help letting loose his own pleasure in filming pain. In such moments, he never seems to be forcing himself to look or to film, but, rather, forcing himself not to keep going. He’s not troubled by representation but by a visual superego that restrains it. The catharsis he provides in the final conflagration is that of purging the world of miscreants; it’s also a refining fire that blasts away suspicion of any peeping pleasure at misdeeds and fuses aesthetic, moral, and political exultation in a single apotheosis.
The strained stateliness of the prose provides a ready distraction from the stark implausibility of the assessment. Applying Occam’s Razor rather than Freud’s at once insanely elaborate and absurdly reductionist ideology, we might guess that what prompted Tarantino to let the camera linger discomfortingly long on the violent misdeeds of the black hats is that he knew we in the audience would be anticipating that “final conflagration.”  The more outrageous the offense, the more pleasurable the anticipation of comeuppance—but the experimental findings that support this view aren’t covered in film or literary criticism curricula, mired as they are in century-old pseudoscience.
            I’ve been eagerly awaiting the day when scientific psychology supplants psychoanalysis (as well as other equally, if not more, absurd ideologies) in academic and popular literary discussions. Coming across the blog Literally Psyched on Scientific American’s website about a year ago gave me a great sense of hope. The tagline, “Conceived in literature, tested in psychology,” as well as the credibility conferred by the host site, promised that the most fitting approach to exploring the resonance and beauty of stories might be undergoing a long overdue renaissance, liberated at last from the dominion of crackpot theorists. So when the author, Maria Konnikova, a doctoral candidate at Columbia, released her first book, I made a point to have Amazon deliver it as early as possible. Mastermind: How to Think Like Sherlock Holmes does indeed follow the conceived-in-literature-tested-in-psychology formula, taking the principles of sound reasoning expounded by what may be the most recognizable fictional character in history and attempting to show how modern psychology proves their soundness. In what she calls a “Prelude” to her book, Konnikova explains that she’s been a Holmes fan since her father read Conan Doyle’s stories to her and her siblings as children.
     The one demonstration of the detective’s abilities that stuck with Konnikova the most comes when he explains to his companion and chronicler Dr. Watson the difference between seeing and observing, using as an example the number of stairs leading up to their famous flat at 221B Baker Street. Watson, naturally, has no idea how many stairs there are because he isn’t in the habit of observing. Holmes, preternaturally, knows there are seventeen steps. Ever since being made aware of Watson’s—and her own—cognitive limitations through this vivid illustration (which had a similar effect on me when I first read “A Scandal in Bohemia” as a teenager), Konnikova has been trying to find the secret to becoming a Holmesian observer as opposed to a mere Watsonian seer. Already in these earliest pages, we encounter some of the principal shortcomings of the strategy behind the book. Konnikova wastes no time on the question of whether or not a mindset oriented toward things like the number of stairs in your building has any actual advantages—with regard to solving crimes or to anything else—but rather assumes old Sherlock is saying something instructive and profound.
            Mastermind is, for the most part, an entertaining read. Its worst fault in the realm of simple page-by-page enjoyment is that Konnikova often belabors points that upon reflection expose themselves as mere platitudes. The overall theme is the importance of mindfulness—an important message, to be sure, in this age of rampant multitasking. But readers get more endorsement than practical instruction. You can only be exhorted to pay attention to what you’re doing so many times before you stop paying attention to the exhortations. The book’s problems in both the literary and psychological domains, however, are much more serious. I came to the book hoping it would hold some promise for opening the way to more scientific literary discussions by offering at least a glimpse of what they might look like, but while reading I came to realize there’s yet another obstacle to any substantive analysis of stories. Call it the TED effect. For anything to be read today, or for anything to get published for that matter, it has to promise to uplift readers, reveal to them some secret about how to improve their lives, help them celebrate the horizonless expanse of human potential.
Naturally enough, with the cacophony of competing information outlets, we all focus on the ones most likely to offer us something personally useful. Though self-improvement is a worthy endeavor, the overlooked corollary to this trend is that the worthiness intrinsic to enterprises and ideas is overshadowed and diminished. People ask what’s in literature for me, or what can science do for me, instead of considering them valuable in their own right—and instead of thinking, heaven forbid, we may have a duty to literature and science as institutions serving as essential parts of the foundation of civilized society.
            In trying to conceive of a book that would operate as a vehicle for her two passions, psychology and Sherlock Holmes, while at the same time catering to readers’ appetite for life-enhancement strategies and spiritual uplift, Konnikova has produced a work in the grip of a bewildering and self-undermining identity crisis. The organizing conceit of Mastermind is that, just as Sherlock explains to Watson in the second chapter of A Study in Scarlet, the brain is like an attic. For Konnikova, this means the mind is in constant danger of becoming cluttered and disorganized through carelessness and neglect. That this interpretation wasn’t what Conan Doyle had in mind when he put the words into Sherlock’s mouth—and that the meaning he actually had in mind has proven to be completely wrong—doesn’t stop her from making her version of the idea the centerpiece of her argument. “We can,” she writes,
learn to master many aspects of our attic’s structure, throwing out junk that got in by mistake (as Holmes promises to forget Copernicus at the earliest opportunity), prioritizing those things we want to and pushing back those that we don’t, learning how to take the contours of our unique attic into account so that they don’t unduly influence us as they otherwise might. (27)
This all sounds great—a little too great—from a self-improvement perspective, but the attic metaphor is Sherlock’s explanation for why he doesn’t know the earth revolves around the sun and not the other way around. He states quite explicitly that he believes the important point of similarity between attics and brains is their limited capacity. “Depend upon it,” he insists, “there comes a time when for every addition of knowledge you forget something that you knew before.” Note here his topic is knowledge, not attention.
            It is possible that a human mind could reach and exceed its storage capacity, but the way we usually avoid this eventuality is that memories that are seldom referenced are forgotten. Learning new facts may of course exhaust our resources of time and attention. But the usual effect of acquiring knowledge is quite the opposite of what Sherlock suggests. In the early 1990’s, a research team led by Patricia Alexander demonstrated that having background knowledge in a subject area actually increased participants’ interest in and recall for details in an unfamiliar text. One of the most widely known replications of this finding was a study showing that chess experts have much better recall for the positions of pieces on a board than novices. However, Sherlock was worried about information outside of his area of expertise. Might he have a point there?
The problem is that Sherlock’s vocation demands a great deal of creativity, and it’s never certain at the outset of a case what type of knowledge may be useful in solving it. In the story “The Lion’s Mane,” he relies on obscure information about a rare species of jellyfish to wrap up the mystery. Konnikova cites this as an example of “The Importance of Curiosity and Play.” She goes on to quote Sherlock’s endorsement for curiosity in The Valley of Fear: “Breadth of view, my dear Mr. Mac, is one of the essentials of our profession. The interplay of ideas and the oblique uses of knowledge are often of extraordinary interest” (151). How does she account for the discrepancy? Could Conan Doyle’s conception of the character have undergone some sort of evolution? Alas, Konnikova isn’t interested in questions like that. “As with most things,” she writes about the earlier reference to the attic theory, “it is safe to assume that Holmes was exaggerating for effect” (150). I’m not sure what other instances she may have in mind—it seems to me that the character seldom exaggerates for effect. In any case, he was certainly not exaggerating his ignorance of Copernican theory in the earlier story.
If Konnikova were simply privileging the science at the expense of the literature, the measure of Mastermind’s success would be in how clearly the psychological theories and findings are laid out. Unfortunately, her attempt to stitch science together with pronouncements from the great detective often leads to confusing tangles of ideas. Following her formula, she prefaces one of the few example exercises from cognitive research provided in the book with a quote from “The Crooked Man.” After outlining the main points of the case, she writes, 
How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision. (169)
So far she hasn’t gone beyond the obvious. But she does go on to cite a truly remarkable finding that emerged from research by Amos Tversky and Daniel Kahneman in the early 1980’s. People who read a description of a man named Bill suggesting he lacks imagination tended to feel it was less likely that Bill was an accountant than that he was an accountant who plays jazz for a hobby—even though the two points of information in that second description make it inherently less likely than the one point of information in the first. The same result came when people were asked whether it was more likely that a woman named Linda was a bank teller or both a bank teller and an active feminist. People mistook the two-item choice as more likely. Now, is this experimental finding an example of how people fail to sift crucial from incidental facts?
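(A brief aside for readers who want the arithmetic spelled out: the reason the two-item description can never be the more likely one is the conjunction rule of probability; the numbers below are invented purely for illustration and are not taken from Tversky and Kahneman’s studies.

P(teller and feminist) = P(teller) × P(feminist | teller) ≤ P(teller)

If we guess, say, P(teller) = 0.05 and P(feminist | teller) = 0.3, then P(teller and feminist) = 0.05 × 0.3 = 0.015, which is necessarily smaller than the 0.05 assigned to “bank teller” alone. Whatever numbers you plug in, the conjunction can never be more probable than either of its parts.)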
            The findings of this study are now used as evidence of a general cognitive tendency known as the conjunction fallacy. In his book Thinking, Fast and Slow, Kahneman explains how more detailed descriptions (referring to Tom instead of Bill) can seem more likely, despite the actual probabilities, than shorter ones. He writes,
The judgments of probability that our respondents offered, both in the Tom W and Linda problems, corresponded precisely to judgments of representativeness (similarity to stereotypes). Representativeness belongs to a cluster of closely related basic assessments that are likely to be generated together. The most representative outcomes combine with the personality description to produce the most coherent stories. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary. (159)
So people are confused because the less probable version is actually easier to imagine. But here’s how Konnikova tries to explain the point by weaving it together with Sherlock’s ideas:
Holmes puts it this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories in our minds. (173)
But Sherlock is not referring to our minds’ tendency to mistake coherence for probability, the tendency that has us seeing more detailed and hence less probable stories as more likely. How could he have been? Instead, he’s talking about the importance of independently assessing the facts instead of passively accepting the assessments of others. Konnikova is fudging, and in doing so she’s shortchanging the story and obfuscating the science.
            As the subtitle implies, though, Mastermind is about how to think; it is intended as a self-improvement guide. The book should therefore be judged on how likely readers are to come away with a greater ability to recognize and avoid cognitive biases, along with the motivation and alertness to keep doing so. Konnikova emphasizes throughout that becoming a better thinker is a matter of determinedly forming better habits of thought. And she helpfully provides countless illustrative examples from the Holmes canon, though some of these precepts and examples may not be as apt as she’d like. You must have clear goals, she stresses, to help you focus your attention. But the overall purpose of her book provides a great example of a vague and unrealistic end-point. Think better? In what domain? She covers examples from countless areas, from buying cars and phones to sizing up strangers we meet at a party. Sherlock, of course, is a detective, so he focuses his attention on solving crimes. As Konnikova dutifully points out, in domains other than his specialty, he’s not such a mastermind.
            Mastermind works best as a fun introduction to modern psychology. But it has several major shortcomings in that domain, and these same shortcomings diminish the likelihood that reading the book will lead to any lasting changes in thought habits. Concepts are covered too quickly, organized too haphazardly, and no conceptual scaffold is provided to help readers weigh or remember the principles in context. Konnikova’s strategy is to take a passage from Conan Doyle’s stories that seems to bear on noteworthy findings in modern research, discuss that research with sprinkled references back to the stories, and wrap up with a didactic and sententious paragraph or two. Usually, the discussion begins with one of Watson’s errors, moves on to research showing we all tend to make similar errors, and then ends by admonishing us not to be like Watson. Following Kahneman’s division of cognition into two systems—one fast and intuitive, the other slower and demanding of effort—Konnikova urges us to get out of our “System Watson” and rely instead on our “System Holmes.” “But how do we do this in practice?” she asks near the end of the book,
How do we go beyond theoretically understanding this need for balance and open-mindedness and applying it practically, in the moment, in situations where we might not have as much time to contemplate our judgments as we do in the leisure of our reading?
The answer she provides: “It all goes back to the very beginning: the habitual mindset that we cultivate, the structure that we try to maintain for our brain attic no matter what” (240). Unfortunately, nowhere in her discussion of built-in biases and the correlates of creativity did she offer any step-by-step instruction on how to acquire new habits. Konnikova is running us around in circles to hide the fact that her book makes an empty promise.
Tellingly, Kahneman, whose work on biases Konnikova cites on several occasions, is much more pessimistic about our prospects for achieving Holmesian thought habits. In the introduction to Thinking, Fast and Slow, he says his goal is merely to provide terms and labels for the regular pitfalls of thinking to facilitate more precise gossiping. He writes,
Why be concerned with gossip? Because it is much easier, as well as far more enjoyable, to identify and label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated judgments therefore matters. The expectation of intelligent gossip is a powerful motive for serious self-criticism, more powerful than New Year resolutions to improve one’s decision making at work and home. (3)
The worshipful attitude toward Sherlock in Mastermind is designed to pander to our vanity, and so the suggestion that we need to rely on others to help us think is too mature to appear in its pages. The closest Konnikova comes to allowing for the importance of input and criticism from other people is when she suggests that Watson is an indispensable facilitator of Sherlock’s process because he “serves as a constant reminder of what errors are possible” (195), and because in walking him through his reasoning Sherlock is forced to be more mindful. “It may be that you are not yourself luminous,” Konnikova quotes from The Hound of the Baskervilles, “but you are a conductor of light. Some people without possessing genius have a remarkable power of stimulating it. I confess, my dear fellow, that I am very much in your debt” (196).
            That quote shows one of the limits of Sherlock’s mindfulness that Konnikova never bothers to address. At times throughout Mastermind, it’s easy to forget that we probably wouldn’t want to live the way Sherlock is described as living. Want to be a great detective? Abandon your spouse and your kids, move into a cheap flat, work full-time reviewing case histories of past crimes, inject some cocaine, shoot holes in the wall of your flat where you’ve drawn a smiley face, smoke a pipe until the air is unbreathable, and treat everyone, including your best (only?) friend, with casual contempt. Conan Doyle made sure his character casts a shadow. The ideal character Konnikova holds up, with all his determined mindfulness, often bears more resemblance to Kwai Chang Caine from Kung Fu than to Conan Doyle’s detective. This isn’t to say that Sherlock isn’t morally complex—readers love him because he’s so clearly a good guy, as selfish and eccentric as he may be. Konnikova cites an instance in which he holds off on letting the police know who committed a crime. She quotes:
Once that warrant was made out, nothing on earth would save him. Once or twice in my career I feel that I have done more real harm by my discovery of the criminal than ever he had done by his crime. I have learned caution now, and I had rather play tricks with the law of England than with my own conscience. Let us know more before we act.
But Konnikova isn’t interested in morality, complex or otherwise, no matter how central moral intuitions are to our enjoyment of fiction. The lesson she draws from this passage shows her at her most sententious and platitudinous:
You don’t mindlessly follow the same preplanned set of actions that you had determined early on. Circumstances change, and with them so does the approach. You have to think before you leap to act, or judge someone, as the case may be. Everyone makes mistakes, but some may not be mistakes as such, when taken in the context of the time and the situation. (243)
Hard to disagree, isn’t it?
            To be fair, Konnikova does mention some of Sherlock’s peccadilloes in passing. And she includes a penultimate chapter titled “We’re Only Human,” in which she tells the story of how Conan Doyle was duped by a couple of young girls into believing they had photographed some real fairies. She doesn’t, however, take the opportunity afforded by this episode in the author’s life to explore the relationship between the man and his creation. She effectively says that he got tricked because he didn’t do what he knew how to do, that it can happen to any of us, and that we should be careful not to let it happen to ourselves. Aren’t you glad that’s cleared up? She goes on to end the chapter with an incongruous lesson about how you should think like a hunter. Maybe we should, but how exactly, and when, and at what expense, we’re never told.
            Konnikova clearly has a great deal of genuine enthusiasm for both literature and science, and despite my disappointment with her first book I plan to keep following her blog. I’m even looking forward to her next book—confident she’ll learn from the negative reviews she’s bound to get on this one. Still, she made a tragic blunder in eschewing nuanced examinations of how stories work, how people relate to characters, and how authors create them in favor of a shallow, one-dimensional attempt to suggest that a hundred-year-old fictional character somehow divined groundbreaking research findings from the end of the twentieth century and the beginning of the twenty-first. The misstep calls to mind an exchange you can watch on YouTube between Neil deGrasse Tyson and Richard Dawkins. Tyson, after hearing Dawkins speak in the way he’s known to, tries to explain why many scientists feel he’s not making the most of his opportunities to reach out to the public.
You’re professor of the public understanding of science, not the professor of delivering truth to the public. And these are two different exercises. One of them is putting the truth out there and they either buy your book or they don’t. That’s not being an educator; that’s just putting it out there. Being an educator is not only getting the truth right; there’s got to be an act of persuasion in there as well. Persuasion isn’t “Here’s the facts—you’re either an idiot or you’re not.” It’s “Here are the facts—and here is a sensitivity to your state of mind.” And it’s the facts and the sensitivity when convolved together that creates impact. And I worry that your methods, and how articulately barbed you can be, ends up being simply ineffective when you have much more power of influence than is currently reflected in your output.
Dawkins begins his response with an anecdote that shows that he’s not the worst offender when it comes to simple and direct presentations of the facts.
A former and highly successful editor of New Scientist Magazine, who actually built up New Scientist to great new heights, was asked “What is your philosophy at New Scientist?” And he said, “Our philosophy at New Scientist is this: science is interesting, and if you don’t agree you can fuck off.”
I know the issue is a complicated one, but I can’t help thinking Tyson-style persuasion too often has the opposite of its intended impact, conveying as it does the implicit message that science has to somehow be sold to the masses, that it isn’t intrinsically interesting. At any rate, I wish that Konnikova hadn’t dressed up her book with false promises and what she thought would be cool cross-references. Sherlock Holmes is interesting. Psychology is interesting. If you don’t agree, you can fuck off.

Why Shakespeare Nauseated Darwin: A Review of Keith Oatley's "Such Stuff as Dreams"

Review of Such Stuff as Dreams: The Psychology of Fiction by Keith Oatley
            Late in his life, Charles Darwin lost his taste for music and poetry. “My mind seems to have become a kind of machine for grinding general laws out of large collections of facts,” he laments in his autobiography, and for many of us the temptation to place all men and women of science into a category of individuals whose minds resemble machines more than living and emotionally attuned organs of feeling and perceiving is overwhelming. In the 21st century, we even have a convenient psychiatric diagnosis for people of this sort. Don’t we just assume Sheldon in The Big Bang Theory has autism, or at least the milder version of it known as Asperger’s? It’s probably even safe to assume the show’s writers had the diagnostic criteria for the disorder in mind when they first developed his character. Likewise, Dr. Watson in the BBC’s new and obscenely entertaining Sherlock series can’t resist a reference to the quintessential evidence-crunching genius’s own supposed Asperger’s. In Darwin’s case, however, the move away from the arts couldn’t have been due to any congenital deficiency in his finer human sentiments because it occurred only in adulthood. He writes,

I have said that in one respect my mind has changed during the last twenty or thirty years. Up to the age of thirty, or beyond it, poetry of many kinds, such as the works of Milton, Gray, Byron, Wordsworth, Coleridge, and Shelley, gave me great pleasure, and even as a schoolboy I took intense delight in Shakespeare, especially in the historical plays. I have also said that formerly pictures gave me considerable, and music very great delight. But now for many years I cannot endure to read a line of poetry: I have tried lately to read Shakespeare, and found it so intolerably dull that it nauseated me. I have also almost lost my taste for pictures or music. Music generally sets me thinking too energetically on what I have been at work on, instead of giving me pleasure.

We could interpret Darwin here as suggesting that casting his mind too doggedly into his scientific work somehow ruined his capacity to appreciate Shakespeare. But, like all thinkers and writers of great nuance and sophistication, his ideas are easy to mischaracterize through selective quotation (or, if you’re Ben Stein or any of the other unscrupulous writers behind creationist propaganda like the pseudo-documentary Expelled, you can just lie about what he actually wrote). One of the most charming things about Darwin is that his writing is often more exploratory than merely informative. He writes in search of answers he has yet to discover. In a wider context, the quote about his mind becoming a machine, for instance, reads,

This curious and lamentable loss of the higher aesthetic tastes is all the odder, as books on history, biographies, and travels (independently of any scientific facts which they may contain), and essays on all sorts of subjects interest me as much as ever they did. My mind seems to have become a kind of machine for grinding general laws out of large collections of facts, but why this should have caused the atrophy of that part of the brain alone, on which the higher tastes depend, I cannot conceive. A man with a mind more highly organised or better constituted than mine, would not, I suppose, have thus suffered; and if I had to live my life again, I would have made a rule to read some poetry and listen to some music at least once every week; for perhaps the parts of my brain now atrophied would thus have been kept active through use. The loss of these tastes is a loss of happiness, and may possibly be injurious to the intellect, and more probably to the moral character, by enfeebling the emotional part of our nature.

His concern for his lost aestheticism notwithstanding, Darwin’s humanism, his humanity, radiates in his writing with a warmth that belies any claim about thinking like a machine, just as the intelligence that shows through it gainsays his humble deprecations about the organization of his mind.

           In this excerpt, Darwin, perhaps inadvertently, even manages to put forth a theory of the function of art. Somehow, poetry and music not only give us pleasure and make us happy—enjoying them actually constitutes a type of mental exercise that strengthens our intellect, our emotional awareness, and even our moral character. Novelist and cognitive psychologist Keith Oatley explores this idea of human betterment through aesthetic experience in his book Such Stuff as Dreams: The Psychology of Fiction. This subtitle is notably underwhelming given the long history of psychoanalytic theorizing about the meaning and role of literature. However, whereas psychoanalysis has fallen into disrepute among scientists because of its multiple empirical failures and a general methodological hubris common among its practitioners, the work of Oatley and his team at the University of Toronto relies on much more modest, and at the same time much more sophisticated, scientific protocols. One of the tools these researchers use, The Reading the Mind in the Eyes Test, was in fact first developed to research our new category of people with machine-like minds. What the researchers find bolsters Darwin’s impression that art, at least literary art, functions as a kind of exercise for our faculty of understanding and relating to others.

           Reasoning that “fiction is a kind of simulation of selves and their vicissitudes in the social world” (159), Oatley and his colleague Raymond Mar hypothesized that people who spent more time trying to understand fictional characters would be better at recognizing and reasoning about other, real-world people’s states of mind. So they devised a test to assess how much fiction participants in their study read based on how well they could categorize a long list of names according to which ones belonged to authors of fiction, which to authors of nonfiction, and which to non-authors. They then had participants take the Mind-in-the-Eyes Test, which consists of matching close-up pictures of people’s eyes with terms describing their emotional state at the time they were taken. The researchers also had participants take the Interpersonal Perception Test, which has them answer questions about the relationships of people in short video clips featuring social interactions. An example question might be “Which of the two children, or both, or neither, are offspring of the two adults in the clip?”  (Imagine Sherlock Holmes taking this test.) As hypothesized, Oatley writes, “We found that the more fiction people read, the better they were at the Mind-in-the-Eyes Test. A similar relationship held, though less strongly, for reading fiction and the Interpersonal Perception Test” (159).

            One major shortcoming of this study is that it fails to establish causality; people who are naturally better at reading emotions and making sound inferences about social interactions may gravitate to fiction for some reason. So Mar set up an experiment in which he had participants read either a nonfiction article from an issue of the New Yorker or a work of short fiction chosen to be the same length and require the same level of reading skills. When the two groups then took a test of social reasoning, the ones who had read the short story outperformed the control group. Both groups also took a test of analytic reasoning as a further control; on this variable there was no difference in performance between the groups. The outcome of this experiment, Oatley stresses, shouldn’t be interpreted as evidence that reading one story will increase your social skills in any meaningful and lasting way. But reading habits established over long periods likely explain the more significant differences between individuals found in the earlier study. As Oatley explains,

Readers of fiction tend to become more expert at making models of others and themselves, and at navigating the social world, and readers of non-fiction are likely to become more expert at genetics, or cookery, or environmental studies, or whatever they spend their time reading. Raymond Mar’s experimental study on reading pieces from the New Yorker is probably best explained by priming. Reading a fictional piece puts people into a frame of mind of thinking about the social world, and this is probably why they did better at the test of social reasoning. (160)

Connecting these findings to real-world outcomes, Oatley and his team also found that “reading fiction was not associated with loneliness,” as the stereotype suggests, “but was associated with what psychologists call high social support, being in a circle of people whom participants saw a lot, and who were available to them practically and emotionally” (160).

            These studies by the University of Toronto team have received wide publicity, but the people who should be the most interested in them have little or no idea how to go about making sense of them. Most people simply either read fiction or they don’t. If you happen to be of the tribe who studies fiction, then you were probably educated in a way that engendered mixed feelings—profound confusion really—about science and how it works. In his review of The Storytelling Animal, a book in which Jonathan Gottschall incorporates the Toronto team’s findings into the theory that narrative serves the adaptive function of making human social groups more cooperative and cohesive, Adam Gopnik sneers,

Surely if there were any truth in the notion that reading fiction greatly increased our capacity for empathy then college English departments, which have by far the densest concentration of fiction readers in human history, would be legendary for their absence of back-stabbing, competitive ill-will, factional rage, and egocentric self-promoters; they’d be the one place where disputes are most often quickly and amiably resolved by mutual empathetic engagement. It is rare to see a thesis actually falsified as it is being articulated.

Oatley himself is well aware of the strange case of university English departments. He cites a report by Willie van Peer on a small study he did comparing students in the natural sciences to students in the humanities. Oatley explains,

There was considerable scatter, but on average the science students had higher emotional intelligence than the humanities students, the opposite of what was expected; van Peer indicts teaching in the humanities for often turning people away from human understanding towards technical analyses of details. (160)

Oatley suggests in a footnote that an earlier study corroborates van Peer’s indictment. It found that high school students who showed more emotional involvement with short stories—the type of connection that would engender greater empathy—did proportionally worse on standard academic assessments of English proficiency. The clear implication of these findings is that the way literature is taught in universities and high schools is long overdue for an in-depth critical analysis.

            The idea that literature has the power to make us better people is not new; indeed, it was the very idea on which the humanities were originally founded. We have to wonder what people like Gopnik believe the point of celebrating literature is if not to foster greater understanding and empathy. If you either enjoy it or you don’t, and it has no beneficial effects on individuals or on society in general, why bother encouraging anyone to read? Why bother writing essays about it in the New Yorker? Tellingly, many scholars in the humanities began doubting the power of art to inspire greater humanity around the same time they began questioning the value and promise of scientific progress. Oatley writes,

Part of the devastation of World War II was the failure of German citizens, one of the world’s most highly educated populations, to prevent their nation’s slide into Nazism. George Steiner has famously asserted: “We know that a man can read Goethe or Rilke in the evening, that he can play Bach and Schubert, and go to his day’s work at Auschwitz in the morning.” (164)

Postwar literary theory and criticism have, perversely, tended toward the view that literature and language in general serve as a vessel for passing on all the evils inherent in our western, patriarchal, racist, imperialist culture. The purpose of literary analysis then becomes to sift out these elements and resist them. Unfortunately, such accusatory theories leave unanswered the question of why, if literature inculcates oppressive ideologies, we should bother reading it at all. As van Peer muses in the report Oatley cites, “The Inhumanity of the Humanities,”

Consider the ills flowing from postmodern approaches, the “posthuman”: this usually involves the hegemony of “race/class/gender” in which literary texts are treated with suspicion. Here is a major source of that loss of emotional connection between student and literature. How can one expect a certain humanity to grow in students if they are continuously instructed to distrust authors and texts? (8)

           Oatley and van Peer point out, moreover, that the evidence for concentration camp workers having any degree of literary or aesthetic sophistication is nonexistent. According to the best available evidence, most of the greatest atrocities were committed by soldiers who never graduated high school. The suggestion that some type of cozy relationship existed between Nazism and an enthusiasm for Goethe runs afoul of recorded history. As Oatley points out,

Apart from propensity to violence, nationalism, and anti-Semitism, Nazism was marked by hostility to humanitarian values in education. From 1933 onwards, the Nazis replaced the idea of self-betterment through education and reading by practices designed to induce as many as possible into willing conformity, and to coerce the unwilling remainder by justified fear. (165)

Oatley also cites the work of historian Lynn Hunt, whose book Inventing Human Rights traces the original social movement for the recognition of universal human rights to the mid-1700s, when what we recognize today as novels were first being written. Other scholars like Steven Pinker have pointed out too that, while it’s hard not to dwell on tragedies like the Holocaust, even atrocities of that magnitude are resoundingly overmatched by the much larger post-Enlightenment trend toward peace, freedom, and the wider recognition of human rights. It’s sad that one of the lasting legacies of all the great catastrophes of the 20th Century is a tradition in humanities scholarship that has the people who are supposed to be the custodians of our literary heritage hell-bent on teaching us all the ways that literature makes us evil.

            Because Oatley is a central figure in what we can only hope is a movement to end the current reign of self-righteous insanity in literary studies, it pains me not to be able to recommend Such Stuff as Dreams to anyone but dedicated specialists. Oatley writes in the preface that he has “imagined the book as having some of the qualities of fiction. That is to say I have designed it to have a narrative flow” (x), and it may simply be that this suggestion set my expectations too high. But the book is poorly edited, the prose is bland and often rolls over itself into graceless tangles, and a couple of the chapters seem like little more than haphazardly collated reports of studies and theories, none exactly off-topic, none completely without interest, but all lacking any central progression or theme. The book often reads more like an annotated bibliography than a story. Oatley’s scholarly range is impressive, however, bearing not just on cognitive science and literature through the centuries but extending as well to the work of important literary theorists. The book is never unreadable, never opaque, but it’s not exactly a work of art in its own right.

            Insofar as Such Stuff as Dreams is organized around a central idea, it is that fiction ought to be thought of not as “a direct impression of life,” as Henry James suggests in his famous essay “The Art of Fiction,” and as many contemporary critics—notably James Wood—seem to think of it. Rather, Oatley agrees with Robert Louis Stevenson’s response to James’s essay, “A Humble Remonstrance,” in which he writes that

Life is monstrous, infinite, illogical, abrupt and poignant; a work of art in comparison is neat, finite, self-contained, rational, flowing, and emasculate. Life imposes by brute energy, like inarticulate thunder; art catches the ear, among the far louder noises of experience, like an air artificially made by a discreet musician. (qtd. on p. 8)

Oatley theorizes that stories are simulations, much like dreams, that go beyond mere reflections of life to highlight through defamiliarization particular aspects of life, to cast them in a new light so as to deepen our understanding and experience of them. He writes,

Every true artistic expression, I think, is not just about the surface of things. It always has some aspect of the abstract. The issue is whether, by a change of perspective or by a making the familiar strange, by means of an artistically depicted world, we can see our everyday world in a deeper way. (15)

Critics of high-brow literature like Wood appreciate defamiliarization at the level of description; Oatley is suggesting here though that the story as a whole functions as a “metaphor-in-the-large” (17), a way of not just making us experience as strange some object or isolated feeling, but of reconceptualizing entire relationships, careers, encounters, biographies—what we recognize in fiction as plots. This is an important insight, and it topples verisimilitude from its ascendant position atop the hierarchy of literary values while rendering complaints about clichéd plots potentially moot. Didn’t Shakespeare recycle plots after all?

            The theory of fiction as a type of simulation to improve social skills and possibly to facilitate group cooperation is emerging as the frontrunner in attempts to explain narrative interest in the context of human evolution. It is to date, however, impossible to rule out the possibility that our interest in stories is not directly adaptive but instead emerges as a byproduct of other traits that confer more immediate biological advantages. The finding that readers track actions in stories with the same brain regions that activate when they witness similar actions in reality, or when they engage in them themselves, is important support for the simulation theory. But the function of mirror neurons isn’t well enough understood yet for us to determine from this study how much engagement with fictional stories depends on the reader's identifying with the protagonist. Oatley’s theory is more consonant with direct and straightforward identification. He writes,

A very basic emotional process engages the reader with plans and fortunes of a protagonist. This is what often drives the plot and, perhaps, keeps us turning the pages, or keeps us in our seat at the movies or at the theater. It can be enjoyable. In art we experience the emotion, but with it the possibility of something else, too. The way we see the world can change, and we ourselves can change. Art is not simply taking a ride on preoccupations and prejudices, using a schema that runs as usual. Art enables us to experience some emotions in contexts that we would not ordinarily encounter, and to think of ourselves in ways that usually we do not. (118)

Much of this change, Oatley suggests, comes from realizing that we too are capable of behaving in ways that we might not like. “I am capable of this too: selfishness, lack of sympathy” (193), is what he believes we think in response to witnessing good characters behave badly.

            Oatley’s theory has a lot to recommend it, but William Flesch’s theory of narrative interest, which suggests we don’t identify with fictional characters directly but rather track them and anxiously hope for them to get whatever we feel they deserve, seems much more plausible in the context of our response to protagonists behaving in surprisingly selfish or antisocial ways. When I see Ed Norton as Tyler Durden beating Angel Face half to death in Fight Club, for instance, I don’t think, hey, that’s me smashing that poor guy’s face with my fists. Instead, I think, what the hell are you doing? I had you pegged as a good guy. I know you’re trying not to be as much of a pushover as you used to be but this is getting scary. I’m anxious that Angel Face doesn’t get too damaged—partly because I imagine that would be devastating to Tyler. And I’m anxious lest this incident be a harbinger of worse behavior to come.

            The issue of identification is just one of several interesting questions that lend themselves to further research. Oatley and Mar’s studies are not enormous in terms of sample size, and their subjects were mostly young college students. What types of fiction work best to foster empathy? What types of reading strategies might we encourage students to apply to reading literature—apart from trying to remove obstacles to emotional connections with characters? But, aside from the Big-Bad-Western Empire myth that currently has humanities scholars grooming successive generations of deluded ideologues to be little more than culture vultures presiding over the creation and celebration of Loser Lit, the other main challenge to transporting literary theory onto firmer empirical grounds is the assumption that the arts in general and literature in particular demand a wholly different type of thinking to create and appreciate than the type that goes into the intricate mechanics and intensely disciplined practices of science.

As Oatley and the Toronto team have shown, people who enjoy fiction tend to have the opposite of autism. And people who do science are, well, Sheldon. Interestingly, though, the writers of The Big Bang Theory, for whatever reason, included some contraindications for a diagnosis of autism or Asperger’s in Sheldon’s character. Like the other scientists in the show, he’s obsessed with comic books, which require at least some understanding of facial expression and body language to follow. As Simon Baron-Cohen, the autism researcher who designed the Mind-in-the-Eyes test, explains, “Autism is an empathy disorder: those with autism have major difficulties in 'mindreading' or putting themselves into someone else’s shoes, imagining the world through someone else’s feelings” (137). Baron-Cohen has coined the term “mindblindness” to describe the central feature of the disorder, and many have posited that the underlying cause is abnormal development of the brain regions devoted to perspective taking and understanding others, what cognitive psychologists refer to as our Theory of Mind.

            To follow comic book plotlines, Sheldon would have to make ample use of his own Theory of Mind. He’s also given to absorption in various science fiction shows on TV. If he were only interested in futuristic gadgets, as an autistic would be, he could just as easily get more scientifically plausible versions of them in any number of nonfiction venues. By Baron-Cohen’s definition, Sherlock Holmes can’t possibly have Asperger’s either because his ability to get into other people’s heads is vastly superior to pretty much everyone else’s. As he explains in “The Musgrave Ritual,” “You know my methods in such cases, Watson: I put myself in the man’s place, and having first gauged his intelligence, I try to imagine how I should myself have proceeded under the same circumstances.”

            What about Darwin, though, that demigod of science who openly professed to being nauseated by Shakespeare? Isn’t he a prime candidate for entry into the surprisingly unpopulated ranks of heartless, data-crunching scientists whose thinking lends itself so conveniently to cooptation by oppressors and committers of wartime atrocities? It turns out that though Darwin held many of the same racist views as nearly all educated men of his time, his ability to empathize across racial and class divides was extraordinary. Darwin was not himself a Social Darwinist; Social Darwinism was a theory devised by Herbert Spencer to justify inequality (and it has currency still today among political conservatives). And Darwin was also a passionate abolitionist, as is clear in the following excerpts from The Voyage of the Beagle:

On the 19th of August we finally left the shores of Brazil. I thank God, I shall never again visit a slave-country. To this day, if I hear a distant scream, it recalls with painful vividness my feelings, when passing a house near Pernambuco, I heard the most pitiable moans, and could not but suspect that some poor slave was being tortured, yet knew that I was as powerless as a child even to remonstrate.

Darwin is responding to cruelty in a way no one around him at the time would have. And note how deeply it pains him, how profound and keenly felt his sympathy is.

I was present when a kind-hearted man was on the point of separating forever the men, women, and little children of a large number of families who had long lived together. I will not even allude to the many heart-sickening atrocities which I authentically heard of;—nor would I have mentioned the above revolting details, had I not met with several people, so blinded by the constitutional gaiety of the negro as to speak of slavery as a tolerable evil.

            The question arises, not whether Darwin had sacrificed his humanity to science, but why he had so much more humanity than many other intellectuals of his day.

It is often attempted to palliate slavery by comparing the state of slaves with our poorer countrymen: if the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin; but how this bears on slavery, I cannot see; as well might the use of the thumb-screw be defended in one land, by showing that men in another land suffered from some dreadful disease.

And finally we come to the matter of Darwin’s Theory of Mind, which was quite clearly in no way deficient.

Those who look tenderly at the slave owner, and with a cold heart at the slave, never seem to put themselves into the position of the latter;—what a cheerless prospect, with not even a hope of change! picture to yourself the chance, ever hanging over you, of your wife and your little children—those objects which nature urges even the slave to call his own—being torn from you and sold like beasts to the first bidder! And these deeds are done and palliated by men who profess to love their neighbours as themselves, who believe in God, and pray that His Will be done on earth! It makes one's blood boil, yet heart tremble, to think that we Englishmen and our American descendants, with their boastful cry of liberty, have been and are so guilty; but it is a consolation to reflect, that we at least have made a greater sacrifice than ever made by any nation, to expiate our sin. (530-31)

            I suspect that Darwin’s distaste for Shakespeare was born of oversensitivity. He doesn’t say music failed to move him; he didn’t like it because it made him think “too energetically.” And as aesthetically pleasing as Shakespeare is, existentially speaking, his plays tend to be pretty harsh, even the comedies. When Prospero says, “We are such stuff / as dreams are made on” in Act 4 of The Tempest, he’s actually talking not about characters in stories, but about how ephemeral and insignificant real human lives are. But why, beyond some likely nudge from his inherited temperament, was Darwin so sensitive? Why was he so empathetic even to those so vastly different from him? After admitting he’d lost his taste for Shakespeare, paintings, and music, he goes on to say,

On the other hand, novels which are works of the imagination, though not of a very high order, have been for years a wonderful relief and pleasure to me, and I often bless all novelists. A surprising number have been read aloud to me, and I like all if moderately good, and if they do not end unhappily—against which a law ought to be passed. A novel, according to my taste, does not come into the first class unless it contains some person whom one can thoroughly love, and if a pretty woman all the better.
[Check out the Toronto group's blog at onfiction.ca]


What's the Point of Difficult Reading?


          You sit reading the first dozen or so pages of some celebrated classic and gradually realize that having to sort out how the ends of the long sentences fix to their beginnings is taking just enough effort to distract you entirely from the setting or character you’re supposed to be getting to know. After a handful of words you swear are made up and a few tangled metaphors you find yourself riddling over with nary a resolution, the dread sinks in. Is the whole book going to be like this? Is it going to be one of those deals where you get to what’s clearly meant to be a crucial turning point in the plot but for you is just another riddle without a solution, sending you paging back through the forest of verbiage in search of some key succession of paragraphs you spaced out while reading the first time through? Then you wonder if you’re missing some other kind of key, like maybe the story’s an allegory, a reference to some historical event like World War II or some Revolution you once had to learn about but have since lost all recollection of. Maybe the insoluble similes are allusions to some other work you haven’t read or can’t recall. In any case, you’re not getting anything out of this celebrated classic but frustration leading to the dual suspicion that you’re too ignorant or stupid to enjoy great literature and that the whole “great literature” thing is just a conspiracy to trick us into feeling dumb so we’ll defer to the pseudo-wisdom of Ivory Tower elites.

            If enough people of sufficient status get together and agree to extol a work of fiction, they can get almost everyone else to agree. The readers who get nothing out of it but frustration and boredom assume that since their professors or some critic in a fancy-pants magazine or the judges of some literary award committee think it’s great they must simply be missing something. They dutifully continue reading it, parrot a few points from a review that sound clever, and afterward toe the line by agreeing that it is indeed a great work of literature, clearly, even if it doesn’t speak to them personally. For instance, James Joyce’s Ulysses, utterly nonsensical to anyone without at least a master’s degree, tops the Modern Library’s list of 100 best novels in the English language. Responding to the urging of his friends to write out an explanation of the novel, Joyce scoffed, boasting, “I’ve put in so many enigmas and puzzles that it will keep the professors busy for centuries arguing over what I meant, and that’s the only way of ensuring one’s immortality.” He was right. To this day, professors continue to love him even as Ulysses and the even greater monstrosity Finnegans Wake do nothing but bore and befuddle everyone else—or else, more fittingly, sit inert or unchecked-out on the shelf, gathering well-deserved dust.

Jonathan Franzen, courtesy of Frank Bauer
            Joyce’s later novels are not literature; they are lengthy collections of loosely connected literary puzzles. But at least his puzzles have actual solutions—or so I’m told. Ulysses represents the apotheosis of the tradition in literature called modernism. What came next, postmodernism, is even more disconnected from the universal human passion for narrative. Even professors aren’t sure what to do with it, so they simply throw their hands up, say it’s great, and explain that the source of its greatness is its very resistance to explanation. Jonathan Franzen, whose 2001 novel The Corrections represented a major departure from the postmodernism he began his career experimenting with, explained the following year in The New Yorker how he’d turned away from the tradition. He’d been reading the work of William Gaddis “as a kind of penance” (101) and not getting any meaning out of it. Of the final piece in the celebrated author’s oeuvre, Franzen writes,

The novel is an example of the particular corrosiveness of literary postmodernism. Gaddis began his career with a Modernist epic about the forgery of masterpieces. He ended it with a pomo romp that superficially resembles a masterpiece but punishes the reader who tries to stay with it and follow its logic. When the reader finally says, Hey, wait a minute, this is a mess, not a masterpiece, the book instantly morphs into a performance-art prop: its fraudulence is the whole point! And the reader is out twenty hours of good-faith effort. (111)

In other words, reading postmodern fiction means not only forgoing the rewards of narrative, replacing them with the more taxing endeavor of solving multiple riddles in succession, but discovering that those riddles don’t even have answers. What’s the point of reading this crap? Exactly. Get it?

            You can dig deeper into the meaningless meanderings of pomos and discover there is in fact an ideology inspiring all the infuriating inanity. The super smart people who write and read this stuff point to the willing, eager complicity of the common reader in the propagation of all the lies that sustain our atrociously unjust society (but atrociously unjust compared to what?). Franzen refers to this as the Fallacy of the Stupid Reader,

wherein difficulty is a “strategy” to protect art from cooptation and the purpose of art is to “upset” or “compel” or “challenge” or “subvert” or “scar” the unsuspecting reader; as if the writer’s audience somehow consisted, again and again, of Charlie Browns running to kick Lucy’s football; as if it were a virtue in a novelist to be the kind of boor who propagandizes at friendly social gatherings. (109)

But if the author is worried about art becoming a commodity, does making the art shitty really amount to a solution? And if the goal is to make readers rethink something they take for granted, why not bring the matter up directly, or have a character wrestle with it, or have a character argue with another character about it? The sad fact is that these authors probably just suck, that, as Franzen suspects, “literary difficulty can operate as a smoke screen for an author who has nothing interesting, wise, or entertaining to say” (111).

            Not all difficulty in fiction is a smoke screen though. Not all the literary emperors are naked. Franzen writes that “there is no headache like the headache you get from working harder on deciphering a text than the author, by all appearances, has worked on assembling it.” But the essay, titled “Mr. Difficult,” begins with a reader complaint sent not to Gaddis but to Franzen himself. And the reader, a Mrs. M. from Maryland, really gives him the business:

Who is it that you are writing for? It surely could not be the average person who enjoys a good read… The elite of New York, the elite who are beautiful, thin, anorexic, neurotic, sophisticated, don’t smoke, have abortions tri-yearly, are antiseptic, live in penthouses, this superior species of humanity who read Harper’s and The New Yorker. (100)

In this first part of the essay, Franzen introduces a dilemma that sets up his explanation of why he turned away from postmodernism—he’s an adherent of the “Contract model” of literature, whereby the author agrees to share, on equal footing, an entertaining or in some other way gratifying experience, as opposed to the “Status model,” whereby the author demonstrates his or her genius and if you don’t get it, tough. But his coming to a supposed agreement with Mrs. M. about writers like Gaddis doesn’t really resolve Mrs. M.’s conflict with him. The Corrections, after all, the novel she was responding to, represents his turning away from the tradition Gaddis wrote in. (It must be said, though, that Freedom, Franzen’s next novel, is written in a still more accessible style.)

            The first thing we must do to respond properly to Mrs. M. is break down each of Franzen’s models into two categories. The status model includes writers like Gaddis whose difficulty serves no purpose but to frustrate and alienate readers. But Franzen’s own type specimen for this model is Flaubert, much of whose writing, though difficult at first, rewards any effort to re-read and further comprehend with a more profound connection. So it is for countless other writers, including the one behind number two on the Modern Library’s ranking: Fitzgerald, with The Great Gatsby. As for the contract model, Franzen admits,

Taken to its free-market extreme, Contract stipulates that if a product is disagreeable to you the fault must be the product’s. If you crack a tooth on a hard word in a novel, you sue the author. If your professor puts Dreiser on your reading list, you write a harsh student evaluation… You’re the consumer; you rule. (100)

Franzen, in declaring himself a “Contract kind of person,” assumes that the free-market extreme can be dismissed for its extremity. But Mrs. M. would probably challenge him on that. For many, particularly right-leaning readers, the market not only can but should be relied on to determine which books are good and which ones belong in some tiny niche. When the Modern Library conducted a readers’ poll to create a popular ranking to balance the one made by experts, the ballot was stuffed by Ayn Rand acolytes and Scientologists. Mrs. M. herself leaves little doubt as to her political sympathies. For her and her fellow travelers, things like literature departments, National Book Awards—like the one The Corrections won—Nobels and Pulitzers are all an evil form of intervention into the sacred workings of the divine free market, un-American, sacrilegious, communist. According to this line of thinking, authors aren’t much different from whores—except of course literal whoring is condemned in the Bible (except when it isn’t).

            A contract with readers who score high on the personality dimension of openness to new ideas and experiences (who tend to be liberal), those who have spent a lot of time in the past reading books like The Great Gatsby or Heart of Darkness or Lolita (the horror!), those who read enough to have developed finely honed comprehension skills—that contract is going to look quite a bit different from one with readers who attend Beck University, those for whom Atlas Shrugged is the height of literary excellence. At the same time, though, the cult of self-esteem is poisoning schools and homes with the idea that suggesting that a student or son or daughter is anything other than a budding genius is a form of abuse. Heaven forbid a young person feel judged or criticized while speaking or writing. And if an author makes you feel the least bit dumb or ignorant, well, it’s an outrage—heroes like Mrs. M. to the rescue.

            One of the problems with the cult of self-esteem is that anticipating criticism tends to make people more, not less, creative. And the link between low self-esteem and mental disorders is almost purely mythical. High self-esteem is correlated with school performance, but as far as researchers can tell it’s the performance causing the esteem, not the other way around. More invidious, though, is the tendency to view anything that takes a great deal of education or intelligence to accomplish as an affront to everyone less educated or intelligent. Conservatives complain endlessly about class warfare and envy of the rich—the financially elite—but they have no qualms about decrying intellectual elites and condemning them for flaunting their superior literary achievements. They see the elitist mote in the eye of Nobel laureates without noticing the beam in their own.

         What’s the point of difficult reading? Well, what’s the point of running five or ten miles? What’s the point of eating vegetables as opposed to ice cream or Doritos? Difficulty need not preclude enjoyment. And discipline in the present is often rewarded in the future. It very well may be that the complexity of the ideas you’re capable of understanding is influenced by how many complex ideas you attempt to understand. No matter how vehemently true believers in the magic of markets insist otherwise, markets don’t have minds. And though an individual’s intelligence need not be fixed, a good way to ensure children never get any smarter than they already are is to make them feel fantastically wonderful about their mediocrity. We just have to hope that despite these ideological traps there are enough people out there determined to wrap their minds around complex situations depicted in complex narratives about complex people told in complex language, people who will in the process develop the types of minds and intelligence necessary to lead the rest of our lazy asses into a future that’s livable and enjoyable. For every John Galt, Tony Robbins, and Scheherazade, we may need at least half a Proust. We are still, however, left with quite a dilemma. Some authors really are just assholes who write worthless tomes designed to trick you into wasting your time. But some books that seem impenetrable on the first attempt will reward your efforts to decipher them. How do we get the rewards without wasting our time?

Also read "Can't Win for Losing: Why There are so many Losers in Literature and Why It has to Change."

And: "Life's White Machine: James Wood and What doesn't Happen in Fiction."

And: Stories, Social Proof, & Our Two Selves

Can’t Win for Losing: Why There Are So Many Losers in Literature and Why It Has to Change

Doris Lessing
            Ironically, the author of The Golden Notebook, celebrating its 50th anniversary this year, and considered by many a “feminist bible,” happens to be an outspoken critic of feminism. When asked in a 1982 interview with Lesley Hazelton about her response to readers who felt some of her later works were betrayals of the women whose cause she once championed, Doris Lessing replied,

What the feminists want of me is something they haven't examined because it comes from religion. They want me to bear witness. What they would really like me to say is, ‘Ha, sisters, I stand with you side by side in your struggle toward the golden dawn where all those beastly men are no more.’ Do they really want people to make oversimplified statements about men and women? In fact, they do. I've come with great regret to this conclusion.

Lessing has also been accused of being overly harsh—“castrating”—to men, too many of whom she believes roll over a bit too easily when challenged by women aspiring to empowerment. As a famous novelist, however, who would go on to win the Nobel Prize in Literature in 2007, she got to visit a lot of schools, and it gradually dawned on her that it wasn’t so much that men were rolling over but rather that they were being trained from childhood to be ashamed of their maleness. In a lecture she gave at the Edinburgh Book Festival in 2001, she said,

Great things have been achieved through feminism. We now have pretty much equality at least on the pay and opportunities front, though almost nothing has been done on child care, the real liberation. We have many wonderful, clever, powerful women everywhere, but what is happening to men? Why did this have to be at the cost of men? I was in a class of nine- and 10-year-olds, girls and boys, and this young woman was telling these kids that the reason for wars was the innately violent nature of men. You could see the little girls, fat with complacency and conceit while the little boys sat there crumpled, apologising for their existence, thinking this was going to be the pattern of their lives.

Lessing describes how the teacher kept casting glances expectant of her approval as she excoriated these impressionable children. 

           Elaine Blair, in “Great American Losers,” an essay that’s equal parts trenchant and infuriatingly obtuse, describes a dynamic in contemporary fiction that’s similar to the one Lessing saw playing out in the classroom.

The man who feels himself unloved and unlovable—this is a character that we know well from the latest generation or two of American novels. His trials are often played for sympathetic laughs. His loserdom is total: it extends to his stunted career, his squalid living quarters, his deep unease in the world.

At the heart of this loserdom is his self-fulfilling knowledge that women don’t like him. As opposed to men of earlier generations who felt entitled to a woman’s respect and admiration, Blair sees this modern male character as being “the opposite of entitled: he approaches women cringingly, bracing for a slap.” This desperation on the part of male characters to avoid offending women, to prove themselves capable of sublimating their own masculinity so they can be worthy of women’s affection, finds its source in the authors themselves. Blair writes,

Our American male novelists, I suspect, are worried about being unloved as writers—specifically by the female reader. This is the larger humiliation looming behind the many smaller fictional humiliations of their heroes, and we can see it in the way the characters’ rituals of self-loathing are tacitly performed for the benefit of an imagined female audience.
D.F. Wallace, courtesy of infinitesummer.org

           Blair quotes a review David Foster Wallace wrote of a John Updike novel to illustrate how conscious males writing literature today are of their female readers’ hostility toward men who write about sex and women without apologizing for liking sex and women—sometimes even outside the bounds of caring, committed relationships. Labeling Updike as a “Great Male Narcissist,” a distinction he shares with writers like Philip Roth and Norman Mailer, Wallace writes,

Most of the literary readers I know personally are under forty, and a fair number are female, and none of them are big admirers of the postwar GMNs. But it’s John Updike in particular that a lot of them seem to hate. And not merely his books, for some reason—mention the poor man himself and you have to jump back:
“Just a penis with a thesaurus.”
“Has the son of a bitch ever had one unpublished thought?”
“Makes misogyny seem literary the same way Rush [Limbaugh] makes fascism seem funny.”
And trust me: these are actual quotations, and I’ve heard even worse ones, and they’re all usually accompanied by the sort of facial expressions where you can tell there’s not going to be any profit in appealing to the intentional fallacy or talking about the sheer aesthetic pleasure of Updike’s prose.

Since Wallace is ready to “jump back” at the mere mention of Updike’s name, it’s no wonder he’s given to writing about characters who approach women “cringingly, bracing for a slap.”

Blair goes on to quote from Jonathan Franzen’s novel The Corrections, painting a plausible picture of male writers who fear not only that their books will be condemned if too misogynistic—a relative term that has come to mean “not as radically feminist as me”—but also that they themselves will be rejected. In Franzen’s novel, Chip Lambert has written a screenplay and asked his girlfriend Julia to give him her opinion. She holds off doing so, however, until after she breaks up with him and is on her way out the door. “For a woman reading it,” she says, “it’s sort of like the poultry department. Breast, breast, breast, thigh, leg” (26). Franzen describes his character’s response to the critique:

It seemed to Chip that Julia was leaving him because “The Academy Purple” had too many breast references and a draggy opening, and that if he could correct these few obvious problems, both on Julia’s copy of the script and, more important, on the copy he’d specially laser-printed on 24-pound ivory bond paper for [the film producer] Eden Procuro, there might be hope not only for his finances but also for his chances of ever again unfettering and fondling Julia’s own guileless, milk-white breasts. Which by this point in the day, as by late morning of almost every day in recent months, was one of the last activities on earth in which he could still reasonably expect to take solace for his failures. (28)

If you’re reading a literary work like The Corrections, chances are you’ve at some point sat in a literature class—or even a sociology or cultural studies class—and been instructed that the proper way to fulfill your function as a reader is to critically assess the work in terms of how women (or minorities) are portrayed. Both Chip and Julia have sat through such classes. And you’re encouraged to express disapproval, even outrage if something like a traditional role is enacted—or, gasp, objectification occurs. Blair explains how this affects male novelists:

When you see the loser-figure in a novel, what you are seeing is a complicated bargain that goes something like this: yes, it is kind of immature and boorish to be thinking about sex all the time and ogling and objectifying women, but this is what we men sometimes do and we have to write about it. We fervently promise, however, to avoid the mistake of the late Updike novels: we will always, always, call our characters out when they’re being self-absorbed jerks and louts. We will make them comically pathetic, and punish them for their infractions a priori by making them undesirable to women, thus anticipating what we imagine will be your judgments, female reader. Then you and I, female reader, can share a laugh at the characters’ expense, and this will bring us closer together and forestall the dreaded possibility of your leaving me.

In other words, these male authors are the grownup versions of those poor school boys Lessing saw forced to apologize for their own existence. Indeed, you can feel this dynamic, this bargain, playing out when you’re reading these guys’ books. Blair’s description of the problem is spot on. Her theory of what caused it, however, is laughable.

Because of the GMNs, these two tendencies—heroic virility and sexist condescension—have lingered in our minds as somehow yoked together, and the succeeding generations of American male novelists have to some degree accepted the dyad as truth. Behind their skittishness is a fearful suspicion that if a man gets what he wants, sexually speaking, he is probably exploiting someone.

The dread of slipping down the slope from attraction to exploitation has nothing to do with John Updike. Rather, it is embedded in terms at the very core of feminist ideology. Misogyny, for instance, is frequently deemed an appropriate label for men who indulge in lustful gazing, even in private. And the term objectification implies that the female whose subjectivity isn’t being properly revered is the victim of oppression. The main problem with this idea—and there are several—is that the term objectification is synonymous with attraction. The deluge of details about the female body in fiction by male authors can just as easily be seen as a type of confession, an unburdening of guilt by the offering up of sins. The female readers respond by assigning the writers some form of penance, like promising never to write, never even to think, that way again without flagellating themselves.

           The conflict between healthy male desire and disapproving feminist prudery doesn’t just play out in the tortured psyches of geeky American male novelists. A.S. Byatt, in her Booker Prize-winning novel Possession, satirizes scholars steeped in literary theory as “papery” and sterile. But the novel ends with a male scholar named Roland overcoming his theory-induced self-consciousness to initiate sex with another scholar named Maud. Byatt describes the encounter:

And very slowly and with infinite gentle delays and delicate diversions and variations of indirect assault Roland finally, to use an outdated phrase, entered and took possession of all her white coolness that grew warm against him, so that there seemed to be no boundaries, and he heard, towards dawn, from a long way off, her clear voice crying out, uninhibited, unashamed, in pleasure and triumph. (551)

The literary critic Monica Flegel cites this passage as an example of how Byatt’s old-fashioned novel features “such negative qualities of the form as its misogyny and its omission of the lower class.” Flegel is particularly appalled by how “stereotypical gender roles are reaffirmed” in the sex scene. “Maud is reduced in the end,” Flegel alleges, “to being taken possession of by her lover…and assured that Roland will ‘take care of her.’” How, we may wonder, did a man assuring a woman he would take care of her become an act of misogyny?
Martin Amis

            Perhaps critics like Flegel occupy some radical fringe; Byatt’s book was after all a huge success with audiences and critics alike, and it did win the Booker. The novelist Martin Amis, however, isn’t one to describe his assaults as indirect. He routinely dares to feature men who actually do treat women poorly in his novels—without any authorial condemnation. Martin Goff, the non-intervening director of the Booker Prize committee, tells the story of the 1989 controversy over whether or not Amis’s London Fields should be on the shortlist. Maggie Gee, a novelist, and Helen McNeil, a professor, simply couldn’t abide Amis’s treatment of his women characters. “It was an incredible row,” says Goff.

Maggie and Helen felt that Amis treated women appallingly in the book. That is not to say they thought books which treated women badly couldn't be good, they simply felt that the author should make it clear he didn't favour or bless that sort of treatment. Really, there was only two of them and they should have been outnumbered as the other three were in agreement, but such was the sheer force of their argument and passion that they won. David [Lodge] has told me he regrets it to this day, he feels he failed somehow by not saying, “It's two against three, Martin's on the list”.

In 2010, Amis explained his career-spanning failure to win a major literary award, despite enjoying robust book sales, thus:

There was a great fashion in the last century, and it's still with us, of the unenjoyable novel. And these are the novels which win prizes, because the committee thinks, “Well it's not at all enjoyable, and it isn't funny, therefore it must be very serious.”

Brits like Hilary Mantel and especially Ian McEwan are working to turn this dreadful trend around. But when McEwan dared to write a novel about a neurosurgeon who prevails in the end over an afflicted, less privileged tormentor, he was condemned by critic Jennifer Szalai in the pages of Harper’s Magazine for his “blithe, bourgeois sentiments.” If you’ve read Saturday, you know the sentiments are anything but blithe, and if you read Szalai’s review you’ll be taken aback by her articulate blindness.

           Amis is probably right in suggesting that critics and award committees have a tendency to mistake misery for profundity. But his own case, along with several others like it, hint at something even more disturbing, a shift in the very idea of what role fictional narratives play in our lives. The sad new reality is that, owing to the growing influence of ideologically extreme and idiotically self-righteous activist professors, literature is no longer read for pleasure and enrichment—it’s no longer even read as a challenging exercise in outgroup empathy. Instead, reading literature is supposed by many to be a ritual of male western penance. Prior to taking an interest in literary fiction, you must first be converted to the proper ideologies, made to feel sufficiently undeserving yet privileged, the beneficiary of a long history of theft and population displacement, the scion and gene-carrier of rapists and genocidaires—the horror, the horror. And you must be taught to systematically overlook and remain woefully oblivious of all the evidence that the Enlightenment was the best fucking thing that ever happened to the human species. Once you’re brainwashed into believing that so-called western culture is evil and that you’ve committed the original sin of having been born into it, you’re ready to perform your acts of contrition by reading horrendously boring fiction that forces you to acknowledge and reflect upon your own fallen state.
"In his new self-lacerating 'Memoir', J.M. Coetzee portrays
himself as a loser with no sexual presence." Here he is at the
Nobel ceremony.

           Fittingly, the apotheosis of this new literary tradition won the Booker in 1999, and its author, like Lessing, is a Nobel laureate. J.M. Coetzee’s Disgrace chronicles in exquisite free indirect discourse the degradation of David Lurie, a white professor in Cape Town, South Africa, beginning with his somewhat pathetic seduction of a black student, a crime for which he pays with the loss of his job, his pension, and his reputation, and moving on to the aftermath of his daughter’s rape at the hands of three black men who proceed to rob her, steal his car, douse him with spirits, and light him on fire. What’s unsettling about the novel—and it is a profoundly unsettling novel—is that its structure implies that everything that David and Lucy suffer flows from his original offense of lusting after a young black woman. This woman, Melanie, is twenty years old, and though she is clearly reluctant at first to have sex with her teacher there’s never any force involved. At one point, she shows up at David’s house and asks to stay with him. It turns out she has a boyfriend who is refusing to let her leave him without a fight. It’s only after David unheroically tries to wash his hands of the affair to avoid further harassment from this boyfriend—while stooping so low as to insist that Melanie make up a test she missed in his class—that she files a complaint against him.

            David immediately comes clean to university officials and admits to taking advantage of his position of authority. But he stalwartly refuses to apologize for his lust, or even for his seduction of the young woman. This refusal makes him complicit, the novel suggests, in all the atrocities of colonialism. As he’s awaiting a hearing to address Melanie’s complaint, David gets a message:

On campus it is Rape Awareness Week. Women Against Rape, WAR, announces a twenty-four-hour vigil in solidarity with “recent victims”. A pamphlet is slipped under his door: ‘WOMEN SPEAK OUT.’ Scrawled in pencil at the bottom is a message: ‘YOUR DAYS ARE OVER, CASANOVA.’ (43)

During the hearing, David confesses to doctoring the attendance ledgers and entering a false grade for Melanie. As the attendees become increasingly frustrated with what they take to be evasions, he goes on to confess to becoming “a servant of Eros” (52). But this confession only enrages the social sciences professor Farodia Rassool:

Yes, he says, he is guilty; but when we try to get specificity, all of a sudden it is not abuse of a young woman he is confessing to, just an impulse he could not resist, with no mention of the pain he has caused, no mention of the long history of exploitation of which this is part. (53)

There’s also no mention, of course, of the fact that David has already gone through more suffering than Melanie has, or that her boyfriend deserves a great deal of the blame, or that David is an individual, not a representative of his entire race who should be made to answer for the sins of his forefathers.
From the movie version of Disgrace

            After resigning from his position in disgrace, David moves out to the country to live with his daughter on a small plot of land. The attack occurs only days after he’s arrived. David wants Lucy to pursue some sort of justice, but she refuses. He wants her to move away because she’s clearly not safe, but she refuses. She even goes so far as to accuse him of being in the wrong for believing he has any right to pronounce what happened an injustice—and for thinking it is his place to protect his daughter. And if there’s any doubt about the implication of David’s complicity, she clears it up. As he’s pleading with her to move away, they begin talking about the rapists’ motivation. Lucy says to her father,

When it comes to men and sex, David, nothing surprises me anymore. Maybe, for men, hating the woman makes sex more exciting. You are a man, you ought to know. When you have sex with someone strange—when you trap her, hold her down, get her under you, put all your weight on her—isn’t it a bit like killing? Pushing the knife in; exiting afterwards, leaving the body behind covered in blood—doesn’t it feel like murder, like getting away with murder? (158)

The novel is so engrossing and so disturbing that it’s difficult to tell what the author’s position is vis-à-vis his protagonist’s degradation or complicity. You can’t help sympathizing with him and feeling his treatment at the hands of Melanie, Farodia, and Lucy is an injustice. But are you supposed to question that feeling in light of the violence Melanie is threatened with and Lucy is subjected to? Are you supposed to reappraise altogether your thinking about the very concept of justice in light of the atrocities of history? Are we to see David Lurie as an individual or as a representative of western male colonialism, deserving of whatever he’s made to suffer and more?
From The Crucible

            Personally, I think David Lurie’s position in Disgrace is similar to that of John Proctor in The Crucible (although this doesn’t come out nearly as much in the movie version). And it’s hard not to see feminism in its current manifestations—along with Marxism and postcolonialism—as a pernicious new breed of McCarthyism infecting academia and wreaking havoc with men and literature alike. It’s really no surprise that the most significant developments in the realm of narratives lately haven’t occurred in novels at all. Insofar as the cable series contributing to the new golden age of television can be said to adhere to a formula, it’s this: begin with a badass male lead who doesn’t apologize for his own existence and has no qualms about expressing his feelings toward women. As far as I know, these shows are just as popular with women viewers as they are with the guys.

            When David first arrives at Lucy’s house, they take a walk and he tells her a story about a dog he remembers from a time when they lived in a neighborhood called Kenilworth.

It was a male. Whenever there was a bitch in the vicinity it would get excited and unmanageable, and with Pavlovian regularity the owners would beat it. This went on until the poor dog didn’t know what to do. At the smell of a bitch it would chase around the garden with its ears flat and its tail between its legs, whining, trying to hide…There was something so ignoble in the spectacle that I despaired. One can punish a dog, it seems to me, for an offence like chewing a slipper. A dog will accept the justice of that: a beating for a chewing. But desire is another story. No animal will accept the justice of being punished for following its instincts.

Lucy breaks in, “So males must be allowed to follow their instincts unchecked? Is that the moral?” David answers,

No, that is not the moral. What was ignoble about the Kenilworth spectacle was that the poor dog had begun to hate its own nature. It no longer needed to be beaten. It was ready to punish itself. At that point it would be better to shoot it.

“Or have it fixed,” Lucy offers. (90)

Also read The Adaptive Appeal of Bad Boys

Why I Am Not a Feminist—and You Shouldn’t Be Either, Part 3: Engendering Gender Madness


             "As a professional debunker I feel like I know bunk when I see it, and Wertheim has well captured the genre: 'In all likelihood there will be an abundant use of CAPITAL LETTERS and exclamation points!!! Important sections will be underlined or bolded, or circled, for emphasis.'"


This is from Skeptic editor Michael Shermer's review of a book on the demarcation problem, the thorny question of how to recognize whether ideas are revolutionary or just, well, bunk. Obviously, if someone's writing begs for attention in a way that seems meretricious or unhinged, you're likely dealing with a bunk peddler. What to make, then, of these lines, to which I have not added any formatting?

"Honestly, I can’t think of a better way to make a girl in grade school question whether she’ll have any interest in or aptitude for science than to present her with a 'science for girls' kit."

"And, science kits that police these gender stereotypes run the risk of alienating boys from science, too."

"I really don’t think that science kits should be segregated by gender, but if you are going to segregate them at least make the experiments for girls NOT SO LAME."

"If girls are at all interested in science, then it must be in a pretty, feminine way that reinforces notions of beauty. It’s mystical. The chemistry of perfumery is hidden behind 'perfection.' But boys get actual physics and chemistry—just like that, with no fancy modifiers. This division is NOT okay..."

To the first, I’d say, really? You must have a very limited imagination. To the second, I’d say, really? Isn’t “police” a strong term for science kits sold at a toy store? I agree with the third, but I think the author needs to settle down. And to the fourth, I’d say, well, if the kids really want kits of this nature—and if they don’t want them, the manufacturer won’t be offering them for long—you’d have to demonstrate that they actually cause some harm before you can say, in capitals or otherwise, that they’re not okay.

Were these breathless fulminations posted on the pages of some poststructuralist site for feminist rants? The first and second are from philosopher Janet Stemwedel’s blog at Scientific American. The third is from a blog hosted by the American Geophysical Union and was written by geologist Evelyn Mervine. And the fourth is from anthropologist Krystal D’Costa’s blog, also at Scientific American.

           You’d hope these blog posts, as emphatic as they are, would provide links to some pretty compelling research on the dangers of pandering to kids’ and parents’ gender stereotypes. One of the posts has a link to a podcast about research on how vaginas are supposed to smell. Another of Stemwedel’s posts on the issue links to yet another post, by Christie Wilcox, in which she not-so-gently takes the journal Nature to task for publishing what was supposed to be a humorous piece on gender differences. It’s only through this indirect route that you can find any actual evidence—in any of these posts—that stereotyping is harmful. “Reinforcing negative gender stereotypes is anything but harmless,” Wilcox declares. But does humor based on stereotypes in fact reinforce them, or does it make them seem ridiculous? How far are we really willing to go to put a stop to this type of humor? It seems to me that gender and racial and religious stereotypes are the bread-and-butter of just about every comedian in the business. 

            The science Wilcox refers to has nothing to do with humor but instead demonstrates a phenomenon psychologists call stereotype threat. It’s a fascinating topic—really one of the most fascinating in psychology in my opinion. It may even be an important factor in the underrepresentation of women in STEM fields. Still, the connection between research on stereotypes and performance—stereotype boost has also been documented—and humor is tenuous. And the connection with pink and pretty microscopes is even more nebulous.

           Helping women in STEM fields feel more welcome is a worthy cause. Gender stereotypes probably play some role in their current underrepresentation. I take these authors at their word that they routinely experience the ill effects of common misconceptions about women’s cognitive abilities, so I sympathize with their frustration to a degree. I even have to admit that it’s a testament to the success of past feminists that the societal injustices their modern counterparts rail against are so much less overt—so subtle. But they may actually be getting too subtle; decrying them sort of resembles the righteous, evangelical declaiming of conspiracy theorists. If you can imagine a way that somebody may be guilty of reinforcing stereotypes, you no longer even have to shoulder the burden of proving they’re guilty.

          The takeaway from all this righteously indignant finger-pointing is that you should never touch anything with even a remote resemblance to a stereotype. Allow me some ironic capitals of my own: STEREOTYPES BAD!!! This message, not surprisingly, even reaches into realms where a casual dismissal of science is fashionable, and skepticism about the value of empirical research, expressed in tortured prose, is an ascendant virtue—or maybe I have the direction of the influence backward.

           On two separate occasions now, one of my colleagues in the English department has posted the story of a baby named Storm on Facebook. Storm’s parents opted against revealing the newborn’s sex to friends or to anyone outside the immediate family, in order to protect her or him from those nasty stereotypes. In the comments under these links were various commendations and expressions of solidarity. Storm’s parents, most agreed, are heroes. Parents bragged about all their own children’s androgynous behavior, expressing their desire to rub it in the faces of “gender nazis.”

From the Toronto Star
             From what I can tell, Storm’s parents had no idea the story of their unorthodox parenting would go viral, so we probably shouldn’t condemn them for using their child to get media attention. And I don’t think the “experiment,” as some have called it, poses any direct threat to Storm’s psychological well-being. But Storm’s parents are jousting with windmills. They’re assuming that gender is something imposed on children by society—those chimerical gender nazis—through a process called socialization. The really disheartening thing is that even the bloggers at Scientific American make this mistake; they assume that sparkly pink science kits that help girls explore the chemistry of lipstick and perfume send direct messages about who and what girls should be, and that the girls will receive and embrace these messages without resistance, as if the little tykes were noble savages with pristine spirits forever vulnerable to the tragic overvaluing of outward beauty.

            When they’re thinking clearly, all parents know a simple truth that gets completely discounted in discussions of gender—it’s really hard to get through to your kids even with messages you’re sending deliberately and explicitly. The notion that you can accidentally send some subtle cue that’s going to profoundly shape a child’s identity deserves a lot more skepticism than it gets (ask my conservative parents, especially my Catholic mom). This is because identity is something children actively create for themselves, not the sum total of all the cultural assumptions foisted on them as they grow up. Children’s minds are not receptacles for all our ideological garbage. They rummage around for their own ideological garbage, and they don’t just pick up whatever they find lying around.

            Psychologist John Money was a prominent advocate of the theory that gender is determined completely through socialization. So he advised the parents of a six-month-old boy whose penis had been destroyed in a botched circumcision to have the testicles removed as well and to raise the boy as a girl. The boy, David Reimer, never thought of himself as a girl, despite his parents’ and Money’s efforts to socialize him as one. Money nevertheless kept declaring success, claiming Reimer (who was called Brenda at the time) proved his theory of gender development. By age 13, however, the poor kid was suicidal. At 14, he declared himself a boy, and later went on to get further surgeries to reconstruct his genitals. In John Colapinto’s account of his case, As Nature Made Him: The Boy Who Was Raised as a Girl, Reimer says that Money’s ministrations were in no way therapeutic—they were traumatic. Having read about Reimer in Steven Pinker’s book The Blank Slate: The Modern Denial of Human Nature, I thought of John Money every time I came across the term gender nazi in the Facebook comments about Storm (though I haven’t read Colapinto’s book in its entirety and don’t claim to know the case in enough detail to support such a severe charge).

            Reimer’s case is by no means the only evidence that gender identity and gender-typical behavior are heavily influenced by hormones. Psychiatrist William Reiner and urologist John Gearhart report that raising boys (who’ve been exposed in utero to more testosterone) as girls after surgery to remove underdeveloped sex organs tends not to result in feminine behaviors—or even feminine identity. Of the 16 boys in their study, 2 were raised as boys, while 14 were raised as girls. Five of the fourteen remained female throughout the study, but 4 spontaneously declared themselves to be male, and 4 others decided they were male after being informed of the surgery they’d undergone. All 16 of the children displayed “moderate to marked” degrees of male-typical behavior. The authors write, “At the initial assessment, the parents of only four subjects assigned to female sex reported that their child had never stated a wish to be a boy.”

            An earlier study of so-called pseudo-hermaphrodites, boys with a hormone disorder who are born looking like girls but who become more virile in adolescence, revealed that of 18 participants who were raised as girls, all but one changed their gender identity to male. There is also a condition some girls are born with called Congenital Adrenal Hyperplasia (CAH), which is characterized by an increased amount of male hormones in their bodies. It often leads to ambiguous genitalia and the need for surgery. But Sheri Berenbaum and J. Michael Bailey found that in the group of girls with CAH they studied, increased levels of male-typical behavior could not be explained by the degree of genital virilization or the age of surgery. The hormones themselves are the likely cause of the differences.
From Psychology Today and Satoshi Kanazawa

           One particularly fascinating finding about kids’ preferences for toys comes from the realm of ethology. It turns out that rhesus monkeys show preferences for certain types of toys depending on their gender—and they’re the same preferences you would expect. Females will play with either plush dolls or wheeled vehicles, but males are much more likely to go for the cars and trucks. And the difference is even more pronounced in vervet monkeys, with both females and males spending significantly more time with toys we might in other contexts call “stereotypical” for their gender. There’s even some good preliminary evidence that chimpanzees play with sticks differently depending on their gender, with males using them as tools or weapons and females cradling them like babies.

            Are gender roles based solely on stereotypes and cultural contingencies? In The Blank Slate, Pinker excerpts large sections of anthropologist Donald Brown’s inventory of behaviors that have been observed by ethnographers in all cultures that have been surveyed. Brown’s book is called Human Universals, and it casts serious doubt on theories that rule out every factor influencing development except socialization. Included in the inventory: “classification of sex,” “females do more direct child care,” “male and female and adult and child seen as having different natures,” “males more aggressive,” and “sex (gender) terminology is fundamentally binary” (435-8). These observations are based on societies, not individuals, and individuals vary much more dramatically from one to the next. The point isn’t that genes or biology determine behavioral outcomes; the relationship between biology and behavior isn’t mechanistic—it’s probabilistic. But the probabilities tend to be much higher than anyone in English departments assumes—higher even than the bloggers at Scientific American assume.

            Interestingly, even though there are resilient differences in math test scores between boys and girls—with boys’ scores showing the same average but stretching farther at each tail of the bell curve—researchers exploring women’s underrepresentation in STEM fields have ruled out the higher aptitude of a small subset of men as the most important factor. They’ve also ruled out socialization. Reviewing multiple sources of evidence, Stephen Ceci and Wendy Williams find that

            the omnipresent claim that sex differences in mathematics result from early socialization (i.e., parents and teachers inculcating a “math is for boys” attitude) fails empirical scrutiny. One cannot assert that socialization causes girls to opt out of math and science when girls take as many math and science courses as boys in grades K–12, achieve higher grades in them, and major in college math in roughly equal numbers to males. Moreover, survey evidence of parental attitudes and behaviors undermines the socialization argument, at least for recent cohorts. (3)

If it’s not ability, and it’s not socialization, then how do we explain the greater desire on the part of men to pursue careers in math-intensive fields? Ceci and Williams believe it’s a combination of divergent preferences and the biological constraints of childbearing. Women tend to be more interested in social fields, while men prefer fields focused on objects and abstractions. However, girls with CAH show preferences closer to those of boys. (Cool, huh?)

  Ceci and Williams also point out that women who excel at math tend to score highly in tests of verbal reasoning as well, giving them more fields to choose from. (A recent longitudinal study replicates this finding - 3-26-2013.) This is interesting to me because if women are more likely to pursue careers dealing with people and words, they’re also more likely to be exposed to the strain of feminism that views science as just another male conspiracy to justify and perpetuate the patriarchal status quo. Poststructuralism and New Historicism are all the rage in the English department I study in, and deconstructing scientific texts is de rigueur. Might Derrida, Lacan, Foucault, and all their feminist successors be at fault for women’s underrepresentation in STEM fields at least as much as toys and stereotypes?

            I have little doubt that if society were arranged to optimize women’s interest in STEM fields they would be much better represented in them. But society isn’t a very easy thing to manipulate. We have to consider the possibility that the victory would be Pyrrhic. In any case, we should avoid treating children like ideological chess pieces. There’s good evidence that we couldn’t keep little kids from seeking gender cues even if we tried, and trying strikes me as cruel. None of this is to say that biology determines everything, or that gender role development is simple. In fact, my problem with the feminist view of gender is that it’s far too crude to account for such a complex phenomenon. The feminists are armchair pontificators at best and conspiracy theorists at worst. They believe stereotypes can only be harmful. That’s akin to saying that the rules of grammar serve solely to curtail our ability to freely express ourselves. While grammar need not be as rigid as many once believed, doing away with it altogether would reduce language to meaningless babble. Humans need stereotypes and roles. We cannot live in a cultural vacuum.

            At the same time, in keeping with the general trend toward tribalism, the feminists’ complaints about pink microscopes are unfair to boys and young men. Imagine being a science-obsessed teenage boy who comes across a bunch of rants on the website for your favorite magazine. They all say, in capital and bolded letters, that suggesting to girls that trying to be pretty is a worthwhile endeavor represents some outrageous offense, that it will cause catastrophic psychological and economic harm to them. It doesn’t take a male or female genius to figure out that the main source of teenage girls’ desire to be pretty is the realization that pretty girls get more attention from hot guys. If a toy can arouse so much ire for suggesting a girl might like to be pretty, then young guys had better control their responses to hot girls—think of the message it sends. So we’re back to the idea that male attraction is inherently oppressive. Since most men can’t help being attracted to women, well, shame on them, right? 


(Full disclosure: probably as a result of a phenomenon called assortative mating, I find ignorance of science to be a huge turn-off.)
Check out part 2 on "The Objectionable Concept of Objectification."
And part 1 on earnings.
These posts have generated pretty lengthy comment threads on Facebook, so stay tuned as well for updates based on my concession of points and links to further evidence.
And, as always, tell me what you think and share this with anyone you think would rip it apart (or anyone who might just enjoy it).
Update: Just a few minutes after posting this, I came across evolutionary psychologist Jesse Bering's Facebook update saying he was being unfairly attacked by feminists for his own Scientific American blog. If you'd like to show your solidarity, go to http://blogs.scientificamerican.com/bering-in-mind/.
Go here to read my response to commenters.

Literature and Rock ’n’ Roll

I ended my Intro Rhet Comp class early on Thursday. I’d scheduled fifteen minutes of the class for discussing the chapter of the textbook assigned for the day, and fifteen minutes after that for a group project based on the reading. After calling on two students randomly and getting a bewildered look from both, I asked for a show of hands to see how many of the eighteen present had read the chapter. One hand went up. Suddenly, the class’s performance on the last paper began to make more sense. Bewildered now myself as to how so many students could be so cavalier about their grades, how they could be content with doing the bare minimum and constantly testing to see if they could get by with even less, I floated the idea of daily quizzes by a few of my fellow TAs who share an office with me during our pointless office hours.

“I don’t see my job as making sure students do the reading,” Darlene said. She’s a returning adult, one of the women who frequently derail classes by straining to apply their personal and family experiences to the questions raised by professors. “I look at the textbook as just a guide they can use if they need it.” It’s her first semester teaching.

A visceral antipathy toward reading is pervasive on our campus. I read it in my students’ faces every time I discuss an upcoming assigned chapter, this look of “Why are you doing this to us?” But I also encounter it in advanced Lit courses. Last week, after having been arranged into groups of five by the professor, I was excited to see what my classmates had to say about Gatsby, an old favorite of mine. Before getting started, though, we did what had become an obligatory round of disclosing how far into the reading each of us had made it. I was the only one in the group who’d actually finished it.

Undeterred, I kept an eye on my classmates’ Blackboard postings over the following days. The few comments that appeared and actually addressed the novel were resoundingly, astonishingly negative. “The characters are superficial and the plot is confusing.” But mostly people wrote about their difficulties making sense of the “tricky” narration—as if Fitzgerald had let them down instead of the other way around. And these are people who profess to enjoy reading.

Seeing those responses was shocking, uncanny, and oddly encouraging. This past summer I finished a draft of a novel and, after asking on Facebook who all was interested in reading it, sent out fifteen copies. My goal was to get as many responses as I could from educated people who weren’t necessarily English or Literature majors. I got three detailed responses. One was from a fellow English grad student. The other two I would see echoed almost word for word two months later in my classmates’ postings on Gatsby. They complained about not getting the characters, but they’d also missed key elements of the plot which would’ve brought the characters into better focus. I could’ve responded to the poor reception—the twelve copies that went unanswered were somehow worse—by blaming the readers. Instead, I became pretty demoralized. Seeing how many people had the same response to Fitzgerald didn’t exactly reassure me of my bright future as a novelist, but it did hang a question mark over what had for those two months been a period.

Still, I have to wonder if literature has become one of those geeky pursuits, like video games, Dungeons and Dragons, or magic, that so many people--males in particular--obsess over, only to experience more and more isolation because of it. At the opposite end of the spectrum lie passions for sports and music, the crowd-pleasing passions, which are really just as pointless except for their wider followings and the social sanction of popularity.

Taking the GRE again after 10 Years

            I had it timed: if I went to the bathroom at 8:25, I’d be finishing up the essay portion of the test about ten minutes after my bladder was full again. Caffeine being essential for me to get into the proper state of mind for writing, I’d woken up to three cans of Diet Mountain Dew and two and a half rather large cups of coffee. I knew I might not get called in to take the test precisely at 8:30, but I figured I could handle the pressure, as it were. The clock in the office of the test center read 8:45 when I walked in. Paperwork, signatures, getting a picture taken, turning out all my pockets (where I managed to keep my three talismans concealed)—by the time I was sitting down in the carrel—in a room that might serve as a meeting place for prisoners and their lawyers—it was after 9:00. And there were still more preliminaries to go through.

            Test takers are allotted 45 minutes for an essay on the “Issue Topic” prompted by a short quote. The “Analysis of an Argument” essay takes a half hour. The need to piss got urgent with about ten minutes left on the clock for the issue essay. By the end of the second essay, I was squirming and dancing and pretty desperate. Of course, I had to wait for our warden to let me out of the testing room. And then I had to halt midway through the office to come back and sign myself out. Standing at the urinal—and standing and standing—I had plenty of time to consider how poorly designed my strategy had been. I won’t find out my scores for the essay portion for ten or so days.
**********************************

            I’ve been searching my apartment for the letter with my official scores from the first time I took the GRE about ten years ago. I’d taken it near the end of the summer, at one of those times in life of great intellectual awakening. With bachelor’s degrees in both anthropology and psychology, and with only the most inchoate glimmerings of a few possible plans for the future, I lived in my dad’s enormous house with some roommates and my oldest brother, who had returned after graduating from Notre Dame and was taking graduate courses at IPFW, my alma mater. I delivered pizzas in the convertible Mustang I bought as a sort of hand-me-down from that same brother. And I spent hours every day reading.

            I’m curious about the specific date of the test because it would allow me to place it in the context of what I was reading. It would also help me ascertain the amount of time I spent preparing. If memory serves, I was doing things like poring over various books by Stephen Jay Gould and Richard Dawkins, trying to decide which one of them knew the real skinny on how evolution works. I think by then I’d read Frank Sulloway’s Born to Rebel, in which he applied complex statistics to data culled from historical samples and concluded that later-born siblings tend to be less conscientious but more open to new ideas and experiences. I was delighted to hear that the former president had read Jared Diamond’s Guns, Germs, and Steel, and thought it tragically unimaginable that the current president would ever read anything like that. At some point, I began circling words I didn’t recognize or couldn’t define so when I was finished with the chapter I could look them up and make a few flashcards.

            I’m not even sure the flashcards were in anticipation of the GRE. Several of my classmates in both the anthropology and psychology departments had spoken to me by then of their dejection upon receiving their scores. I was scared to take it. The trend seemed to be that everyone was getting about a hundred points less on this test than they did on the SAT. I decided I only really cared about the verbal reasoning section, and a 620 on that really wasn’t acceptable. Beyond the flashcards, I got my hands on a Kaplan CD-ROM from a guy at school and started doing all the practice tests on it. The scores it gave me hovered in the mid-600s. It also gave me scads of unfamiliar words (like scad) to put in my stack of flashcards, which grew, ridiculously, to the height of about a foot.

            I don’t remember much about the test itself. It was at a Sylvan Learning Center that closed a while back. One of the reading comprehension excerpts was on chimpanzees, which I saw as a good sign. When I was done, there was a screen giving me a chance to admit I cheated. It struck me as odd. Then came the screen with my scores—800 verbal reasoning. I looked around the room and saw nothing but the backs of silent test-takers. Could this be right? I never ace anything. It sank in when I was sitting down in the Mustang. Driving home on I-69, I sang along to “The Crush” by Dave Matthews, elated.

            I got accepted into MIT’s program in science writing based on that score and a writing sample in which I defended Frank Sulloway’s birth order theory against Judith Rich Harris, the author of The Nurture Assumption, another great book. But Harris’s arguments struck me as petty and somewhat disgraceful. She was engaging in something akin to a political campaign against a competing theory, rather than making a good faith effort to discover the truth. Anyway, the article I wrote got long and unwieldy. Michael Shermer considered it for publication in Skeptic but ultimately declined because I just didn’t have my chops up when it came to writing about science. By then, I was a writer of fiction.

            That’s why, upon discovering how expensive a year in Cambridge would be and how little financial aid I’d be getting, I declined MIT's invitation to attend their program. If being a science writer was my dream, I’d have gone. But I decided to hold out for an acceptance to an MFA program in creative writing. I’d already applied two years in a row before stretching my net to include science writing. But the year I got accepted at MIT ended up being the third year of summary rejection on the fiction front. I had one more year before that perfect GRE score expired.
**********

            Year four went the same way all the other years had gone. I was in my late twenties now and had the feeling that whatever opportunities were once open to me had slipped away. Next came a crazy job at a restaurant—Lucky’s—and a tumultuous relationship with the kitchen manager. After I had to move out of the apartment I shared with her in the wake of our second breakup (there would be a third), I was in a pretty bad place. But I made the smartest decision I’d made in a while and went back to school to get my master’s in English at IPFW.

            The plan was to improve my qualifications for creative writing programs. And now that I’m nearly finished with the program, I’ve put re-taking the GRE at the top of my list of things to do this summer. In the middle of May, I registered to take it on June 22nd. I’d been dreading it ever since my original score expired, but now I was really worried. What would it mean if I didn’t get an 800 again? What if I got significantly lower than that? The MFA programs I’ll be applying to are insanely competitive: between five hundred and a thousand applicants for fewer than a dozen spots. At the same time, though, there was a sense that a lower score would serve as this perfect symbol for just how far I’d let my life go off-track.

            Without much conscious awareness of what I was doing, I started playing out a Rocky narrative, or some story like Muhammad Ali making his comeback after losing his boxing license for refusing to serve in Vietnam. I would prove I wasn’t a has-been, that whatever meager accomplishments I had under my belt weren’t flukes. Last semester I wrote a paper on how to practice to be creative, and one of the books I read for it was K. Anders Ericsson’s The Road to Excellence. So, after signing up for the test I created a regimen of what Ericsson calls “deliberate practice,” based on anticipation and immediate feedback. I got my hands on as many sample items and sample tests as I could find. I made little flashcards with the correct answers on them so the feedback would follow as closely as possible on each hazarded answer. I put hours and hours into it. And I came up with a strategy for each section, and for every possible contingency I could think of. I was going to beat the GRE, again, through sheer force of will.
***********

            The order of the sections is variable. Ideally, the verbal section would have come first after the essay section so I wouldn’t have to budget my stores of concentration. But sitting down again after relieving my bladder I saw the quantitative section appear before me on the screen. Oh well, I planned for this too, I thought. I adhered pretty well to my strategy of working for a certain length of time to see if I could get the answer and then guessing if it didn’t look promising. And I achieved my goal for this section by not embarrassing myself. I got a 650.

            The trouble began almost immediately when the verbal questions started coming. The strategy for doing analogies, the questions I most often missed in practice, was to work out the connection between the top words, “the bridge,” before considering the five word pairs below to see which one has the same bridge. But because the screen was so large, and because I was still jittery from the caffeine, I couldn’t read the first word pair without seeing all the others. I abandoned the strategy with the first question.

            Then disaster struck. I’d anticipated only two sets of reading comprehension questions, but then, with the five-minute warning already having passed, another impossibly long blurb appeared. I resigned myself at that point to having to give up my perfect score. I said to myself, “Just read it quick and give the best answers you can.” I finished the section with about twenty seconds left. At least all the antonyms had been easy. Next came an experimental section I agreed to take since I didn’t need to worry about flagging concentration anymore. For the entire eighteen minutes it took, I sat there feeling completely defeated. I doubt my answers for that section will be of much use.

            Finally, I was asked if I wanted to abandon my scores—a ploy, I’m sure, to get skittish people to pay to take the test twice. I said no, and clicked to see and record my scores. There it was at the top of the screen, my 800. I’d visualized the moment several times. I was to raise one arm in victory—but I couldn’t because the warden would just think I was raising my hand to signal I needed something. I also couldn’t because I didn’t feel victorious. I still felt defeated. I was sure all the preparation I’d done had been completely pointless. I hadn’t boxed. I’d clenched my jaw, bunched up my fist, and brawled.

            I listened to “The Crush” on the way home again, but as I detoured around all the construction downtown I wasn’t in a celebratory mood. I wasn’t elated. I was disturbed. The experience hadn’t been at all like a Rocky movie. It was a lot more like Gattaca. I’d come in, had my finger pricked so they could read my DNA, and had the verdict delivered to me. Any score could have come up on the screen. I had no control over it. That it turned out to be the one I was after was just an accident. A fluke.
**************

            The week before I took the test, I’d met a woman at Columbia Street who used to teach seventh graders. After I told her I taught Intro Comp at IPFW, we discussed how teaching is a process of translation from how you understand something into a language that will allow others who lack your experience and knowledge to understand it. Then you have to add some element of entertainment so you don’t lose their attention. The younger the students, the more patience it takes to teach them. Beginning when I was an undergrad working in the Writing Center, and really picking up pace as I gained more and more experience as a TA, the delight I used to feel in my own cleverness was being superseded by a nagging doubt that I could ever pass along the method behind it to anyone.

            When you’re young (or conservative), it’s easy to look at people who don’t do as well as you with disdain, as if it’s a moral failing on their part. You hold the conviction deep in your gut that if they merely did what you’ve done they’d have what you have or know what you know. Teaching disabuses you of this conviction (which might be why so many teachers are liberal). How many times did I sit with a sharp kid in the writing center trying to explain some element of college writing to him or her, trying to think back to how I had figured it out, and realizing either that I’d simply understood it without much effort or arrived at an understanding through a process that had already failed this kid? You might expect such a realization would make someone feel really brilliant. But in fact it’s humbling. You wonder how many things there are, fascinating things, important things, that despite your own best effort you’ll never really get. Someone, for instance, probably “just gets” how to relay complex information to freshman writers—just gets teaching.

            And if a faculty for perceiving this or understanding that is simply accorded to you, independent of your efforts, then should you ever lose it, your prospects for recreating the same magic are dismal. What can be given can be taken away. Finally, there’s the question of desert. That I can score an 800 on the verbal reasoning section of the GRE is not tied to my effort or to my will. I like to read; I always have. It’s not work to me. My proficiency is morally arbitrary. And yet everyone will say about my accomplishments and accolades, “You deserve it.”

            Really, though, this unsettled feeling notwithstanding, this is some stupid shit to complain about. I aced the GRE—again. It’s time to celebrate.

How to Read Stories—You're probably doing it wrong

There are whole books out there about how to read like a professor or a writer, or how to speed-read and still remember every word. You can discard most of them. Studies have shown speed readers are frauds—the faster they read the less they comprehend and remember. The professors suggest applying the wacky theories they use to write their scholarly articles, theories which serve to cast readers out of the story into some abstract realm of symbols, psychological forces, or politics. I find the endeavor offensive.

Writers writing about how to read like a writer are operating in good faith. They just tend to be a bit deluded. Literature is very much like a magic trick, but of course it’s not real magic. They like to encourage people to stand in awe of great works and great passages—something I frankly don’t need any encouragement to do (what is it about the end of “Mr. Sammler’s Planet”?). But to get to those mystical passages you have to read a lot of workaday prose, even in the work of the most lyrical and crafty writers. Awe simply can’t be used as a reading strategy.

Good fiction is like a magic trick because it’s constructed of small parts that our minds can’t help responding to holistically. We read a few lines and all of a sudden we have a person in mind; after a few pages we find ourselves caring about what happens to this person. Writers often avoid talking about the trick and the methods and strategies that go into it because they’re afraid once the mystery is gone the trick will cease to convince. But even good magicians will tell you that well-performed routines frequently astonish the one performing them. Focusing on the parts does not diminish appreciation for the whole.

The way to read a piece of fiction is to use the information you've already read in order to anticipate what will happen next. Most contemporary stories are divided into several sections, which offer readers the opportunity to pause after each one and reflect on how it may fit into the whole of the work. The author had a purpose in including each section: furthering the plot, revealing the character’s personality, developing a theme, or playing with perspective. Practice posing these questions to yourself at the end of each section: what has the author just done, and what does it suggest she’ll likely do in the sections to come?

In the early sections, questions will probably be general: What type of story is this? What type of characters are these? But by the time you reach about the two-thirds point they will be much more specific: What’s the author going to do with this character? How is this tension going to be resolved? Efforts to classify and anticipate the elements of the story will, if nothing else, lead to greater engagement with it. Every new character should be memorized—even if doing so requires a mnemonic (practice coming up with one on the fly).
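If it helps to see the routine concretely, here is a minimal sketch of a section-by-section reading log. It is my own illustration, not something any of the books above prescribe; the class, its field names, and the sample entries are all invented for the example.

    # A toy log for the anticipation routine: record what the author just did,
    # hazard a prediction, then fill in what actually happened and compare.
    from dataclasses import dataclass, field

    @dataclass
    class SectionNote:
        what_author_did: str                 # e.g., "introduced the narrator's rival"
        prediction: str                      # what you expect in the sections to come
        actual: str = ""                     # filled in after reading the next section
        new_characters: dict = field(default_factory=dict)  # name -> mnemonic

    notes = []
    notes.append(SectionNote(
        what_author_did="established the narrator as an outsider at the party",
        prediction="the host will seek the narrator out privately",
        new_characters={"Jordan": "picture a pro golfer lounging on the couch"},
    ))

    # After the next section, record the author's actual move and compare.
    notes[-1].actual = "the host sends over a formal invitation instead"
    for note in notes:
        print("Predicted:", note.prediction, "| Actual:", note.actual)

The point of writing it down, even this crudely, is just to make the prediction explicit enough that the comparison afterward means something.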

The larger goal, though, is a better understanding of how the type of fiction you read works. Your efforts to place each part into the context of the whole will, over time, as you read more stories, give you a finer appreciation for the strategies writers use to construct their work, one scene or one section at a time. And as you try to anticipate the parts to come from the parts you’ve read you will be training your mind to notice patterns, laying down templates for how to achieve the types of effects—surprise, emotional resonance, lyricism, profundity—the author has accomplished.

By trying to get ahead of the author, as it were, you won’t be learning to simply reproduce the same effects. By internalizing the strategies, making them automatic, you’ll be freeing up your conscious mind for new flights of creative re-working. You’ll be using the more skilled author’s work to bootstrap your own skill level. But once you’ve accomplished this there’ll be nothing stopping you from taking your own writing to the next level. Anticipation makes reading a challenge in real time—like a video game. And games can be conquered.

Finally, if a story moves you strongly, re-read it immediately. And then put it in a stack for future re-reading.

Productivity as Practice: An Expert Performance Approach to Creative Writing Pedagogy Part 3

Start reading at part one.
            But the question of what standards of success the instructor is to apply to students’ work, as well as the ones instructors will encourage the students to apply to each other’s work, has yet to be addressed. The skills students develop through practicing evaluating their own work will both be based on their evaluations of the works of others and be applied to them. The first step toward becoming a creative writer is recognizing how much one likes the writing of another. The work the student is initially exposed to will almost certainly have gone through a complex series of assessments, beginning with the author’s assessment of his own work, moving on to commenters and editors working on behalf of the author, then to editors working on behalf of publishers, and finally to the publishers themselves. Even upon publication, any given work is unlikely to be read by a majority of the readers who appreciate the type of writing it represents until it crosses a critical threshold beyond which its likelihood of becoming recommended reading increases. At some point in the process it may even reach the attention of critics and reviewers, who will themselves evaluate the work either positively or negatively. (This leaves out the roles of branding and author reputation because they probably aren’t practicable skills.) Since feedback cannot be grounded in any absolute or easily measurable criteria, Ericsson advocates a “socially based definition of creativity” (330). And, since students develop their evaluative skills through internalizing and anticipating the evaluations of others, the choice of which workshop to attend is paramount. The student should seek out those most versed in and most appreciative of the type of writing he aspires to master.

            Simply reading theoretical essays on poetry or storytelling, as Vakil has his students do, is probably far less effective than sampling a theorist’s or critic’s work and then trying to anticipate that evaluator’s response to a work he or she has written about. Some critics’ work lends itself to this type of exercise more readily than others; those who focus on literary as opposed to political elements, and those who put more effort into using sound methods to ensure the validity of their psychological or sociological theories—if they must theorize—will be much more helpful than those who see each new work as an opportunity to reiterate their favorite ideas in a fresh context. It may be advisable, in other words, to concentrate on reviewers rather than critics and theorists. After having learned to anticipate the responses of a few reviewers whose work is influential, the student will be better equipped to evaluate his or her own work in terms of how it will be received in the social context that will be the final arbiter of success or failure.

            Anticipation, as it allows for feedback, could form the basis for several types of practice exercises. Ericsson cites his own and others’ research demonstrating that chess players improve not as a function of how much time they spend playing chess but through studying past games between chess masters. “By trying to select the best move for each position of the game,” Ericsson writes, “and comparing their selected move to the actual move of the game, the players identify discrepancies where they must study the chess position more deeply to uncover the reason for the master’s move” (37). In a similar way, pausing in the process of reading to anticipate a successful author’s next move in a story or novel should offer an opportunity for creative writing students to compare their ideas with the author’s. Of course, areas of divergence between the reader’s ideas for a next move and the one the author actually made need not be interpreted as mistakes on the part of the reader—the reader’s idea may even be better. However, in anticipating what will happen next in a story, the student is generating ideas and therefore getting practice in the area of productivity. And, whether or not the author’s ideas are better, the student will develop greater familiarity with her methods through such active engagement with them. Finally, the students will be getting practice evaluating ideas as they compare their own to those of the author.
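            The chess-study loop Ericsson describes transposes to reading with almost no machinery. The sketch below is a toy illustration under my own assumptions: the story data, the prompt, and the crude matching check are inventions for the example, not part of Ericsson’s protocol.

    # Toy anticipate-and-compare loop: read a section summary, guess the author's
    # next move, then see the actual move and flag divergences worth studying.
    sections = [
        {"summary": "A stranger arrives and rents the run-down cottage.",
         "authors_next_move": "the stranger refuses every invitation from the neighbors"},
        {"summary": "The neighbors begin trading theories about the stranger.",
         "authors_next_move": "a child befriends the stranger and learns his name"},
    ]

    divergences = []
    for section in sections:
        print(section["summary"])
        guess = input("What do you think the author does next? ")
        actual = section["authors_next_move"]
        print("Author's actual move:", actual)
        if guess.strip().lower() not in actual.lower():
            # A divergence isn't a mistake; it's the spot to study more deeply.
            divergences.append((guess, actual))

    print(f"{len(divergences)} divergences to revisit against the text.")

As with the chess players, the value lies in the moments of divergence, which mark exactly where the author’s reasoning deserves a closer look.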

            A possible objection to implementing this anticipatory reading method in a creative writing curriculum is that a student learning to anticipate an author’s moves would simply be learning to make moves like the ones that author makes—which amounts to reproduction, not creativity. Indeed, one of the theories Ericsson has explored to explain how expertise develops posits a sort of rote memorization of strategies and their proper application to a limited set of situations. “For a long time it was believed that experts acquired a large repertoire of patterns,” he explains, “and their superior performance could be attributed to simple pattern matching and recall of previously stored actions from memory in an effortless and automatic manner” (331). If expertise relies on memory and pattern recognition, though, then experts would fare no better in novel situations than non-experts. Ericsson has found just the opposite to be the case.

"Superior expert performers in domains such as music, chess, and medicine can generate better actions than their less skilled peers even in situations they have never directly experienced. Expert performers have acquired refined mental representations that maintain access to relevant information about the situation and support more extensive, flexible reasoning to determine the appropriate actions demanded by the encountered situation." (331)

What the creative writer would be developing through techniques for practice such as anticipation-based reading likely goes beyond a simple accumulation of fixed strategies—a bigger bag of tricks appropriated from other authors. They would instead be developing a complex working model of storytelling as well as a greater capacity for representing and manipulating the various aspects of their own stories in working memory.

            Skepticism about whether literary writing of any sort can be taught—or learned in any mundane or systematic way—derives from a real and important insight: authors are judged not by how well they reproduce the formulas of poetry and storytelling but by how successful they are in reformulating the conventional techniques of the previous generation of writers. No one taught Cervantes his epic-absurd form of parody. No one taught Shakespeare how to explore the inner workings of his characters’ minds through monologues. No one taught Virginia Woolf how to shun external trappings and delve so exquisitely into the consciousness of her characters. Yet observations of where authors came to reside in relation to prevailing literary cultures don’t always offer clues to the mode of transportation. Woolf, for instance, wrote a great deal about the fashion for representing characters through references to their physical appearances and lists of their possessions in her reviews for the Times Literary Supplement. She didn’t develop her own approach oblivious of what she called “materialism”; fully understanding the method, she found it insufficient for what she hoped to accomplish with her own work. And she’d spent a lot of time in her youth reading Shakespeare, with those long, eminently revealing monologues (Wood 110). Supposing creative genius is born of mastery of conventions and techniques and not ignorance of or antipathy toward them, the emphasis on the works of established authors in creative writing pedagogy ceases to savor of hidebound conservatism.

            The general pedagogical outline focusing on practice devoted to productivity, as well as the general approach to reading based on anticipation can be refined to accommodate any student’s proclivities or concerns. A student who wants to develop skill in describing characters’ physical appearances in a way that captures something of the essence of their personalities may begin by studying the work of authors from Charles Dickens to Saul Bellow. Though it’s difficult to imagine how such descriptions might be anticipated, the characters’ later development over the course of the plot does offer opportunities to test predictions. Coming away from studies of past works, the student need not be limited to exercises on blank screens or sheets of paper; practice might entail generating multiple ideas for describing some interesting individual he knows in real life, or describing multiple individuals he knows. He may set himself the task of coming up with a good description for everyone interviewed during the course of a television news program. He can practice describing random people who pass on a campus sidewalk, imagining details of their lives and personalities, or characters in shows and movies. By the time the aspiring author is sitting down to write about her own character in a story or novel, she will all but automatically produce a number of possible strategies for making that character come alive through words, increasing the likelihood that she’ll light on one that resonates strongly, first with her own memories and emotions and then with those of her readers. And, if Simonton’s theory has any validity, the works produced according to this strategy need not resemble each other any more than one species resembles another.

            All of the conventional elements of craft—character, plot, theme, dialogue, point of view, and even higher-order dimensions like voice—readily lend themselves to this qualitative approach to practice. A creative writing instructor may coach a student who wants to be better able to devise compelling plots to read stories recognized as excelling in that dimension, encouraging her to pause along the way to write a list of possible complications, twists, and resolutions to compare with the ones she’ll eventually discover in the actual text. If the student fails to anticipate the author’s moves, she can then compare her ideas with the author’s, giving her a deeper sense of why one works better than the others. She may even practice anticipating the plots of television shows and movies, or trying to conceive of how stories in the news might be rendered as fictional plots. To practice describing settings, students could be encouraged to come up with multiple metaphors and similes based on one set and then another of the physical features they observe in real places. How many ways, a student may be prompted, can you have characters exchange the same basic information in a dialogue? Which ones reveal more of the characters’ personalities? Which ones most effectively reprise and develop the themes you’re working with? Any single idea generated in these practice sessions is unlikely to represent a significant breakthrough. But the more ideas one has, the more likely she is to discover one that seems, to her, capable of garnering wider recognition for its superior quality. The productivity approach can also be applied to revision and would consist of the writer identifying weak passages or scenes in an early draft and generating several new versions of each one so that a single, best version can be chosen for later drafts.

            What I’ve attempted here is a sketch of one possible approach to teaching. It seems that even as many worry about the future of literature, fearing that the growing influence of workshops will lead to insularity and standardization, too few teachers are coming forward with ideas on how to help their students improve, as if whatever methods they admit to using would inevitably lend credence to the image of workshops as assembly lines for the production of mediocre and tragically uninspired poems and short stories. But, if creative writing is in danger of being standardized into obsolescence, the publishing industry is the more likely culprit, as every starry-eyed would-be author knows full well that publication is the one irreducible factor underlying professional legitimacy. And research has pretty thoroughly ruled out the notion that familiarity with the techniques of the masters in any domain inevitably precludes original, truly creative thinking. The general outline for practice based on productivity and evaluation can be personalized and refined in countless ways, and students can be counted on to bring an endless variety of experiences and perspectives to workshops, variety that would be difficult, to say the least, to completely eradicate in the span of the two or three years allotted to MFA programs.

            The productivity and evaluation model for creative writing pedagogy also holds a great deal of potential for further development. For instance, a survey of successful poets and fiction writers asking them how they practice—after providing them a précis of Ericsson’s and Simonton’s findings on what constitutes practice—may lead to the development of an enormously useful and surprising repertoire of training techniques. How many authors engage in activities they think of as simple games or distractions but in fact contribute to their ability to write engaging and moving stories or poems? Ultimately, though, the discovery of increasingly effective methods will rely on rigorously designed research comparing approaches to each other. The popular rankings for MFA programs based on the professional success of students who graduate from them are a step in the direction of this type of research, but they have the rather serious flaw of sampling bias owing to higher ranking schools having the advantage of larger applicant pools. At this stage, though, even the subjective ratings of individuals experimenting with several practice techniques would be a useful guide for adjusting and refining teaching methods.

            Applying the expert performance framework developed by Ericsson, Simonton, Csikszentmihalyi, and their colleagues to creative writing pedagogy would probably not drastically revolutionize teaching and writing practices. It would rather represent a shift in focus from the evaluation-heavy workshop model onto methods for generating ideas. And of course activities like brainstorming and free writing are as old as the hills. What may be new is the conception of these invention strategies as a form of practice to be engaged in for the purpose of developing skills, and the idea that this practice can and should be engaged in independent of any given writing project. Even if a writing student isn’t working on a story or novel, even if he doesn’t have an idea for one yet, he should still be practicing to be a better storyteller or novelist. It’s probably the case, too, that many or most professional writers already habitually engage in activities fitting the parameters of practice laid out by the expert performance model. Such activities probably already play at least some role in classrooms. Since the basic framework can be tailored to any individual’s interests, passions, weaknesses, and strengths, and since it stresses the importance and quantity of new ideas, it’s not inconceivable that more of this type of practice will lead to greater as opposed to less originality.

Productivity as Practice: An Expert Performance Approach to Creative Writing Pedagogy Part 2


Begin with part one.
            Of course, productivity alone cannot account for impactful ideas and works; at some point the most promising ideas must be culled from among the multitude. Since foresight seems to play little if any role in the process, Simonton, following D.T. Campbell, describes it as one of “blind variation and selective retention” (310). Simonton thus theorizes that creativity is Darwinian. Innovative and valuable ideas are often born of non-linear or “divergent” thinking, which means their future use may not be at all apparent when they are originally conceived. So, Csikszentmihalyi follows his advice to produce multiple ideas with the suggestion, “Try to produce unlikely ideas” (369). Ignoring future utility, then, seems to be important for the creative process, at least until the stage is reached when one should “Shift from openness to closure.” Csikszentmihalyi explains:

"Good scientists, like good artists, must let their minds roam playfully or they will not discover new facts, new patterns, new relationships. At the same time, they must also be able to evaluate critically every novelty they encounter, forget immediately the spurious ones, and then concentrate their minds on developing and realizing the few that are promising." (361)

So, two sets of skills appear to lie at the heart of creative endeavors, and they suggest themselves as focal areas for those hoping to build on their talents. In the domain of creative writing, it would seem the most important things to practice are producing multiple and unlikely ideas, and evaluating those ideas to see which are the most viable.

            The workshop method prevalent in graduate writing programs probably involves at least some degree of practice in both of these skills. Novelist and teacher Ardashir Vakil, in his thoughtful and candid essay, “Teaching Creative Writing,” outlines what happens in his classrooms.

"In the first part of the workshop students are given writing exercises. These vary from the most basic—write about a painful experience from your childhood—to more elaborate games in which you get pairs to tell stories to each other and then write the other person’s story with some bits invented. Then we look at texts by established writers and try to analyse what makes them work—what has the writer done?—in terms of character, language, voice and structure to capture our attention and how have they brought forth a visceral emotional response from the reader." (158)

            These exercises amount to efforts to generate ideas by taking inspiration from life, the writer’s or someone else’s, or from the work of successful writers. The analysis of noteworthy texts also shades into the practice of evaluating ideas and methods. Already, though, it seems the focus leans more toward evaluation, the ability to recognize useful ideas, than productivity. And this emphasis becomes even more pronounced as the course progresses.

"Along the way, we read essays, interviews and extracts by established writers, reflecting on their practice. Sometimes I use an essay by Freud, Bakhtin or Benjamin on theories of storytelling. Finally, there is a group workshop in which we read and discuss each others’ writing. Each week, someone offers up a story or a few poems or an extract to the group, who go away and read, annotate and comment on it." (158)

Though the writers’ reflections on their practices may describe methods for generating ideas, those methods don’t seem to comprise an integral part of the class. Vakil reports that “with minor variations” this approach is common to creative writing programs all over England and America.

            Before dismissing any of these practices, though, it is important to note that creative writing must rely on skills beyond the production and assessment of random ideas. One could have a wonderful idea for a story but not have the language or storytelling skills necessary to convey it clearly and movingly. Or one may have a great idea for how to string words together into a beautiful sentence but lack any sense of how to fit it into a larger plot or poem. In a critique of the blind-variation and selective-retention model, Ericsson points out that productivity in terms of published works, which is what Simonton used to arrive at his equal odds rule, takes a certain level of expertise for granted. Whether students learn to develop multiple new ideas by writing down each other’s stories or not, it is likely important that they get practice with the basic skills of stringing words together into narratives. As Ericsson explains, “Unless the individual has the technical mastery to develop his or her ideas or products fully, it is unlikely that judges will be able to recognize their value and potential” (330). Though mere productivity may be what separates the professional from the game-changer, to get to the level of expertise required to reliably produce professional-grade work of any quality takes more than a bunch of blindly conceived ideas. As if defending Vakil’s assigned reading of “established writers,” Ericsson argues that “anyone interested in being able to anticipate better what is valued by experts in a domain should study the teachings and recognized masterpieces of master teachers in that domain” (330).

            Part of the disagreement between Simonton and Ericsson stems from their focusing on different levels of the creative process. In the domain of creative writing, even the skills underlying “technical mastery” are open to revision. Writers can—and certainly do—experiment with every aspect of storytelling from word choice and syntax at the fundamental level to perspective and voice at higher-order levels. The same is true for poetry. Assuming the equal odds principle can be extrapolated into each of these levels, teachers of creative writing might view their role as centering on the assessment of their students’ strengths and weaknesses and thenceforth encouraging them to practice in the areas where their skills are weakest. “Develop what you lack” (360) is another of Csikszentmihalyi’s prescriptions. The teacher also has at her disposal the evaluations of each student’s classmates, which she might collate into a more unified assessment. Rather than focusing solely on a student’s text, then, the teacher could ask for the class’s impressions of the writer’s skills as evidenced by the text in relation to others offered by that student in the past. Once a few weaknesses have been agreed upon, the student can then devote practice sessions to generating multiple ideas in that area and subsequently evaluating them with reference to the works of successful authors.

            The general outline for creative writing workshops based on the expert performance framework might entail the following: Get the workshop students to provide assessments, based on each of the texts submitted for the workshop, of their colleagues’ strengths and weaknesses to supplement those provided by the instructor. If a student learns from such assessments that, say, his plots are engaging but his characters are eminently forgettable, he can then devote practice sessions to characterization. These sessions should consist of 1) studying the work of authors particularly strong in this area, 2) brainstorming exercises in which the student generates a large number of ideas in the area, and 3) an exercise at a later time involving a comparative assessment of the ideas produced in the prior stage. This general formula can be applied to developing skills in any aspect of creative writing, from word choice and syntax to plot and perspective. As the workshop progresses and the student receives more feedback from the evaluations, he will get better at anticipating the responses of the instructor and his classmates, thus honing the evaluative skills necessary for the third practice phase.
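            For concreteness, the three-phase session just outlined might be jotted down as something like the sketch below, for a hypothetical student whose diagnosed weakness is characterization. The phase labels, counts, and criteria are placeholders of my own, not prescriptions drawn from Ericsson or anyone else.

    # A minimal, illustrative plan for one practice cycle in the three-phase formula.
    practice_plan = {
        "diagnosed_weakness": "characterization",
        "phase_1_study": [
            "reread two character introductions by an author judged strong in this area",
        ],
        "phase_2_produce": {
            "task": "generate brief character sketches of people observed this week",
            "target_count": 20,   # quantity first; quality gets judged later
        },
        "phase_3_evaluate": {
            "when": "a later session",   # delaying the comparison keeps the phases distinct
            "criteria": ["memorability", "economy", "resonance with the studied models"],
        },
    }

    for phase, details in practice_plan.items():
        print(phase, "->", details)

Nothing hangs on the particular numbers; the point is only that production and evaluation are scheduled as separate sessions, with study of stronger work feeding both.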

            The precedence of quantity of ideas over quality may be something of a dirty little secret for those with romantic conceptions of creative writing. Probably owing to these romantic or mystical notions about creativity, workshops focus on assessment and evaluation to the exclusion of productivity. One objection to applying the productivity approach within the expert performance framework likely to be leveled by those with romantic leanings is that it ignores the emotional aspects of creative writing. Where in the process of developing a slew of random words and images and characters does the heart come in? Many writers report that they are routinely struck with ideas and characters—some of whom are inspired by real people—that they simply have to write about. And these ideas that come equipped with preformed emotional resonance are the same ones that end up striking a chord with readers. Applying a formula to the process of conceiving ideas, even assuming it doesn’t necessarily lead to formulaic works, might simply crowd out or somehow dissipate the emotional pull of these moments of true inspiration.

            This account may, however, go further toward supporting the expert performance methods than casting doubt on them. For one, it leaves unanswered the question of how many ideas the writer was developing when the big inspiration struck. How many other real people had the author considered as possible characters before lighting on the one deemed perfect? Like the dreamer who becomes convinced of her dreams’ prophetic powers by dint of forgetting the much greater proportion of dreams that don’t come true, these writers are likely discounting a large number of ideas they generated before settling on the one with the greatest potential. Far from being completely left out of the training methods, emotional resonance can just as easily take ascendant priority as an evaluative criterion. It can even play a central role in the other two phases of practice.

            If the student reads a passage from another author’s work that evokes a strong emotion, she can then analyze the writing to see what made it so powerful. Also, since the emotion the passage evoked will likely prime the reader’s mind to recall experiences that aroused similar feelings—memories which resonate with the passage—it offers an opportunity for brainstorming ideas linked with those feelings, which will in turn have greater potential for evoking them in other readers. And of course the student need not limit herself to memories of actual events; she can elaborate on those events or extemporize to come up with completely new scenes. The fear that a cognitive exercise must preclude any emotional component is based on a false dichotomy between feeling and thought and the assumption that emotions are locked off from thinking and thus not amenable to deliberate training. Any good actor can attest that this assumption is wrong.

Productivity as Practice: An Expert Performance Approach to Creative Writing Pedagogy Part 1

            Much of the pedagogy in creative writing workshops derives solely from tradition and rests on the assumption that the mind of the talented writer will adopt its own learned practices in the process of writing. The difficult question of whether mastery, or even expertise, can be inculcated through any process of instruction, and the long-standing tradition of assuming the answer is an only somewhat qualified “no”, together constitute just one of several impediments to developing an empirically supported set of teaching methods for aspiring writers. Even the phrase, “empirically supported,” conjures for many the specter of formula, which they fear students will be encouraged to apply to their writing, robbing the products of some mysterious and ineffable quality of freshness and spontaneity. Since the criterion of originality is only one of several that are much easier to recognize than they are to define, the biggest hindrance to moving traditional workshop pedagogy onto firmer empirical ground may be the intractability of the question of what evaluative standards should be applied to student writing. Psychologist K. Anders Ericsson’s central finding in his research on expert achievement is that what separates those who attain a merely sufficient level of proficiency in a performance domain from those who reach higher levels of excellence is the amount of time devoted over the course of training to deliberate practice. But, in a domain with criteria for success that can only be abstractly defined, like creative writing, what would constitute deliberate practice is as difficult to describe in any detail as the standards by which work in that domain is evaluated.

            Paul Kezle, in a review article whose title, “What Creative Writing Pedagogy Might Be,” promises more than the conclusions deliver, writes, “The Iowa Workshop model originally laid out by Paul Engle stands as the pillar of origination for all debate about creative writing pedagogy” (127). This model, which Kezle describes as one of “top-down apprenticeship,” involves a published author who’s achieved some level of acclaim—usually commensurate to the prestige of the school housing the program—whose teaching method consists of little more than moderating evaluative class discussions on each student’s work in turn. The appeal of this method is two-fold. As Shirley Geok-lin Lim explains, it “reliev[es] the teacher of the necessity to offer teacher feedback to students’ writing, through editing, commentary, and other one-to-one, labor intensive, authority-based evaluation” (81), leaving the teacher more time to write his or her own work as the students essentially teach each other and, hopefully, themselves. This aspect of self-teaching is the second main appeal of the workshop method—it bypasses the pesky issue of whether creative writing can be taught, letting the gates of the sacred citadel of creative talent remain closed. Furthermore, as is made inescapably clear in Mark McGurl’s book The Program Era, which tracks the burgeoning of creative writing programs as their numbers go from less than eighty in 1975 to nearly nine hundred today, the method works, at least in terms of its own proliferation.

            But what, beyond enrolling in a workshop, can a writer do to get better at writing? The answer to this question, assuming it can be reliably applied to other writers, holds the key to answering the question of what creative writing teachers can do to help their students improve. Lim, along with many other scholars and teachers with backgrounds in composition, suggests that pedagogy needs to get beyond “lore,” by which she means “the ad hoc strategies composing what today is widely accepted as standard workshop technique” (79). Unfortunately, the direction these theorists take is forbiddingly abstruse, focusing on issues of gender and ethnic identity in the classroom, or the negotiation of power roles (see Russel 109 for a review.) Their prescription for creative writing pedagogy boils down to an injunction to introduce students to poststructuralist ways of thinking and writing. An example sentence from Lim will suffice to show why implementing this approach would be impractical:

"As Kalamaras has argued, however, collective identities, socially constructed, historically circumscribed, uniquely experienced, call for a “socially responsible” engagement, not only on the level of theme and content but particularly on that of language awareness, whether of oral or dialectic-orthographic “voice,” lexical choice, particular idiolect features, linguistic registers, and what Mikhail Bakhtin called heteroglossic characteristics." (86)

Assuming the goal is not to help marginalized individuals find a voice and communicate effectively and expressively in society but rather to help a group of students demonstrating some degree of both talent and passion in the realm of creative writing to reach the highest levels of success possible—or even simply to succeed in finding a way to get paid for doing what they love—arcane linguistic theories are unlikely to be of much use. (Whether they’re of any real use even for the prior goal is debatable.)

            Conceiving of creative writing as the product of a type of performance demanding several discrete skills, at least some of which are improvable through training, brings it into a realm that has been explored with increasing comprehensiveness and with ever more refined methods by psychologists. While University of Chicago professor Mihaly Csikszentmihalyi writes about the large group of highly successful people in creative fields interviewed for his book Creativity: Flow and the Psychology of Discovery and Invention as if they were a breed apart, even going so far as to devote an entire chapter to “The Creative Personality,” and in so doing reinforcing the idea that creative talent is something one is simply born with, he does manage to provide several potentially useful strategies for “Enhancing Personal Creativity” in a chapter by that name. “Just as a physician may look at the physical habits of the most healthy individuals,” Csikszentmihalyi writes, “to find in them a prescription that will help everyone else to be more healthy, so we may extract some useful ideas from the lives of a few creative persons about how to enrich the lives of everyone else” (343). The aspirant creative writer must understand, though, that “to move from personal to cultural creativity one needs talent, training, and an enormous dose of good luck” (344). This equation, as it suggests only one variable amenable to deliberate effort, offers a refinement to the question of what an effective creative writing pedagogy might entail. How does one train to be a better writer? Training as a determining factor underlying exceptional accomplishments is underscored by Ericsson’s finding that “amount of experience in a domain is often a weak predictor of performance” (20). Simply writing poems and stories may not be enough to ensure success in the realm of creative writing, especially considering the intense competition evidenced by those nearly nine hundred MFA programs.

            Because writing stories and poems seldom entails a performance in real time, but instead involves multiple opportunities for inspiration and revision, the distinction Ericsson found between simply engaging in an activity and training for it may not be as stark for creative writing. Writing and training may overlap if the tasks involved in writing meet the requirements for effective training. Having identified deliberate practice as the most important predictor of expert performance, Ericsson breaks the concept down into three elements: “a well-defined task with an appropriate level of difficulty for the particular individual, informative feedback, and opportunities for repetition and corrections of errors” (21). Deliberate practice requires immediate feedback on performance. In a sense, success can be said to multiply in direct proportion to the accumulation of past failures. But how is a poet to know if the line she’s just written constitutes a success or failure? How does a novelist know if a scene or a chapter bears comparison to the greats of literature?

            One possible way to get around the problem of indefinable evaluative standards is to focus on quantity instead of quality. Ericsson’s colleague, Dean Simonton, studies people in various fields in which innovation is highly valued in an attempt to discover what separates those who exhibit “received expertise,” mastering and carrying on dominant traditions in arts or sciences, from those who show “creative expertise” (228) by transforming or advancing those traditions. Contrary to the conventional view that some individuals possess a finely attuned sense of how to go about producing a successful creative work, Simonton finds that what he calls “the equal odds rule” holds in every creative field he’s studied. What the rule suggests is “that quality correlates positively with quantity, so that creativity becomes a linear statistical function of productivity” (235). Individuals working in creative fields can never be sure which of their works will have an impact, so the creators who have the greatest impact tend to be those who produce the greatest number of works. Simonton has discovered that this rule holds at every stage in the individual’s lifespan, leading him to conclude that success derives more from productivity and playing the odds than from sure-footed and far-seeing genius. “The odds of hitting a bull’s eye,” he writes, “is a probabilistic function of the number of shots” (234). Csikszentmihalyi discovered a similar quantitative principle among the creative people he surveyed; part of creativity, he suggests, is having multiple ideas where only one seems necessary, leading him to the prescription for enhancing personal creativity, “Produce as many ideas as possible” (368).
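            One way to notate the equal odds rule, in my own shorthand rather than a formula Simonton gives, is to let N_i stand for the number of works creator i produces and H_i for the number that turn out to be hits:

    H_i \approx p \cdot N_i, \qquad \text{so that} \qquad \frac{H_i}{N_i} \approx p

with p roughly constant across creators and across career stages. On this reading, eminence tracks N_i, the sheer number of shots taken, rather than any superior hit rate.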

Some Concerns I have about the Class I Teach


I consider it my task as an Intro to Rhetoric and Composition teacher to help my students learn to communicate in writing. I serve other gods, though. I want to prepare the students for the types of writing they'll be asked to do in other departments. And I have to address the concerns of the senior faculty in the English department. What bothers me is that, despite the reams of research I've read in the field of Rhet Comp, I've seen nothing resembling an empirical assessment of rival teaching methods. Far too often it seems what ends up being stressed in Freshman Rhet Comp classes is simply what's fashionable among Rhet Comp scholars.


Of course, Rhet Comp scholars are, by definition, successful writers, so why shouldn't a writing pedagogy be based on what they concern themselves with? And why not try to equip students with everything that might be of use to them?


Imagine a research program designed to uncover the differences in writing strategies between beginning and expert writers. Upon completion, the researchers provide a list of practices the findings suggest teachers should encourage their students to adopt. Expert writers, for instance, devote much more time to revision than beginners; therefore, students should be exhorted to revise their papers more than they may deem necessary. The pedagogical formula here is to try to make beginning writers behave like expert writers. This idea is likely far too simple, and may actually encumber students on their paths toward expertise more than it helps them. Revision is possible, even irresistible, for expert writers owing to their keen sensibility for what constitutes good writing; insisting that beginners revise calls on them to apply, in assessing their own writing, knowledge and skills they have yet to develop, making revision just another meaningless routine performed to satisfy a teacher.

There is a saying among mixed martial arts trainers that the best way to teach somebody nothing is to try to teach them everything. Attempts to load beginners up with the strategies of experts at too early a stage in their education carry the danger of overwhelming them, leaving them discouraged and in despair as to their chances of ever acquiring that expertise which seems so far beyond their comprehension. Impressing upon students in Intro Comp courses the importance of being attuned to the audience of their writing and the genre conventions by which it will be assessed may speed them along the path toward practices similar to those of more experienced writers, but being forced to consider the added dimensions of audience and genre as they’re struggling to compose grammatical sentences within viable paragraphs will just as likely convince them this writing thing is just too complicated. While it is true that experienced writers routinely consider audience and genre, this observation leaves two important questions unanswered: At what point in their development did they begin to incorporate these considerations into their strategies? And how did they learn to do so?

Instruction, to avoid overwhelming students or saddling them with practices devoid of meaning, must be stage-appropriate, and pedagogy based on research findings on the strategies of experts must consider the possibility that any given practice may emerge, not as a result of direct teaching, but as a byproduct of the refinement of other skills or the general growth of awareness about the discipline. Asking beginning writers to analyze a piece of writing in terms of genre and audience may simply be trying to get them to draw on knowledge they’ve yet to acquire, knowledge they may acquire automatically, without any instruction or encouragement, as they encounter and become more familiar with a greater number of texts in various genres and come to know the types of people, including individual scholars, who will likely be interested in any given work. Writing instructors need to avoid behaving like parents who try to teach their children to walk and talk as early as possible to give them a head start on the road to achievement, even though the timing of first words and first steps seems to emerge independent of parents’ instructions, when the children are ready, and has nothing to do with later proficiency or grace.

What Use is a Memory These Days?

            I like to tell people I continue to work as a waiter because I want to keep one foot in the real world. As stressful and jarring as it is to be thrown into a crowded restaurant on a bustling Saturday night after doing academic stuff all week, I realize at the end of each of those harrowing shifts that there’s nothing that quite matches their demand for fluid intelligence. And I like to add yet another demand: memorizing orders. At my first restaurant job, a fellow server named Becky explained to me once that she never really decided not to write down orders; she just realized at some point she wasn’t even referring to what she’d written when she typed the orders into the computer. Since I was still learning the menu at the time, I suspected Becky might be a genius. But it wasn’t long before I was memorizing my orders too.

            I was still at that first restaurant when I read Mind Performance Hacks and learned about mnemonics like memory palaces and number-rhyme pegs (one-gun, two-shoe…), but I decided against trying to use them at work. Munchies (which later became Luckies) was my brain gym; the idea was to be challenged, not to use shortcuts. Still, I was uncomfortable every time I approached a table with ten or more people, knowing I was good for at best eleven orders. Maybe I could push that number higher, but it would mean getting big tables more often than I could count on. So in the back of my mind I toyed with the possibility of sitting down some day and mastering the memory techniques.

            The day I first attended a class to prepare me for teaching Intro to Rhetoric and Composition courses the professor challenged us to remember the names of all our classmates. Sitting in a circle, we each in turn introduced ourselves and commented on our favorite item of clothing in our wardrobes. I treated it as an order. But I had an extra few seconds for each name, so I went back and reviewed as many prior names as I could before the next person said his or her name. A classmate named Shannon and I were the only two to remember all twenty-two names. (I have no idea how she did it.) Pleased with myself, I figured I’d have no difficulty remembering the names of my students in the future. And I didn’t—until my third semester teaching. That’s when names and faces started to blur and I began to find myself staring at some poor student as I was taking attendance, silently cursing him for being such an undifferentiated mass of human goo.

            For this semester, it was back to Mind Performance Hacks. After learning the twenty-two names on my class roster in about five minutes using a memory palace and celebrities with the same names—without a single rehearsal—I decided I might want to look into these mnemonics after all. First, I memorized a Philip Larkin poem (a day and a half), then fifteen of Arthur Aron’s questions (five minutes), then the geological time scale as it’s printed in my Merriam-Webster’s Collegiate Dictionary (forty-five minutes). I’ve had mixed success (mixed failure) using memory palaces at work. I beat my record of eleven by correctly encoding thirteen orders and matching them to the proper positions at the table. But I’ve also botched a couple of six- and eight-tops. Apparently, the new technique is interfering with old ones I didn’t even know I was using.
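            For anyone who’s never tried the roster trick, the bookkeeping behind it is almost embarrassingly simple. The Python sketch below uses invented names and loci (none from my actual roster or my actual apartment) just to show the mapping; the real work is making each mental picture vivid enough to stick.

# A rough sketch of the memory-palace-plus-celebrity trick, with invented
# student names and loci; walking the route in your head later cues each
# name in order.
loci = ["front door", "coat rack", "hallway mirror", "kitchen sink"]

# Hypothetical roster entries paired with celebrities who share the name.
celebrity = {
    "Shannon": "Shannen Doherty",
    "Michael": "Michael Jordan",
    "Emma": "Emma Watson",
    "Walter": "Walter White",
}

# Park each pairing at the next stop along a route you already know by heart.
for spot, (student, celeb) in zip(loci, celebrity.items()):
    print(f"At the {spot}: picture {celeb}, recall '{student}'")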

            But according to Mind Performance Hacks there’s yet another level beyond memory palaces. Before committing to the Dominic System or the Major System, though, I was anxious to read something I’d come across while browsing Amazon. In one of those bizarre coincidences Jungians and New Agers read signs into, journalist Joshua Foer was publishing a book just as I was deciding to do further research. In a blurb on the back cover of Moonwalking with Einstein: The Art and Science of Remembering Everything, science writer Jonah Lehrer claims Foer’s book “invents a new genre of nonfiction.” Foer himself calls it “participatory journalism.” But the book uncannily resembles Neil Strauss’s foray into the world of pickup artists and the resultant Horatio Alger story in The Game. In place of PUAs, Foer encounters and gets taken under the wings of MAs—Mental Athletes. And he undergoes a transformation from well-educated, above-average geeky guy to U.S. Memory Champion over the course of a year. Even though the best in this country are usually not very competitive on the world stage, Foer’s accomplishment is still absurdly impressive. Like Strauss, though, he insists at every step of the way that getting good, achieving excellence, is only a matter of training and determination. (Largely owing to his picture on the jacket, I kept thinking of Foer as Harry Potter writing about his first year at Hogwarts.)

            Strauss enjoyed the luxury of his topic’s intrinsic fascination, and to a slightly lesser extent this is true for Foer too. I have to say, though, as much as I enjoyed The Game, Moonwalking with Einstein is a much better book. Foer brings a refreshing skepticism to his analysis that’s disturbingly lacking in Strauss’s writing. (Neuro-Linguistic Programming is total bullshit.) Tony Buzan, the leader of the memory renaissance, who put on the first World Championships and is still lobbying to have his methods (most of which are not really his) implemented as part of regular class curricula, jumps from the pages of Foer’s book like a character from a Saul Bellow novel: a complete shyster who, after spouting off a bunch of nonsense, manages to say things that are shockingly profound and, you sense, completely true. We discover too that the mnemonists’ world has an analog to the magic world’s Uri Geller, a guy who uses the standard repertoire of tricks but claims he’s using nothing but his natural gifts. If you’ve seen the documentary Brainman, you know who Foer is talking about.

            Foer is also better at fleshing out some of the underlying philosophical issues. While it’s true The Game explores the question of mastery’s ultimate worth by describing how the pursuit turns a lot of guys into unsavory characters, Strauss’s self-promotion drowns out any meaningful examination of the issue. How could a guy’s life be derailed by his efforts to master the skills involved in seducing a woman? (Sounds like a great idea for a novel—stay tuned.) Foer does better at making the question explicit and trying to work out some answers. Will enhanced memory mean greater wisdom? Does “elaborative encoding” detract from the meaning of what’s being memorized? What does it mean that so many of these mnemonic enthusiasts are, as Foer describes them, people who are “indistinguishable from those” you’d find at a “‘Weird Al’ Yankovic (five of spades) concert” (189)? And what role should memory palaces play in education?

            I’m glad I read the book before committing to my Dominic System because it turns out the system has been tweaked. The idea is to come up with a person performing some action for every two-digit combination. Ozzy Osbourne is my 00, and he’s biting the head off a bat. But competitive mnemonists now use the PAO system, which stands for person-action-object. My 00 still works, but several others I’d come up with don’t. With three pieces of information attached to every two-digit number, you can pack six digits into a single composite image: the person from the first pair, the action from the second, the object from the third. MAs memorize multiple decks of cards by placing these images in memory palaces. Anyway, now I can get back to creating my personal image inventory, which incidentally is really hard. Try coming up with a hundred distinct and easily recognizable actions, even without attaching them to people. And what will I do with it when I’m done? Well, there are a lot of things I want to memorize as scaffolds for future learning: the periodic table, a more detailed and up-to-date geological timescale, maybe some U.S. history, etc. But I have to keep in mind that Foer’s book is great because throughout the process of writing it he continued to be a science writer and never fully identified himself as a mnemonist. I too am a writer first. Memorizing is a great first step to learning, but it’s not the ultimate one.
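            Just to make the arithmetic concrete, here’s a minimal Python sketch of how a PAO inventory chunks a number. Only the 00 entry comes from my own list; 07 and 16 are placeholders made up for the example, and a working inventory needs all one hundred.

# A minimal sketch of PAO (person-action-object) encoding. Only three
# two-digit entries are filled in; a full inventory covers 00 through 99.
PAO = {
    "00": ("Ozzy Osbourne", "biting the head off", "a bat"),
    "07": ("James Bond", "shaking", "a martini"),
    "16": ("Abraham Lincoln", "delivering", "a speech"),
}

def encode(digits):
    """Chunk a digit string into composite images, six digits per image:
    person from the first pair, action from the second, object from the third."""
    images = []
    for i in range(0, len(digits) - len(digits) % 6, 6):
        person = PAO[digits[i:i + 2]][0]
        action = PAO[digits[i + 2:i + 4]][1]
        obj = PAO[digits[i + 4:i + 6]][2]
        images.append(f"{person} {action} {obj}")
    return images

# "000716" -> person from 00, action from 07, object from 16:
print(encode("000716"))  # ['Ozzy Osbourne shaking a speech']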

1 More Reason Not to Read

Adam Gopnik, in a recent offering to The New Yorker, does what cognitive scientist Douglas Hofstadter calls jootsing—jumping out of the system—in discussing a slew of books on new media and communication technology. In a brilliant move, he takes a step up and away from the books and tries to see them from a broader perspective rather than simply engaging with each in turn. He groups the authors according to their stances toward the revolution as Never-Betters, Ever-Wasers (what revolution?), and Better-Nevers. I am a Never-Better with qualifications. And this is the group Gopnik dismisses with the least ceremony. The optimistic view “has its excitements,” he writes, “but the history it uses seems to have been taken from the back of a cereal box.” The Never-Betters have a habit of citing the invention of printing as a sudden major leap toward the Enlightenment. But, according to Gopnik, “The idea… that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth” (125). Tyrants and despots were among the first to use the new technology to its fullest potential. Voltaire didn’t come along for another two centuries. On the other hand, today we can cite Tahrir Square.

I like the idea of a “global psyche” (126) some of the more sophisticated Never-Betters put forth because it meshes well with the group-mind concept I come away with after reading books by David Sloan Wilson. We can increase the computing power of our minds exponentially by connecting them with other minds. But the group model suffers the same problem with respect to cognition that it does with respect to adaptation: there’s always the threat of selfish free-riders who detract from the adaptiveness of the higher-order organism. I’m afraid the mechanisms in place to punish propagandists and users who don’t contribute are as yet far from adequate. Even though a group of cooperative minds is vastly superior to any one mind, the group mind suffers dramatically as the quality of the individual brains constituting it deliquesces into narcissistic muck. M. T. Anderson’s novel Feed provides an all-too-recognizable dystopian vision of what the unchecked ascent of marketing can do if the consumerist ethos continues its ruthless campaign against discipline and compassion.

And this is where I begin to sympathize with the Better-Nevers, though I agree with the Ever-Wasers that there have always been old fogies like me grinding their teeth at night about wayward generation nexters (generation text?). Gopnik pulls off the feat of quoting Nicholas Carr (who calls the internet, and the book he wrote about it, The Shallows) at once warmly and condescendingly, to the effect that being online doesn’t draw the same intensity of concentration as reading does. “The medium does matter” (127), Carr insists. (I haven’t read his book yet.) Communication technology also separates us from the flesh-and-blood humans whose avatars we’re communicating with, a phenomenon Sherry Turkle captures in her own wonderful title, Alone Together. It is in discussing these Better-Nevers’ concerns that Gopnik starts to piss me off a bit. After casually referring to research showing that technological multitasking does indeed have deleterious effects on people’s capacity for concentration and abstract thinking, to surveys showing an alarming decline in college students’ self-assessments of empathy, and to research linking that decline to the coterminous decline in the reading of fiction, Gopnik waves it all away in a single sentence: “But if reading a lot of novels gave you exceptional empathy university English departments should be filled with the most compassionate and generous-minded of souls, and, so far, they are not” (128).

This breezy dismissal pisses me off because distracting us from the research doesn’t change the findings. It also pisses me off because Gopnik is right about English departments (though not, I must admit, about the one in which I study, where all the professors are exceptionally generous and open-minded). He has, in fact, unwittingly stumbled upon a driving force behind the decline in reading independent of technology and consumerism—the scholarly black hole that is Literary Theory (or Theory as it functions in the Humanities in general). As far back as seventh grade I remember being annoyed at how my Lit. teacher, Mrs. Kalb, seemed to want to treat stories and novels as if they were algebraic equations (New Criticism), and this would only get worse as I got further in my education. It wasn’t that I minded close reading; it was that this heightened attention to the details was in the service of a much too mechanistic analysis of the “text,” which became a type of coded message instead of a story.

My Shakespeare teacher as an undergrad (at Purdue) was refreshingly atheoretical—though looking back I remember him pushing some bizarre Freudian readings of “Hamlet.” I later had a class on Literature and Psychology, which focused solely on the theories of Carl Jung. Though both Freud and Jung are archetypal pseudoscientists, I’m glad their theories at least approach characters the way human readers tend to approach them—as actual people, not real, but potentially real, and definitely not as linguistic ciphers. The really appalling theories, the ones falling under the labels of structuralism and poststructuralism, and even, to a somewhat lesser degree, New Historicism, came later in my education. Fortunately, I was inoculated against them through my independent study of science. According to these theories, novels and stories are vessels for the transmission of culture—again, coded messages—and can be “deconstructed” to uncover all the nastiest elements of civilization: patriarchy, oppression, racism, colonialism, ecotoxicity.

At the core of all these theories is the idea that one should read with critical detachment, assessing the narrative in terms of how it functions symbolically, or assessing and resisting the multitudinous ways in which it’s trying to push some “dominant culture” onto you. But for most people it’s a struggle to get into a text in the first place, so when teachers make texts even more difficult by freighting them with these absurd theories and paranoid concerns, they merely make it less likely their students will get anything out of reading.


Literary theory needs to begin with the realization that people don't need theory to understand stories. And critics should endeavor to cast their readers into the stories they're examining, not out of them.

Difficult Reading in the Age of Narcissism

It’s hard to imagine, amid all the commercial clamoring for our attention, with a mediascape in which impossibly attractive women routinely doff their clothes, and some will even have sex on camera, in which the finest specimens of athletes, men (and increasingly women) who take innate gifts, gifts the voyeurs on the other side of the screen can only fantasize about possessing, to peaks probably new to the human race thanks to the advent of nutritional and sports science, struggle against one another in dramatic skirmishes for stakes that dwarf our lifetime net worth—it’s hard to imagine, with all this just a few clicks away, that there could be anything worth wresting our attention away from the screen for, anything worth exercising the discipline involved in actually directing our own attention, in taking charge of what we might take the time to deliberately decide will be gratifying to a part of us deeper and more enduring than the flashy whims of any shallow and single-minded industry, no matter how adept that industry’s executives have grown over the last half decade at giving us no choice in the matter.

I’m reading Virginia Woolf’s To the Lighthouse for the first time. It’s for a graduate course I’m taking on representations of the dead in literature. Though I read Mrs. Dalloway some years ago, I never bothered with Lighthouse because I knew Woolf used stream-of-consciousness narration, and having slogged through Ulysses in my early twenties I was of the mindset that the technique was an experiment that failed outright. Maybe it’s because I’m a more mature reader; maybe it’s that Woolf takes greater care in orienting her readers within the flood of perceptions and emotional turmoil that is her characters’ inner lives than do Joyce or Faulkner in their ever-so-dense works in the same style. To the Lighthouse is exquisite. And every time I lose myself to the tide of impressions and the figurative estrangement of recognizable feelings I want to call everyone I know and insist they all read it. But then reality sets in and I begin sorting through the ranks of my acquaintances for that rare individual who has the patience, and who has managed to develop the sensibility, to appreciate such trifles.

Last night a friend texted me. He couldn’t recall the title of a book he’d read as a teenager, one he wanted to recommend to his daughter. Embarrassingly, I recognized the plot elements he used as clues not from my own reading but from the TV miniseries the book had inspired. One more for my to-read list—there’ll never be a shortage. When I texted him back that I might check out the book when I was done with grad school and all the “heavy lifting” it called for, he confessed he was struggling with Crime and Punishment. “C n P,” I responded, “is on my to-read list too.” He can only take ten or so pages at a time of the great masterpiece. And he takes frequent breaks with the likes of Tom Clancy. No shame in that, I thought; once in a while you have to indulge a guilty pleasure. World War Z anyone?

Empathy is on the wane among American youth. A meta-analysis led by Sara Konrath of the University of Michigan at Ann Arbor and published last August found that college students, whose scores on self-report tests of empathy have been steadily dropping over the past thirty years, have experienced a particularly dramatic decrease in concern for others just in the last decade. Thirty years ago coincides with the ascent of the radically individualistic ideology of the political right. But what has been going on in the past ten years to make the decline accelerate? Well, for one thing, people are reading less fiction. Raymond Mar of York University in Toronto and his colleagues published research last year showing that the number of stories preschoolers have read to them is strongly associated with how well they understand the emotions of others, and that the more fiction adults read, the higher they score on tests of empathy.

In the wake of consciousness-raising efforts like the documentary Super Size Me and the book Fast Food Nation, a culture of foodies has arisen and seems to be growing. The unfettered free market and the ideal of consumerism are bad for our health, it seems. We need to protect our children from marketing and preservatives. But the narcissism at the heart of our economic ideology—the me-first ethos, the trained impulse of what can it do for me now?—corrupts more than just our bodies. We don’t only require nutrition and exercise for our bodies; we need them for our minds as well. And the parts of our minds that have been wasting away in recent decades more than any others are the part that allows us to direct our own attention rather than letting the TV tell us what’s important, and the part that we as a species seven billion strong especially can’t let atrophy, the one that makes us aware not only of the people standing next to and around us but also of the people in Bangladesh who are losing their homes to rising sea levels, the people in Cambodia who go blind making the two-dollar t-shirts we buy at Wal-Mart, the inner-city kids our government can’t afford to keep healthy, much less educate, because our leaders believe they must keep sniffing the farts of billionaires we supposedly rely on to give us jobs but who in reality are employing the most wretched of Cambodians and paying them minuscule fractions of slave wages.

How do we get our lazy asses off the couch of narcissism and instant gratification and whip our minds, turned to mush courtesy of Kim Kardashian and Michael Vick (some role models for empathy, those two), into shape for each other’s sake? As an English teacher, I think I need to take some personal responsibility on this front. There are some basic techniques that can be applied to reading fiction, even difficult fiction, that make it more accessible and more gratifying. First—and I’m addressing this to my colleagues in the department—completely ban from your mind, at least on a first reading, anything you’ve ever learned about literary theory. Most of these theories can easily be shown to lack validity, and people were enjoying fiction long before they were thought up by the cranks who sponsor them. Concentrate instead on the characters and the language. Good writers give us all the clues we need about what kind of people they’re writing about, and our feelings toward these people lie at the heart of our appreciation of their stories. Rather than stand back and try to decode the story as if it were some type of puzzle, allow yourself to connect with or hate the characters—or both, as the best ones never let you quite decide—based on what they do in the story. I knew, for example, that I wanted Llewelyn Moss in Cormac McCarthy’s No Country for Old Men to escape when he went back to the caldera to bring the dying Mexican drug runner some water. That’s a guy you can feel for. That’s a guy who would feel for you.

But a lot of great stories don’t involve the type of fireworks you find in McCarthy or Palahniuk. Nor should they. Most writers considered literary focus on the types of experiences common to us all, only they explore them with uncommon language. This isn’t done for the sake of being impressive; it’s done to make us see these experiences anew—or to encourage us to see them at all, so prone are we these days to discount them, to never even think about them, because we’re busy watching the game or attending to The Situation.

Fault and Default in Teaching Styles


When the chair of the English Department sat across from me in his office and admonished, “It’s not going to be anything romantic, like making some special connection with a bunch of likeminded students; you’re going to be trying to teach a bunch of people who don’t want to learn,” his warning resonated with my college experiences all too well. Of course, I never believed I would be stepping into “Dead Poets Society” and I didn’t really care. I’m a writer, not a teacher. I was just accepting the position to spruce up my CV and get my tuition covered. So, the chair of the department (who I won’t name here) and I had the same attitude toward the material: it’s hard, but it’s worthwhile—show me that you’re willing to put forth the effort or be on your way.

Still, I was a bit disturbed in the middle of my first semester teaching by how apathetic and lethargic my students were. So I went to the office of another professor, Damian Fleming, who had taught a Chaucer course I’d loved the previous semester. “The Canterbury Tales” in the original Middle English could have been drudgery, but Dr. Fleming had a playful approach that made it irresistible—at least to a geek like me. When I described my concern to him, he recognized my predicament immediately. “Do they just sit there with this look on their face like, ‘Why are you doing this to me?’” That was exactly the look I was getting. Dr. Fleming had a few practical suggestions, like having everyone in class speak up, even if it’s just to read out loud, as often as possible. But what really impressed me, and continues to impress me, is that he has come to exemplify for me an attitude toward teaching diametrically opposed to my own default mindset—and he’s a much better teacher.

The attitude Dr. Fleming subcommunicates is that this stuff—Old English, diphthongs, metathesis—may be arcane and boring, but we’re all fun people, so we’re going to have fun with it. Before long, it dawns on you that the material can’t actually be boring if you’re enjoying learning about it. At the same time, you’re seeing evidence of how important everything you’re learning really is every time you read a paper or watch the news—Is English endangered by Spanish-speaking immigrants? Will texts and emails ruin the language?—and suddenly it’s anything but arcane. The difference between Dr. Fleming’s approach and that of the department chair is that the better teacher takes responsibility for student engagement while the lesser teacher places that responsibility solely on the students.

After three semesters of teaching, I’ve realized I’m not content to follow the aloof approach to managing a classroom. I know I can’t rely on my fascinating anecdotes and quirky asides to convince students that what I’m having them do is worthwhile, because my personal style—anyone’s personal style—is hit-or-miss. Some will like it. Some will be put off by it. This is true of the material too; some people will never really get into writing or reading at any point in their lives. But what I can do, and what Dr. Fleming does particularly well, is subcommunicate that the material is interesting and that the tasks are worthwhile. That’s why I’m excited to be reading “Teach Like a Champion” by Doug Lemov. (Don’t let the corny title fool you.)

I’ll probably do a full review of Lemov’s book sometime soon (I’m not finished with it yet), but for now I’ll just relate some of the thoughts and realizations I’ve had as I’ve been reading it. When you get that why-are-you-doing-this-to-me look every day, you unconsciously begin to make deals with the class, conceding, for instance, that the lessons are boring, but promising to get through them as quickly as possible. Or you concede that the work is tedious—and perhaps even pointless—but you promise not to make it any longer or any more complicated than it has to be to meet the standards set for you by the department. The latter is a big problem for me because I often find myself at odds, ideologically, with the professors and heads of the Department of English and Linguistics, a majority of whom are true believers in the vagaries of Rhetoric or the absurdities of Postmodernism. So it’s easy for me to end up in a position where I’m emphasizing that I’m just a student like everyone else in my class, and that I think a lot of what the department promulgates is silly and pointless too. The corollary is: so let’s just work through it all together as quickly and painlessly as possible. Meanwhile the class’s opportunity to experience the best of what writing can be is lost.

“Teach Like a Champion” offers several techniques for getting the class engaged without cutting such deals. No Opt Out, for instance, has teachers come back to students who’ve passed on a question in class, so they learn they can’t simply get away with not paying attention or not doing their homework by pretending not to understand. Right is Right is an important technique for me because I tend to meet students halfway when they’re working through a complicated question, feeding them most of the answer, which robs them of the opportunity to arrive at it completely on their own and breeds laziness by sending the message that if you just try a little, I’ll do most of the work for you. The technique has the teacher insist on a fully right answer before proceeding and resist the urge to fill in the blanks of incomplete answers.

One of the thoughts students wearing that accusing and terrorized expression are probably having is, I don’t feel like I know this stuff very well, so I hope he doesn’t call on me or ask me to do anything. But what if every student in the class knew he or she was going to be called on, knew they’d have to participate and contribute their take on the reading or on the question at hand every day? No Opt Out is good for instilling this expectation and fostering preparedness. But the most important technique on this front in “Teach Like a Champion” is Cold Call, which has you choosing students to call on at random—not punitively, which backfires—sometimes regardless of whose hands are raised. This sends the message that the material is important enough that everyone should be able to answer questions about it, and that the class itself is important enough that you won’t ever be allowed to sail through it daydreaming.

By applying these and other techniques I hope to take responsibility for classroom engagement instead of falling back on my default approach of buddying up and making deals. Maybe my students will come away thinking I’m a martinet (I’d be happy if they used that word), but I hope they also come away having lost just a little certainty in their belief that writing is inherently boring and pointless, something you have to do a little of to graduate but beyond that the domain of other people who are more interested in it. Of course, I don’t expect wild success at the first application of a bunch of techniques I learned from a book (though it does come with a DVD). I do, however, believe the change in mindset will have an effect on its own.