Monday, January 31, 2011

EYES ON THE PRIZE

“Wolf Hall”, by Hilary Mantel, didn’t just win the Man Booker prize last year: it became the fastest-selling Booker winner ever. But behind her triumph, as she reveals in this memoir, lay a complicated relationship with awards ...

From INTELLIGENT LIFE Magazine, Autumn 2010

In 1994 I brought out a novel called “A Change of Climate”, which was shortlisted for a small prize given to books with a religious theme. It was the first time a novel had got so far with the judges, and I was surprised to be in contention. The main characters in my book were Christian missionaries, but after their God had watched unblinking while they endured trials that would shake any faith, their religion became to them no more than a habit, a set of behavioural tics, and in the absence of a belief in a benign universe they carried on grimly trying to be good because they hardly knew how to do anything else.

The winner was to be announced at a low-key gathering at an old-fashioned publishing house near the British Museum. I had never been to a literary party that was anything like this. Some of the invitees seemed to be taking, with shy simpers, their first alcoholic drink of the year. Conversation was a struggle; all we had in common was God. After I didn’t win, I came out into the fine light evening and hailed a cab. What I felt was the usual flatness after a wasted journey; I told myself I hadn’t really expected to win this one. But as we inched through the traffic, a reaction set in. I was swept, I was possessed, by an urge to do something wicked: something truly odious, something that would reveal me as a mistress of moral turpitude and utterly disqualify me from ever being shortlisted for that prize again. But what can you do, by yourself, in the back of a taxi on the way to Waterloo? Wishing to drain the chalice of evil to the dregs, I found myself out of ideas. I could possibly lean out of the window and make hideous faces at pedestrians; but how would they know that it was my hideous face? They might think I was always like that.

For a week or so, after I won the 2009 Man Booker prize for fiction with “Wolf Hall”, people in the streets did recognise me. They’d seen my triumph face, my unpretending grin of delight stretched as wide as a carved pumpkin. Sometimes they would burble happily at me and squeeze my hand, and sometimes they would just smile warmly as they passed, not quite sure who I was but knowing that they’d seen me in the paper, and in a happy context. On the train home one evening, a pale glowing woman with a Vermeer complexion, alighting prosaically at Woking, followed me through a carriage and whispered to me, leaving on my shoulder a ghost-touch of congratulation. All this was new to me. Before the Man Booker, I had trouble being recognised by a bookseller when I was standing next to a stack of my own books. 

I am a veteran of shortlists. I have served my time in the enclosures where the also-rans cool down after the race, every back turned, the hot crowds sucked away as if by a giant magnet to where the winner basks in the camera-flash. I have sat through a five-hour presentation ceremony in Manchester, where the prize was carried off by Anthony Burgess, then a spindly, elderly figure, who looked down at me from his great height, a cheque between thumb and finger, and said, “I expect you need this more than me,” and there again I experienced a wicked but ungratified impulse, to snatch the cheque away and stuff it into my bra. After such an evening, it’s hard to sleep; your failure turns into a queasy mess that churns inside you, mixed in with fragments from the sponsors’ speeches, and the traitorous whispers of dissatisfied judges. Lunchtime ceremonies are easier; but then, what do you do with the rest of the day? Once, when I was trudging home from my second failure to win the £20,000 Sunday Express Book of the Year award, a small boy I knew bobbed out on to the balcony of his flat.

“Did you win?”

I shook my head.

“Never mind,” he said, just like everyone else. And then, quite unlike everyone else: “If you like, you can come up and play with my guinea pig.”

That’s what friends are for. You need distraction; or you need to go home (as I do these days when I lose) and defiantly knock together a paragraph or two of your next effort. At my third shortlisting, I did win the Sunday Express prize. This time it was an evening event, and as the announcement approached I found myself pushing and shoving through a dense crowd of invitees, trying to get somewhere near the front just in case; and getting dirty looks, and elbows in the ribs. At the moment of the announcement I thought that a vast tray of ice-cubes had been broken over my head; the crackling noise was applause, the splintered light was from flashbulbs. The organisers made me hold up, for the cameras, one of those giant cheques they used to give to winners of the football pools. I did it without demur. Did I feel a fool? No. I felt rich.


How to conduct yourself as winner or loser is something the modern writer must work out without help from the writers of the past. As a stylist, you may pick up a trick or two from Proust, but on prize night he’d just have stayed in bed. As prizes have proliferated and increased, advances and royalties have fallen, and the freakish income that a prize brings is more and more important. Prizes bring media attention, especially if the judges can arrange to fall out in public. They bring in-store displays, and press advertising, and all the marketing goodies denied the non-winner; they bring sales, a stimulus to trade at a time when bookselling is in trouble. By the time I won the Man Booker I had scrabbled my way to half a dozen lesser awards, but in the 1980s and 1990s marketing was less sharp, and the whole prize business looked less like a blood sport. I had been publishing for over 20 years, and although the reviewers had been consistently kind, I had never sold in great numbers. But moments after I took my cheque from the hands of the Man Booker judges, an ally approached me, stabbing at an electronic device in her hand: “I’ve just checked Amazon—you’re number one—you’re outselling Dan Brown.”

Amazon itself—with its rating system, its sales charts, its reader reviews—feels like part of the prize industry, part of the process of constantly ranking and categorising authors, and ranking and categorising them in the most public way. To survive the scrutiny you must understand that (much as you love winning them) prizes are not, or not necessarily, a judgment on the literary merit of your work. Winners emerge by negotiation and compromise. Awards have their political aspects, and juries like to demonstrate independence of mind; sometimes a book which has taken one major award is covertly excluded from consideration for others. Sometimes the judges are actors or politicians, who harbour a wish to write fiction themselves—if, of course, they had the time. I have sat on juries where the clashing of celebrity egos drowned out the whispers from the pages surveyed, and the experience has been so unfair and miserable that I have said to myself “never again”. 
But you do learn from being a judge that, in a literary sense, some verdicts matter and some don’t.

Sometimes the pressure on judges seems intolerable. When I was a Booker judge myself, back in 1990, we read about 105 books. Last year there were 132. I was lucky enough to serve under a businesslike chairman, Sir Denis Forman, a man who knew how to run a meeting from his time at Granada Television. All the same, I remember the final judging session as one of the great ordeals of my life. So much depended on it, for the winners and the losers. I was already nettled by the leaking of tittle-tattle and misinformation to journalists. It came from the administration, I think, not the judges. “Just mischief,” someone suggested, smiling. I was taking it too seriously, I suppose: as if it were a capital trial, and we were going to hang five authors and let one escape. But we were, it seemed to me, giving one author a life—a different life. There was an element of bathos when the winner, A.S. Byatt, said that she would use the money to build a swimming pool at her second home. At times of crisis—and winning this prize is a crisis—people say the most extraordinary things. I seem to recall one novelist saying more humbly that his winner’s cheque would pay for an extra bathroom. For years I dreamt of pursuing the watery theme: of flourishing my £50,000 with a cry of, “At last, I see my way to an indoor lavatory.”

I didn’t say it, of course. Jokes are wasted at prize time. I had never been shortlisted for the Booker till the year of my win, but I looked at those who were already winners with a narrow eye; I read their books, and also searched their faces. But whether it has changed their lives as it has changed mine is a mystery to me. The smiling repression—so many years of congratulating others, so many trudges home, so many taxis, guinea pigs; the sheer hypocrisy of pretending you don’t mind losing. These take their toll. You become a worse person, though not necessarily a worse writer, while you’re waiting for your luck to turn. When finally last year at the Guildhall in London, after an evening of dining and of speeches that, it seemed to me, were excruciatingly prolonged, when finally the moment came and I heard the name of a book and that book was mine, I jumped out of my chair as if I had been shot out of a catapult, and what I felt was primitive, savage glee. You have to win the Man Booker at the right time, pious folk tell you. You have to win it for the right book, runs the received wisdom. Balderdash, said my heart; but it used a stronger, shorter word. You just have to win it, right now. Hand it over. It’s been long enough.

The writer inside you feels no sense of entitlement. She—or it—judges a work by internal standards that are hard to communicate or define. The “author”, the professional who is in the prose business, has worldly concerns. You know the first question from the press will be, “What will you do with the money?” The truth was that I would use it to reduce my mortgage. But that reply would by no means do, and I felt obliged to say “Sex, drugs and rock’n’roll.” The public don’t like to think of authors as citizens who pay their debts. They like to think of them living lives of fabulous dissipation in warm climates, at someone else’s expense. The public want to regard you as a being set apart, with some quirk of brain function or some inbuilt moral freakishness that would explain everything, if only you would acknowledge it. They want to know, what is the stimulus to your creativity? What makes you write?

Sometimes you want to shrug and say, it’s my job. You don’t ask a plumber, what makes you plumb? You understand he does it to get his living. You don’t draw him aside and say, “Actually I plumb a bit myself, would you take a look at this loo I fitted? All my friends say it’s rather good.” But it’s little use insisting that writing is an ordinary job; you’d be lying. Readers understand that something strange is going on when a successful work of fiction is created—something that, annoyingly, defies being cast into words. If we poke the author with a stick for long enough, hard enough, he’ll crack and show us the secret, which perhaps he doesn’t know himself. We have to catch him when he’s vulnerable—and he is never more vulnerable than when someone has caught him on a public platform and given him a big cheque. He may be grinning from ear to ear, but he’s swarming with existential doubts.


“You are currently the top writer in the world,” an interviewer said to me on Booker night.
“It’s not the Olympics,” I said, aghast. The progress of the heart—which is what your writing is—cannot be measured like the progress of your feet on a race track. And yet, you can’t deny, it has been. On a particular night in October, you’ve got your nose ahead of J.M. Coetzee.
   
I have found I can live with the contradictions. I think there is one kind of writer who might be scalped and skinned by the demands the prize imposes, and that is the writer who finds public performance difficult, who has failed to create a persona he can send out to do the show. As people often observe, there is no reason why skill in writing and skill in platform performance would go together; I have witnessed some horrible scenes in the back rooms of bookshops, as writers sweat and stutter and suffer a mini-breakdown before going out to face 20 people, some of whom have wandered in because they saw a light, some of whom have manuscript-shaped parcels under their seats, some of whom have never heard of you before tonight, and have come on purpose to tell you so. Generally, it seems to me, authors are better at presenting themselves than they were ten years ago. Festivals flourish, we get more practice; you could give a reading somewhere every week of the year if you liked. For me the transition between desk and platform seems natural enough. I think of writing fiction as a sort of condensed version of acting and each book as a vast overblown play. You impersonate your characters intensively, you live inside their skins, wear their clothes and stamp or mince through life in their shoes; you breathe in their air. “Madame Bovary, c’est moi.” Of course she is. Who else could she be?

Some nine months on, I can report that the Man Booker has done me nothing but good. Because I am in the middle of a project—my next book is the sequel to the prize-winner—it has not destabilised me, just delayed me. The delay is worthwhile, because the prize has helped me find publishers in 30 countries. It has made my sales soar and hugely boosted my royalties. In doing these things it has cut me free. For the next few years at least, I can write what I like, just as I could before I was ever in print. I wrote for 12 years before I published anything, and in those years I felt a recklessness, a hungry desire, a gnawing expectation, that I lost when I became a jobbing professional who would tap you out a quick 800 words, to a deadline, on almost anything you liked. It is hard to make a good income from fiction alone, but now perhaps I can do it. I haven’t lived in a glamorous whirl since I won the prize. I could have taken up any number of invitations to festivals abroad, but only if I ditched the commitments at home that were already in my diary. I am, anyway, a bit world-weary and more than a bit ill, and intensely interested in the next thing I will write. Even when you are taking your bow, lapping up applause, you do know this brute fact: that you are only as good as your next sentence. You might wake up tomorrow and not be able to do it. The process itself will not fail you. But your nerve might fail.

On the evening of the Man Booker, if you are a shortlisted candidate, you are told that if you win you will be speaking live on air within a moment or two, and that after a long and late night you must be up early for breakfast TV, and that you will be talk-talk-talking into the middle of next week, to an overlapping series of interviewers. You must be ready, poised; so everyone is given a copy of the winner’s schedule to tuck away in their pocket or bag. So, for some hours, no one is the winner and you are all the winner. I already had plans for my week should I lose, and as I waited, watching the TV cameras manoeuvre in the run-up to the chairman’s speech, I split neatly into two component parts: one for schedule A, one for schedule B. All such decisions are narrow ones. You win by a squeak or you lose. Your life changes or it doesn’t. There is really no cause for self-congratulation: no time, either. You do not know till the moment you know; or at least, no wash of rumour reached me, lapping towards the stage from the back of the hall. So I wonder, what happened to the woman on schedule B, the one with the sinking heart and the sad loser’s smile? I can’t help worrying she’s escaped and she’s out there by night, in the chill of last autumn, wandering the city streets in a most inappropriate gold dress.

"Wolf Hall" is out now in paperback (Fourth Estate)

(Hilary Mantel is the author of ten novels, including "Wolf Hall".)
Picture Credit: Diver Aguilar

Friday, January 28, 2011

The Philosophical Novel

by James Ryerson
New York Times, 20 January 2011

Can a novelist write philosophically? Even those novelists most commonly deemed “philosophical” have sometimes answered with an emphatic no. Iris Murdoch, the longtime Oxford philosopher and author of some two dozen novels treating highbrow themes like consciousness and morality, argued that philosophy and literature were contrary pursuits. Philosophy calls on the analytical mind to solve conceptual problems in an “austere, unselfish, candid” prose, she said in a BBC interview broadcast in 1978, while literature looks to the imagination to show us something “mysterious, ambiguous, particular” about the world. Any appearance of philosophical ideas in her own novels was an inconsequential reflection of what she happened to know. “If I knew about sailing ships I would put in sailing ships,” she said. “And in a way, as a novelist, I would rather know about sailing ships than about philosophy.”

Some novelists with philosophical backgrounds vividly recall how they felt when they first encountered Murdoch’s hard-nosed view. Rebecca Newberger Goldstein, whose first novel, “The Mind-Body Problem” (1983), was published after she earned a Ph.D. in philosophy from Princeton, remembers being disappointed and confused. “It didn’t ring true,” she told me. “But how could she not be being truthful about such a central feature of her intellectual and artistic life?” Still, Goldstein and other philosophically trained novelists — including David Foster Wallace, William H. Gass and Clancy Martin — have themselves wrestled with the relationship between their two intellectual masters. Both disciplines seek to ask big questions, to locate and describe deeper truths, to shape some kind of order from the muddle of the world. But are they competitors — the imaginative intellect pitted against the logical mind — or teammates, tackling the same problems from different angles?

Philosophy has historically viewed literature with suspicion, or at least a vague unease. Plato was openly hostile to art, fearful of its ability to produce emotionally beguiling falsehoods that would disrupt the quest for what is real and true. Plato’s view was extreme (he proposed banning dramatists from his model state), but he wasn’t crazy to suggest that the two enterprises have incompatible agendas. Philosophy is written for the few; literature for the many. Philosophy is concerned with the general and abstract; literature with the specific and particular. Philosophy dispels illusions; literature creates them. Most philosophers are wary of the aesthetic urge in themselves. It says something about philosophy that two of its greatest practitioners, Aristotle and Kant, were pretty terrible writers.

Of course, such oppositions are never so simple. Plato, paradoxically, was himself a brilliant literary artist. Nietzsche, Schopenhauer and Kierkegaard were all writers of immense literary as well as philosophical power. Philosophers like Jean-Paul Sartre and George Santayana have written novels, while novelists like Thomas Mann and Robert Musil have created fiction dense with philosophical allusion. Some have even suggested, only half in jest, that of the brothers William and Henry James, the philosopher, William, was the more natural novelist, while the novelist, Henry, was the more natural philosopher. (Experts quibble: “If William is often said to be novelistic, that’s because he is widely — but wrongly — thought to write well,” the philosopher Jerry Fodor told me. “If Henry is said to be philosophical, that’s because he is widely — but wrongly — thought to write badly.”)

David Foster Wallace, who briefly attended the Ph.D. program in philosophy at Harvard after writing a first-rate undergraduate philosophy thesis (published in December by Columbia University Press as “Fate, Time, and Language”), believed that fiction offered a way to capture the emotional mood of a philosophical work. The goal, as he explained in a 1990 essay in The Review of Contemporary Fiction, wasn’t to make “abstract philosophy ‘accessible’ ” by simplifying ideas for a lay audience, but to figure out how to recreate a reader’s more subjective reactions to a philosophical text. Unfortunately, Wallace declared his most overtly philosophical novel — his first, “The Broom of the System” (1987), which incorporates the ideas of Ludwig Wittgenstein — to be a failure in this respect. But he thought others had succeeded in writing “philosophically,” especially David Markson, whose bleak, abstract, solitary novel “Wittgenstein’s Mistress” (1988) he praised for evoking the bleak, abstract, solitary feel of Wittgenstein’s early philosophy.

Another of Wallace’s favorite novels was “Omensetter’s Luck” (1966), by William H. Gass, who received his Ph.D. in philosophy from Cornell and taught philosophy for many years at Washington University in St. Louis. In an interview with The Paris Review in 1976, Gass confessed to feeling a powerful resistance to the analytical rigor of his academic schooling (“I hated it in lots of ways”), though he ultimately appreciated it as a kind of mental strength-training. Like Murdoch, he claimed that the influence of his philosophical education on his fiction was negligible. “I don’t pretend to be treating issues in any philosophical sense,” he said. “I am happy to be aware of how complicated, and how far from handling certain things properly I am, when I am swinging so wildly around.”

Unlike Murdoch, Gass and Wallace, Rebecca Newberger Goldstein, whose latest novel is “36 Arguments for the Existence of God,” treats philosophical questions with unabashed directness in her fiction, often featuring debates or dialogues among characters who are themselves philosophers or physicists or mathematicians. Still, she says that part of her empathizes with Murdoch’s wish to keep the loose subjectivity of the novel at a safe remove from the philosopher’s search for hard truth. It’s a “huge source of inner conflict,” she told me. “I come from a hard-core analytic background: philosophy of science, mathematical logic. I believe in the ideal of objectivity.” But she has become convinced over the years of what you might call the psychology of philosophy: that how we tackle intellectual problems depends critically on who we are as individuals, and is as much a function of temperament as cognition. Embedding a philosophical debate in richly imagined human stories conveys a key aspect of intellectual life. You don’t just understand a conceptual problem, she says: “You feel the problem.”

If you don’t want to overtly feature philosophical ideas in your novel, how sly about it can you be before the effect is lost? Clancy Martin’s first novel, “How to Sell” (2009), a drug-, sex- and diamond-fueled story about a high-school dropout who works with his older brother in the jewelry business, was celebrated by critics as a lot of things — but “philosophical” was not usually one of them. Martin, a professor of philosophy at the University of Missouri at Kansas City, had nonetheless woven into the story, which is at its heart about forms of deception, disguised versions of Kant’s argument on the supposed right to lie in order to save a life, Aristotle’s typology of four kinds of liars, and Nietzsche’s theory of deception (the topic of Martin’s Ph.D. dissertation). Not that anyone noticed. “A lot of my critics said: ‘Couldn’t put it down. You’ll read it in three hours!’ ” Martin told me. “And I felt like I put too much speed into the fastball. I mean, just because you can read it in three hours doesn’t mean that you ought to do so, or that there’s nothing hiding beneath the surface.”
Which raises an interesting, even philosophical question: Is it possible to write a philosophical novel without anyone knowing it?

James Ryerson is an editor at The New York Times Magazine. He wrote the introduction to David Foster Wallace’s “Fate, Time, and Language: An Essay on Free Will,” published in December.

Nonfiction: Nabokov Theory on Butterfly Evolution Is Vindicated

by Carl Zimmer
New York Times, 25 January 2011


A male Acmon blue butterfly (Icaricia acmon). Vladimir Nabokov described the Icaricia genus in 1944.

Vladimir Nabokov may be known to most people as the author of classic novels like “Lolita” and “Pale Fire.” But even as he was writing those books, Nabokov had a parallel existence as a self-taught expert on butterflies.

He was the curator of lepidoptera at the Museum of Comparative Zoology at Harvard University, and collected the insects across the United States. He published detailed descriptions of hundreds of species. And in a speculative moment in 1945, he came up with a sweeping hypothesis for the evolution of the butterflies he studied, a group known as the Polyommatus blues. He envisioned them coming to the New World from Asia over millions of years in a series of waves.

Few professional lepidopterists took these ideas seriously during Nabokov’s lifetime. But in the years since his death in 1977, his scientific reputation has grown. And over the past 10 years, a team of scientists has been applying gene-sequencing technology to his hypothesis about how Polyommatus blues evolved. On Tuesday in the Proceedings of the Royal Society of London, they reported that Nabokov was absolutely right.

“It’s really quite a marvel,” said Naomi Pierce of Harvard, a co-author of the paper.

Nabokov inherited his passion for butterflies from his parents. When his father was imprisoned by the Russian authorities for his political activities, the 8-year-old Vladimir brought a butterfly to his cell as a gift. As a teenager, Nabokov went on butterfly-hunting expeditions and carefully described the specimens he caught, imitating the scientific journals he read in his spare time. Had it not been for the Russian Revolution, which forced his family into exile in 1919, Nabokov said that he might have become a full-time lepidopterist.

In his European exile, Nabokov visited butterfly collections in museums. He used the proceeds of his second novel, “King, Queen, Knave,” to finance an expedition to the Pyrenees, where he and his wife, Vera, netted over a hundred species. The rise of the Nazis drove Nabokov into exile once more in 1940, this time to the United States. It was there that Nabokov found his greatest fame as a novelist. It was also there that he delved deepest into the science of butterflies.


Nabokov spent much of the 1940s dissecting a confusing group of species called Polyommatus blues. He developed forward-thinking ways to classify the butterflies based on differences in their genitalia. He argued that what were thought to be closely related species were actually only distantly related.

At the end of a 1945 paper on the group, he mused on how they had evolved. He speculated that they originated in Asia, moved over the Bering Strait, and moved south all the way to Chile.

Allowing himself a few literary flourishes, Nabokov invited his readers to imagine “a modern taxonomist straddling a Wellsian time machine.” Going back millions of years, he would end up at a time when only Asian forms of the butterflies existed. Then, moving forward again, the taxonomist would see five waves of butterflies arriving in the New World.

Nabokov conceded that the thought of butterflies making a trip from Siberia to Alaska and then all the way down into South America might sound far-fetched. But it made more sense to him than an unknown land bridge spanning the Pacific. “I find it easier to give a friendly little push to some of the forms and hang my distributional horseshoes on the nail of Nome rather than postulate transoceanic land-bridges in other parts of the world,” he wrote.

When “Lolita” made Nabokov a star in 1958, journalists were delighted to discover his hidden life as a butterfly expert. A famous photograph of Nabokov that appeared in The Saturday Evening Post when he was 66 is from a butterfly’s perspective. The looming Russian author swings a net with rapt concentration. But despite the fact that he was the best-known butterfly expert of his day and a Harvard museum curator, other lepidopterists considered Nabokov a dutiful but undistinguished researcher. He could describe details well, they granted, but did not produce scientifically important ideas.

Only in the 1990s did a team of scientists systematically review his work and recognize the strength of his classifications. Dr. Pierce, who became a Harvard biology professor and curator of lepidoptera in 1990, began looking closely at Nabokov’s work while preparing an exhibit to celebrate his 100th birthday in 1999.

She was captivated by his idea of butterflies coming from Asia. “It was an amazing, bold hypothesis,” she said. “And I thought, ‘Oh, my God, we could test this.’ ”

To do so, she would need to reconstruct the evolutionary tree of blues, and estimate when the branches split. It would have been impossible for Nabokov to do such a study on the anatomy of butterflies alone. Dr. Pierce would need their DNA, which could provide more detail about their evolutionary history.

Working with American and European lepidopterists, Dr. Pierce organized four separate expeditions into the Andes in search of blues. Back at her lab at Harvard, she and her colleagues sequenced the genes of the butterflies and used a computer to calculate the most likely relationships between them. They also compared the number of mutations each species had acquired to determine how long ago they had diverged from one another.
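The mutation-counting step rests on simple molecular-clock arithmetic: the more differences two lineages have accumulated in their DNA, the longer ago they split. The sketch below is only a toy illustration of that idea, in Python, with an invented divergence rate and invented sequence numbers; it is not the method or the data used in the study.

# Toy molecular-clock estimate: divergence time inferred from the number of
# nucleotide differences between two aligned DNA sequences.
# The rate and the example numbers are hypothetical placeholders, not values from the paper.

def divergence_time_mya(differences, aligned_length, pairwise_rate_per_myr=0.02):
    """Return a rough divergence time in millions of years.

    differences: count of sites that differ between the two sequences
    aligned_length: total number of aligned sites compared
    pairwise_rate_per_myr: assumed fraction of sites diverging per million years
    """
    pairwise_divergence = differences / aligned_length
    return pairwise_divergence / pairwise_rate_per_myr

# Example with invented numbers: 78 differences across a 600-site alignment
print(round(divergence_time_mya(78, 600), 1), "million years since the lineages diverged")

Real analyses correct for multiple substitutions at the same site and calibrate the rate against fossils or geological events, but the underlying logic is the same.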

There were several plausible hypotheses for how the butterflies might have evolved. They might have evolved in the Amazon, with the rising Andes fragmenting their populations. If that were true, the species would be closely related to one another.

But that is not what Dr. Pierce found. Instead, she and her colleagues found that the New World species shared a common ancestor that lived about 10 million years ago. But many New World species were more closely related to Old World butterflies than to their neighbors. Dr. Pierce and her colleagues concluded that five waves of butterflies came from Asia to the New World — just as Nabokov had speculated.

“By God, he got every one right,” Dr. Pierce said. “I couldn’t get over it — I was blown away.”

Dr. Pierce and her colleagues also investigated Nabokov’s idea that the butterflies had come over the Bering Strait. The land surrounding the strait was relatively warm 10 million years ago, and has been chilling steadily ever since. Dr. Pierce and her colleagues found that the first lineage of Polyommatus blues that made the journey could survive a temperature range that matched the Bering climate of 10 million years ago. The lineages that came later are more cold-hardy, each with a temperature range matching the falling temperatures.

Nabokov’s taxonomic horseshoes turn out to belong in Nome after all.

"What a great paper," said James Mallet, an expert on butterfly evolution at University College London. "It's a fitting tribute to the great man to see that the most modern methods that technology can deliver now largely support his systematic arrangement."

Dr. Pierce says she believes Nabokov would have been greatly pleased to be so vindicated, and points to one of his most famous poems, “On Discovering a Butterfly.” The 1943 poem begins:

I found it and I named it, being versed
in taxonomic Latin; thus became
godfather to an insect and its first
describer — and I want no other fame.

“He felt that his scientific work was standing for all time, and that he was just a player in a much bigger enterprise,” said Dr. Pierce. “He was not known as a scientist, but this certainly indicates to me that he knew what it’s all about.”

Tuesday, January 25, 2011

Where have all the thinkers gone?

By Gideon Rachman
Financial Times, 24 January 2011


A few weeks ago I was sitting in my office, reading Foreign Policy magazine, when I made a striking discovery. Sitting next door to me, separated only by a narrow partition, is one of the world’s leading thinkers. Every year, Foreign Policy lists the people it regards as the “Top 100 Global Thinkers”. And there, at number 37, was Martin Wolf.

I popped next door to congratulate my colleague. Under such circumstances, it is compulsory for any English person to make a self-deprecating remark and Martin did not fail me. The list of intellectuals from 2010, he suggested, looked pretty feeble compared with a similar list that could have been drawn up in the mid 19th century.

This was more than mere modesty. He has a point. Once you start the list-making exercise, it is difficult to avoid the impression that we are living in a trivial age.

The Foreign Policy list for 2010, it has to be said, is slightly odd since the magazine’s top 10 thinkers are all more famous as doers. In joint first place come Bill Gates and Warren Buffett for their philanthropic efforts. Then come the likes of Barack Obama (at number three), Celso Amorim, the Brazilian foreign minister (sixth), and David Petraeus, the American general and also, apparently, the world’s eighth most significant thinker. It is not until you get down to number 12 on the list that you find somebody who is more famous for thinking than doing – Nouriel Roubini, the economist.

But, as the list goes on, genuine intellectuals begin to dominate. There are economists such as Joseph Stiglitz, journalists (Christopher Hitchens), philosophers (Martha Nussbaum), political scientists (Michael Mandelbaum), novelists (Mario Vargas Llosa) and theologians (Abdolkarim Soroush). Despite an inevitable bias to the English-speaking world, there are representatives from every continent including Hu Shuli, a Chinese editor, and Jacques Attali, carrying the banner for French intellectuals.

It is an impressive group of people. But now compare it with a similar list that could have been compiled 150 years ago. The 1861 rankings could have started with Charles Darwin and John Stuart Mill – On the Origin of Species and On Liberty were both published in 1859. Then you could include Karl Marx and Charles Dickens. And that was just the people living in and around London. In Russia, Tolstoy and Dostoevsky were both at work, although neither had yet published their greatest novels.

Even if, like Foreign Policy, you have a preference for politicians, the contrast between the giants of yesteryear and the relative pygmies of today is alarming. In 1861 the list would have included Lincoln, Gladstone, Bismarck and Garibaldi. Their modern equivalents would be Mr Obama, Nick Clegg, Angela Merkel and Silvio Berlusconi.

Still, perhaps 1861 was a freak? So let us repeat the exercise, and go back to the year when the second world war broke out. A list of significant intellectuals alive in 1939 would have included Einstein, Keynes, TS Eliot, Picasso, Freud, Gandhi, Orwell, Churchill, Hayek, Sartre.

So why does the current crop of thinkers seem so unimpressive? Here are a few possible explanations.

The first is that you might need a certain distance in order to judge greatness. Maybe it is only in retrospect that we can identify the real giants. It is certainly true that some of the people I have listed were not widely known or respected at the time. Marx worked largely in obscurity; Dickens was dismissed as a hack by some of his contemporaries; and Orwell’s reputation has also grown hugely since his death. But most of the giants of 1861 and 1939 were recognised as great intellects during their lifetime and some – such as Einstein and Picasso – became much-admired celebrities.

A second possibility is that familiarity breeds contempt. Maybe we are surrounded by thinkers who are just as great as the giants of the past, but we cannot recognise the fact because they are still in our midst. The modern media culture may also lead to overexposure of intellectuals, who are encouraged to produce too much. If Mill had been constantly on television, or Gandhi had tweeted five times a day, they might have seemed less impressive people and been less profound thinkers.

Another theory is that the nature of intellectual life has changed and become more democratic. The lists of 1861 and 1939 are dominated by that notorious species – the “dead white male”. In fact, “dead, white British males” seem to predominate. Perhaps there are intellectual giants at work now, but they are based in China or India or Africa – and have yet to come to the notice of Foreign Policy or the Financial Times.

In the modern world more people have access to knowledge and the ability to publish. The internet also makes collaboration much easier and modern universities promote specialisation. So it could be that the way that knowledge advances these days is through networks of specialists working together, across the globe – rather than through a single, towering intellect pulling together a great theory in the reading room of the British Museum. It is a less romantic idea – but, perhaps, it is more efficient.

And then there is a final possibility. That, for all its wealth and its gadgets, our generation is not quite as smart as it thinks it is.

Monday, January 17, 2011

Wake up and smell the jasmine

By David Gardner in London
Financial Times, 16 January 2011
A demonstrator in Tunis

The ignominious demise of Zein al-Abidine Ben Ali in Tunisia’s “Jasmine Revolution” has put a dent in the armour of the Arab national security state that will set tyrants trembling across the Middle East. The idea that Arab autocracies, with their backbone in the military and their central nervous system in the security services, are uniquely resilient to popular pressure has evaporated in the smoke of Tunis.

While that does not necessarily herald a wave of uprisings across the Arab world, such as those that swept across eastern Europe after the fall of the Berlin Wall, autocrats from Algiers to Amman and from Rabat to Cairo are at last aware that they now live in a different era. They will be on hyper-alert not only to stirrings among their usually cowed peoples but to any hint of change from a west that has acquiesced in their tyranny in the interests of short-term stability in a volatile and strategic region.
 
The west’s long connivance in this “Arab Exception” may be a welcome casualty of the Tunisian drama. The last 30 years have seen waves of democracy burst over almost every other despot-plagued region of the world, from Latin America to eastern Europe, and from sub-Saharan Africa to south-east Asia. Yet the Arab world remained marooned in tyranny. In the post-Communist era there is no other part of the world – not even China – treated by the West with so little regard for the political and human rights of its citizens.

The rationale has changed over time. In the late 19th and first half of the 20th century, France and Britain aborted the normal evolution of constitutional politics in the Arab colonies they carved out of the Ottoman Empire. For Britain the imperative was to secure the western approaches to India. After World War Two and the onset of the Cold War, the priority became to secure cheap oil, safeguard Israel and restrict the intrusion of the Soviets.

More recently, Arab regimes have frightened the west into believing that, but for them, Islamists (and Iran’s Shia theocrats) would take over the region. They maintain residual opposition parties – such as Egypt’s Wafd – as down-at-heel courtiers to exhibit to preachy westerners. Meanwhile they have laid waste to the political spectrum, leaving their opponents no rallying point except the mosque.

In the era of satellite TV and social media that has now changed. Tunisia was the second instance of this. Lebanon’s 2005 “Cedar Revolution” was a precursor, a civic uprising that ended three decades of Syrian occupation in less than three months. The digital revolution has reintegrated a fragmented Arab world in ways its technologically challenged leaders did not foresee and means socioeconomic grievances can quickly translate into broader political demands.

Economic hardship is, of course, the tinder that tends first to ignite, especially in a period of food- and fuel-price inflation. The lack of opportunity for young, increasingly educated populations, where between half and two-thirds are under the age of 25, is also a timebomb. The kleptocratic monopoly by most Arab regimes of resources as well as power is another.

But the narrative that economic reform must precede political reform – “let’s build the middle classes and then we’ll have some liberals to liberalise with” as one US ambassador once put it – is crudely determinist and an alibi for indefinitely postponing any political opening. Liberalising the economy quickly hits the wall of the national security states and the interests vested in them – which have no time for liberals.

President Hosni Mubarak of Egypt, under US pressure, in 2005 allowed the liberal Ayman Nour to stand against him. He restricted his majority to a mere 88 per cent, and then jailed his opponent on bogus charges. When Mr Mubarak took power three decades ago, 39 per cent of Egyptians were in absolute poverty; now 43 per cent are.

Mr Ben Ali was a western poster boy for economic reform, even as his family fed off the economy.

Last week, as the fire in Tunisia raged, Hillary Clinton, US secretary of state, highlighted the region’s economic stagnation. Michèle Alliot-Marie, France’s foreign minister, even suggested sending French riot police to help. Wake up and smell the jasmine: it’s the politics, stupid.

Darkness on the Edge of the Universe

By Brian Greene
New York Times, 15 January 2011

IN a great many fields, researchers would give their eyeteeth to have a direct glimpse of the past. Instead, they generally have to piece together remote conditions using remnants like weathered fossils, decaying parchments or mummified remains. Cosmology, the study of the origin and evolution of the universe, is different. It is the one arena in which we can actually witness history.

The pinpoints of starlight we see with the naked eye are photons that have been streaming toward us for a few years or a few thousand. The light from more distant objects, captured by powerful telescopes, has been traveling toward us far longer than that, sometimes for billions of years. When we look at such ancient light, we are seeing — literally — ancient times.

During the past decade, as observations of such ancient starlight have provided deep insight into the universe’s past, they have also, surprisingly, provided deep insight into the nature of the future. And the future that the data suggest is particularly disquieting — because of something called dark energy.

This story of discovery begins a century ago with Albert Einstein, who realized that space is not an immutable stage on which events play out, as Isaac Newton had envisioned. Instead, through his general theory of relativity, Einstein found that space, and time too, can bend, twist and warp, responding much as a trampoline does to a jumping child. In fact, so malleable is space that, according to the math, the size of the universe necessarily changes over time: the fabric of space must expand or contract — it can’t stay put.

For Einstein, this was an unacceptable conclusion. He’d spent 10 grueling years developing the general theory of relativity, seeking a better understanding of gravity, but to him the notion of an expanding or contracting cosmos seemed blatantly erroneous. It flew in the face of the prevailing wisdom that, over the largest of scales, the universe was fixed and unchanging.

Einstein responded swiftly. He modified the equations of general relativity so that the mathematics would yield an unchanging cosmos. A static situation, like a stalemate in a tug of war, requires equal but opposite forces that cancel each other. Across large distances, the force that shapes the cosmos is the attractive pull of gravity. And so, Einstein reasoned, a counterbalancing force would need to provide a repulsive push. But what force could that be?

Remarkably, he found that a simple modification of general relativity’s equations entailed something that would have, well, blown Newton’s mind: antigravity — a gravitational force that pushes instead of pulls. Ordinary matter, like the Earth or Sun, can generate only attractive gravity, but the math revealed that a more exotic source — an energy that uniformly fills space, much as steam fills a sauna, only invisibly — would generate gravity’s repulsive version. Einstein called this space-filling energy the cosmological constant, and he found that by finely adjusting its value, the repulsive gravity it produced would precisely cancel the usual attractive gravity coming from stars and galaxies, yielding a static cosmos. He breathed a sigh of relief.
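For readers who want to see where the extra term sits, the cosmological constant (written Λ) appears in the field equations of general relativity as an addition on the geometry side. This is the standard textbook form, not something specific to this essay:

R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}

A positive Λ behaves like the uniform, space-filling energy described above: its gravity pushes outward, which is what let Einstein balance the ordinary attractive pull of stars and galaxies.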

A dozen years later, however, Einstein rued the day he introduced the cosmological constant. In 1929, the American astronomer Edwin Hubble discovered that distant galaxies are all rushing away from us. And the best explanation for this cosmic exodus came directly from general relativity: much as poppy seeds in a muffin that’s baking move apart as the dough swells, galaxies move apart as the space in which they’re embedded expands. Hubble’s observations thus established that there was no need for a cosmological constant; the universe is not static.

Had Einstein only trusted the original mathematics of general relativity, he would have made one of the most spectacular predictions of all time — that the universe is expanding — more than a decade before it was discovered. Instead, he was left to lick his wounds, summarily removing the cosmological constant from the equations of general relativity and, according to one of his trusted colleagues, calling it his greatest blunder.

But the story of the cosmological constant was far from over.

Fast forward to the 1990s, when we find two teams of astronomers undertaking painstakingly precise observations of distant supernovae — exploding stars so brilliant they can be seen clear across the cosmos — to determine how the expansion rate of space has changed over the history of the universe. These researchers anticipated that the gravitational attraction of matter dotting the night’s sky would slow the expansion, much as Earth’s gravity slows the speed of a ball tossed upward. By bearing witness to distant supernovae, cosmic beacons that trace the universe’s expansion rate at various moments in the past, the teams sought to make this quantitative. Shockingly, however, when the data were analyzed, the teams found that the expansion rate has not been slowing down. It’s been speeding up.

It’s as if that tossed ball shot away from your hand, racing upward faster and faster. You’d conclude that something must be driving the ball away. Similarly, the astronomers concluded that something in space must be pushing galaxies apart ever more quickly. And after scrutinizing the situation, they have found that the push is most likely the repulsive gravity produced by a cosmological constant.

When Einstein introduced the cosmological constant, he envisioned its value being finely adjusted to exactly balance ordinary attractive gravity. But for other values the cosmological constant’s repulsive gravity can beat out attractive gravity, and yield the observed accelerated spatial expansion, spot on. Were Einstein still with us, his discovery that repulsive gravity lies within nature’s repertoire would have likely garnered him another Nobel prize.

As remarkable as it is that even one of Einstein’s “bad” ideas has proven prophetic, many puzzles still surround the cosmological constant: If there is a diffuse, invisible energy permeating space, where did it come from? Is this dark energy (to use modern parlance) a permanent fixture of space, or might its strength change over time? Perhaps most perplexing of all is a question of quantitative detail. The most refined attempts to calculate the amount of dark energy suffusing space miss the measured value by a gargantuan factor of 10^123 (that is, a 1 followed by 123 zeroes) — the single greatest mismatch between theory and observation in the history of science.

THESE are vital questions that rank among today’s deepest mysteries. But standing beside them is an unassailable conclusion, one that’s particularly unnerving. If the dark energy doesn’t degrade over time, then the accelerated expansion of space will continue unabated, dragging away distant galaxies ever farther and ever faster. A hundred billion years from now, any galaxy that’s not resident in our neighborhood will have been swept away by swelling space for so long that it will be racing from us at faster than the speed of light. (Although nothing can move through space faster than the speed of light, there’s no limit on how fast space itself can expand.)
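As a rough back-of-the-envelope illustration (not a figure from the article), Hubble’s law, v = H_0 d, already implies a distance beyond which the recession speed exceeds the speed of light. With a present-day Hubble constant of roughly 70 km/s per megaparsec, that crossover lies near

d > \frac{c}{H_0} \approx \frac{3\times 10^{5}\ \mathrm{km/s}}{70\ \mathrm{km/s\ per\ Mpc}} \approx 4{,}300\ \mathrm{Mpc} \approx 14\ \mathrm{billion\ light\text{-}years}.

Accelerated expansion pushes ever more galaxies past such a boundary, which is the slow emptying of the sky that Greene describes.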

Light emitted by such galaxies will therefore fight a losing battle to traverse the rapidly widening gulf that separates us. The light will never reach Earth and so the galaxies will slip permanently beyond our capacity to see, regardless of how powerful our telescopes may become.

Because of this, when future astronomers look to the sky, they will no longer witness the past. The past will have drifted beyond the cliffs of space. Observations will reveal nothing but an endless stretch of inky black stillness.

If astronomers in the far future have records handed down from our era, attesting to an expanding cosmos filled with galaxies, they will face a peculiar choice: Should they believe “primitive” knowledge that speaks of a cosmos very much at odds with what anyone has seen for billions and billions of years? Or should they focus on their own observations and valiantly seek explanations for an island universe containing a small cluster of galaxies floating within an unchanging sea of darkness — a conception of the cosmos that we know definitively to be wrong?

And what if future astronomers have no such records, perhaps because on their planet scientific acumen developed long after the deep night sky faded to black? For them, the notion of an expanding universe teeming with galaxies would be a wholly theoretical construct, bereft of empirical evidence.

We’ve grown accustomed to the idea that with sufficient hard work and dedication, there’s no barrier to how fully we can both grasp reality and confirm our understanding. But by gazing far into space we’ve captured a handful of starkly informative photons, a cosmic telegram billions of years in transit. And the message, echoing across the ages, is clear. Sometimes nature guards her secrets with the unbreakable grip of physical law. Sometimes the true nature of reality beckons from just beyond the horizon.

Brian Greene, a professor of physics and mathematics at Columbia, is the author of the forthcoming book “The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos.”

Friday, January 14, 2011

Japan finds there is more to life than growth

By David Pilling

Financial Times, 5 January 2011
Is Japan the most successful society in the world? Even the question is likely (all right, designed) to provoke ridicule and have you spluttering over your breakfast. The very notion flies in the face of everything we have heard about Japan’s economic stagnation, indebtedness and corporate decline.

Ask a Korean, Hong Kong or US businessman what they think of Japan, and nine out of 10 will shake their head in sorrow, offering the sort of mournful look normally reserved for Bangladeshi flood victims. “It’s so sad what has happened to that country,” one prominent Singaporean diplomat told me recently. “They have just lost their way.”

It is easy to make the case for Japan’s decline. Nominal gross domestic product is roughly where it was in 1991, a sobering fact that appears to confirm the existence of not one, but two, lost decades. In 1994, Japan’s share of global GDP was 17.9 per cent, according to JPMorgan. Last year it had halved to 8.76 per cent. Over roughly the same period, Japan’s share of global trade fell even more steeply to 4 per cent. The stock market continues to thrash around at one-quarter of its 1990 level, deflation saps animal spirits – a common observation is that Japan has lost its “mojo” – and private equity investors have given up on their fantasy that Japanese businesses will one day put shareholders first.

Certainly, these facts tell a story. But it is only partial. Underlying much of the head-shaking about Japan are two assumptions. The first is that a successful economy is one in which foreign businesses find it easy to make money. By that yardstick Japan is a failure and post-war Iraq a glittering triumph. The second is that the purpose of a national economy is to outperform its peers.

If one starts from a different proposition, that the business of a state is to serve its own people, the picture looks rather different, even in the narrowest economic sense. Japan’s real performance has been masked by deflation and a stagnant population. But look at real per capita income – what people in the country actually care about – and things are far less bleak.

By that measure, according to figures compiled by Paul Sheard, chief economist at Nomura, Japan has grown at an annual 0.3 per cent in the past five years. That may not sound like much. But the US is worse, with real per capita income rising 0.0 per cent over the same period. In the past decade, Japanese and US real per capita growth are evenly pegged, at 0.7 per cent a year. One has to go back 20 years for the US to do better – 1.4 per cent against 0.8 per cent. In Japan’s two decades of misery, American wealth creation has outpaced that of Japan, but not by much.

The Japanese themselves frequently refer to non-GDP measures of welfare, such as Japan’s safety, cleanliness, world-class cuisine and lack of social tension. Lest they (and I) be accused of wishy-washy thinking, here are a few hard facts. The Japanese live longer than citizens of any other large country, boasting a life expectancy at birth of 82.17 years, much higher than the US at 78. Unemployment is 5 per cent, high by Japanese standards, but half the level of many western countries. Japan locks up, proportionately, one-twentieth of those incarcerated in the US, yet enjoys among the lowest crime levels in the world.

In a thought-provoking article in The New York Times last year, Norihiro Kato, a professor of literature, suggested that Japan had entered a “post-growth era” in which the illusion of limitless expansion had given way to something more profound. Japan’s non-consuming youth was at the “vanguard of the downsizing movement”, he said. He sounded a little like Walter Berglund, the heroic crank of Jonathan Franzen’s Freedom, who argues that growth in a mature economy, like that in a mature organism, is not healthy but cancerous. “Japan doesn’t need to be No 2 in the world, nor No 5 or 15,” Prof Kato wrote. “It’s time to look to more important things.”

Patrick Smith, an expert on Asia, agrees that Japan is more of a model than a laggard. “They have overcome the impulse – and this is something where the Chinese need to catch up – to westernise radically as a necessity of modernisation.” Japan, more than any other non-western advanced nation, has preserved its culture and rhythms of life, he says.

One must not overdo it. High suicide rates, a subdued role for women and, indeed, the answers that Japanese themselves provide to questionnaires about their happiness, do not speak of a nation entirely at ease with itself in the 21st century. It is also possible that Japan is living on borrowed time. Public debt is among the highest in the world – though, significantly, almost none of it is owed to foreigners – and a younger, poorer-paid generation will struggle to build up the fat savings on which the country is now comfortably slumbering.

If the business of a state is to project economic vigour, then Japan is failing badly. But if it is to keep its citizens employed, safe, economically comfortable and living longer lives, it is not making such a terrible hash of things.

Monday, January 10, 2011

Why Chinese Mothers Are Superior

Can a regimen of no playdates, no TV, no computer games and hours of music practice create happy kids? And what happens when they fight back?

By Amy Chua

Wall Street Journal, 8 January 2011

A lot of people wonder how Chinese parents raise such stereotypically successful kids. They wonder what these parents do to produce so many math whizzes and music prodigies, what it's like inside the family, and whether they could do it too. Well, I can tell them, because I've done it. Here are some things my daughters, Sophia and Louisa, were never allowed to do:

• attend a sleepover
• have a playdate
• be in a school play
• complain about not being in a school play
• watch TV or play computer games
• choose their own extracurricular activities
• get any grade less than an A
• not be the No. 1 student in every subject except gym and drama
• play any instrument other than the piano or violin
• not play the piano or violin.

Amy Chua with her daughters, Louisa and Sophia, at their home in New Haven, Conn. Photo: Erin Patrice O'Brien for The Wall Street Journal

I'm using the term "Chinese mother" loosely. I know some Korean, Indian, Jamaican, Irish and Ghanaian parents who qualify too. Conversely, I know some mothers of Chinese heritage, almost always born in the West, who are not Chinese mothers, by choice or otherwise. I'm also using the term "Western parents" loosely. Western parents come in all varieties.

All the same, even when Western parents think they're being strict, they usually don't come close to being Chinese mothers. For example, my Western friends who consider themselves strict make their children practice their instruments 30 minutes every day. An hour at most. For a Chinese mother, the first hour is the easy part. It's hours two and three that get tough.

Despite our squeamishness about cultural stereotypes, there are tons of studies out there showing marked and quantifiable differences between Chinese and Westerners when it comes to parenting. In one study of 50 Western American mothers and 48 Chinese immigrant mothers, almost 70% of the Western mothers said either that "stressing academic success is not good for children" or that "parents need to foster the idea that learning is fun." By contrast, roughly 0% of the Chinese mothers felt the same way. Instead, the vast majority of the Chinese mothers said that they believe their children can be "the best" students, that "academic achievement reflects successful parenting," and that if children did not excel at school then there was "a problem" and parents "were not doing their job." Other studies indicate that compared to Western parents, Chinese parents spend approximately 10 times as long every day drilling academic activities with their children. By contrast, Western kids are more likely to participate in sports teams.

From Ms. Chua's album: 'Mean me with Lulu in hotel room... with score taped to TV!'

What Chinese parents understand is that nothing is fun until you're good at it. To get good at anything you have to work, and children on their own never want to work, which is why it is crucial to override their preferences. This often requires fortitude on the part of the parents because the child will resist; things are always hardest at the beginning, which is where Western parents tend to give up. But if done properly, the Chinese strategy produces a virtuous circle. Tenacious practice, practice, practice is crucial for excellence; rote repetition is underrated in America. Once a child starts to excel at something—whether it's math, piano, pitching or ballet—he or she gets praise, admiration and satisfaction. This builds confidence and makes the once not-fun activity fun. This in turn makes it easier for the parent to get the child to work even more.

Chinese parents can get away with things that Western parents can't. Once when I was young—maybe more than once—when I was extremely disrespectful to my mother, my father angrily called me "garbage" in our native Hokkien dialect. It worked really well. I felt terrible and deeply ashamed of what I had done. But it didn't damage my self-esteem or anything like that. I knew exactly how highly he thought of me. I didn't actually think I was worthless or feel like a piece of garbage.

As an adult, I once did the same thing to Sophia, calling her garbage in English when she acted extremely disrespectfully toward me. When I mentioned that I had done this at a dinner party, I was immediately ostracized. One guest named Marcy got so upset she broke down in tears and had to leave early. My friend Susan, the host, tried to rehabilitate me with the remaining guests.

The fact is that Chinese parents can do things that would seem unimaginable—even legally actionable—to Westerners. Chinese mothers can say to their daughters, "Hey fatty—lose some weight." By contrast, Western parents have to tiptoe around the issue, talking in terms of "health" and never ever mentioning the f-word, and their kids still end up in therapy for eating disorders and negative self-image. (I also once heard a Western father toast his adult daughter by calling her "beautiful and incredibly competent." She later told me that made her feel like garbage.)

Chinese parents can order their kids to get straight As. Western parents can only ask their kids to try their best. Chinese parents can say, "You're lazy. All your classmates are getting ahead of you." By contrast, Western parents have to struggle with their own conflicted feelings about achievement, and try to persuade themselves that they're not disappointed about how their kids turned out.

I've thought long and hard about how Chinese parents can get away with what they do. I think there are three big differences between the Chinese and Western parental mind-sets.

Newborn Amy Chua in her mother's arms, a year after her parents arrived in the U.S.

First, I've noticed that Western parents are extremely anxious about their children's self-esteem. They worry about how their children will feel if they fail at something, and they constantly try to reassure their children about how good they are notwithstanding a mediocre performance on a test or at a recital. In other words, Western parents are concerned about their children's psyches. Chinese parents aren't. They assume strength, not fragility, and as a result they behave very differently.

For example, if a child comes home with an A-minus on a test, a Western parent will most likely praise the child. The Chinese mother will gasp in horror and ask what went wrong. If the child comes home with a B on the test, some Western parents will still praise the child. Other Western parents will sit their child down and express disapproval, but they will be careful not to make their child feel inadequate or insecure, and they will not call their child "stupid," "worthless" or "a disgrace." Privately, the Western parents may worry that their child does not test well or have aptitude in the subject or that there is something wrong with the curriculum and possibly the whole school. If the child's grades do not improve, they may eventually schedule a meeting with the school principal to challenge the way the subject is being taught or to call into question the teacher's credentials.

If a Chinese child gets a B—which would never happen—there would first be a screaming, hair-tearing explosion. The devastated Chinese mother would then get dozens, maybe hundreds of practice tests and work through them with her child for as long as it takes to get the grade up to an A.

Chinese parents demand perfect grades because they believe that their child can get them. If their child doesn't get them, the Chinese parent assumes it's because the child didn't work hard enough. That's why the solution to substandard performance is always to excoriate, punish and shame the child. The Chinese parent believes that their child will be strong enough to take the shaming and to improve from it. (And when Chinese kids do excel, there is plenty of ego-inflating parental praise lavished in the privacy of the home.)

Sophia playing at Carnegie Hall in 2007.

Second, Chinese parents believe that their kids owe them everything. The reason for this is a little unclear, but it's probably a combination of Confucian filial piety and the fact that the parents have sacrificed and done so much for their children. (And it's true that Chinese mothers get in the trenches, putting in long grueling hours personally tutoring, training, interrogating and spying on their kids.) Anyway, the understanding is that Chinese children must spend their lives repaying their parents by obeying them and making them proud.

By contrast, I don't think most Westerners have the same view of children being permanently indebted to their parents. My husband, Jed, actually has the opposite view. "Children don't choose their parents," he once said to me. "They don't even choose to be born. It's parents who foist life on their kids, so it's the parents' responsibility to provide for them. Kids don't owe their parents anything. Their duty will be to their own kids." This strikes me as a terrible deal for the Western parent.

Third, Chinese parents believe that they know what is best for their children and therefore override all of their children's own desires and preferences. That's why Chinese daughters can't have boyfriends in high school and why Chinese kids can't go to sleepaway camp. It's also why no Chinese kid would ever dare say to their mother, "I got a part in the school play! I'm Villager Number Six. I'll have to stay after school for rehearsal every day from 3:00 to 7:00, and I'll also need a ride on weekends." God help any Chinese kid who tried that one.

Don't get me wrong: It's not that Chinese parents don't care about their children. Just the opposite. They would give up anything for their children. It's just an entirely different parenting model.

Here's a story in favor of coercion, Chinese-style. Lulu was about 7, still playing two instruments, and working on a piano piece called "The Little White Donkey" by the French composer Jacques Ibert. The piece is really cute—you can just imagine a little donkey ambling along a country road with its master—but it's also incredibly difficult for young players because the two hands have to keep schizophrenically different rhythms.

Lulu couldn't do it. We worked on it nonstop for a week, drilling each of her hands separately, over and over. But whenever we tried putting the hands together, one always morphed into the other, and everything fell apart. Finally, the day before her lesson, Lulu announced in exasperation that she was giving up and stomped off.

"Get back to the piano now," I ordered.

"You can't make me."

"Oh yes, I can."

Back at the piano, Lulu made me pay. She punched, thrashed and kicked. She grabbed the music score and tore it to shreds. I taped the score back together and encased it in a plastic shield so that it could never be destroyed again. Then I hauled Lulu's dollhouse to the car and told her I'd donate it to the Salvation Army piece by piece if she didn't have "The Little White Donkey" perfect by the next day. When Lulu said, "I thought you were going to the Salvation Army, why are you still here?" I threatened her with no lunch, no dinner, no Christmas or Hanukkah presents, no birthday parties for two, three, four years. When she still kept playing it wrong, I told her she was purposely working herself into a frenzy because she was secretly afraid she couldn't do it. I told her to stop being lazy, cowardly, self-indulgent and pathetic.

Jed took me aside. He told me to stop insulting Lulu—which I wasn't even doing, I was just motivating her—and that he didn't think threatening Lulu was helpful. Also, he said, maybe Lulu really just couldn't do the technique—perhaps she didn't have the coordination yet—had I considered that possibility?
"You just don't believe in her," I accused.

"That's ridiculous," Jed said scornfully. "Of course I do."

"Sophia could play the piece when she was this age."

"But Lulu and Sophia are different people," Jed pointed out.

"Oh no, not this," I said, rolling my eyes. "Everyone is special in their special own way," I mimicked sarcastically. "Even losers are special in their own special way. Well don't worry, you don't have to lift a finger. I'm willing to put in as long as it takes, and I'm happy to be the one hated. And you can be the one they adore because you make them pancakes and take them to Yankees games."

I rolled up my sleeves and went back to Lulu. I used every weapon and tactic I could think of. We worked right through dinner into the night, and I wouldn't let Lulu get up, not for water, not even to go to the bathroom. The house became a war zone, and I lost my voice yelling, but still there seemed to be only negative progress, and even I began to have doubts.

Then, out of the blue, Lulu did it. Her hands suddenly came together—her right and left hands each doing their own imperturbable thing—just like that.

Lulu realized it the same time I did. I held my breath. She tried it tentatively again. Then she played it more confidently and faster, and still the rhythm held. A moment later, she was beaming.

"Mommy, look—it's easy!" After that, she wanted to play the piece over and over and wouldn't leave the piano. That night, she came to sleep in my bed, and we snuggled and hugged, cracking each other up. When she performed "The Little White Donkey" at a recital a few weeks later, parents came up to me and said, "What a perfect piece for Lulu—it's so spunky and so her."

Even Jed gave me credit for that one. Western parents worry a lot about their children's self-esteem. But as a parent, one of the worst things you can do for your child's self-esteem is to let them give up. On the flip side, there's nothing better for building confidence than learning you can do something you thought you couldn't.

There are all these new books out there portraying Asian mothers as scheming, callous, overdriven people indifferent to their kids' true interests. For their part, many Chinese secretly believe that they care more about their children and are willing to sacrifice much more for them than Westerners, who seem perfectly content to let their children turn out badly. I think it's a misunderstanding on both sides. All decent parents want to do what's best for their children. The Chinese just have a totally different idea of how to do that.

Western parents try to respect their children's individuality, encouraging them to pursue their true passions, supporting their choices, and providing positive reinforcement and a nurturing environment. By contrast, the Chinese believe that the best way to protect their children is by preparing them for the future, letting them see what they're capable of, and arming them with skills, work habits and inner confidence that no one can ever take away.

—Amy Chua is a professor at Yale Law School and author of "Day of Empire" and "World on Fire: How Exporting Free Market Democracy Breeds Ethnic Hatred and Global Instability." This essay is excerpted from "Battle Hymn of the Tiger Mother" by Amy Chua, to be published Tuesday by the Penguin Press, a member of Penguin Group (USA) Inc. Copyright © 2011 by Amy Chua.

Saturday, January 8, 2011

Faces of Enlightenment

Tricycle, Winter 2007

In 1977, as Americans searched for ways to cope with the residual anguish of the Vietnam War, twenty-five-year-old Don Farber began visiting the Vietnamese Buddhist Temple in Los Angeles to photograph its refugee community and learn about their Buddhist traditions. He eventually published the photographs as a collection, along with text by Rick Fields, in Taking Refuge in L.A.: Life in a Vietnamese Buddhist Temple. Over the years, Farber continued to visit Buddhist communities in the United States and traveled to Buddhist countries throughout Asia, producing three more books of photographs. His iconic portraits of Tibetan masters, including early images of the Dalai Lama, have earned him a reputation as a preeminent photographer of living Buddhism. Last year, Farber published Living Wisdom With His Holiness the Dalai Lama, a multimedia box set that includes over four hundred images of the Dalai Lama spanning twenty-five years. This past spring Farber and his family visited Tricycle’s New York office, where we spoke about his three decades of Buddhist photography.
—Alexandra Kaloyanides, Senior Editor

When did you start taking portrait-style photographs of Tibetan masters? In 1988, I did a portrait of Kalu Rinpoche in New Mexico, a few months before he died. That picture became very special to his followers and went everywhere. At Kalu Rinpoche’s funeral in Darjeeling, I took a photo of two little monks that also became very well known. Right after the funeral, I rushed back to L.A. to photograph the Kalachakra given by the Dalai Lama in 1989. Two of the portraits I made of His Holiness then became classics, even icons. In fact, one of them is on thousands of pendants worn by people all over the world.
What is it about the portrait format that makes these images so powerful and popular? I’ve always loved great portraiture; whether it’s Rembrandt or Irving Penn, I’m moved by a great portrait. So if I can make one that moves me, maybe it’ll move other people. One powerful element of the portrait is having the subject against black, without any distractions. My photograph of Kalu Rinpoche from 1988 was the first one I did like that. It was a very sacred time because he was dying and we were surrounded by his disciples. Also, because this was my first formal portrait of a Buddhist master, I took it really seriously. I used all of the skills that I had developed doing corporate photography, and went all-out with studio lighting and a Hasselblad camera. When I was ready to shoot, I was looking through the camera and Kalu Rinpoche looked kind of distant, like he was in meditation. So I looked at him above the camera and tried to catch his attention. He seemed to understand what I was trying to say, and I got him beaming for the camera.

In Sogyal Rinpoche’s foreword to my book Portraits of Tibetan Buddhist Masters, he writes about traditional portrait sessions of Tibetan masters, about how the master would reveal the nature of mind in that moment sitting before the camera to benefit his students. The photos would then be used by the students in their practice. So this tradition had been going on long before I got involved.

Image 1: Kalu Rinpoche, 1988. © Don Farber, buddhistphotos.com
Image 2: Yangsi Kalu Rinpoche, 1995. © Don Farber, buddhistphotos.com
Image 3: His Holiness the Dalai Lama, 1989. © Don Farber, buddhistphotos.com
Image 4: Vietnamese Monks in Los Angeles in 1977. © Don Farber, buddhistphotos.com



In that book you have the portrait of Kalu Rinpoche, and across from it you have a portrait of the boy who has been recognized as his incarnation. What was it like to meet the young Yangsi Kalu Rinpoche? Let me give a little background first. Kalu Rinpoche’s funeral in 1988 was one of the most extraordinary things I’ve ever seen. The quality of the devotion of the disciples, the depth of their practice, their chanting, and the focus on his coming back and taking rebirth was remarkable. On the forty-ninth day of the funeral, his disciples were carrying his body in a box, preserved in salt, up the hill in a procession with horns and gongs and everything. And at that moment we saw this amazing halo around the sun. His disciples took this to mean that he’d entered the Pure Land and that the universe acknowledged his enlightenment.
When I met the young boy recognized as Kalu Rinpoche’s incarnation in L.A. in 1995, he tapped my belly as if to say, “Hey, you’ve put on weight.” I don’t know if he meant it that way; we’ll never know. But I hear that he’s really a remarkable young lama, that he’s in a three-year retreat, and all signs show that he’s developing as a great master in his own right. When I hear of or meet these young incarnate lamas who seem to have this brilliance, it just adds a sense of faith that this is the real thing, this reincarnation of masters.

So are you a committed Tibetan Buddhist practitioner? Yes, I am but I’ve also been a Zen practitioner since I got into Buddhism and this continues to be an integral part of my life and photography. Also, when I’m in Asia, I do the practice of the monastery where I’m staying and photographing—whether it’s a forest monastery in Thailand or a temple in Japan. Actually, I’ve received most of my teachings from the Dalai Lama because of all the time I’ve spent photographing him. It’s hard because I’m trying to listen to the teachings as I’m looking through the lens, so I don’t get to totally concentrate as I’d like to.

Do you feel as though your priority in those situations is to be more the photographer than the student? I don’t think of them as separate, but these days I’m more the student because now His Holiness wears a sun visor when he’s teaching and it’s just not very photogenic. So I get more time to take notes.

Much of your published photography is of the Dalai Lama and other Tibetan masters. Do you have many opportunities to photograph Western or other Buddhist teachers? I continue to photograph teachers from other traditions when I get a chance, but I guess photographing the Western teachers is one of the weaker points in my study. It’s something I’d like to do more of because they’re really pioneers. As I get older, I think more about documenting the elder Western and Asian teachers. I’ve been concentrating on the Tibetans because Tibetan Buddhism doesn’t have a country. The Japanese, the Thai, the Vietnamese, they all have countries with cultures that keep their traditions intact. Buddhists in Tibet have been severely persecuted by the Chinese government while Tibetans in exile have been preserving Buddhism in India, Nepal, and now in America and Europe. And so capturing the Tibetan Buddhist masters who are the last to have received their training in Tibet while it was a sovereign nation has been my priority. Another reason I’ve been concentrating on Tibetans is that my wife, Yeshi, is Tibetan and has been my guide to the culture; I’m making a long-term study of Tibetan Buddhist life through the network of people we know. But there are also these great masters from other traditions who are really old and need to be photographed, so it’s something I hope to do more of.

When my first Buddhist teacher, Dr. Thich Thien-An, died from cancer, he was only 55. Having a close relationship with him and then experiencing the shock of his death taught me how precious these beings are. When you’re with an elder teacher, you know they’re not going to be around too long; they may be gone tomorrow. In the same way, when I’ve photographed Buddhist life in Asia, I know that, with the rapid changes in the world, these precious ways of life are also changing and being lost. So it has been a kind of mantra that I’ve had over the years—“Get it on film, get it on film.”

Image 1: Ani Panchen, 1997. © Don Farber, buddhistphotos.com
Image 2: Togden Amting, 1997. © Don Farber, buddhistphotos.com
Image 3: Two young monks in Darjeeling in 1989. © Don Farber, buddhistphotos.com

Friday, January 7, 2011

The Grim Threat to British Universities

 Simon Head
New York Review of Books
13 January 2011

Strategic Plan, 2006–2011
by the Higher Education Funding Council for England (HEFCE)
52 pp., available at www.hefce.ac.uk/pubs/hefce/2008/08_15/
 
The American Faculty: The Restructuring of Academic Work and Careers
by Jack Schuster and Martin Finkelstein
Johns Hopkins University Press, 600 pp., $45.00
 
Academic Capitalism and the New Economy
by Sheila Slaughter and Gary Rhoades
Johns Hopkins University Press, 384 pp., $45.00

A memorial service for Harold Macmillan, Oxford University, 1987

The British universities, Oxford and Cambridge included, are under siege from a system of state control that is undermining the one thing upon which their worldwide reputation depends: the caliber of their scholarship. The theories and practices that are driving this assault are mostly American in origin, conceived in American business schools and management consulting firms. They are frequently embedded in intensive management systems that make use of information technology (IT) marketed by corporations such as IBM, Oracle, and SAP. They are then sold to clients such as the UK government and its bureaucracies, including the universities. This alliance between the public and private sector has become a threat to academic freedom in the UK, and a warning to the American academy about how its own freedoms can be threatened.

In the UK this system has been gathering strength for over twenty years, which helps explain why Oxford and Cambridge dons, and the British academy in general, have never taken a clear stand against it. Like much that is dysfunctional in contemporary Britain, the imposition of bureaucratic control on the academy goes back to the Thatcher era and its heroine. A memorable event in this melancholy history took place in Oxford on January 29, 1985, when the university’s Congregation, its governing parliament, denied Mrs. Thatcher an honorary Oxford degree by a vote of 738–319. It did so on the grounds that “Mrs. Thatcher’s Government has done deep and systematic damage to the whole public education system in Britain, from the provision for the youngest child up to the most advanced research programmes.”1

Mrs. Thatcher, however, disliked Oxford and the academy as much as they disliked her. She saw “state-funded intellectuals” as an interest group whose practices required scrutiny. She attacked the “cloister and common room” for denigrating the creators of wealth in Britain.2 But whereas the academy could pass motions against Mrs. Thatcher and deny her an honorary degree, she could deploy the power of the state against the academy, and she did. One of her first moves in that direction was to beef up an obscure government bureaucracy, the Audit Commission, to exercise tighter financial control over the universities.

From this bureaucratic acorn a proliferating structure of state control has sprung, extending its reach from the purely financial to include teaching and research, and shaping a generation of British academics who have known no other system. From the late 1980s onward the system has been fostered by both Conservative and Labour governments, reflecting a consensus among the political parties that, to provide value for the taxpayer, the academy must deliver its research “output” with a speed and reliability resembling that of the corporate world and also deliver research that will somehow be useful to the British public and private sectors, strengthening the latter’s performance in the global marketplace. Governments in Britain can act this way because all British universities but one—the University of Buckingham—depend heavily on the state for their funds for research, and so are in a poor position to insist on their right to determine their own research priorities.

Outside of the UK’s own business schools, not more than a handful of British academics know where the management systems that now so dominate their lives have come from, and how they have ended up in Oxford, Cambridge, London, Durham, and points beyond. The most influential of the systems began life at MIT and Harvard Business School in the late 1980s and early 1990s, moved east across the Atlantic by way of consulting firms such as McKinsey and Accenture, and reached British academic institutions during the 1990s and the 2000s through the UK government and its bureaucracies. Of all the management practices that have become central in US business schools and consulting firms in the past twenty years—among them are “Business Process Reengineering,” “Total Quality Management,” “Benchmarking,” and “Management by Objectives”—the one that has had the greatest impact on British academic life is among the most obscure, the “Balanced Scorecard” (BSC).

On the seventy-fifth anniversary of the Harvard Business Review in 1997, its editors judged the BSC to be among the most influential management concepts of the journal’s lifetime. The BSC is the joint brainchild of Robert Kaplan, an academic accountant at Harvard Business School, and the Boston consultant David Norton, with Kaplan the dominant partner. As befits Kaplan’s roots in accountancy, the methodologies of the Balanced Scorecard focus heavily on the setting up, targeting, and measurement of statistical Key Performance Indicators (KPIs). Kaplan and Norton’s central insight has been that with the IT revolution and the coming of networked computer systems, it is now possible to expand the number and variety of KPIs well beyond the traditional corporate concern with quarterly financial indicators such as gross revenues, net profits, and return on investment.

As explained by Kaplan and Norton in a series of articles that appeared in the Harvard Business Review between 1992 and 1996, KPIs of the Balanced Scorecard should concentrate on four fields of business activity: relations with customers, internal business process (for example, order entry and fulfillment), financial indicators such as profit and loss, and indicators of “innovation and learning.”3 It is this last that has yielded the blizzard of KPIs that has so blighted British academic life for the past twenty years. Writing in January 2010, the British biochemist John Allen of the University of London told of how “I have had to learn a new and strange vocabulary of ‘performance indicators,’ ‘metrics,’ ‘indicators of esteem,’ ‘units of assessment,’ ‘impact’ and ‘impact factors.’” One might also mention tallies of medals, honors, and awards bestowed (“indicators of esteem”); the value of research grants received; the number of graduate and postdoctoral students enrolled; and the volume and quality of “submitted units” of “research output.”4
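For readers who have never met one, a Balanced Scorecard boils down to something very simple: a handful of Key Performance Indicators filed under Kaplan and Norton's four fields, each with a target against which the measured value is compared. The sketch below is mine; the particular indicators and numbers are invented for illustration and are not taken from Kaplan and Norton or from any HEFCE document.

    # A Balanced Scorecard reduced to a data structure: KPIs grouped under the four
    # fields named above, each checked against a target. All entries are invented.
    from dataclasses import dataclass

    @dataclass
    class KPI:
        name: str
        target: float
        actual: float

        def on_target(self) -> bool:
            return self.actual >= self.target

    scorecard = {
        "customers": [KPI("order fulfilment rate", 0.95, 0.97)],
        "internal business process": [KPI("orders entered and fulfilled per day", 500, 430)],
        "financial": [KPI("return on investment", 0.12, 0.11)],
        "innovation and learning": [
            KPI("research grant income, GBP millions", 2.0, 1.6),
            KPI("indicators of esteem (awards and honors)", 3, 4),
        ],
    }

    for field, kpis in scorecard.items():
        for kpi in kpis:
            status = "on target" if kpi.on_target() else "below target"
            print(f"{field}: {kpi.name} -> {status}")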

An especially dysfunctional aspect of the British system, on display throughout its twenty-year existence, is that the particular KPIs that the British universities must strive to satisfy have varied at the whim of successive UK governments. John Allen’s reference to “impact factors” points to the final lurch in the Labour government’s thinking before it lost the recent elections. The Brown government particularly wished to promote research that would have an effect beyond the academy, above all in business. In the words of David Lammy, Gordon Brown’s minister of higher education:
Since these impacts are things that happen outside the academic realm…[we] propose that the panels assessing [research] impact will include a large proportion of the end-users of research—businesses, public services, policymakers and so on—rather than just academics commenting on each other’s work.5
Since the only major segment of the British economy that is both world-class and an intensive user of university research is the pharmaceutical industry, any UK government invitation to business “end-users” to take a more prominent part in the evaluation of academic research amounts to an invitation to the pharmaceutical industry to tighten its hold over scientific research in the UK.

This is an alarming prospect given the industry’s long record of abusing the integrity of research in the interests of the bottom line, well documented by Marcia Angell in these pages. The leading British pharmaceutical multinational, GlaxoSmithKline, for example, features prominently in Angell’s research for its clandestine and improper payment to an academic psychiatrist in return for promoting the company’s drugs. For suppressing unfavorable research on its top-selling drug Paxil—to cite only one example—it agreed to settle charges of consumer fraud for a fine of $2.5 million.6

The new Conservative–Liberal coalition government that won the May election has endorsed the bureaucratic control of higher education by the central government, as did the conservative Thatcher and Major governments in the 1980s and 1990s. It is not yet clear whether the new government will adopt Brown’s “impact” KPIs, or come up with some new indicators of its own.

Whatever it does, this academic control regime with its KPIs will continue to apply as much to philosophy, ancient Greek, and Chinese history as it does to physics, chemistry, and academic medicine. The central government, usually the UK Treasury, decides the broad outlines of policy—the amount of money to be distributed to universities for research and the definition of “research excellence” that determines this allocation. The government has also set up a special state bureaucracy, situated between itself and the universities, that handles the detailed administration of the system. This bureaucracy, which continues under the new coalition, goes by the unappealing acronym HEFCE, or the Higher Education Funding Council for England.7

The intervention of the state in the management of academic research has created a bureaucracy of command and control that links the UK Treasury, at the top, all the way down to the scholars at the base—researchers working away in libraries, archives, and laboratories. In between are the bureaucracies of HEFCE, of the central university administrations, and of the divisions and departments of the universities themselves. The HEFCE control system has two pillars. The first is the “Research Assessment Exercise” (RAE), the academic review process that takes place every six or seven years when HEFCE passes judgment on the quality of the academic output of the UK universities during the previous planning period—and therefore on the funds eventually allotted to them. According to HEFCE’s rulebook for the RAE, the university departments must collect books, monographs, and articles in learned journals written by the department’s scholars.

For the assessment, four items of research output must be submitted to the RAE by every British academic selected by his or her university department. With 52,409 academics entered for the most recent RAE of 2008, over 200,000 items of scholarship reached HEFCE. For the previous RAE of 2001, this avalanche of academic work was so large it had to be stored in unused aircraft hangars located near HEFCE’s headquarters in Bristol.8 The items are then examined by the academics on panels set up by HEFCE to cover every discipline from dentistry to medieval history—sixty-seven in the 2008 RAE. Each panel is usually made up of between ten and twenty specialists, selected by members of their respective disciplines though subject at all times to HEFCE’s rules for the RAE. The panels must award each submitted work one of four grades, ranging from 4*, the top grade, for work whose “quality is world leading in terms of originality, significance and rigor,” to the humble 1*, “recognized nationally in terms of originality, significance, and rigour.”9
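To see how mechanical the bookkeeping is, here is a minimal sketch of the tallying an RAE-style exercise implies: each entered academic submits four items, each item receives one of the four grades named above, and a department's "quality profile" is simply the share of its items at each grade. The structure is my own reconstruction; HEFCE's actual funding weights are not described in the text, and none are assumed here.

    # Tally a department's quality profile from the grades awarded to its submitted items.
    from collections import Counter

    GRADES = ("4*", "3*", "2*", "1*")

    def quality_profile(grades):
        """Return the fraction of submitted items awarded each grade."""
        counts = Counter(grades)
        total = len(grades)
        return {grade: counts.get(grade, 0) / total for grade in GRADES}

    # Four items per entered academic; a hypothetical three-person department.
    submitted = ["4*", "3*", "3*", "2*",   # academic A
                 "3*", "3*", "2*", "2*",   # academic B
                 "4*", "4*", "3*", "1*"]   # academic C
    print(quality_profile(submitted))
    # roughly {'4*': 0.25, '3*': 0.42, '2*': 0.25, '1*': 0.08}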

The anthropologist John Davis, former warden of All Souls College, Oxford, has written of exercises such as the RAE that their “rituals are shallow because they do not penetrate to the core.”10 I have yet to meet anyone who seriously believes that the RAE panels—underpaid, under pressure of time, and needing to sift through thousands of scholarly works—can possibly do justice to the tiny minority of work that really is “world leading in terms of originality, significance and rigour.” But to expect the panels to do this is to miss the point of the RAE. Its roots are in the corporate, not the academic, world. It is really a “quality control” exercise imposed on academics by politicians; and the RAE grades are simply the raw material for Key Performance Indicators, which politicians and bureaucrats can then manipulate in order to show that academics are (or are not) providing value for taxpayers’ money. The grades are at best measures of competence, not of excellence.

Stan Laurel and Oliver Hardy in A Chump at Oxford, 1940

Nonetheless most British academics feel that they must go along with HEFCE, because its second pillar is the funding process that follows the announcement of the RAE results. It is this that gives the system real teeth, since academic departments receive less money if their RAE ratings fall short. There is a cleverness to these rules that points to their origins in the consulting world of McKinsey, Accenture, and Ernst and Young, and of course the Balanced Scorecard. As the deadline for the RAE approaches, university departments do not know the amount of the financial penalties to be imposed by HEFCE if they fail to receive the top grades; they know only that the penalties will be severe. Moreover, these penalties are linked to the performance of each academic entered in the RAE by his or her department. The pressure on academics in the months before the RAE deadline can therefore be intense. A friend at one of the humanities departments at Oxford faced them in the fall of 2007 when he struggled to finish for the RAE deadline a book that had been a lifetime project, with $120,000 of HEFCE funds thought to be at stake.

Things are usually done sotto voce at Oxford, and it didn’t take more than a couple of stiff, pointed telephone calls from my friend’s departmental “line manager” for the RAE (a fellow academic chosen to supervise his work for the RAE) to remind him of how much was riding on his performance. HEFCE’s financing process legitimizes this kind of micromanagement of research by both university departments and central university administrators. The system has therefore markedly shifted the balance of power in British universities from academics to managers. “Managers” is a category that now includes not only professional managers in central university administrations, but also those senior academics in university departments and divisions who have responsibility for submitting work to the RAE panels. They have become hybrid academics/managers and they have to worry about pleasing the agents of HEFCE, whether they like it or not.

What is it like to be at the receiving end of the HEFCE/RAE system, especially for a young academic starting out on his or her career? Here is the testimony of a young and very promising historian teaching at one of the newer universities in the London area:
The bureaucratization of scholarship in the humanities is simply spirit-crushing. I may prepare an article on extremism, my research area, for publication in a learned journal, and my RAE line manager focuses immediately on the influence of the journal, the number of citations of my text, the amount of pages written, or the journal’s publisher. Interference by these academic managers is pervasive and creeping. Whether my article is any good, or advances scholarship in the field, are quickly becoming secondary issues. All this may add to academic “productivity,” but is it worth selling our collective soul for?

A 2000 study carried out by the Universities UK, a body representing the vice-chancellors—executive heads of British universities—found that the frustration and demoralization expressed by the young historian were even then widespread among British academics. Its focus groups criticized “higher workloads and long hours, finance-driven decisions, remote senior management teams and greater pressure for internal and external accountability.”11 Some of the most telling testimony on the damage to British scholarship inflicted by the HEFCE/RAE regime has come not from an academic but from Richard Baggaley, the European publishing director of Princeton University Press, and an acute observer of the quality of British scholarly output.
Writing in the Times Higher Education Supplement in May 2007, Baggaley deplored what he saw as “a trend towards short-termism and narrowness of focus in British academe.”12 In the natural and social sciences this took the form of “intense individual and team pressure to publish journal articles,” with the writing of books strongly discouraged, and especially the writing of what he calls “big idea books” that may define their disciplines. Baggaley attributes this bias against books directly to the distorting effects of the RAE. Journal articles are congenial to the RAE because they can be safely completed and peer-reviewed in good time for the RAE deadline. If they are in a prestigious journal, that is the kind of peer approval that will impress the RAE panelists.

The pressure to be published in the top journals, Baggaley wrote, also
increases a tendency to play to what the journal likes, to not threaten the status quo in the discipline, to be risk-averse and less innovative, to concentrate on small incremental steps and to avoid big-picture interdisciplinary work.
In the humanities the RAE bias also works in favor of the 180–200-page monograph, hyperspecialized, cautious and incremental in its findings, with few prospects for sale as a bound book but again with a good chance of being completed and peer-reviewed in time for the RAE deadline. A bookseller at Blackwell’s, the leading Oxford bookstore, told me that he dreaded the influx of such books as the RAE deadline approached.

Baggaley doesn’t mention a further set of practices, above and beyond the RAE, that push British academics toward “short-termism and narrowness of focus” in their research. These are the reporting and auditing burdens imposed on them not only by HEFCE but also by its sister bureaucracies such as the Quality Assurance Agency for Higher Education (QAA) and by the administrators of the academics’ own university. This is the “pressure for internal and external accountability” to which the Universities UK refers in its report, and is known collectively as the “audit culture.” The audit culture requires academics to squander vast amounts of time and energy producing lengthy and pointless reports, drenched in the jargon of management consultancy, showing how their chosen “processes” for the organization of teaching, research, and the running of academic departments conform to managerial “best practice” as laid down by HEFCE, the QAA, or the university administration itself.

In HEFCE’s texts, words like “quality” and “excellence” have become increasingly empty. For the handful of British universities that are world-class—Oxford, Cambridge, and the various components of the University of London foremost among them—the HEFCE system is especially dangerous, because the reputation of these universities really does depend on their ability to do first-rate research, which is most threatened by HEFCE’s crass managerialism. In Britain there are scholars who will continue to produce exceptional work despite HEFCE and the RAE. But by treating the universities as if they were the research division of Great Britain Inc., the UK government and HEFCE have relegated the scholar to the lower echelons of a corporate hierarchy, surrounding him or her with hordes of managerial busybodies bristling with benchmarks, incentives, and penalties.

To what degree do such methods prevail in American academia itself? It would be surprising if practices so central to the American zeitgeist during the past twenty years had thrived only on foreign soil. In the US, higher public education is the responsibility of the individual states, and the power of private universities also ensures that there can be no American HEFCE exercising monopolist powers over the funding for research in all disciplines. The lifetime security of employment that academic tenure provides—and that no longer exists in the UK—gives the senior professors, who in 2007–2008 made up 48.8 percent of teachers in higher education,13 the power and confidence to stand up to university managers and head off an American version of the RAE. But their success in doing this also points to the dubious bargain that many of them have struck: relatively little teaching, especially undergraduate teaching, is usually required of them, and in return they are left in peace to carry on with their research.

The result has been that the burden of academic managerialism in the US has fallen on the teaching rather than the research side of university life, with university administrators achieving collectively what in the UK has been achieved by government fiat. The imposition of the industrial model on teaching, and especially the teaching of undergraduates, has been most damaging in the state universities below the elite level and in the two-year junior and community colleges that together, Jack Schuster and Martin Finkelstein remind us in The American Faculty: The Restructuring of Academic Work and Careers, make up the great majority of American institutions of higher education.

At this lower level the prolonged and continued decline of funding from state and local governments had had a pervasive effect even before the present financial crisis hit, forcing university managers to behave more and more like their corporate counterparts and to treat academic departments as “cost centers and revenue production units.”14 In the science, mathematics, and engineering departments of eleven public research universities examined by Sheila Slaughter and Gary Rhoades in Academic Capitalism and the New Economy, we find an assembly line where increased “student credit-hour production” has become the target of management’s “incentive based budget mechanisms.”15

Texas A&M University of College Station, Texas, provides an extreme example of a teaching factory in the making. For the academic year 2008–2009 each faculty member at Texas A&M was given a “profit and loss account” by the university administration, where the “loss” of the faculty member’s salary was or was not offset by teaching revenues brought in by the faculty member in the form of “semester credit hours.” Professors were in the red when their salary “loss” exceeded their teaching revenues. A professor’s research and publication record, and the value of research grants he or she might have received, did not figure in the profit and loss calculations. So Professor Chester Dunning, a tenured historian of Russia with a distinguished research and publication record, was nonetheless judged to be a $26,863 “lossmaker” for the university because his total salary plus benefits of $112,138 well exceeded the $85,275 he attracted in semester credit hours.16
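The arithmetic behind such a "profit and loss account", as described above, fits in a couple of lines. The sketch below is my own reconstruction of the calculation, using the figures quoted for Professor Dunning; it is not Texas A&M's actual accounting system.

    # Per-faculty "profit and loss": credit-hour teaching revenue minus salary and benefits.
    def faculty_profit_and_loss(credit_hour_revenue, salary_plus_benefits):
        """Positive means 'in the black'; negative makes the faculty member a 'lossmaker'."""
        return credit_hour_revenue - salary_plus_benefits

    # Figures quoted above for Professor Chester Dunning:
    dunning = faculty_profit_and_loss(credit_hour_revenue=85_275, salary_plus_benefits=112_138)
    print(f"{dunning:+,}")  # -26,863, the "loss" reported by the administration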

In Academic Capitalism and the New Economy Slaughter and Rhoades draw on interviews with department heads at public research universities to give a sense of what the mass production of teaching can mean at the classroom level: “The whole thing is marketing. The whole thing is how many bodies do you process. Administrators actually use these terms.” Again, “Chemistry 101 is like a fast dentist. It can generate lots of revenue.” But while faculty output (that is, its teaching load) was scheduled by the administrators to increase, faculty numbers and remuneration had to be strictly controlled. One department set up a professional master’s program that was inexpensive to run because it could be taught “partly or largely by adjuncts and even doctoral students.”

These adjuncts and doctoral students belong to the contingent academic workforce, the expanding army of academics employed on short-term contracts, many of whom work part-time and have little by way of job security or benefits. This workplace “restructuring” is the subject of Schuster and Finkelstein’s monumental study of employment trends in American academia, The American Faculty, an exhaustive examination of the available data. They show that the growth of the “contingent” academic workforce—i.e., nontenured and without secure benefits—over the past thirty years has been spectacular and surpasses anything to be found in the corporate world.

Between 1993 and 2003 the proportion of all new full-time faculty appointments employed on short-term contracts and without prospect of tenure increased from 50 percent to 58.6 percent of those hired. This “restructuring” has been going on since the mid-1970s and shows no sign of slowing down: between 1976 and 2005 the full-time contingent academic workforce grew by 223 percent, the part-time contingent workforce grew by 214 percent, while the tenured and tenure-track workforce grew by just 17 percent.

The growth of the contingent academic workforce brings the labor economics of the call center and the Wal-Mart store to higher education. With these contingent academics, few of whom have firm contracts, managers now have at their disposal a flexible, low-cost workforce that can be hired and fired at will, that can be made to work longer or shorter hours as the market dictates, and that is in a poor position to demand higher pay.

With its “profit and loss” statement for every academic on its payroll, Texas A&M has provided detailed statistical evidence (inadvertently, one suspects) showing why this expansion of the contingent academic workforce appeals so strongly to university administrators. In 2008–2009 in the Communications Department of Texas A&M’s Commerce, Texas, campus, Stephanie Juarez, untenured, was said to be four times more “profitable” for the university than her tenure-tracked colleague Tony Demars. This was not just because Juarez brought in more “student credit hours” than Demars, $113,960 versus $98,838, but also because, untenured, her cost to the university in salary and benefits was just over half that of the tenure-tracked Demars, $43,447 versus $82,969, yielding a “profit” for Texas A&M of $86,411.17

In the concluding chapter of The American Faculty, Schuster and Finkelstein list the costs and benefits of “faculty restructuring” and seem to be looking ahead to what is essentially a post-tenure academic world dominated by the contingent academic workforce.18 Their concept of the academic future includes greater professional stratification for academics, reflecting distinctions between tenured and nontenured faculty. It also includes replacement of academic disciplines by “client services” as the organizing principle for “instructional delivery” (i.e., teaching); the corporatizing of academic life, with the faculty serving as managed professionals and with less emphasis on academic values; a “renegotiation” of the social contract between the faculty and the institution, with declining mutual loyalty and increased administrative oversight of academic affairs; promotion of academic star systems undergirded by a vast new academic proletariat; and diminished protection of academic freedom with fewer positions protected by tenure.

Might the scale of the global financial crisis, driven by the targeting mania of the Balanced Scorecard and by automated management systems, shake the confidence of those who think that these very same methods should be applied throughout the academy? With the recession eating away at the budgets of universities on both sides of the Atlantic, the times are not propitious for those hoping to liberate scholarship and teaching from harmful managerial schemes. Such liberation would also require a stronger and better-organized resistance on the part of the academy itself than we have seen so far.
 
—December 16, 2010
  1. Statement by 275 Oxford academics opposing Mrs. Thatcher's nomination for an honorary degree, quoted in H.L.A. Hart, " Oxford and Mrs. Thatcher ," The New York Review , March 28, 1985. 
  2. Brian Harrison, "Mrs. Thatcher and the Intellectuals," in Twentieth Century British History , Vol. 5, No. 2 (1994), pp. 206–245 ff., pp. 224, 234, 237. 
  3. See particularly Kaplan and Norton, "The Balanced Scorecard: Measures that Drive Performance," Harvard Business Review, January–February 1992, and "Putting the Balanced Scorecard to Work," Harvard Business Review, September–October 1993. 
  4. John F. Allen, "Research and How to Promote It in a University," in Future Medicinal Chemistry , Vol. 2, No. 1 (2010), available at Allen's website at jfallen.org/publications/
  5. See Phil Baty, "Lammy Demands ‘Further and Faster' Progress Towards Economic Impact," Times Higher Education Supplement , September 10, 2009. 
  6. See Marcia Angell, " Drug Companies & Doctors: A Story of Corruption ," The New York Review , January 15, 2009, and Marcia Angell, The Truth About the Drug Companies: How They Deceive Us and What to Do About It (Random House, 2005). 
  7. Scotland, Wales, and Northern Ireland have their own mini-HEFCEs. The government-ordered "Independent Review of Higher Education and Student Finance," chaired by the former CEO of BP, John Browne, recommended in October 2010 that HEFCE be amalgamated into a "Higher Education Council," or super HEFCE, comprising all four bureaucracies responsible for higher education in the UK. The Browne Committee recommended no change in the HEFCE/RAE control regime described here, for the simple reason that the committee is dominated by the kind of academic bureaucrats and corporate efficiency experts who have either been building the HEFCE system over the past twenty years or are steeped in the management theories that have produced it. 
  8. Political Quarterly , Vol. 74, No. 4 (October 2003). 
  9. For definitions of all four gradings for the 2008 RAE see http://www.rae.ac.uk/aboutus/quality.asp . HEFCE has renamed the RAE scheduled for 2013 the "Research Excellence Framework," or REF, but I see no reason to go along with HEFCE's recourse to Orwellian newspeak and will continue here to refer to the procedure as the RAE, as it has been known throughout its twenty-year history. 
  10. See John Davis, "Administering Creativity," Anthropology Today, Vol. 15, No. 2 (April 1999). 
  11. Universities UK (UUK), "New Managerialism and the Management of UK Universities," CVCP/SRHS Research Seminar, October 12, 2000, on "Shifting Patterns of State–University Relations," quoted in Philip Tagg's Audititis website, www.tagg.org/rants/audititis.html . Tagg is a professor of musicology at the University of Montreal, but was formerly a lecturer at the University of Liverpool in the UK. 
  12. Richard Baggaley "How the RAE is Smothering ‘Big Idea' Books," Times Higher Education , May 25, 2007. 
  13. National Center for Education Statistics, Table 264: available at nces.ed.gov/programs/digest/d09/tables/dt09_264.asp
  14. Sheila Slaughter and Gary Rhoades, Academic Capitalism and the New Economy (Johns Hopkins University Press, 2006), p. 181. 
  15. Schuster and Finkelstein, The American Faculty , pp. 323–324; see also American Association of University Professors, "Increase in the Number of Employees in Higher Education Institutions, by Category of Employee, 1976–2005," available at www.aaup.org
  16. Stephanie Simon and Stephanie Banchero, "Putting a Price on Professors," The Wall Street Journal, October 22, 2010. For Professor Dunning's "profit and loss account" see "Texas A&M University System: Academic Financial Data Compilation (AFDC), FY 2009," p. 116. In a letter to the Texas A&M Board of Regents, dated September 13, 2010, Michael D. McKinney, M.D., chancellor of Texas A&M, told the regents that they could find the 265-page printout of the AFDC at www.tamus.ed/offices/communications/reports/afdc.pdf. This address now yields a "page not found" message, and the AFDC data is no longer available on the Texas A&M System's home page, where it has been withdrawn "as we continue to refine the data." The AFDC printout was made available to me by a faculty member at Texas A&M and is available from me at siheaduk@aol.com. 
  17. For Juarez and Demar's "profit and loss account," see Texas A&M's "Academic Financial Compilation Data, FY 2009," p. 177, and see footnote 16 for access to the document. 
  18. Schuster and Finkelstein, The American Faculty, pp. 340–341. For detailed descriptions of what it's like to be a member of the contingent academic workforce, see John W. Curtis and Monica F. Jacobe, AAUP Contingent Faculty Index 2006, available at www.aaup.org/AAUP/pubsres/academe/2006/ND/AW/ContIndex.htm. See also Michael Dubson, Ghosts in the Classroom: Stories of College Adjunct Faculty—and the Price We All Pay (Camel's Back Books, 2001).