I was introduced to a fabulous writer, Ludmilla Petrushevskaya, through her anthology There Once Lived a Girl Who Seduced Her Sister’s Husband and He Hanged Himself: Love Stories.* One caveat: I have not read any Soviet-era Russian authors before. For better or for worse, the one impression I have is that I encountered an utterly alien world. The people who populate Ms. Petrushevskaya’s stories are recognizable; the dramatic tension is in their making do with what they have. Their responses are dramatically logical and realistic in motivation. Their environment is claustrophobic and, if short of dehumanizing, then at least one that continually sandblasts the dignity of its inhabitants. It might be too simplistic a statement, but it is a wonder that the Soviet Union survived for so long, given the lives that Ms. Petrushevskaya portrays.

*I was asked to review this book by a representative of Penguin Publishing. I received an advance copy of the book and no other form of compensation.

The stories were translated by Anna Summers, who also penned the introduction. There we learn that, when young, Ms. Petrushevskaya and her mother were without a home. They “lived under a desk in her insane grandfather’s room, while occasionally renting cots in nearby communal apartments.” Later, when she grew older and had a child of her own, she was left a widow and had to care for her mother as well. This is simply the first of many details that astound. The other detail we need to know is that, following the Russian Revolution, cities (Moscow in particular) began burgeoning. The state’s response was to outlaw private housing. Once appropriated, the apartments were divided into the tiny rooms ubiquitous in these stories. Much as one envisions tenements and ghettos in early 20th-century New York and European cities, with extended families piled into small rooms, so these Russians lived. It is in this milieu that we encounter Ms. Petrushevskaya’s characters; her experience informs the stories in this collection.

I realize that short stories depend on stereotypes and a presumptive common experience, so that the writer can convey her ideas with an economy of words. I realize that, whatever reaction I have, the people for whom Ms. Petrushevskaya wrote would have had different thoughts. The shock I experienced may not occur to them, as the events may be commonplace to them. Anna Summers, who grew up in Moscow, in the very communal apartments where these stories take place, says as much in the introduction:

When her stories first circulated, the shock of recognition was terrible indeed among my parents’ generation. Petrushevskaya, it turned out, had been writing about their lives; it was their claustrophobic apartments that she described, their ungrateful children, their sick parents, their frustrated marriages.

These brief stories unfold in unrelenting fashion, without nonsense. Each sentence hurls the reader forward. The first story, “A Murky Fate”, exemplifies this: a nameless, unmarried woman invites a man to her studio apartment, and so she needs to convince her mother to vacate the premises for the night. The object of her affection is a 42-year-old man: married, without career prospects, in ill health. She knows that this is a one-night stand, but she wants more. The following day, she cannot bear the thought of facing her old life and proudly tells her co-worker that she has a man. Except now she realizes that she will be the other woman, pestering his home with phone calls. Despite this, she cannot help but be happy. The story is really not much longer than that (well, I exaggerate), but utterly compelling.

This is one instance of the lives that Ms. Petrushevskaya sketches, but it is illustrative of the whole collection. The feelings her stories engender are complex; to be outraged at the indignities of life in the Soviet Union would be too simple and beside the point. There is a fair amount of sadness, hopefulness, and some measure of happiness. It is more proper to say that her characters are simply living, not striving to transcend their system or institutions. It is this living, despite their circumstances, that makes the stories so moving.

The common thread in all these stories is that life is claustrophobic, an irony given the vast geography of Soviet Russia. Entire families are crammed into a small space, and familiarity breeds contempt. There is no physical or psychic room to grow; physical boundaries are reflected in the interior lives of Ms. Petrushevskaya’s characters. The men seem infantile, or drunkards, or both. For the most part, the people seem powerless to effect change, so they focus on what is in front of them. In some cases, they – usually women – are certainly aware (such as the nameless woman in “A Murky Fate”, or Pulcheria in “Eros’s Way”), and consciously pursue happiness that is temporary at best and illusory at worst. These characters are missing that spark of daring, the refusal to accept anything less than perfection, that comes in a world of almost unlimited potential. Instead, there is just endless gray, and one must simply find one’s meager allotment of happiness.

I think a similar set of stories, written by an American author about life in the tenements, would have the converse distribution. Those heroes and heroines would likely strive for the pinnacle of perfection and happiness, and the drama would come in their failing to do so. This is not an indictment of the way we might write stories in that setting; it simply reflects the approach we might take. Most characters, Russian or Soviet, American or otherwise, tend to end up in the same place.

About the happiest story, I think, is “Like Penelope.” Its heroine does in fact have that bit of daring. Oksana lives with her mother, a woman who has remained kind-hearted and generous, to her daughter’s dismay. The mother takes in her mother-in-law from her first marriage, Klava. The older women mean well, but Oksana seems ungrateful (a recurring motif in these stories). For her part, Klava was a tailor and wishes to make Oksana a wonderful dress to go out in on New Year’s Eve. Though there are no better options, Oksana scoffs at the reworked clothes Klava offers her. It is not materialistic selfishness that causes Oksana to act out: it is the simple observation that she will end up like Klava or her mother – old and alone, without romance or happiness, toil her only company. She feels no future. But it does sound like a lovely dress, with embroidery that, for shame, will not be noticed. Something moves her, and she puts on the dress and make-up. She is now physically changed. The metamorphosis completes when Klava’s son, Misha, shows up to visit (it was his clothes that had been reworked for Oksana). He is stunned by the now ravishing Oksana. She introduces herself as Xenia (a plainer name that she had wanted) and moves with elegance; she seized the moment, remade herself, and constructed her happiness. The final sentences sum it up:

Mama Nina observed her daughter and wondered where this new slow grace in her movements had come from, the twinkles in her laughing black eyes, the wave in her hair, the gorgeous dress…. Of course: she had made it herself.

That is the lesson most of Ms. Petrushevskaya’s characters do not learn. To be fair, most of them have been beaten down too much to see it; all that is left is to settle.

My favorites in this collection are “Two Deities” and “Eros’s Way”. The former is about the relatively successful life its heroine, Genya, leads – only for her to realize that all is for naught: she and her partner, Dima, have a son who seems weak-willed and will likely lead a dissolute life despite his relative advantages. But the way the story unfolds is just stunning. I had said that each of Ms. Petrushevskaya’s sentences propels the story forward. It is a simple, forward march; an uplifting story, really, to hear about Genya’s obstacles and how she overcame them. And just as we reach the summit, she tumbles us back down as we find out what the future holds for the couple. When I got to the end, it felt like Ms. Petrushevskaya had pulled the rug out from under me. It stunned me.

“Eros’s Way” is a bit of the opposite. Most of the story is about a woman navigating the treacherous waters of office politics. She does so successfully; the tension comes when she falls in love with the husband of her nemesis. The man seems like a layabout: a mathematician who might be lazy, or exploited, or both; a little bit mad, but who somehow finds his way to women who will mother him. Pulcheria, our heroine, may simply be the latest in a long line of surrogate mothers, but it is a role she takes gladly. There is a delicacy of feeling that Ms. Petrushevskaya preserves, along with some sharp observations along the way. The end is a little sad, rather tender, and made me sigh with relief that no further depredations await.

I really enjoyed seeing how these Russians lived in the Soviet Union. Anna Summers noted in her introduction that, despite some of the things the characters do in these stories, we tend to sympathize rather than to judge them harshly. I agree. Somehow, these stories are not self-pitying. Sure, some of them are sad, and some definitely would make you catch your breath in their naked, raw desperation. But the characters are memorable, they do their best, and we root for them.

I just read Arthur Krystal’s piece “Easy Writers” (behind a paywall) in the May 28, 2012, issue of The New Yorker, in which he examines the critical response to genre writers and attempts to explain the differences between literary writers and mere storytellers.

Every time I read a piece such as this, whether by high-brow critics or by writers, I am saddened by what seems to me their increasing irrelevance. I might not have the talent to be such a writer, but I can see it as nothing less than self-sabotage to tell your potential readers that 1) they do not have the intellect to appreciate your verbiage describing the mundane, and that 2) even if they think they do, they should not bother (as if making money from one’s book precludes one from writing a literary masterpiece – because, you know, it means that the language is somehow too easy and accessible to the proles).

Krystal rehashes the basic gripe against genre writers: by definition, they write with a formula in mind, and this formula is propelled by plot. The fact that a detective must catch the killer or that a lawyer must find evidence to exonerate his client limits the tools a genre writer can use. Because the writer needs to resolve the plot, the focus is less on dramatic closure or catharsis than on solving the case. More often than not (and the critics would argue, always), stereotypes reserved for short stories are transplanted into a full-length book. The result is that heroes and villains are good and evil, black and white, with nary a shadow cast to suggest a more complex reality.

One final point Krystal makes is that word-craft seems to be missing from churn-it-out, modern-day pulp (I mean, genre) writers. As the self-appointed guardians of quality (which I find ironic; a great many of today’s literary authors compare poorly to luminaries like Melville, Wharton, James, Thackeray, and Hemingway) continue to cycle towards irrelevance, in the very same issue of The New Yorker we find a brilliant surrogate for the plotless, psychological profile that Krystal suggests is the domain of the literary writer.

David Grann’s profile of William Alexander Morgan (“The Yankee Comandante”) is exciting, with all the elements of an adventure tale. Except that Grann also presents actual documentary information from the FBI, the CIA, and various intelligence personnel assessing Morgan’s usefulness to them. In other words, we have actual evaluations of Morgan’s psyche, or at least opinions from people whose livelihoods depend on making judgments about people.

My point here is that, with the wealth of historical and biographical works available, drawing on real events and the analyses of people of significance, do we really need self-congratulatory high-lit writers teaching us about the human condition? And even if we disagree with the authors of these biographies, isn’t it desirable to focus on actual historical personages, where we can rely on documentary evidence and not the imagination of a fiction writer?

Let us move on from this idea of the genre versus the human condition (or, plot versus characterization.)

Now, I happen to agree that, for the most part, published books are dreck; it isn’t that we need to elevate genre writing, but simply that we must recognize that good writing can come from many sources. It is the same heavy-handed message at the end of Ratatouille, when Anton Ego, Remy’s nemesis, recognizes that popularizing cookery does not elevate all cooks, but that it makes the ground fertile enough to nurture talent from non-traditional sources.

This point, I believe, is at the heart of Jodi Picoult’s criticism of the high-brow crowd. Popular writing might be awash in mediocrity, but we shouldn’t be surprised when we do find excellent writing from genre authors.

Hence we arrive again at Krystal’s thesis. He points to 20th-century literary giants – Auden, for example – who felt Raymond Chandler to be a high-calibre talent despite his slumming it. Krystal echoes this sentiment, which I find condescending. Why should we grade Chandler’s writing on a curve, judging him only against his peers? If literary standards were actually objective, then one could simply judge all authors by the same criteria for good writing.

Either Chandler is a good writer, or he isn’t.

I was left annoyed by Krystal’s piece, not because of his opinion, but because he seems unwilling to follow the high-lit stance to its conclusions. Krystal identifies both the type of novel and the writing style as prerequisites for worthy literature: we must delve into the psychology of a character using highly stylized language.

I would argue, as do most high-brow writers and critics, that the beauty of language is paramount (naturally, we differ in the specifics). Where we truly differ is in the idea that plot and story must take a back seat to laying bare the psychology of protagonists.

I wanted to have my say, but Charlie Stross has made similar points on his blog, and better.

Interestingly, he launched some salvos against the perception that science fiction can be defined by the presence of technobabble and spaceships. His point can be summed up by this quote:

In fact, those people who are doing the “big visionary ideas about the future” SF are mostly doing so in a vacuum of critical appreciation. Greg Egan’s wonderful clockwork constructions out of the raw stuff of quantum mechanics, visualising entirely different types of universe, fall on the deaf ears of critics who are looking for depth of characterisation, and don’t realize that in his SF the structure of the universe is the character. On Hannu Rajaniemi’s brilliant “The Quantum Thief” — I have yet to see a single review that even notices the fact that this is the first hard SF novel to examine the impact of quantum cryptography on human society. (That’s a huge idea, but none of the reviewers even noticed it!) And there, over in a corner, is Bruce Sterling, blazing a lonely pioneering trail into the future. Chairman Bruce played out cyberpunk before most of us ever heard of it, invented the New Space Opera in “Schismatrix” (which looked as if nobody appreciated it for a couple of decades), co-wrote the most interesting hard-SF steampunk novel of all, and got into global climate change in the early 90s. He’s currently about ten years ahead of the curve. If SF was about big innovative visions, he’d need to build an extension to house all his Hugo awards.

Can you imagine? He’s criticizing reviewers (but also readers) who ignore the possibility that another approach to high-brow fiction might lie in deep characterization of the context surrounding the actors in a story.

In the same way that high-lit authors seem intent on showing us that humans are complicated, one can imagine a writer describing complex interactions with technology, with societal changes, with ethical dilemmas in medicine, and so on. Just as people are not saints or demons, our relationship to our culture is not simple. That an author chooses to make prominent a battle scene before detailing the devastation of his hero’s psyche does not mean he has become a writer of war stories.

Clearly, most critics do already focus on language; Gary Shteyngart and David Foster Wallace are two examples. The blending of science fiction and absurdist elements into their shrewd commentary on society hasn’t hurt their acceptance. Onionskin™ pants? Augmented-reality updates on one’s consensus f***-ability? Paraplegic Canadian commando assassins? Ending a novel with a firefight? I think Super Sad True Love Story and Infinite Jest were actually enjoyable stories, in addition to being showcases for the talents of their authors.

My problem with the so-called gatekeepers of literature is that they confuse their form with what they wish to achieve through fiction. Their form is the novel; what they wish to achieve is an understanding of human nature. Clearly, there are many paths to this understanding: biographies, long-arc histories, and studies of society are some of the other means. Since a novelist is not a scholar, the burden of proof, as it were, is relaxed.

Instead, the means of demonstrating human truth lies in the aesthetics and beauty of language; perhaps bitter and disquieting ideas can be made palatable by a bit of storytelling, of entertaining. To assume that the whole enterprise can succeed only when we drain the pleasure from novels (like seeing interesting things happen to interesting people) is to mistake the novel for a dry social-science text. If that is the goal, then there is no point in fiction at all.

Over dinner at Bobby Flay’s Mesa Grill, I recommended Gordon Shepherd’s book Neurogastronomy to a friend, a foodie. He seemed really interested, having read Hervé This’s Molecular Gastronomy and other books like it. I’ll say here what I told my friend.

Shepherd brings both expertise and experience to the subject, having worked in olfaction for many years. The people he works with are my friends and peers, as I, too, worked in olfaction until recently. The way this book is presented is a model I wish to emulate: a synthesis of scientific findings and their meaning for us. By combining these elements with clear descriptions of the experiments involved, Shepherd is able to place the mechanics of smell within the context of odor and flavor perception: how the system works, how quality of life can be impaired, possible evolutionary consequences, and ultimately how we can subvert human flavor perception to improve our diet, nutrition, and, yes, pleasure.

Gordon Shepherd has made a huge impact in neurophysiology and in the field of olfaction. I think it is wonderful that he has written this book to emphasize that olfaction is an important sense, one that shapes human culture through its role in flavor perception. This is a direct counter to the notion that the human (and primate) olfactory system compares “poorly” against other sensory systems because the amount of brain space devoted to processing olfactory data seems so small. It also counters the detector-based argument that, because other mammals have both a greater number and a greater variety of odor sensors, they must be better smellers than humans.

For me, I also had the vicarious thrill of seeing people I know depicted in a book meant for a wider audience.

***

From the standpoint of a neuroscientist, it was refreshing to see what a distinguished scientist views as the most important pieces of neurophysiological evidence, and how they fit into the concept of flavor perception. This is the sort of curation I am such an enthusiast for. We have a wealth of data, and scientific reviews are often a great place to begin reading about a field. Reviews are as much about synthesizing existing scientific threads as about historical perspective and charting future research directions (i.e., what hasn’t yet been addressed). With so much great writing today, forty or fifty years of experience may not be necessary to provide proper context for a given research environment.

With that said, it is always nice to see someone with the stature of Gordon Shepherd present such a broad picture of the field and to hew closely to underlying research.

He spends the first chapters discussing anthropological findings, laying the groundwork for the importance of flavor in shaping human culture. It seems that cooking – with its transformation of food at the molecular level and its unlocking of huge stores of nutrition – provided a huge impetus for humans to retain a strong sense of smell. The rest of the book recounts both his own and others’ contributions to the field of olfaction.

His presentation of neural activity is that the brain works by encoding and extracting information as literal, physical patterns of activity. Evidence from open-brain surgery, anatomical tracing, and functional imaging supports this idea. In each case, patterns arise from ephemeral neural activity, grouped into physically discrete locations in the brain. Hence one hears about the visual and auditory cortices, the somatosensory cortex, the hippocampus as a site of early memory formation, and so forth.

For the olfactory system, this is also true: at increasing levels of topologic precision, we can say that the main olfactory processing structures include the olfactory bulb, the olfactory cortex, and the orbitofrontal cortex. As we progress to more microscopic descriptions, we can describe groups of active neurons within these structures. The whole point of the brain’s wiring is to funnel external stimuli into combinations of activated neurons.

The connections between these neurons tend to lead to reactivation of the same groups of neurons by the same stimulus. Brain centers located downstream then operate on these patterns: recognizing them, storing them, retrieving them, and matching them. At some point, this stream of information is combined with the other sensory inputs (hearing, vision, taste, and touch), resulting in higher-order, conscious thoughts.

What I say next is not meant as a criticism but as a way to understand why Shepherd is so effective at presenting the science behind “neurogastronomy”. He left out a significant area of research: timing. A full description of how the brain works will have to include not only which neurons are active, but when they are active. There simply is not enough space in such a book to detail the full underlying mechanism of smell: the identity of active neurons, how they are connected, and the timing of their activity.

My old boss (among others) combined a smell-discrimination decision-making task with simultaneous neural recordings. He and others have shown that within a single sniff – on the order of a quarter of a second – a rat can gain sufficient information to make a decision. Such a system likely functions on a time-based code. This is a huge part of understanding how the brain works.

Yet I have to say, it isn’t necessary to Shepherd’s story. Shepherd paints a compelling picture by simply presenting neuronal activity as a pattern, allowing him to describe a huge arc in a few strokes. But this stroke does reveal his thinking: he clearly assigns a central role to the anatomical organization of the brain, which groups neural activity into patterns. At ever more minute levels, the specific connections underlie the feature-extraction processes going on in the brain. In a sense, the fact that neurons activate at some point represents the mechanics of actualizing information processing that we had already determined to take place in those neurons, based simply on how they are connected.

Depending on your viewpoint, when the neurons activate may prove important in these processes. Is timing then a peripheral phenomenon, since the most important observation is how these neurons are wired, or could the same wires actually transmit different “information”, depending on the sequence of activity? These are questions researchers continue to spend entire careers answering.

I can imagine a different investigator writing the same book but emphasizing the ephemeral nature of neural ensembles, where the real significance may lie in the timing of the activity. In this case, the sequence in which neurons fire, how their activity coincides, and the precise synapses activated in downstream neurons are just a few of the parameters that affect perception.

It isn’t a matter of discrediting one view or the other; it is just a point about presentation. In no way am I suggesting that the viewpoint Shepherd puts forth is deficient, merely that he probably made an editorial decision to provide a coherent framework for the edification of non-scientists. I really admire this book as an exemplar of a rigorous work meant for popular consumption. Most importantly, I feel that he has described the wealth of experimental detail behind how current theories of olfaction and flavor perception were arrived at.

I spent most of my reading time on back issues of The New Yorker that had been accumulating on my Nook Color since January. I found a few gems:

  • a Jonathan Franzen piece (2/13/2012 – 2/20/2012) on Edith Wharton’s “Big 3” novels,
  • a Jonah Lehrer essay (3/5/2012) on the mathematics of altruism,
  • an Adam Gopnik discussion (4/3/2012) of the philosophy of Albert Camus, and
  • a Ken Auletta piece (4/30/2012) on how Stanford University resembles a tech incubator more than a school.

I read Franzen’s The Corrections; I never thought much of it. He represents the best of the worst kind of modern fiction, confusing the ubiquity of the mundane with significant insight into a common human condition. I think Franzen wasted his talents; it counts for something to have developed five unique personalities, each one an asshole, but each in his or her own way. His piece on Edith Wharton, however, brings sensitivity to literary nuance, a deep reading, and historical context to an overview of her works and their significance. In short, I really liked his essay; it felt like I learned something.*

Franzen makes a connection among Wharton’s great novels – The House of Mirth, The Custom of the Country, and The Age of Innocence – arguing that Wharton maintains our interest by drawing upon our capacity for sympathy. Ironically, neither Wharton herself nor her protagonists, as Franzen reads them, are sympathetic characters.

When asked, I cite Neal Stephenson’s Cryptonomicon and The Age of Innocence as my favorite novels. The former is somewhat stereotypical for a person of my background: I am a scientist, I like mathematical modeling and games, I enjoy programming, I actually like reading and writing about science and math, and I greatly admire feats of mega and micro-engineering.

Usually, I relegate things emotional to the sphere of the other – that is, our Weltanschauung (philosophical, mystical, and religious perceptions, and so-called human truths) belongs, to my mind, in the realm of non-science, opinion, and meaning. As I have written before, I believe this is not a slight; it’s just that how we engage with empirical, materialistic Truth is every bit as important as – perhaps more important than – what that Truth may be. That I think so highly of The Age of Innocence is because its theme, with a big payoff near the end, exemplifies the very best of this fuzzy but rich and vibrant realm.

I would not have characterized The Age of Innocence as a work that draws on our capacity to identify; the plot is simply one of love requited but unconsummated. I can see how the reader might be drawn in, rooting for the eventual union of Newland Archer and Ellen Olenska. Regardless of how one might see Archer, I argue that he is the prototype of Don Draper of Mad Men. Archer is dissatisfied with his life, and although he does not transgress the oath of marriage, he has, in an emotional sense, already left his wife for another woman. Don Draper is simply the apotheosis of this: a man who indulges his every desire. Archer is the precursor, very much embedded in the social forms of his time. His emotional conflict can be viewed as tragic or shameless.

What I find most compelling about The Age of Innocence – the thought and feeling that comes back to me time and again, despite my having read it many years ago – is that in the end, we find out that Archer’s wife, May, knew, and even appreciated him for having stood by her and built a life together. In other words, she understood his sacrifice. Her reaction is rather traditional – and fantastical in our modern world – in that she is so forgiving and actually thanks him for what can only be described as the only proper course of action.

No, the thing I find unforgettable is that Archer’s wife knew. She understood him as much as one human being can understand another. She sympathized with her husband; she knew him fully and deeply. To be fair, I think she might also have appreciated that Archer did not cause a scandal or rupture her standing in their community – she is fully a creature of Gilded Age high society. That is a recurrent theme in Wharton’s novels: the rich have customs and formalities that must be attended to, and her protagonists all try to enter that society or to make a life within it. Regardless, May’s understanding captures fully what novels should do for us: give us an opportunity to appreciate the mind and soul of another.

I remember feeling rather ambivalent about the novel until that scene. Part of it is because Archer’s behavior is atrocious. If he did not have the courage to buck the pressure of making a match approved by his peer group, it can only be seen as cowardly for him to become an adulterer. That is, he would be having it both ways: conforming to the customs while also satisfying his desires. Seeing the novel as a romance (between Archer and the Countess) seems to pervert that very ideal.

Instead, the would-be adulterers remained platonic – barely, and only after May decided she needed to defend her hearth. There is something to be said for not committing a physical sin and for honoring the oath one takes. It thus surprised me to find the ending so cathartic; I felt relieved and elated that May realized all of this. I hate to say it, but I did think it would have been a waste if all this had remained in Archer’s and Olenska’s heads. Having May realize it helped the novel transcend its tawdriness. It became a tale of sacrifice, such as passed for it in New York high society.

*My reaction to it reminds me of another writer whose fiction I did not care for: Margaret Atwood. I had written, about The Handmaid’s Tale:

I didn’t have a problem with this book, and then I did. The language in this book is stilted, simplistic, and monosyllabic, and at first I thought that was great. The protagonist is a woman who is kept down, and the main tool of her oppression is the withdrawal of education. I had actually thought the language reflected the mind of the handmaid. Then I thumbed through another Atwood book and, to my chagrin, found she wrote in that same stilted voice, and I revised my feelings for this book.

I had neglected to mention that I felt her tale to be overwrought, excessive, and without nuance; it is as if her talents were better spent on expository works and not novels. My opinion received some validation when I encountered her essay in Seeing Further, a retrospective and appreciation of the Royal Society. Her essay had the same qualities as Franzen’s: erudite, nuanced, funny, and sharp. After this essay, I wound up reading Oryx and Crake. Despite the obvious nature of the cautionary tale against the abuse of science and the concentration of power, I felt that the ending was haunting and the prose lively.

Well, I’m not the only one saying this (sadly, it’s behind a paywall):

The melding of science and statistics has often propelled major breakthroughs. Last year’s Nobel Prize in Physics was awarded for the discovery of the accelerating expansion of the universe. That discovery was facilitated by sophisticated statistical methods, establishing that the finding was not an artifact of imprecise measurement or miscalculations. Statistical methods also allowed the trial demonstrating that zidovudine reduces the risk of HIV transmission from infected pregnant women to their infants to be stopped early, benefiting countless children. Statistical principles have been the foundation for field trials that have improved agricultural quality and for the randomized clinical trial, the gold standard for comparing treatments and the backbone of the drug regulatory system.

I have spent a little time trying to present ways that scientists and laymen can engage each other. If one is calling for policy change – raising the level of public funding, or pitching statistics as a viable career choice – perhaps Science should have made these articles freely available? Otherwise, Marie Davidian and Thomas Louis, the authors of this editorial, are preaching to the choir.

****

This is as good a time as any to present my thoughts on Stephen Baker’s The Numerati. It is a serviceable introduction to the arenas where statistical analyses of large data sets are gaining prominence. Despite the title, though, the book does not really present the leading scientists and statisticians at the forefront of converting our analog lives into computer-friendly numbers. I would also have liked to see the book grapple more with issues such as how non-statisticians should come to terms with being quantified and analyzed.

The book presents this numerification without judgment; it is simply a description of what is already happening. From Mr. Baker’s matter-of-fact presentation, we can surmise that current uses of behavior quantification mostly serve to market products to us or to track us. Politicians get to slice us into finer demographics; true believers are ignored while swing voters are targeted. Stores entice consumers to join rewards programs; the information that businesses gain is cheaply bought. The debris of our personal lives is vacuumed up by governments intent on identifying the terrorists among us. The workplace becomes more divided, first by cubicle walls and then by algorithms designed to flag malingerers.

Mr. Baker does not dwell on how power resides with those who have access to the information, although most of the researchers seem to think that their analyses will be used by laymen as much as by themselves. He presents two dissenting voices. One is a detective who deploys the latest face-recognition software for casinos; he has become an advocate for the privacy that citizens deserve – it might be uncomfortable to receive targeted ads that presume too much about our behavior. The other is Baker himself, though only in the narrow scope of how numerification affects his own industry. He sees value in editors acting as curators of news; otherwise, that role falls to the reader, who may be overwhelmed by the sheer number of news items. More likely, that reader will defer to search engines (the very things supplanting editors).

Mr. Baker does not really push this issue, but search engines do not have to be value-neutral. They can very well reflect the political biases of their owners, or the ranking function itself might be a value-add meant to drive up revenue (don’t forget, Google makes money by selling ads). People tend to think of software as objective and without bias because it is based on algorithms, machine rules, and mathematical models. One interesting aspect of numerification is that it in no way dismisses the need for judgment. This is especially important in selecting the mathematical rules to use, the filters and gates one applies to the data, and the interpretation of results. A computer can crank out numbers, but humans decide which formulas to use.

A short while ago, I was discussing this very issue with a director of analytics at a marketing firm. We got to talking about cluster analysis; we both felt that while its results are perfect for what we want to do with our data, there is a surprising amount of ambiguity involved. In MATLAB, one function used for finding groups of data points performs k-means clustering. To use it, you have to specify how many clusters the function should slice the data into. The process itself is straightforward: a number of positions are selected at random as cluster centers, and the algorithm then iteratively assigns each data point to its nearest center and repositions each center at the mean of the points assigned to it. Everything about it works as advertised, except for the part where the user needs to tell the program how many clusters there are. Not much help if you are looking for a computational method to find the clusters “objectively.” The director and I moved on to other topics, such as formulating machine rules and vetting them.
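Here is a minimal sketch of that ambiguity – in Python/NumPy rather than MATLAB, with invented data – showing the algorithm happily carving the same two-blob data into however many clusters you ask for:

```python
import numpy as np

def kmeans(points, k, n_iter=50, seed=0):
    """Plain k-means: the caller must supply k, the number of clusters."""
    rng = np.random.default_rng(seed)
    # Start from k positions chosen at random among the data points.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center...
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points
        # (keeping the old center if a cluster ends up empty).
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
# Two well-separated blobs -- but nothing stops us from asking for k = 5.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
for k in (2, 5):
    centers, _ = kmeans(data, k)
    print(f"k={k}:", np.round(centers, 1))
```

Nothing in the procedure complains when k = 5; it dutifully splits each blob into sub-clusters. The number of clusters remains a human judgment call.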

Let’s leave aside the loss of dignity and individuality entailed in numerification; the subtler points not addressed in The Numerati are how models are built and how metrics are validated. This touches directly on the things that can go wrong with numerifying society. The most obvious example is bad data – typos or out-of-date information – leading to misclassification. It’s not identity theft, but the result is the same: some agent attributes some notoriety to the wrong person, and the victim gets stuck with a bill or, worse, is labeled a terrorist and detained by the authorities. Another possible error is that the wrong metric is used, leading to even more inefficiency than had the numbers been ignored. Simply put: are the measures used really the most relevant ones, and how likely are we to settle on the wrong formula?

Dave Berri, a sports statistician, has been a bellwether in this regard. He has spent significant space in two books, The Wages of Wins and Stumbling on Wins, as well as on his website and on the Huffington Post, documenting how even people with a vested interest in using statistics do not always come to scientifically consistent conclusions. He is able to use sports statistics to give us insight into the decision-making process. His observations and models – and, frankly, most models in general – have been met with two criticisms: 1) math models cannot capture something as complicated as basketball, and 2) his findings deviate from existing opinion – that is, his models seem wrong. Answering these criticisms gets at some issues in data mining and correlation analysis that The Numerati avoided.

***

Both objections speak to the confusion people have between determinism and the predictions one can make with a model. First, there are actually few deterministic physical laws. Quantum mechanics happens to be one, but its effects can only be seen in reduced systems – at the level of single electrons. As we include more of the universe, at the scales relevant to human experience, our deterministic laws take on a more approximate character: we begin to model empirical effects rather than derive solutions from first principles (with a few important caveats). The point is that we can use Newton’s laws just fine in sending our space probes to Jupiter, with the laws modeled after observation. We do not need a unified field theory to figure out how the subatomic particles of a spacecraft’s molecules interact with the like particles making up Jupiter in order to aim.

Models based on empirical findings can only predict events prescribed within the boundaries of observation. This is even more true of statistical models based on data mining. New conditions can arise that invalidate the assumptions (or the previous observations) used to build the model. The worst-case scenario is when some infrequent catastrophe occurs – Nassim Nicholas Taleb’s “black swan” event.

That’s part of the art of working with models: we must understand their limitations as well as their conclusions. As a system becomes more complex, so (generally) do our models. The complexity of a model is linked both to the system and to the precision we require. For example, one can model Texas Hold’em in terms of the probability of receiving a given hand and derive optimal betting strategies. But that ignores the game-theoretic aspects: players use information gained during the course of play, bluff, and alter their strategies – sometimes out of plain ignorance. There are also emotional aspects of play that might lead players to deviate from optimal strategy or miscalculate probabilities. For models based on observations, predictions pertain to the likelihood of outcomes. Over many trials, I would expect the frequency of outcomes to conform to the model, but I cannot predict the immediate next result. It’s the same as knowing that 7 is the most common total when throwing dice in craps; I still can’t say whether the next throw will in fact be a 7.
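The craps point is easy to demonstrate in a few lines of Python (a toy simulation, not anything from Baker’s book):

```python
# A toy illustration of frequency vs. prediction: over many throws the
# frequencies match the model, but the model is silent on the next throw.
import random
from collections import Counter

random.seed(42)
throws = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000)]
counts = Counter(throws)

print("Most common total:", counts.most_common(1)[0][0])    # 7
print("Observed P(7):", round(counts[7] / len(throws), 3))  # ~0.167, i.e. 6/36
print("And yet the next throw is anyone's guess:",
      random.randint(1, 6) + random.randint(1, 6))
```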

So why build these models? Because the process forces us to make our ideas explicit. It allows us to specify things we know, things we wish we knew, and possibly to identify things we were ignorant of. Take sports as an example. Regardless of what we think about statistics and models, all of us already have one running in our heads. In basketball, we can actually see this unspoken bias: general managers, sportswriters, and fans tend to rate players as above average the more points per game they score, without consideration of other contributions like steals, blocks, turnovers, fouls, rebounds, and shooting percentage. We know this from empirical data: the pay scale of basketball players (controlled by GMs), MVP voting (by sportswriters), and All-Star selections (by coaches and fans). The number of points scored best explains why someone is chosen as a top player.

The upshot is that humans have a nervous system built to extract patterns. This is great for creating heuristics – general rules of thumb. Unfortunately, we are influenced not only by the actual frequency of events but also by our emotions. Thus we do not actually have an objective model, but one modified by our perceptions. In other words, unless we take steps to count properly – that is, to create a mathematically precise model – we risk giving our subjective biases the veneer of objectivity. This is worse than having no model at all; we place our confidence in something that will systematically give us wrong answers, rather than realizing we simply don’t know.

There are even more subtle problems with model building. Even having quantifiable events and objective observational data does not guarantee a good model. This problem can be seen in the NFL draft; the predictors that coaches use – this time published and made explicit, such as Wonderlic scores and NFL Combine observations – do not have much value in identifying players who will even be average, let alone superstars. Berri has presented a lot of data on this, ranging from original research published in economics journals to more informal channels such as his books and web pieces. So how do we conclude that we have a good model?

***

Here is where it gets tricky. In sports, we can identify a good output metric, such as a team’s win-loss record. Starting from scratch, you might argue that a winning team must score more points than its opponent. You would test this with a simple linear regression, and you would find that it is in fact the case. The first model is an obvious one: score more points than your opponents and you win. So obvious that it sounds like the definition of a win. It then becomes apparent that the win-loss record is a “symptom”: for a given game, players do not make wins, they make points. Points scored and points against (the point differential) become the more elemental quantities.
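That regression is simple enough to sketch. The numbers below are invented for illustration, but the qualitative result – point differential tracks wins almost linearly – is what you would find with real season data:

```python
# A sketch of the regression described above, using invented numbers:
# regress season wins on season point differential for a toy league.
import numpy as np

# Hypothetical (point differential, wins) pairs -- not real NBA data.
diff = np.array([-500.0, -250.0, -100.0, 0.0, 150.0, 300.0, 450.0])
wins = np.array([20.0, 28.0, 35.0, 41.0, 48.0, 55.0, 62.0])

slope, intercept = np.polyfit(diff, wins, 1)  # degree-1 (linear) fit
r = np.corrcoef(diff, wins)[0, 1]
print(f"wins ~ {intercept:.1f} + {slope:.3f} * differential (r = {r:.3f})")
```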

This isn’t too novel a finding; most sports conform to some variant of Bill James’s Pythagorean expectation (so named because its terms resemble the Pythagorean relation a^2 + b^2 = c^2). If we start from the assumption that everything a player does to help or hurt his team shows up in points, then we can begin to ask whether all points are equal and whether other factors help or prevent teams from scoring. As it happens, Berri has done a credible job of building a simple formula using basketball box scores (rebounds, personal fouls, shooting percentage, assists, turnovers, blocks, and steals). Here, we have obvious end-goal measures: point differential and, ultimately, win-loss record.
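For reference, the expectation itself is nearly a one-liner. The basketball exponent quoted here is the commonly cited range, not a figure from Berri’s books:

```python
# Bill James's Pythagorean expectation. The exponent is sport-specific:
# 2 in James's original baseball formula; basketball variants commonly
# use much larger exponents (roughly 13-16).
def pythagorean_win_pct(points_for, points_against, exponent=2.0):
    pf, pa = points_for**exponent, points_against**exponent
    return pf / (pf + pa)

# A team scoring 105 per game while allowing 100, basketball-style exponent:
print(round(pythagorean_win_pct(105, 100, exponent=14), 3))  # ~0.664
```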

But what if there is no obvious standard by which to judge the effectiveness of our models? That is the situation encountered by modelers who try to identify terrorists or to increase worker productivity. Frankly, the outcomes are confounded: terrorists take steps to hide their guilt, and workers might work much harder at giving the appearance of productivity than at actually doing work. In these cases, deciding which parameters are significant predictors is only half the job; one might need an empirical investigation just to establish the outcome measure. The irony is that despite the complicated circumstances of a sports contest, the system remains well specified and amenable to analysis. Life, by contrast, has more parameters and variables, less well-defined outcomes, and much greater noise in its measures.

Nevertheless, some analysis can be done. Careful observation allows us to classify the most frequent outcomes. This is clearest in Amazon’s recommendations: “Customers who purchased this also bought that.” If that linkage passes some threshold, it is to Amazon’s benefit to suggest it to the customer. The parallels between basketball (and sports) statistics and the numerification of life are thus clear. The key is to find a standard for performance. For a retailer, it might be sales. For a biotech company, it could be the number of candidate drugs entering Phase I clinical trials. Some endpoints are fuzzier (what makes a productive office worker? The ratio of inbox to outbox?). Again, identifying a proper standard is hard, combining both art and science. This is another point ignored in Baker’s book: there are many points at which humans exert influence in modeling.
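In code, that Amazon-style linkage is just co-occurrence counting with a cutoff – a toy sketch, not Amazon’s actual system, with the orders and threshold invented for illustration:

```python
# A minimal sketch of "customers who purchased this also bought that":
# count co-purchases and surface the pairs that clear a threshold.
from collections import Counter
from itertools import combinations

orders = [  # invented purchase histories
    {"book_a", "book_b"},
    {"book_a", "book_b", "dvd_c"},
    {"book_a", "dvd_c"},
    {"book_b", "dvd_c"},
]

pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

THRESHOLD = 2  # only suggest linkages seen at least this often
for (x, y), n in pair_counts.most_common():
    if n >= THRESHOLD:
        print(f"Customers who bought {x} also bought {y} ({n} co-purchases)")
```

Note the human fingerprints even here: someone chose the threshold, and someone decided that raw co-purchase counts were the right measure in the first place.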

Basketball can again serve as an illustration. The action is dynamic, fast-paced, and has many independent elements (the ten players on the court). However, just because we perceive a system to be complex does not mean the model itself needs to be. Bill Simmons, a vocal opponent of statistics in “complicated” games like basketball, makes a big deal of “smart” statistics – breaking down game footage into more precise descriptions of the action, such as whether a shooter favors driving to one side, whether he has a higher shooting percentage from the corners, how far he plays off his man, and so on. In other words, Simmons would say there is a lot of information ignored by box scores; ergo, they cannot possibly be of use to basketball personnel. Yet as Berri and colleagues have shown, box scores do provide a fair amount of predictive value with regard to point differential.

What critics like Simmons miss is that these models most definitely describe the past – what the players have done – while the future is quite a bit more open-ended. These critics confuse “could” with “will.” A model’s predictive value depends not only on its correlation with the standard but also on how stable it is across time. Despite the rather complicated action on the court, basketball players’ performance, modeled using WP48 (Berri’s “Wins Produced per 48 minutes,” more on which below), is fairly consistent from year to year. Armed with this information, one might reasonably propose that LeBron James, having performed at a given level last year, might reach a similar level this year.

As any modeler realizes, that simple linear extrapolation ignores many other variables. One simple confound is injury. Another is age. Yet another is whether the coach actually uses the player. In other words, the critics tend to assume the model treats past performance as a guarantee of future returns. The statistical model, even WP48, does not allow us to say, with deterministic accuracy, how a player will perform from game to game, let alone from year to year. At the same time, the model does not place a “cap” on a player’s potential. Used judiciously, it is a starting point for coaches and GMs to identify the strengths and weaknesses of their players, freeing them to devise drills and game strategies that can improve performance. Interpreted this way, WP48 lets coaches see whether their strategies have an impact on overall player productivity, which should lead to more points scored and fewer points given up.

How would we deal with competing models? The standard of choice in sports – point differential – also allows us to compare Berri’s formula with other advanced statistics. Berri’s WP48 correlates with point differential, and hence with wins. Among the many competing models, John Hollinger’s Player Efficiency Rating (PER) is a popular alternative. PER is a proprietary algorithm and, by all accounts, “complicated.” That’s fine, except Berri showed that the rankings generated by PER differ little from ranking players by their average points scored per game. In other words, you can get the same performance as PER by simply listing players’ points-per-game. Interestingly, points-per-game correlates less well with point differential than WP48 does: measured against the standard, simply scoring points does not lead to wins. On an intuitive level, this makes sense: you also need to play defense and keep your opponents from scoring more than you.

A shrewd reader might also have realized that there can be “equivalent” models, demonstrated by showing that two metrics are highly correlated with each other (such as points-per-game and PER). Coupled with correlation to our standard, we now have a technique for comparing models both on how well they perform and on whether we have redundant formulas. This is useful: if two alternatives tell us the exact same thing, wouldn’t we rather use the simpler one?
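Both comparisons reduce to correlation coefficients. Here is a sketch with fabricated metrics – stand-ins for WP48, PER, and points-per-game, not the real formulas:

```python
# A sketch of both comparisons: each metric against the standard (point
# differential), and the metrics against each other. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
point_diff = rng.normal(0, 100, 30)                  # the standard
metric_a = point_diff + rng.normal(0, 30, 30)        # tracks the standard well
metric_b = 0.3 * point_diff + rng.normal(0, 80, 30)  # tracks it weakly
metric_c = 2.0 * metric_b + rng.normal(0, 1, 30)     # nearly a copy of B

def r(u, v):
    return np.corrcoef(u, v)[0, 1]

print("A vs standard:", round(r(metric_a, point_diff), 2))  # high
print("B vs standard:", round(r(metric_b, point_diff), 2))  # lower
print("B vs C:       ", round(r(metric_b, metric_c), 2))    # ~1.0: redundant
```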

Recently, an undergraduate student undertook a project to model PER, producing a linear equation that allowed analysis of the weightings John Hollinger most likely used. This lays bare the assumptions and biases Hollinger built into his model. The analysis of the simplified PER model suggests that PER is dominated by points scored; all the other variables give only the pretense of adding information. There are underlying assumptions and factors that prove overwhelming in their effects. But this isn’t such a novel finding, given PER’s suspiciously high correlation with points-per-game (and lower correlation with point differential). In this sense, “good” only means correlated with the standards the modelers used; it isn’t “good” in the sense of matching what we feel a good model should look like.
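A hedged sketch of that approach as I understand it: regress a proprietary rating onto public box-score stats to expose its implicit weightings. The data and the hidden weights below are invented, not Hollinger’s actual formula:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
points   = rng.normal(15, 5, n)
rebounds = rng.normal(6, 2, n)
assists  = rng.normal(4, 2, n)

# A pretend "proprietary" rating, secretly dominated by points scored.
rating = 1.0 * points + 0.1 * rebounds + 0.1 * assists + rng.normal(0, 0.5, n)

# Least-squares fit of the rating onto the public stats (plus a constant).
X = np.column_stack([points, rebounds, assists, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, rating, rcond=None)
print("recovered weights (pts, reb, ast, const):", np.round(weights, 2))
# The regression makes the hidden emphasis on scoring explicit.
```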

***

I’ve been writing essays to try to help non-scientists deal with scientific findings. When reporters filter research, much information gets trimmed. Emphasis is usually given to conclusions; the problem is that good science is a direct function of its methods. Garbage in, garbage out still holds, but bad methods will turn gold into garbage as well.

The paper I discuss next highlights this issue: correlation and causation are two different beasts, and mistaking one for the other can take a very subtle form. Venet and colleagues recently published an article* in PLOS Computational Biology showing how, even when care is taken to identify the underlying mechanism of a disease, that mechanism may not prove specific enough a metric to help clinicians assess the disease. They write,

Hundreds of studies in oncology have suggested the biological relevance to human of putative cancer-driving mechanisms with the following three steps: 1) characterize the mechanism in a model system, 2) derive from the model system a marker whose expression changes when the mechanism is altered, and 3) show that marker expression correlates with disease outcome in patients—the last figure of such paper is typically a Kaplan-Meier plot illustrating this correlation.

*Venet D, Dumont JE, Detours V (2011) Most Random Gene Expression Signatures Are Significantly Associated with Breast Cancer Outcome. PLoS Comput Biol 7(10): e1002240. doi:10.1371/journal.pcbi.1002240

This is essentially the same method other mathematicians and modelers use to identify target markets, demographics, terrorists, athletic performance, and what have you. In this case, one would assume that the wealth of research in breast cancer would yield many “hard” metrics by which one could assess a patient’s disease. Venet and colleagues show that this is not the case; the problem is,

… meta-analyses of several outcome signatures have shown that they have essentially equivalent prognostic performances [35][36], and are highly correlated with proliferation [7][8][37], a predictor of breast cancer outcome that has been used for decades [38][40].

This raises a question: are all these mechanisms major independent drivers of breast cancer progression, or is step #3 inconclusive because of a basic confounding variable problem? To take an example of complex system outside oncology, let us suppose we are trying to discover which socio-economical variables drive people’s health. We may find that the number of TV sets per household is positively correlated with longer life expectancy. This, of course, does not imply that TV sets improve health. Life expectancy and TV sets per household are both correlated with the gross national product per capita of nations, as are many other causes or byproducts of wealth such as energy consumption or education. So, is the significant association of say, a stem cell signature, with human breast cancer outcome informative about the relevance of stem cells to human breast cancer?

Scientific research is powerful because of its compare-and-contrast approach – explicit comparisons of a test case with a control case. We can take a sick animal or patient, identify the diseased cells, and do research on them. The research generally revolves around taking two identical types of cells (or animals, or conditions) that differ in one crucial way. In the case of cancer, one might reasonably select a cancer cell and compare it to a normal cell of the same type. In this way, we can ask how the two differ.

If the controls are not well designed, then one might really be testing for correlation, not causation. As one can imagine, if even a few things go wrong, the effects might be masked by the many disease-irrelevant processes – what we would call noise. Venet and colleagues looked at studies that used gene expression profiles. The idea is that a diseased cell will have some different phenotype (i.e., “characteristic”), whether in the genes it expresses, the proteins it uses, its responses to signals from other cells, its continual growth, its ignoring of the cell-death signal, and so on. One characteristic of cancerous cells is that they grow and divide. The signature that researchers had focused on was simply the set of genes expressed by cancer cells, which presumably would not be expressed in non-cancer cells. Remember this point; it becomes important later.

Further, it was reasonable to hypothesize that the power of this test would grow as more and more genes from the diseased state were incorporated into the diagnostic. Whatever differs between cancer and normal cells should, in theory, be usable as either a diagnostic marker or a potential target for drug action. But as Venet and colleagues point out, many genes play a role in the grow-and-divide cycle (“proliferation”) of normal cells. These genes may show increased expression in cancer cells, and their elevated levels will key them as different from normal cells; the trouble is that this reflects an aberrant state only by degree. Even normal cells proliferate, and the genes involved in proliferation happen to be relatively numerous. Thus there are two problems. First, the markers are no good because they do not provide enough separation from the normal state. Second, and relatedly, if one were to pick a number of genes at random to use as a diagnostic (in this case for breast cancer), one would likely end up with a gene related to proliferation, since these genes are so enriched. Even a random metric will show correlation with breast cancer outcome, since chances are a proliferation-related gene will be chosen. The problem is that the metric assumed cancer cells have a gene expression profile consisting of genes expressed only in cancer cells (an on-off rather than a more-less distinction).

In the words of Venet and colleagues,

Few studies using the outcome-association argument present negative controls to check whether their signature of interest is indeed more strongly related to outcome than signatures with no underlying oncological rationale. In statistical terms, these studies typically rest on [the null hypothesis] assuming a background of no association with outcome. The negative controls we present here prove this assumption wrong: a random signature is more likely to be correlated with breast cancer outcome than not. The statistical explanation for this phenomenon lies in the correlation of a large fraction of the breast transcriptome with one variable, we call it meta-PCNA, which integrates most of the prognostic information available in current breast cancer gene expression data. (emphasis mine)

The method was simple: Venet and colleagues compared previously published gene expression profiles vetted for breast cancer prognosis against gene signatures from unrelated biological processes (such as “social defeat in mice” and “localization of skin fibroblasts”) and against random selections of genes from the human genome. All these metrics, regardless of their relation to oncology, showed “predictive” value for breast cancer outcome; that is, patients whose tumors scored highly on these gene sets had statistically different outcomes, even when the genes had nothing to do with cancer. Hence the title of the paper, “Most Random Gene Expression Signatures Are Significantly Associated with Breast Cancer Outcome.”
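To see how a random signature can “predict” outcome, here is a toy simulation in the spirit of Venet and colleagues’ negative control (this is my own sketch, not their code; the gene counts, loadings, and cutoff are invented). A large fraction of simulated genes track a single proliferation-like axis that also drives outcome, so the average expression of almost any random gene set inherits prognostic power.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_genes = 300, 5000  # invented sizes

# One latent, proliferation-like axis per patient that also drives outcome.
proliferation = rng.normal(size=n_patients)
outcome = proliferation + rng.normal(size=n_patients)  # noisy "prognosis" score

# Suppose ~40% of genes load on that axis (the "meta-PCNA"-like fraction).
loadings = np.where(rng.random(n_genes) < 0.4,
                    rng.normal(size=n_genes), 0.0)
expression = np.outer(proliferation, loadings) + rng.normal(size=(n_patients, n_genes))

def random_signature_r(size=50):
    """Score patients by the mean expression of a random gene set."""
    genes = rng.choice(n_genes, size=size, replace=False)
    score = expression[:, genes].mean(axis=1)
    return np.corrcoef(score, outcome)[0, 1]

r_values = np.array([random_signature_r() for _ in range(1000)])
# |r| > ~0.11 is the nominal p < 0.05 cutoff for n = 300 patients.
print(f"'significant' random signatures: {np.mean(np.abs(r_values) > 0.11):.0%}")
```

Under these assumptions, well over half of the random 50-gene signatures clear the nominal significance cutoff, which is the paper’s point: the proper null hypothesis is not “no association.”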

***

How do we deal with this study? Does it suggest that biomarkers are a waste? No. For one, the only test presented in this paper is one where a randomized signature is compared to a breast cancer prognostic based on gene expression. That a specific test does no better than chance only allows us to conclude that the test is deficient in some way. The point is that the existing test may be keying in on “proliferation,” except that Venet and colleagues showed that removing such genes did not worsen the performance of the randomized gene set in “diagnosing” breast cancer. It may be that the gene expression data have not been sufficiently de-noised. One can certainly try to clean up the model, but new tests must be shown to differ from the baseline (or control) level of performance of a randomized gene set.

And how does this relate to the earlier points about basketball statistics? Only in that modeling effectiveness depends on how good a standard is, how well the variables are characterized, and how independent the relationships among the variables really are. Having testable hypotheses and experiments helps too (although it seems a shame that gene expression profiles may not prove to be the key factor in this specific scenario). Even leaving aside the question of whether a model is good or bad, being able to show statistical correlation between models is powerful. Before, I had written that Dave Berri showed that John Hollinger’s PER model is not significantly different from simply looking at points per game (in fact, the correspondence is nearly one to one). This conclusion was revealed by the same types of statistical analyses that allowed Venet and colleagues to show the equivalence between existing “breast cancer” gene signatures and randomized ones. While correlation does not imply causation, in the case of models, these analyses can certainly help us identify equivalent models carrying redundant information.

I spent too much time on my last post, but I really wanted to push it out. I realized I never came out and stated my idea, partly because it might sound controversial unless I develop it properly. I’m glad I waited, because John Timmer, of Ars Technica, pointed to a new study that is relevant to my points here.

In my previous essay, I had presented Stephen Jay Gould’s idea of dual magisteria, which addresses how one engages with the world. In Rocks of Ages, Gould places undue emphasis on religion as the major counterpoint to scientific descriptions of the physical world. He does mention that there could be other domains of thought, but writes that they would constitute other magisteria. Here, Gould did not go far enough; it suffices to group religion as one of many intuitive, personal ways of finding meaning in the world. Simply put, there should be two magisteria: science, and all the non-scientific, intuition-based ways of looking at the world.*

*The distinction is clear: there is a way to examine the material world, with experiments (although not necessarily their interpretation) providing a common frame of reference. Experiments are placed in this rarefied realm because they are expressly constructed so that, when methods are made available, other investigators can observe the same results. That is why one of the worst things you can say about a scientist’s findings is that they are not reproducible.

In this context, my idea for how one might deal with science is that, functionally speaking, one can ignore it. This is possible since science itself, in the realm of meaning in one’s life, may have a lesser impact than emotional, intuition-based thinking. Second, when one aims to counter scientifically based policies, it is more about risk/benefit analysis, trade-offs, and marshalling political support, which actually has less to do with the underlying science and more to do with rational discourse. In other words, it is possible to arrive at policies that are directly opposed to recommendations based on scientific findings.

There is a distinct lack of courage from those who oppose science and distrust it because it is considered a Liberal domain (i.e., American Liberals tend to favor governmental intervention in regulating markets but less intervention in one’s personal and social lives; the story goes that academia is populated by these liberal types). These anti-science laymen lack courage because they avoid admitting that their policies are at odds with the scientific consensus simply because they hold other considerations to be more important.

So they couch their objections in scientific terms, and rather shoddily*. The proper place for creationism in school isn’t as an alternate scientific theory in biology class, but in a social studies or literature class, perhaps even an actual religious studies class. The goal of religion and these classes, as Neil Postman and Joseph Campbell realized, is to attempt to connect the impersonal world to human perception. At best, it can be as invigorating as a philosophy course or an art appreciation course. It is interesting to me that so many myths share elements in how they describe how the world began, with many such stories pre-dating even the gods of pharaonic Egypt, let alone Christianity. There is power in these stories because, while they are rudimentary attempts at explanation, in actuality they help us deal with the mysterious and with the fear of dying at an approachable human level.

* Hence this strange, if not ironic, call for more facts. Again, my last essay about experts focused on this idea of splitting hairs, where all of a sudden hyper-specific observations are used instead of the general theory. My point in that essay was that emphasizing specific examples over the rule is not a simple act. The divergence may be due to chance – acceptable within almost all scientific frameworks – or may indicate an actual alternate cause. If that’s the case, not only must the new model explain why this observation happened, but it must also address why all the other observations we’ve seen arose and were explained by the existing theory. Some type of analytical closure is needed to address how we could have gotten things so wrong. One might argue that Occam’s Razor helps us avoid this situation by favoring the explanation that agrees with most observations – one clear example being the choice between earth-centric and heliocentric models of the solar system.

I would only point out that such a religious studies curriculum would be at odds with what American (Religious) Conservatives (i.e., less government regulation for economic markets but more constraints on personal lives) desire, because any such comparison of religions would naturally lead students to question how Christianity, Islam, or (religious) Judaism differ from the myths that had been so callously discarded.* Again, these zealots lack the courage to say that the strength of religion lies specifically in helping believers come to terms with the cycle of life and death and the harshness of the world. By continually using the stories of a Jewish guru who lived 2000 years ago as a basis to counter scientific findings made from observations with modern equipment, they cheapen Christ and make the religious look silly. Are we really to think that these Holy Books are relevant to how one interprets molecular biology data showing how closely related humans are to primates and mice? Scientific interpretation and finding meaning germane to our emotional needs (or explaining the human condition) are two different things. There are any number of stories one can concoct from religion, because so many stories in holy books are allegorical. We can change stories to fit the facts.

* I once asked my friend, who is a scientist and evangelical Christian, why he believes in Christianity and not, say, Zeus. He replied that Christianity is real and Zeus isn’t. He pointed to the archaeological evidence for the history of the Jews in the Old Testament and to the documentary evidence of Christ and his Apostles for the New. To which I can only suggest that there is also evidence that the Trojan War happened. We have many stories of the Greek gods and much archaeological evidence of the beliefs of the Homeric Greeks. Does that in itself prove that the Greek pantheon of gods exists?

****

At this point in American political culture, we are overly concerned with expertise; the irony is that we tend to pigeonhole these distinct voices rather than heed their advice. The pushback from scientists is that they tend to dismiss laymen as cranks. These approaches are mutually antagonistic.

On occasion, I hear fellow scientists, when they get annoyed with lay people, brush them off by claiming that their bit of science is hard and that laymen shouldn’t comment. That may be true, but as I wrote in my last essay, the tradeoff from academia is that at some point, we publicize our research to other scientists. I would go further and suggest that if we are already doing this, we might as well write explicitly for laymen. In the end, it is hoped that our research is of significance and worth including in curricula.

Naturally, scientific discussions tend to be easier between scientists, even if they ply their trade in different fields. We know the lingo. More importantly, we recognize that there are benchmarks for good research (control experiments, multiple trials, randomized sample sets, published methods and analysis techniques, blind trials where necessary, experiments specifically designed to test for alternate explanations, and so forth), and scientists generally tend to read broadly. As a result, they tend to ask questions that may not be as pointed as an expert’s, but that aren’t at a rudimentary level either.

My own background can provide an example. My undergraduate training was in biochemistry; my graduate work and one post-doc stint focused on neuroscience, specifically olfactory physiology. My current work as a staff scientist is as a cell biologist/image analyst, running cell-based assays and writing high-content analysis algorithms. Lately, my group has pushed our technology toward clinical application in immunology. This is not to say that I understand all these fields to the same degree as people who have spent many more years in them than I have. Rather, the common skillset of doing science allows people like me to expand into new fields.

It isn’t as if I ignore biochemistry concepts even now, nor did my work in olfactory physiology mean I simply looked at neuronal function. The point of the latter research was to show how animals use different sniffing patterns to elicit specific neuronal response types that might be important for the animal’s understanding of its odor environment. Being aware of the overarching questions driving specific aims is crucial to a scientist’s success. Another example: Gordon Shepherd, an important researcher in olfaction, recently published a book on flavor perception titled Neurogastronomy. In it, he synthesizes olfactory and taste physiology, fluid dynamics and modeling of air flows in the human nose, the physico-chemical properties of food molecules, and human perception. His bread and butter, however, was neuronal circuits, with emphasis on the olfactory bulb. Although his ultimate interest is in the mechanism by which neurons give rise to perception, much was unknown, and so one must settle on sub-systems (such as olfaction in “lower” life forms like honeybees, tiger salamanders, fruit flies, and rodents) for research and begin there.

So yes, I firmly believe that even if one is ignorant of a subject, one can come up to speed. It takes work and time. I am not arrogant enough to think that I am exempt from the Kruger-Dunning effect, but I do think that by identifying gaps in knowledge, knowing what to read, and finding experts to talk to, one can work to gain competence in unfamiliar fields. If one thinks that this cannot be the case, then there is no point in talking to one another or in reading.

I’ve only lately come to realize that science can be interpreted as a method for communication. We do this in a very precise and stylized manner: introducing new ideas, detailing methods, publicizing results, and discussing how our observations fit extant theory. Again, this has partly to do with the most basic elements of experimental design, geared to helping scientists remove their biases during analysis. The assumption here is that we argue over interpretation and whether experiments were designed correctly. This can only work if the “recipe” and “results” are reported faithfully and are reproducible by anyone else.

Thus science differs from other forms of communication because we work to make our work transparent. Other fields have the luxury of using allegorical, indirect language. Scientific ideas are hard enough without putting artistry into our language: for example, think of “as an object accelerates, it cannot reach the speed of light since its mass increases,” or “if we know the position of an electron, we cannot know its momentum,” or “mass and energy are equivalent.” Because we scientists do try to simplify descriptions, we cannot turn around and tell laymen that what we do is too hard to understand. Science is hard to do, especially to do well, but the telling of it can be straightforward (I’m thinking of essay-level exposition, not sound-bites).

Despite science being a means of communication, it is not a debate in the legal sense; the point of distinction is not whether the rhetoric is convincing, but whether the data best explain an idea that describes reality. There is no audience per se. Rather, the “audience” is whether the next experiment is consistent with the older findings. This is the predictive aspect of science: if what this other scientist published is true, then it affects my idea like so, and thus I should see this in my experiment.

But as soon as we step away from the realm of validating theories, we have descended into the muck surrounding the ivory tower. This isn’t bad at all; while basic research may be a worthwhile pursuit, I see no contradiction in having to justify that concept to the taxpayers. While other scientists might scoff at having to consider applied research, I see this as necessary. In my field, we apply for grants from the National Institutes of Health. In fact, we must always suggest ways in which the research will ultimately benefit the clinicians who treat patients.

My bias is that I see applied research as compelling, and I see, as a red herring, the idea that all research must be pure and unsullied. In other words, I see the realm, or domain, or magisteria of science, as a rather small one. As soon as we start talking about funding, applicability, significance, whether we should pursue a line of research, we get into that fuzzy idea of the “other” magisteria.

This is the part where laymen falter. Laymen tend to argue from a grounding based more on non-scientific criteria than on any scientific objections (based on methods, findings, or analysis). I have a very definite view that scientific discussions require the language and methods of science. This helps scientists tease apart assumptions, biases, and the empirical findings. It isn’t that all scientists can compartmentalize their thoughts, or that personal politics, background, and temperament do not affect their thinking. It is that the whole system is set up to at least force scientists to justify their ideas (or biases) with data. Questioning scientific findings can only concern methods, analysis, interpretation, counter-evidence, and alternate hypotheses. Alternate ideas are always there; the best idea, or the consensus, by no means implies 100% certainty. It might simply be that the idea is the best of the worst.

However, if one were to discuss why the research is worthwhile, why a scientist pursued it, why something should be funded, what applications it has, or what the resulting policy recommendations are: all these are subject to debate. We have facts, as discovered by science, and then there is how we deal with facts. All of us must come to grips with them. That is why I modified Gould’s opposed magisteria to contain two domains: science and not-science. The former speaks to objective truth, or at least a description of the material world that can be replicated by any sufficiently educated experimenter. The latter has to do with how humans perceive these hard truths.

While it seems like science is given a preeminent position, I would say that it is a rather small domain. Its language and methods are precise – it is limited. The not-science magisterium encompasses everything else: our experience, our philosophical bent, our religious background, and so forth. These are bundled together because their “truth” is but an interpretation of how we look at the world. At the same time, this magisterium is much richer because it is unbounded by hard facts; it can be as fanciful as whatever the imagination can come up with. Its purpose is to help us with that vague concept of “meaning.” It is from this sphere that we might find compelling arguments and vivid imagery to help convince a lay audience.

Non-scientists can lay claim to the other half of the problem, that of receiving the message. Even if scientists write for the public, interested laymen need to listen. When laymen apply the label of “expert,” it is often done with opprobrium, suggesting that the expert has narrow knowledge but no “real world” experience. The ivory tower is therefore a prison rather than a place for undisturbed rumination. Non-scientists can apply this rigid standard to voices they do not like, simply by claiming that one’s expertise is not in the topic at hand. Naturally, the point is to keep experts corralled and voiceless. It is every bit the same exclusionary tactic that some scientists use to keep laymen out of the realm of science.

My problem with this is that it allows opponents to treat each other not as individuals but as belonging to the “other,” and eventually as caricatures. Instead of engaging with the science, the scientist is attacked and demonized as mad or playing god, and the laymen are portrayed as ignorant religious zealots. If nothing else, people are generally shrewd. Even if they do not appreciate the nuance of an experiment, they are probably experts in some other domain. This goes for scientists and non-scientists. Are we to suggest that they cannot do anything else, simply because they are competent in one field? Surely, all of us at various times and on numerous topics hold incorrect opinions, but we can learn enough to become informed. To say that this is not possible is to argue that education is pointless.

No one claims that we can all become experts, but we can all learn enough to appreciate the current thinking. So the problem in how laymen and scientists relate to one another is that there is a vested interest in ignoring the fact that we all live in the same world. In that sphere of public influence, rightly or wrongly, scientific facts and religious thoughts are just two of many points of view. In examining the greater good, one cannot argue in isolation.

Coal-fired and nuclear-generated electricity provide one such example. Science and engineering have made these plants the most efficient providers of power. We already know that burning coal increases greenhouse gases. Nuclear power is generally cleaner at the point of origin, but it sure is spectacular when things go wrong or when we dispose of spent radioactive fuel. Science will not help us decide which power source to use, or whether we should re-wire our electrical grid and redesign our houses and appliances to consume less power, or whether we should build up hydroelectric power, wind farms, and solar power plants, or whether the trade-offs are worth it. Wisdom and knowledge are a tapestry. We would all do well to remember that we must argue using the appropriate tools.

When arguing scientific points, it makes sense to ask about the assumptions, previous empirical evidence, the methodologies, and the current findings. It is a fair question to ask for clarification when current findings and seemingly contradictory facts collide. But scientific validity is argued from empirical evidence, not from rational argument alone, as between two opposing lawyers. There is no such thing as “all evidence.” There is curated best evidence. And while that is still no guarantee of the scientist being right, it will certainly take a bit of work for anyone to identify the actual problems with the model (see my previous essay on experts for some examples).

When arguing significance, we would do well to remember that matters of judgment can be based on personal experience and informed opinions. Benefit and risk can be of equal weight, with personal caution being the only guide as to which one prefers to emphasize. It would be great if we all had informed opinions, and that is all we can aspire to when we haven’t had the luxury of time to cultivate expertise on a topic. It is partly the scientists’ job to make available the resources that help citizens become informed. Telling them to trust us is a non-starter; we ourselves argue that an argument from authority (mostly with regard to religious authority) is no argument at all.

Scientists need to set an example and show laymen our actual methods; a fact is believed because we see it – and you can too, if you do exactly as we specify. The other component is to realize how quickly we step outside of our scientific domain. Facts and coherent theory are not sufficient to inspire. Rhetoric becomes an important factor. If you don’t think language matters, just recall “irradiated foods” and the public misperception: consumers rebelled at the word, associating it with radioactive contamination, even though irradiation does not make food radioactive.

Laymen, for their part, need to be more honest about the basis for their objections. Since society pays lip service to the idea that experts are good (when they argue in your favor), laymen suppose that the only way to be taken seriously is to argue from facts, even if their strongest arguments might be based on personal experience and circumstances. The result is that even non-scientists push into the domain of science, not realizing that ideas there are not weighted equally. One needs affirmative evidence to show that a theory can be a valid alternative. Pointing out holes in global warming mechanisms or evolution can at best weaken those theories. In no way does criticizing science show why creationism is valid.*

*I try to avoid being snide, but I can’t help it. Please answer me this: does taking the host during Communion result in the transubstantiation of the wafer and wine into the body and blood of Christ? It is a favorite question with which Protestants tweak Catholics. You would think there is a verifiable answer here. Whose creation story – excuse me, theory – should we teach? The Sumerians’? The Egyptians’? The Greeks’? The Zoroastrians’? The Buddhists’? The Hindus’? For that matter, let me know which set of gods to pray to. Maybe before we even consider teaching creationism as an alternate theory to evolution and cosmology – a distinctly American phenomenon – the religious ought to figure out which story best “fits” the data.

The point of this essay is to suggest a more constructive way to talk about science. I see no issue with using compelling imagery to push scientific ideas. This is not acquiescing; it is recognizing the fact that no one likes their beliefs challenged. But scientific facts are as they are; they change only because of more precise observations from better tools and experiments. Our personal worldviews are what must change, if the two are ever at odds. We scientists should take advantage of the metaphors and allegories allowed us by the non-science domain, showing that even something as contentious as religious ideas can be reformulated, not necessarily refuted, to make palatable the bitter pill of hard-won scientific facts.

It’s annoying, but Goodreads hasn’t been updating my blog with the books I listed as finished. I realize that it might be better to give an update as a blog post anyway, to force myself into regular postings.

In the past 2 months I finished Haruki Murakami’s 1Q84, John le Carre’s Tinker, Tailor, Soldier, Spy, Stephen Jay Gould’s Rocks of Ages, Joseph Campbell’s The Power of Myth, Jaron Lanier’s You Are Not a Gadget, Julian Barnes’s A History of the World in 10 1/2 Chapters and The Sense of an Ending, Suzanne Collins’s The Hunger Games, Catching Fire, and Mockingjay, Robert Anton Wilson’s The Illuminatus! Trilogy, Jussi Adler-Olsen’s The Keeper of Lost Causes, Stephen Baker’s The Numerati, and Philip Kerr’s A Philosophical Investigation.

Since I’m butting into various topics in literature, what’s one more, even if it’s about feminism? I read a really interesting essay by Rachel Stark over at Tor.com; she focuses less on the physical strength of the protagonist of The Hunger Games and proposes that Katniss cultivates positive relationships, playing to a feminine strength while also relying on and encouraging strength in others, especially other females. My thoughts about Katniss are about the same; I thought that she is a strong role model, and like Ms. Stark, I did not think that her physical strength and resilience were the only traits to recommend her.

In The Hunger Games, Peeta plays the traditional damsel-in-distress and moral-compass roles. The most obvious examples of this are that he is rescued by Katniss and nursed back to health by her. He is a moral compass because he continually chooses to sacrifice himself for Katniss. In a subtle, almost throw-away scene, Peeta also shows this when he tells Katniss that he wishes to somehow stay true to himself during the Games. He isn’t sure what he means by it; after all, they are fighting to the death. Perhaps he simply desires to acquit himself honorably, killing not out of joy but to survive. Or maybe he does not want to run, abandoning Katniss. Katniss snickers. She isn’t one for talk, but later in the Games – and in the rest of the series – her actions show that she stays true to herself. It is for this that the character struck such a chord with me.

A lot of this was covered by Ms. Stark, except I interpreted Katniss’s actions differently. Rather than thinking of Katniss as a strong female character, I thought of her as a strong person with traits that are generally identified as feminine. My take on Katniss is that she is a positive role model for anyone, because she did not buckle to the desires of those around her.

In a sense, I think that is what feminism is about: the woman has desires and wishes and has the freedom to pursue them, some of which may contradict what society has determined to be appropriate for a girl or woman. I had thought the end point of this is that it should no longer surprise society if a woman decides she wants to become a scientist, an engineer, a nurse, a businesswoman, a politician, or even a stay-at-home mother.

Naturally, we might be curious as to why she made such a career choice, but the questions should be “How did your circumstances and experience lead you to this choice? Where do you see yourself going?” We are then dealing with the woman as she is – and yes, she may decide to work a family into the mix. But we are essentially placing her interests above those of society. The wrong question that a not-quite-post-feminist society asks is usually, “How will you fit a family around that?” The problem here is that the question implies a proper societal view that should be imposed on the individual. It’s as if the questioner is now a proxy for society, with a vested interest in maintaining certain social mores.

The goal I see for identity politics is, ironically enough, to de-identify the individual from some obvious affiliation. Sure, one might still choose to be a vocal member of a minority, gender, or sexual group. And I might still ask them questions that reflect how society perceives and has treated him or her. But I would place that person’s story above the group’s. What one is born with is probably the least important part of one’s story (at least to me). I’m more concerned with what one has accomplished, what one thinks, and how one has responded to difficulties.

Whether Suzanne Collins meant it this way or not, that’s how I saw Katniss: not as a heroine, but as a hero who happens to be a teen-aged girl. Actually, I do think Ms. Collins intended this; she does not dwell much on Katniss’s gender (or Rue’s race). There’s a cursory description, and then she moves on with Kat’s thoughts and deeds. Yes, Ms. Collins does focus on the looks of the Capitol citizenry and the subjugated populace in the Districts, but that is a way to show how superficial beauty can mask mental vapidity and unconsidered cruelty. And even that isn’t such a huge point; after all, the Districts watch the Hunger Games. Probably the worst are the Vichy-like collaborators in District 2, who supply eager contestants and also the troops used to pacify the other Districts. A few other Districts are well-off enough to train Careers, teenagers who vie for the honor of volunteering in the Games.

The plot, themes and motifs are well-trodden. Where the book shines is in the first-person narration. We know what Katniss thinks, and yet her conclusions can still be surprising. And this goes to my point: Katniss does not merely act the way a female would. She acts in the best way possible, the way all of us wish we would act when faced with such life-or-death situations.

The reason I buy into Katniss’s heroic stature is the skill of Ms. Collins, who did a fantastic job of showing us Katniss’s inner monologue. Katniss thinks logically consistent and coherent thoughts. She does the right thing to support her family before she becomes a tribute. Her reactions to how others treat her seem appropriate and sensible. She is quite intelligent, shown by her ability to play to the cameras and deduce what Haymitch might be communicating to her. Even in the later books, we are always aware that Katniss is constantly thinking, trying to figure out her best move, and questioning the motives of those around her. We also see her grow more understanding towards her mother. Thus none of Katniss’s behavior comes off as dramatically expedient (despite some questionably convenient plot devices). We can generally see how specific acts follow from a coherent, fully formed character.

More so than any other reason, that is why I see Kat as strong and worthy of admiration. She is a person first and foremost. While I am glad that she can serve as a feminist role model for girls, I would not hesitate to tell my sons that Katniss can be equally admired by them.

I recently heard a fun episode of This American Life, called “Kid Politics”. Ira Glass presented three stories about children being forced to make grown-up choices. The second story is an in-studio interview of Dr. Roberta Johnson, geophysicist and Executive Director of the National Earth Science Teachers Association, and Erin Gustafson, a high-school student. The two represented a meeting of minds between a scientist presenting the best evidence demonstrating human-induced climate change and a student who, in her words, does not believe in climate change.

It is worth listening to; Ms. Gustafson is certainly articulate, and she is entitled to think what she wants. I simply emphasize that Ms. Gustafson uses language suggesting she is engaged in a defense of beliefs rather than an exploration of scientific ideas.

Ira Glass, near the end of the interview, asks Dr. Johnson to present the best pieces of evidence arguing in favor of anthropogenic climate change. Dr. Johnson speaks of the analysis of ice cores, in which carbon dioxide levels can be measured and correlated with evidence of temperature. Ms. Gustafson counters that, apparently, in the 1200s there was a human record of a warm spell – I gathered it was somewhere in Europe, although the precise location and the extent of this unseasonably hot weather were not mentioned – despite low CO2 levels at the time.

Clearly, Ms. Gustafson has shown enough interest in the topic to find some facts or observations to counter a scientific conclusion. She then calls for scientists to show her all the evidence, after which she herself will get to decide. I suppose at this point, I’m going to trespass into Kruger-Dunning territory and speak about expertise, evidence, and the use of logic.

In general, I do not think it is a good approach for scientists to simply argue from authority. I admit this comes from a bias in my interest in writing about science for a lay audience. I focus on the methods and experiment design, rather than the conclusions; my hope is that by doing so, the authority inherent in the concept of “expertise” will be self-evident. That is, I show you (not just tell you) what others (or I) have done in thinking about and investigating a problem. By extension, I hope to inform myself sufficiently before I prepare some thoughts on the matter, shooting specifically for fresh metaphors and presentation. (As an aside, I suppose that this might be a mug’s game, given the findings of Kruger and Dunning.)

If a scientist has done his or her job, one is left with a set of “facts”. These facts populate any school textbook. But the facts are more than that: they can act, with a bit of thought and elaboration, as models. I dislike the distinction people make when they argue that we need to teach kids how to think and not a set of facts. I argue that learning “how to think” depends crucially on how well a student has been taught to deal with facts. These skills include using facts as assumptions in deductive reasoning, weighing whether a fact has solid evidence behind it, and using facts as if they were models.

Here’s my issue with how Ms. Gustafson, and other anti-science proponents (like anti-evolutionists), argue. Let’s say we were told that gas expands upon heating. One might take this as a given and immediately think of consequences. If these consequences are testable, then you’ve just devised an experiment. Inflate a balloon and tie it off. If temperature increases lead to volume increases, one might immerse the balloon in hot water to see if it grows larger. One might choose to examine the basis of thermal expansion of gas, and one will find that the experiments have been well documented since the 1700s (Charles’s Law). A reasonable extrapolation of this fact is that, if heating a gas increases its volume, then perhaps cooling a gas will lead to a contraction. Indeed, one might have seen a filled balloon placed in liquid nitrogen (at –196 deg C): it shrivels up as the gas inside contracts (and, at that temperature, even condenses).
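The arithmetic of that extrapolation is a one-liner. Here is my own sketch, using Charles’s Law and ignoring the condensation of air at liquid-nitrogen temperatures, which shrinks the real balloon even further:

```python
# Charles's Law at constant pressure: V1/T1 = V2/T2, with T in kelvin.
def charles_volume(v1_liters, t1_celsius, t2_celsius):
    t1, t2 = t1_celsius + 273.15, t2_celsius + 273.15
    return v1_liters * t2 / t1

# A 2 L balloon taken from room temperature (20 C) to liquid nitrogen (-196 C):
print(charles_volume(2.0, 20.0, -196.0))  # ~0.53 L, about a quarter of its size
```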

Depending on how well facts are presented, they can be organized within a coherent framework, as textbooks, scientific reviews, and introductions in peer-reviewed articles already do. My graduate advisor characterized this context-fitting as “provenance.” No idea is truly novel; even if one does arrive at an idea through inspiration, with no obvious antecedents, it is expected that the idea have a context. It isn’t that the idea has to follow from previous ideas. The point is to draw threads together and, if necessary, make new links to old ideas. The end point is a coherent framework for thinking about the new idea.

Of course, logic and internal consistency are no guarantee of truth; that is why a scientist does the experiment. What hasn’t been much emphasized about science is that it is as much about communication as it is about designing repeatable experiments. Although scientists tend to say, “Show me,” it turns out that they also like a story. It helps make the pill easier to swallow. The most successful scientists write convincingly; the art is in choosing the right arguments and precedents to pave the way for the acceptance of empirical results. This is especially important if the results are controversial.

The error Ms. Gustafson makes is in thinking that by refuting one fact, she can refute an entire tapestry of scientific evidence and best models (i.e., “theory”). She points to one instance where carbon dioxide levels do not track the expected temperature change. But in what context? Is it just the one time out of 1000 such points? I would hazard a guess that the frequency of divergence is probably higher than that, but unless the number of divergences is too high, one might reasonably suppose that the two correlate more often than not. (Causation is a different matter; correlation is not causation.)
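A toy calculation (mine, with invented numbers) shows why occasional divergence is expected even between strongly correlated series:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000  # pretend we have 1000 paired CO2/temperature points

driver = rng.normal(size=n)                   # shared underlying signal
co2 = driver + rng.normal(scale=0.4, size=n)  # each record adds its own noise
temp = driver + rng.normal(scale=0.4, size=n)

r = np.corrcoef(co2, temp)[0, 1]
diverging = np.mean(np.sign(co2) != np.sign(temp))
print(f"r = {r:.2f}; points moving in opposite directions: {diverging:.0%}")
# Even at r ~ 0.86, roughly one point in six disagrees in sign, so a
# handful of divergent episodes is expected, not anomalous.
```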

But let us move on from that; a more elemental disagreement I have with Ms. Gustafson’s point is this. Let’s say one agrees that carbon dioxide is a greenhouse gas. A simple model is that this gas (like other greenhouse gases such as water vapor, methane, and nitrous oxide) absorbs heat in the form of infrared radiation. Some of this energy is transferred into non-radiative processes. Eventually, light is re-emitted (also as infrared radiation) to bring the greenhouse molecule to a less energetic state. Whereas the incoming infrared light had a distinct unidirectional vector, radiation by the greenhouse molecule occurs in all directions. Thus some fraction of the light is sent back towards the source, while the rest essentially continues on its original path. If infrared light approaches Earth from space, then these gases act as a barrier, reflecting some light back into space. Absorption properties of molecules can be identified in a lab. We can extend these findings to ask: what would happen to infrared heat that is emitted from the surface of the planet?

A reasonable deduction might be that, just as near the edge of the atmosphere, greenhouse gases near the Earth’s surface also absorb and re-radiate a fraction of heat. Only this time, the heat remains near the Earth’s surface. One logical question is, how does this heat affect the bulk flow of air through the atmosphere? (An answer is that the heat may be absorbed by water, contributing to the melting of icebergs. A related answer is that the heat may drive evaporation and increase the kinetic energy of water vapor, providing energy to atmospheric air flows and ultimately to weather patterns.)
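That “some radiation returns to the surface” reasoning is what the textbook one-layer energy-balance model captures. The sketch below is my own toy version, not anything presented in the episode; the 240 W/m² figure is the commonly quoted global average of absorbed sunlight.

```python
# Stefan-Boltzmann constant, W m^-2 K^-4
SIGMA = 5.67e-8
# Globally averaged absorbed sunlight, W m^-2 (standard round figure)
SOLAR = 240.0

def surface_temperature(eps):
    """One-layer model: a layer absorbs a fraction eps of surface infrared
    and re-emits it isotropically, half upward and half back down. The
    steady-state balance reduces to sigma * Ts**4 = SOLAR / (1 - eps / 2)."""
    return (SOLAR / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

print(surface_temperature(0.0))  # ~255 K: no absorbing layer, below freezing
print(surface_temperature(1.0))  # ~303 K: a fully absorbing layer warms the surface
```

The toy model omits convection, water vapor feedback, and much else, but it shows the direction of the effect following directly from the lab-measured absorption property.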

For someone who ignores greenhouse-gas-induced global warming, dismissing the contribution of carbon dioxide isn’t just a simple erasure of a variable in some model. What the global warming denier is really asking is that the known physical properties of carbon dioxide be explained away or modified. Again, the point is that carbon dioxide has measurable properties. For it not to contribute in some way to “heat retention,” we must explain why the same molecule won’t absorb and re-emit infrared radiation in the atmosphere in the same way it does in the lab. In other words, simply eliminating the variable would require us to explain why there are two different sets of physical laws that apply to carbon dioxide. In turn, this would require a lot of work to provide context, or provenance, for the idea.

Yes, one might argue that scientists took a reductionist approach that somehow removed some other effector molecule when they measured carbon dioxide properties using pure samples. Interestingly enough, the composition of the atmosphere is well known. Not only that, one can easily obtain an actual “real-world” sample and measure its ability to absorb unidirectional infrared light and radiate in all directions. This isn’t to say that the thermodynamics of gases and their effects on the climate of Earth are simple. But it is going to take more than a simplistic question – such as positing some synergistic effect between carbon dioxide and some other greenhouse gas or as-yet unidentified compound – before we actually modify the working model physicists and chemists have of the absorption and transfer of energy.

If you think it seems rather pat for a scientist to sit and discriminate among all these various counter-arguments, I am sorry to disabuse you of the notion that scientists weigh all claims equally. Ideally, the background of the debaters ought not to matter. Still, you will get scientists to weigh your criticisms more heavily if you show the context of the idea. The more relevant and cohesive your argument, the more seriously you will be taken. Otherwise, your presentation may do you the disservice of giving the appearance that you are simply guessing. That’s one problem with anti-science claimants: all too often it sounds like they are throwing out as many criticisms as possible, hoping to get lucky and have one stick.

Take evolution: if one suggests that mankind is not descended from primates, then one is saying that mankind was in fact created de novo. That is fine, in and of itself, but let’s fill out the context. Let’s not focus on the religious texts, but instead consider all the observations we have to explain away.

If we were to try to explain mankind as a special creation, how would we go about explaining mankind’s exceptionalism? Can we even show that we are exceptional? Our physiology is similar to that of other mammals. We even share physical features with other primates. Sure, we have large brains, among the largest brain-to-body mass ratios in the animal kingdom. Yet we differ in only about 4% of our genome compared to chimpanzees. Further, at a molecular level, we are hard pressed to find great differences. We simply work the same way as a lot of other creatures. We have the same proteins; despite the obvious differences between man and mouse, even a weakly similar pair of our proteins still shows about 70% sequence homology. It seems to me that at multiple levels – physiological, in physical appearance, and genomic – we are of the same mettle as other life on Earth. Yes, we do differ from these other lifeforms, but it seems more logical to suggest that mankind is one type of life in a continuum of the possible lifeforms that can exist on Earth. Whatever process led to such a variety of creatures most likely also “created” man.

I hate to harp on this, but a fellow grad student and I had such arguments while we were both doing our thesis work. My friend is a smart guy, but he still makes the same mistake that anti-evolutionists make: assuming that by disproving natural selection, one has thereby provided some support for creationism. We argued about Darwin’s theory and whether it can properly be extended beyond a microscopic domain. He was willing to concede that evolution occurs at a microbial level: for “simple” organisms, evolution makes sense, since fewer genes mean less complexity, and therefore changes can be just as likely to be beneficial as deleterious.

I thought the opposite. If an organism is “simpler” – namely because it contains a smaller genome – it is even more crucial for that organism that a mutation be beneficial. A larger genome, from empirical data, generally contains more variants of a given protein. This in itself reflects the appropriation of existing genes and their products for new functions, and an increase in isoforms of a protein also suggests how mutations can occur without the organism suffering ill effects directly: there is redundancy of protein and function. Also, my friend seemed to regard fitness as a “winner takes all” sort of game, as in the organism lives. I merely saw the “win” as an increase in the probability that the animal will have a chance to mate, not as organismal longevity. Sure, this is a just-so story; I think his argument is better than the usual creationist claptrap, but only in the trivial sense that, yes, we need to take care not to over-interpret our data or models, and yes, scientific theories – although they are our best models – are temporary in the sense that we can revise them when better evidence comes along.

To go back to the way Ms. Gustafson and my friend argue, it behooves them to explain the exceptional circumstances by which we, or carbon dioxide, can act differently from our best model (i.e. theory) and yet conform to it most of the time.

Thus, despite Ms. Gustafson’s call for “all the evidence,” I was left thinking no amount of evidence will persuade her. Part of the problem is that, like the religious who misapply ideas of meaning found in their bibles to the physical evidence generated by scientists, she misapplies her political views, letting them provide the context through which she views scientific evidence about global warming. Whereas she should have used logic to deduce that global climate does not predict local weather, and scientific principles to determine whether global warming is part of a normal cycle for the Earth or is in fact due to circumstances like an increase in greenhouse gases, she probably thought of global warming in terms of the regulations and taxes pushed, generally in the United States, by Democrats. Thus, Ms. Gustafson speaks, in Stephen Jay Gould’s terms, from the magisterium of meaning (as defined by her political and religious beliefs) and not from the magisterium of science. In this case, she isn’t defending a theory about how the world works; her motivation is to fit the observations to her political and religious ideals.

Can we really separate the political from the scientific? If a scientist argues that there is a problem, it seems difficult to find ways to argue against them. My only suggestion is that Ms. Gustafson and others like her consider their arguments more carefully. Nitpicking specific examples is counter-productive; all theories can be criticized in this way. Integrating a counter-example is not a straightforward process, especially if the simplistic criticism is at odds with some other firmer, more fundamental observation that even Ms. Gustafson has no problem accepting.

 

Some time back, Jodi Picoult and Jonathan Franzen were focal points for pundits and self-proclaimed gate-keepers in arguing whether popular literature can ever be Literature. Naturally, one might expect popular authors* who lack critical praise or who write genre novels to take exception.

Rudy Rucker, a scientist and well-known fiction author, has recently called attention to this matter. He is generally classified as a science fiction writer. In a recent stint as a guest blogger on Charlie Stross’s blog, Rucker expresses dissatisfaction at being pigeonholed in such a way, labeling it a “category mistake”. His point is somewhat reminiscent of Picoult’s: categories do narrow perception**. By placing writers into “literature” and “popular/genre” bins, such distinctions frame discussion around whether the work has value, rather than around the ideas, themes, motifs, plots, and characters in a novel.

 

Rucker himself cites Kurt Vonnegut and Jonathan Lethem as examples of high-brow literary authors who managed to transcend their genre labels. Most recently, I finished Haruki Murakami’s 1Q84, which has a fantasy setting. As far as I understand it, Murakami is also considered a high-lit author. In 1Q84, he uses some standard fantasy/sci-fi tropes, such as multiple worlds/parallel universes and teleportation. However, most of the novel is spent in the heads of the two main characters. There is a lot of rumination in the novel, ranging from why an author writes, to ethics, and to fate and sacrifice. These are among the standard thematic elements for any literary author.

This seems to be the main discrimination point between high-lit and everything else: literary authors focus on the so-called human condition. If an exciting story falls out from it, then one gets the feeling that it is a happy accident. Generally, the gripe against non-literary works is that the opposite is true: the characters are secondary to other story elements. While this distinction is fair, I disagree, as strongly as do Rucker and other authors, that writing a novel about the human condition puts it on the only track to beatification.

Despite the subjectivity inherent in engaging with art, I find it ironic that literary critics and editors act as if the line between high- and low-brow is so distinct. I can appreciate that anyone involved in art (by which I mean all such endeavors: music, movies, paintings, sculptures, books, etc.) will have an immense amount of experience due to continual exposure to it. They can be quite informed about how a given work can be placed into the context of an epoch, and they are certainly in a position to recognize its uniqueness. But this must be tempered with an understanding that, in this milieu of constant exposure, what piques their interest and what they regard as a distinguishing feature may not be the same as how the public perceives the work.

Even if I didn’t read as much as I do, I would still have opinions about what passes for schlock. But I happen to think that such a verdict is not as interesting as discussing the bits of a novel or story that caught my attention. The simplest analogy I can make is that, in the realm of science articles, one rarely comes across terrible papers without any intellectual value. Sure, some papers overreach, and others lack proper controls. The sense here is that the paper could be good, if the researchers had only done a little more work. So the reader is left feeling ambivalent. But because science is like a tapestry, the reader will probably stitch this imperfect work into his understanding and outlook. This is what I mean when I say that it is rare to find some bit of science that cannot be integrated in this way. Instead of a smoking gun, a “bad” paper may only provide circumstantial and suggestive data.

I suppose I take this approach in my reading of literature. That is, I would rather focus on the parts of a novel that left a great impression on me, for whatever reason. Once, I read a profile of Bob Rines in The New Yorker, by Larissa MacFarquhar, about his search for the Loch Ness monster. To be frank, I was infuriated by the presentation of Rines and his search, in that it did not focus on the fact that none of Rines’s tools had ever recorded evidence of the monster. Instead, the article was immensely sympathetic to Rines while dismissive of the skeptics who opposed him, portraying them as a bunch of killjoys.

However, the other thing I remember is that the piece was so well written that what MacFarquhar had to say came through clearly, which is precisely why I became so exercised. It was as if she had perverted her talent to peddle ignorance. Yet, if I had to choose a model to emulate and to learn from, this essay would rank among the top of the form that I had encountered.

This is simply an example of the ambivalent feelings one can get on reading, feelings quite peripheral to any so-called objective quality one can supposedly perceive. My bias is that I find these thoughts more interesting than a simple yay/nay verdict.

In much the same way, I do not find it constructive to sort books into high-lit or genre; I find it destructive to insist that there is such a difference. With that said, there are books that lend themselves to having more depth and to rewarding deep readings, like the Murakami novel 1Q84. I feel that the novel, being about the deeds, thoughts, and growth of Aomame and Tengo, does not need the fantastical elements to work. And yet I found the fantastic and the mundane integrate so nicely that I am fully engaged in thinking about why he used that particular device (i.e., the “Little People”). The inverse can be found in a novel like Anathem, by Neal Stephenson, who uses character archetypes so he can ruminate on the nature of knowledge, thinking, and time. I feel deeply that both novels are extremely fun to read and think about, for entirely different reasons. And yet the most important thing is that these two authors have made a connection with me; does it matter whether a novel probes the deepest recesses of human emotion or tries to show how humans understand?

*Yet another occupier of the literary ghetto is the popular author, consistently atop best-seller lists. Again, I find the ivory-tower distinction – that there is somehow a separation of motives between those who pursue the highest form of literature and those who wish to make money – a red herring. John Logan, a playwright and screenwriter, said this best:

[he] turned to the list of actors in Shakespeare’s troupe. ‘I also love this, because it shows that Shakespeare was not writing for the ivory tower,’ he said. ‘He was writing to put asses on seats, the same way I am.’

The quote comes from a profile of Logan (paid subscription required) by Tad Friend, of The New Yorker. Logan was in a rare books shop in NYC, deciding on pieces to add to his Shakespeare library. He was looking through a folio when he said that.

I am intensely aware that a profit motive tends to drive art to a wasteland (see: summer blockbuster movies, TV sitcoms, Britney Spears, and crank-them-out authors). But it seems strange to say that art can be divorced from commerce. Artists need to subsist; their labor happens to be more ephemeral, and their paymasters more fickle, than an office worker’s. If one already agrees that an author should be entitled to recompense, and he or she is already contracted by a publishing firm, then what does it matter whether an author strikes it rich or not?

Yes, I suppose one might say that the next work might then become corrupted, angled to sell more copies. But I find it hard to see how one might separate the commercial aspects of book or art production from the seeking of an audience. Money is simply a proxy for eyeballs and attention. How else might one see whether one’s works are actually engaging readers, rather than serving as doorstops or coffee table adornments?

**As my recent post on violin quality and preference suggests, simply identifying a violin as a Stradivari or setting a high price on a wine creates expectations. Because we are told something has a higher sale price or a lower “worth” (or higher rarity, or is in demand), we are likely to take on those impressions. That is why blind wine tastings (and similar “tests”) are a better way to let us gauge our preferences.

Of course, there is also the related issue of palate, and whether all tasters will judge based on a similar set of criteria. That is a separate matter entirely. What I am proposing here isn’t a scientific tool, but simply an informal and easy way to remove expectations and bias for entertainment purposes. This ought to allow tasters to make a decision based on their own ideas, skills, talents, etc., rather than simply agreeing with some existing opinion. This by no means guarantees independent assessments; humans have a tendency to herd, becoming more likely to select the popular verdicts once they learn what their peers think. Saving the revelations until the end might help here.

There might be something to this: one such blind test was performed for literature by The Sunday Times of London. Opening chapters from two Booker Prize winners (Stanley Middleton and V.S. Naipaul, the latter having also received the Nobel Prize in literature) were sent to 20 publishers and agents, with names and titles masked so that their provenance couldn’t be known. These “new” submissions were rejected by all but one of the recipients. Regardless of whether this “test” was done in earnest or as a joke, the result is telling.

Both Naipaul and Middleton took a dim view of the result; they had toiled to produce the works, and they considered both books to be superb. After all, they were awarded the Booker Prize for those works. They concluded that the publishers and agents no longer understand what makes a good novel or literature. That’s one view, and they are entitled to it. However, one might draw other conclusions: that there is no objective marker for what passes as literary quality, or that tastes and the appeal of styles may simply have shifted. This latter point is slightly different from a lack of objectivity. It may be that a given generation, with a shared education and cultural background, has in fact come to a consensus; this group opinion would shift relative to other cohorts, who have different points of reference and intellectual development.