Tag Archives: Neil Postman

Rachel Kushner’s The Flamethrowers: Congratulations! The writing style is evocative of the best elliptical, modern writing! I would peg the book somewhere between David Foster Wallace and Roberto Bolano (loved Infinite Jest and 2666) on one side and Don DeLillo (I didn’t care much for Underworld) on the other. Although the plot is generally of the implied variety – forward movement is achieved by simply having historical events sweep over the characters – Ms. Kushner does animate her protagonist. She doesn’t simply react; her plans and personality change in response to what happens around her.

***********************

I read a really engaging book by Alan Sepinwall: The Revolution Was Televised. I became aware of his work through Grantland, which mixes up culture and sports in a fantastically smart and enjoyable way. The book is an ode to the current “Golden Age” of television, exemplified by shows like Breaking Bad and The Wire, in which viewers are treated to a unified form of storytelling, sustained over years, that breaks the convention of simply resetting the table after each episode ends.

In other words, the Golden Age is synonymous with the novelisation of television.

I never cared much for movies or shows, mostly because I am enamored of depth. My issue with television and movies as media for conveying important information is best summed up by the criticism of Neil Postman. A visual medium appeals readily to emotions. Memorable images linger; narrated text does not. The speed of the medium also discourages single, static shots. I mean, the most ludicrous example I can think of is the editing of dance shows, like So You Think You Can Dance. By definition, dance is movement, and yet we are still subjected to dramatic cuts – different angles, facial shots, and different zooms – as if the very moves in the dance are not sufficient to maintain our interest.

Visual media are geared for high impact, engaging multiple senses in the shortest amount of time. Even in seminars, the advice I’ve received all points toward reducing the amount of information in slides. This means either editing out all the secondary points or, in a much more difficult way, condensing the information. The former everyone should be able to do; the latter practically merits a course – or at least this set of books (1, 2, 3, 4). Verbally, we keep to the point, refer to the point, and ideally, repeat the point using simple language.

Each form of communication has its strengths and weaknesses. Postman’s criticism of television is nuanced: supposedly similar forms of media (like a YouTube video of a seminar) may not be so similar (i.e. to the “real” seminar). Hijacking a medium known for short-form, highly dynamic images and aural stimulation (i.e. TV) to engage in long-form discussions of government policy or to present scholarly works may lead both to suffer.

In other words, Postman felt that the real problems arose when we tried “translating” the medium to do other things. Postman enjoyed television: as entertainment. He worried about misguided attempts to make TV good simply by having it broadcast educational material. In this sense, he was comfortable arguing that quite possibly one of the worst developments for television was the rise of PBS. It allowed people to mistake TV for a medium suited to all purposes – from entertainment to a learning forum.

I think the issue is even more nuanced than Postman described it. He focused too much on how the medium limits the audience, but ignored the adaptability of the viewer. For a start, the viewer can simply select another medium. He can pick up a book. And while TV (and movies, and music) implies a broadcast, with a single emitter but multiple receivers, we, as the audience, might become amenable to altering our expectations. It may be that, properly done, there is no such thing as too long (the trick is editing down to the proper length).

It wasn’t until I was moved to think about Sepinwall’s observation that I began to appreciate how different the current television landscape is from the one Postman criticized. Sepinwall points out that over the past 10 years, the audience and television writers have implicitly altered their viewer/producer pact. Instead of expecting things to reset week after week, with no overarching development of characters, the audience is now willing to accept more openness and a lack of episodic resolution – in expectation of a payoff for the story. It is now de rigueur for shows to tackle big ideas, or at least have complicated plots, to make things interesting. As many writers have noted, what we saw in each season of Breaking Bad is really a 13-hour miniseries, broken up into 1-hour bits. The seasons are really chapters in the story of Walter White.

Altered in this way, television can in fact invade the space occupied by writers: we can know what the characters are thinking.

The sea change is that we get to know what they are thinking the same way humans understand and empathize with one another: by inferring intent from words and deeds, a bit at a time. Isn’t hourly appointment viewing almost like seeing a friend once a week and catching up?

The novel does not work like that. Its form is highly stylized, where the novelist needs to specify much more information to build the world-context so that she can put forth her true point. Gorgeous verbosity continues to appeal to me, but dramatic depth is no longer owned by novels.

***********************

On Salon, Laura Miller writes about “What makes a book a classic”. The “problem” Ms. Miller describes is an old one, and it is certainly not resolved in her essay: no one disputes that there is such a thing as a classic, but issues arise when your classics do not match my classics.

I wish she had spent a bit more time developing the throwaway comment that a book may remain a classic even if a large minority (or perhaps even a majority) of readers do not like it. That, I think, sums up the disagreement between popular sentiment (i.e. sales) and the critical and historical context that surrounds a book.

Consider the recent, visible battles between Jonathan Franzen and the duo of Jennifer Egan and Jodi Picoult. Frankly, each camp has a point: good books need not imply a poor sales record, nor is every novel penned by a Brooklyn resident an instant classic.

I generally see the arguments boil down to “sales should at least allow me to enter the conversation” and “proles are the worst judges of quality”. Both arguments – and I wouldn’t even call them that – are bad, arrogant, and lacking in humility.

The difficulty isn’t trashing something; it is much more difficult to defend an affirmative statement. What makes something good despite its flaws? Why, despite the imperfections, should we continue nurturing an audience for that book? This is inherently an uphill battle, because the marginal effect of finding something bad in the good is greater than that of finding something good in the bad. The asymmetry in perceived value comes about because in the former case we start at what we term the summit and move away from it with every flaw, while in the latter we are trying to bring something closer to “good”.

To actually write a compelling piece supporting the value of a good book means that we need to expend energy on salesmanship. Um… and no, a cluster of adjectives and superlatives does not cut it. I’m looking for detailed contextual arguments (how the book relates to contemporary literature), how it extends and responds to previous works (i.e. the historical arguments), and, frankly, how well it reads. Sorry: this isn’t Garrison Keillor’s Lake Wobegon, where, to hijack his satirical comment, every (“literary”) writer is above average. We must also accept that, even if we give it our best, some readers will simply not agree, and I guarantee that their reasons will not be objective.

It is that element of salesmanship that must be borne by critics, authors, and those who are forever trying to define a Hall of Fame for books. What? Muck around, perhaps even beg for attention, so that you can convince the unwashed masses why they ought to put down their JD Robb, Danielle Steel, James Patterson, and Robert Ludlum? Precisely. Because, as I keep pointing out, we aren’t a literary culture. We expend more energy reading about starlets entering rehab than reading novels about our humanity; we discuss the artistry in computer games and movies; and, most distressing for the literary novelists, our brilliant critics devote their time to waxing eloquent about the prestige, televised “novels” rather than about the latest from authors on the shortlist for the Man Booker Prize.

Going back to Grantland, it is telling that they find the intersection of sports and culture to include movies, television, music and games. Sure, these guys are fantastic writers who love to read, but they talk about books peripherally, in support of their social commentary and critical efforts*.

*By the way: read this “mailbag” feature by Andy Greenwald at Grantland. One reader asks:

During the first season of The Bridge, I devoured Charles Bowden’s Murder City (on your very astute recommendation) and so I was wondering what novel/work of nonfiction might pair well with the upcoming second season of The Americans? For context, I am about to begin Nic Pizzolatto’s Galveston, which I hope pairs well with True Detective. In the past, I barreled through the A Song of Ice and Fire series and two of Elmore Leonard’s Raylan novels for Game of Thrones and Justified, respectively, so I’m eager to hear your take on the issue of book/TV pairings.

To me, the question and Mr. Greenwald’s answer exude a healthy love and appreciation for print and other media. This is what I want my book culture to look like, integrated into readers’ lives and not set off in an increasingly distressed mansion on a hill.

Why am I harping on this? Grantland is bait for the coveted male, 18-42 demographic. Where they go, so goes the money. People like Franzen can continue to live in a bubble, pissing on people who dare to sell things and make money so they can support the cozy, insular culture of the dwindling number of editors, publishers and authors. Where is the value in literary novels about a family of assholes?

The literary authors’ competition isn’t the group of top-15 Amazon bestselling authors: their lot is competing with documentarians and longform article writers. What conceivable value is there in reading about characters with invented, mundane problems passing for insight into the human condition? I’d rather focus on real people.

And Ms. Egan and Ms. Picoult don’t have to worry: they write books people enjoy reading. Even though I haven’t read them, how do I know? Because people keep buying their books despite the finger-wagging critics.

 

 

I spent too much time on my last post, but I really wanted to push it out. I realized I never came out and stated my idea, partly because it might sound controversial unless I develop it properly. I’m glad I waited, because John Timmer, of Ars Technica, pointed to a new study that is relevant to my points here.

In my previous essay, I presented Stephen Jay Gould’s idea of dual magisteria, which addresses how one engages with the world. In Rocks of Ages, Gould places undue emphasis on religion as a major counterpoint to the scientific descriptions of the physical world. He does mention that there could be other domains of thought, but writes that they would constitute additional magisteria. Here, Gould did not go far enough; it suffices to group religion as one of many intuitive, personal ways of finding meaning in the world. Simply put, there should be two magisteria: science, and all the non-scientific, intuition-based ways of looking at the world.*

*The distinction is clear: there is a way to examine the material world, with experiments (although not necessarily their interpretation) providing a common frame of reference. Experiments are placed in this rarefied realm because they are expressly constructed so that, when methods are made available, other investigators can observe the same results. That is why one of the worst things you can say about a scientist’s findings is that they are not reproducible.

In this context, my idea for how one might deal with science is that, functionally speaking, one can ignore it. This is possible, first, because science itself, in the realm of meaning in one’s life, may have a lesser impact than emotional, intuition-based thinking. Second, when one aims to counter scientifically based policies, it is more about risk/benefit analysis, trade-offs, and marshalling political support, which actually has less to do with the underlying science and more to do with rational discourse. In other words, it is possible to arrive at policies that are directly opposed to recommendations based on scientific findings.

There is a distinct lack of courage among those who oppose and distrust science because it is considered a Liberal domain (i.e. American Liberals tend to favor governmental intervention in regulating markets but less in one’s personal and social lives; the story goes that academia is populated by these liberal types). These anti-science laymen lack courage because they avoid admitting that their policies are at odds with the scientific consensus, and that they simply judged other considerations to be more important.

So they couch their objections in scientific terms, and rather shoddily*. The proper place for creationism in school isn’t as an alternative scientific theory in biology class, but in a social studies or literature class, perhaps even an actual religious studies course. The goal of religion and of these classes, as Neil Postman and Joseph Campbell realized, is to attempt to connect the impersonal world to human perception. At best, such a course can be as invigorating as a philosophy course or an art appreciation course. It is interesting to me that so many myths share elements in how they describe the world’s beginnings, with many such stories pre-dating even the gods of pharaonic Egypt, let alone Christianity. There is power in these stories because, while they are rudimentary attempts at explanation, in actuality they help us deal with the mysterious and with the fear of dying at an approachable, human level.

* Hence this strange, if not ironic, call for more facts. Again, my last essay about experts focused on this idea of splitting hairs, where all of a sudden hyper-specific observations are invoked instead of the general theory. My point in that essay is that simply emphasizing specific examples over the rule is not a simple act. The divergence may be due to chance – acceptable within almost all scientific frameworks – or may indicate an actual alternate cause. If that’s the case, not only must the new model explain why this observation happened, but it must also address why all the other observations we’ve seen arose and were accounted for by the old theory. Some type of analytical closure is needed to address how we could have gotten things so wrong. One might argue that Occam’s Razor helps us here, favoring the explanation that agrees with most observations – one clear example is the choice between earth-centric and heliocentric models of the solar system.

I would only point out that such a religious studies curriculum would be at odds with what the American (Religious) Conservatives (i.e. less government regulation of economic markets but more constraints on personal lives) desire, because any such comparison of religions would naturally lead students to question how Christianity, Islam, or (religious) Judaism differ from the myths that had been so callously discarded.* Again, these zealots lack the courage to say that the strength of religion lies specifically in helping believers come to terms with the cycle of life and death and the harshness of the world. Continually using the stories of a Jewish guru who lived 2000 years ago to counter scientific findings made with modern equipment cheapens Christ and makes the religious look silly. Are we really to think that these Holy Books are relevant to how one interprets molecular biology data showing how closely related humans are to primates and mice? Scientific interpretation and finding meaning germane to our emotional needs (or explaining the human condition) are two different things. There are any number of stories one can concoct from religion, because so many stories in holy books are allegorical. We can change stories to fit the facts.

* I once asked my friend, who is a scientist and evangelical Christian, why he believes in Christianity and not, say, Zeus. He replied that Christianity is real and Zeus isn’t. He pointed to the archaeological evidence for the history of the Jews in the Old Testament and the documentary evidence of Christ and his Apostles for the New. To which I can only respond that there is also evidence that the Trojan War happened. We have many stories of the Greek gods and much archaeological evidence of the beliefs of the Homeric Greeks. Does that in itself prove that the Greek pantheon of gods exists?

****

At this point in American political culture, we are overly concerned with expertise; the irony is that we tend to pigeonhole these distinct voices rather than heed their advice. The pushback from scientists is to dismiss laymen as cranks. Both approaches are antagonistic.

On occasion, I hear fellow scientists, when they get annoyed with lay people, brush them off by claiming that their bit of science is hard and that laymen shouldn’t comment. That may be true, but as I wrote in my last essay, the tradeoff of academia is that at some point we publicize our research to other scientists. I would go further and suggest that if we are already doing this, we might as well write explicitly for laymen. In the end, it is hoped that our research is of significance and worth including in curricula – for educational purposes.

Naturally, scientific discussions tend to be easier between scientists, even if they ply their trade in different fields. We know the lingo. More importantly, we recognize that there are benchmarks for good research (control experiments, multiple trials, randomized sample sets, published methods and analysis techniques, blind trials where necessary, experiments specifically designed to test for alternate explanations, and so forth), and scientists generally tend to read broadly. As a result, they tend to ask questions that are not as pointed as an expert’s, but that aren’t at a rudimentary level either.

My own background provides an example. My undergraduate training was in biochemistry; my graduate work and one post-doc stint focused on neuroscience, specifically olfactory physiology. My current work as a staff scientist is as a cell biologist/image analyst, running cell-based assays and writing high-content analysis algorithms. Lately, my group has pushed our technology toward clinical application in immunology. This is not to say that I understand all these fields to the same degree as people who have spent many more years in them than I have. But the common skillset of doing science allows people like me to expand into new fields.

It isn’t as if I ignore biochemistry concepts even now, nor did my work in olfactory physiology mean that I simply looked at neuronal function. The point of the latter research was to show how animals use different sniffing patterns to elicit specific neuronal response types that might be important for the animal’s understanding of its odor environment. Being aware of the overarching questions driving specific aims is crucial to a scientist’s success. Another example: Gordon Shepherd, an important researcher in olfaction, recently published a book on flavor perception titled Neurogastronomy. In it, he synthesizes olfactory and taste physiology, fluid dynamics and modeling of air flows in the human nose, the physico-chemical properties of food molecules, and human perception. His bread and butter, however, was in neuronal circuits, with emphasis on the olfactory bulb. Although his ultimate interest is in the mechanism by which neurons give rise to perception, much was unknown, and so one must settle on sub-systems (such as olfaction, in “lower” life forms like honeybees, tiger salamanders, fruit flies, and rodents) and begin there.

So yes, I firmly believe that even if one is ignorant of a subject, one can come up to speed. It takes work and time. I am not arrogant enough to think that I am exempt from the Dunning-Kruger effect, but I do think that by identifying gaps in knowledge, knowing what to read, and finding experts to talk to, one can work to gain competence in unfamiliar fields. If one thinks that this cannot be the case, then there is no point in talking to one another or in reading.

I’ve only lately come to realize that science can be interpreted as a method of communication. We do this in a very precise and stylized manner – introducing new ideas, detailing methods, publicizing results, and discussing how our observations fit extant theory. Again, this has partly to do with the most basic elements of experimental design, geared toward helping scientists remove their biases during analysis. The assumption here is that we argue over interpretation and whether experiments were designed correctly. This can only work if the “recipe” and the “results” are reported faithfully and are reproducible by anyone else.

Thus science differs from other forms of communication because we work to make our work transparent. Other fields have the luxury of using allegorical, indirect language. Scientific ideas are hard enough without putting artistry into our language: think of “as an object accelerates, it cannot reach the speed of light since its mass increases,” or “if we know the position of an electron, we cannot know its momentum,” or “mass and energy are equivalent.” Because we scientists do try to simplify descriptions, we cannot turn around and tell laymen that what we do is hard to understand. Science is hard to do, especially to do well, but the telling of it can be straightforward (I’m thinking of essay-level exposition, not sound bites).

Despite science being a means of communication, it is not a debate in the legal sense; the point of distinction is not whether the rhetoric is convincing, but whether the data best support an idea that describes reality. There is no audience per se. Rather, the “audience” is whether the next experiment is consistent with the older findings. This is the predictive aspect of science: if what this other scientist published is true, then it affects my idea like so, and thus I should see this in my experiment.

But as soon as we step away from the realm of validating theories, we have descended into the muck surrounding the ivory tower. This isn’t bad at all; while basic research may be a worthwhile pursuit, I see no contradiction in having to justify that concept to the taxpayers. While other scientists might scoff at having to consider applied research, I see it as necessary. In my field, we apply for grants from the National Institutes of Health. In fact, we must always suggest ways in which the research will ultimately benefit the clinicians who treat patients.

My bias is that I see applied research as compelling, and I see the idea that all research must be pure and unsullied as a red herring. In other words, I see the realm, or domain, or magisterium of science as a rather small one. As soon as we start talking about funding, applicability, significance, or whether we should pursue a line of research, we get into that fuzzy idea of the “other” magisterium.

This is the part where laymen falter. They tend to argue from a grounding based more on non-scientific criteria than on any scientific objections (based on methods, findings, or analysis). I have a very definite view that scientific discussions require the language and methods of science. This helps scientists tease apart assumptions, biases, and the empirical findings. It isn’t that all scientists can compartmentalize their thoughts, or that personal politics, background, and temperament do not affect their thinking. It is that the whole system is set up to at least force scientists to justify their ideas (or biases) with data. Questioning scientific findings can only concern methods, analysis, interpretation, counter-evidence, and alternate hypotheses. Alternate ideas are always there; the best idea, or the consensus, by no means implies 100% certainty. It might simply be that the idea is the best of the worst.

However, if one were to discuss why the research is worthwhile, why a scientist pursued it, why something should be funded, what applications it has, or what the resulting policy recommendations are: all these are subject to debate. We have facts, as discovered by science, and then there is how we deal with those facts. All of us must come to grips with them. That is why I modified Gould’s opposed magisteria to contain two domains – science and not-science. The former speaks to objective truth, or at least a description of the material world that can be replicated by any sufficiently educated experimenter. The latter has to do with how humans perceive these hard truths.

While it seems like science is being given a preeminent position, I would say that it is a rather small domain. Its language and methods are precise – it is limited. The not-science magisterium encompasses everything else: our experience, our philosophical bent, our religious background, and so forth. These are bundled together because their “truth” is but an interpretation of how we look at the world. At the same time, this domain is much richer because it is unbounded by hard facts – it can be as fanciful as whatever the imagination can come up with. Its purpose is to help us with that vague concept of “meaning.” It is from this sphere that we might find compelling arguments and vivid imagery to help convince a lay audience.

Non-scientists can lay claim to the other half of the problem, that of receiving the message. Even if scientists write for the public, interested laymen need to listen. When laymen apply the label of “expert”, it is often done with opprobrium, suggesting that the expert has narrow knowledge but no “real world” experience. The ivory tower is therefore a prison rather than a place for undisturbed rumination. Non-scientists can apply this rigid standard to voices they do not like simply by claiming that one’s expertise is not in the topic at hand. Naturally, the point is to keep experts corralled and voiceless. It is every bit the same exclusionary tactic that some scientists use to keep laymen out of the realm of science.

My problem with this is that it allows opponents to treat each other not as individuals but as belonging to the “other”, and eventually as caricatures. Instead of engaging with the science, it is the scientist who is attacked and demonized as mad or playing god, and the laymen who are portrayed as ignorant religious zealots. If nothing else, people are generally shrewd. Even if they do not appreciate the nuance of an experiment, they are probably experts in some other domain. This goes for scientists and non-scientists. Are we to suggest that they cannot do anything else, simply because they are competent in one field? Surely, all of us at various times and on numerous topics hold incorrect opinions, but we can learn enough to become informed. To say that this is not possible is to argue that education is pointless.

No one claims that we can all become experts, but we can all learn enough to appreciate the current thinking. So the problem in how laymen and scientists relate to one another is that there is a vested interest in ignoring the fact that we all live in the world. In that sphere of public influence, rightly or wrongly, scientific facts and religious thoughts are just two of many points of view. In examining the greater good, one cannot argue in isolation.

Coal-fired and nuclear-generated electricity provide one such example. Science and engineering have made both kinds of plant efficient providers of power. We already know that burning coal increases greenhouse gases. Nuclear power is generally cleaner at the point of origin, but it sure is spectacular when things go wrong or when we dispose of spent radioactive fuel. Science will not help us decide which power source to use, or whether we should re-wire our electrical grid and redesign our houses and appliances to consume less power, or whether we should build up hydroelectric power, wind farms, and solar power plants, or whether the trade-offs are worth it. Wisdom and knowledge form a tapestry. We would all do well to remember that we must argue using the appropriate tools.

When arguing scientific points, it makes sense to ask about the assumptions, the previous empirical evidence, the methodologies, and the current findings. It is fair to ask for clarification when current findings and seemingly contradictory facts collide. But scientific validity is argued from empirical evidence, not from rhetorical sparring in the manner of two opposing lawyers. There is no such thing as “all evidence.” There is curated best evidence. And while that is still no guarantee of the scientist being right, it will certainly take a bit of work for anyone to identify the actual problems with the model (see my previous essay on experts for some examples).

When arguing significance, we would do well to remember that matters of judgment can be based on personal experience and informed opinions. Benefit and risk can be of equal weight, with personal caution being the only guide to what one prefers to emphasize. It would be great if we all had informed opinions, and that is all we can aspire to when we haven’t had the luxury of time spent cultivating expertise on a topic. It is partly the scientists’ job to make available the resources that help citizens become informed. Telling them to trust us is a non-starter; we ourselves argue that an argument from authority (mostly with regard to religious authority) is no argument at all.

Scientists need to set an example and show laymen our actual methods: a fact is believed because we see it – and you can too, if you do exactly as we specify. The other component is to realize how quickly we step outside our scientific domain. Facts and coherent theory are not sufficient to inspire. Rhetoric becomes an important factor. If you don’t think language matters, just recall “irradiated foods” and the public misperception: the process does not make the food radioactive, but consumers rebelled at the word.

Laymen, for their part, need to be more honest about the basis of their objections. Since society pays lip service to the idea that experts are good (when they argue in your favor), laymen suppose that the only way to be taken seriously is to argue from facts, even if their strongest arguments are based on personal experience and circumstances. The result is that non-scientists push into the domain of science without realizing that ideas are not weighted equally. One needs affirmative evidence to show that a theory can be a valid alternative. Pointing out holes in global warming mechanisms or in evolution can at best weaken those theories. In no way does criticizing science show why creationism is valid.*

*I try to avoid being snide, but I can’t help it. Please answer me this: does taking the host during Communion result in the transubstantiation of the wafer and wine into the body and blood of Christ? It is a favorite question with which Protestants tweak Catholics. You would think there is a verifiable answer here. Whose creation story – excuse me, theory – should we teach? The Sumerians’? The Egyptians’? The Greeks’? The Zoroastrians’? The Buddhists’? The Hindus’? For that matter, let me know which set of gods to pray to. Maybe before we even consider teaching creationism as an alternative theory to evolution and cosmology – a distinctly American phenomenon – the religious ought to figure out which story best “fits” the data.

The point of this essay is to suggest a more constructive way to talk about science. I see no issue with using compelling imagery to push scientific ideas. This is not acquiescing; it is recognizing the fact that no one likes their beliefs challenged. But scientific facts are as they are; they change only because of more precise observations from better tools and experiments. Our personal worldviews are what must change, if the two are ever at odds. We scientists should take advantage of the metaphors and allegories allowed us by the non-science domain, showing that even something as contentious as religious ideas can be reformulated, not necessarily refuted, and making palatable the bitter pill of hard-won scientific facts.

I have been thinking about ebooks lately; the upcoming Nook Tablet and Kindle Fire are the final nails in the coffin for the book publishing industry. Ebooks are simply a commodity, and one that produces less revenue than either music or video. Static ebooks, resembling ink-on-paper, will be superseded by a product that fully embraces the possibilities given it by tablet-based computers. What I am about to say sounds reactionary, perhaps even downright Luddite in today’s world: the reader demographic I belong to is being discarded.

I think I am that rare beast (20-35 years of age, male) who has avoided action movies, sports books (well, not sports statistics, but I would argue that falls under economics and math), video games (ever since I received my bachelor’s degree), and cars (I’m a big fan of public transport). When I was ten, I made a conscious decision to listen only to classical music (I have been a listener of WGBH radio and Classical Radio Boston since that time). What I bugged my parents most about was shuttling me to the library, at least until I was old enough to get there on my own. When I was a teenager, I took piano lessons from a jazz and blues teacher; however, I asked him if he wouldn’t mind teaching me Beethoven instead. So yes, I admit I’m a strange duck. The point is that I’m probably not the mainstream market.

With regard to reading technology, I agree absolutely with people like Neil Postman and Nicholas Carr (among others), who argue that we are leaving the era of densely organized, linear, narrative literacy for a mode of literacy designed for cursory scanning, link following, and 140-character phrasings. Although both these writers took pains to explain things in neutral terms, one cannot help but see their disgust and despair at a world that, given the opportunity to choose between a rich internal, mental life and one based on satisfying emotional impulses, pitches headlong toward the latter.

Both men phrased the argument in these terms: literacy – constructing arguments by weaving facts and rhetoric together and placing them into written form – enabled the formation of intellect and wisdom. The act of writing onto paper gave thoughts a permanence that required the author to consider and respect each and every word. Once published, the words cannot be so easily retracted and fixed. But the true innovation is that readers can refer back to previous statements made in the text, something that simply cannot be done had the author been limited to speaking. Thus the book enabled complex arguments through its capacity for cross-referencing. Complex arguments can be delivered as one coherent unit of thought.

One might argue that web-enabled documents should enhance the concept of the book, providing ease of cross-referencing and access to primary sources. However, that is generally not the case. While reading text on screens is as efficient as reading on paper, when hypertext links are introduced, reading comprehension decreases. Once links are present, it seems readers can’t help but follow them, especially to other articles. Ironically, comprehension of the text they were originally reading goes down. Nor is it clear that readers can distinguish between the multiple articles they read; source attribution becomes problematic. It might not matter in everyday speech, but if one wishes to write informed op/ed pieces or scholarly works, one can see how inefficient this becomes.*

* Source misattribution need not be limited to heavy internet users; I remember a flap with Doris Kearns Goodwin, who was found to have plagiarized from another scholar in her book on the Kennedys. She accepted responsibility for her misuse of quotes, although her defense was carelessness and not malice: she lost track of her notes and mixed up passages she wrote with those written by others. I think, for a scholar, carelessness is a greater offense than stealing, but that’s like arguing whether being killed in a hail of bullets is worse than being killed by a single gunshot to the head.

Despite these episodes, one can see that books lend themselves to being literal dividers among different “thoughts”. On the web, HTML addresses serve that purpose. Who among us, however, pays that much attention to them as identifiers? At any rate, taking care might compensate for these lapses, but apparently most people do not take care (hence the likely decrease in comprehension and the increase in confusing different authors). Online texts become a data slush.

Books engage readers unimodally, making it difficult to access other source materials. Follow-up probably requires some selection and discrimination by readers, to waste the least amount of time. One might save a lot of time in a library or bookstore by simply thinking about what was written, identifying the most fruitful line of research, and then acting on it. This is probably the “virtuous cycle” engendered by books. Without the distraction of clicking on links or doing something else (like playing games, watching YouTube, or making iTunes playlists), one’s attention is captured by the writer and by one’s own thinking processes. If you value choosing, more often than not, things that make you think, you might place a premium on identifying works of distinction for reading, constantly sorting books into “literary” and “genre” piles or marking non-fiction as “scholarly” or “polemical” (you know, like being an elitist snob?)

Neither Carr nor Postman feels technology is neutral, because new technology brings the possibility of drastic change into an existing culture. This can certainly be tempered by exercising selectivity when adopting and using new tech. But we don’t do that; Americans in particular have rarely felt the need to hinder the acceptance of technology, generally feeling that new machines and methods can simply be mapped onto existing routines. It is rare for humans to realize the potential that the new has to overwrite the old.

We also seem unable to accept that there are such things as intrinsic differences, and that describing the differences does not necessarily imply “goodness” or “badness”. Instead, contrasts are turned into a line of battle, and one list must vanquish the other. I have noticed that when people talk about the pros and cons of ebooks, they focus on the differences between reading on a screen and reading black-text-on-white-paper. People talk about ink versus pixel density, smells, tangibles, marginalia, and so forth. I have always found these arguments silly. Both systems convey words quite well, and aesthetics is a secondary concern; the true difference is whether one is reading from a relatively invariant, single-purpose object or from a computer.

One might want to keep in mind the differences that each engenders, and then take steps to choose the right tool for the job. For example, web pages lend themselves to displaying news. The style of brevity, the need for immediacy, and the ability for multimedia presentation offer readers a variety of reports and primary audio/visual supporting materials. Interactive graphics (Flash animations, Javascript, and so on) suit longer, magazine-like articles. The very nature of a link-heavy multimedia extravaganza, however, detracts from the single-mindedness of books. In my experience reading on e-readers, computers, laptops, and tablets, I have not come across the “all-business” approach of a book. A writer can simply tell his story, present evidence, furnish a descriptive table of contents, and get out of my way.

But there is one difference between my take on screen-based reading technologies and those of Postman and Carr. I have never had the problem of distraction when reading or writing with tablets or desktop PCs. One reason is that I spend time finding the best piece of software to let me read the way I want. Sure, I am particular about minutiae like margin width, font face, and font size, but once they are set, I do not fuss with settings. I’m more concerned with optimizing my “read-flow”. On my Nook Color, I got rid of the stock software (well, I run CM7.1 off an SD card; I have a soft spot for B&N right now, considering that I think Amazon will drive them out of business) because the Nook’s reading software does not hook into Evernote. Also, the Nook version of Evernote lagged behind the one in the general Android market (it’s been corrected since, but I still can’t share directly into it from the NC reader).

The base NC OS also does not allow for widgets; one thing I love about Evernote for the Android market is the expanded widget, which shows 3 previews of your most recent notes. You can either add a new note or tap the preview to go directly to that note. With rich text formatting, I can now copy and paste interesting quotes from the ereader software, change formatting, write a few thoughts, and go back to my e-book.

This might seem inefficient. It is. For all-in-one note taking, I like MoonReader. However, it does not support CSS for formatting ebooks properly. Well, it has a mode where it displays CSS using Android’s native HTML renderer, but then the book is essentially displayed as a web page, and the user can no longer highlight or annotate text. With that said, most ebooks can be read with basic paragraph indentations. For non-fiction, though, it’s a non-starter: it can be hard to tell whether the author wrote something or set it off in block quotes. So I’ve gone back to Aldiko. Selecting text across pages is no longer a problem; I just copy/paste in two segments.

Why do I like this inefficiency? I’ve found that when it’s easy to make notes, I make more of them than when I compose my notes carefully. I realized that using the internal highlighting function creates the college-textbook phenomenon: a large portion of the text is highlighted by the student, but without marginalia. I intentionally made it less convenient for myself by copying the relevant passage into another program. This means I have introduced an immediate cost to my note taking, and every time I pay it, I have to disrupt my reading flow. As a result, I annotate ebooks the way I mark up actual paper books. I do bracket an important passage or two, but I also write a summary, objections, citations to contrary arguments and publications, or questions in the margin. I try to read larger segments of the book, pick a salient quote, and write a small summary of the arguments.

I see this as adapting technology to my reading, as opposed to letting tools change the way I read. Yes, I spent a little time on metareading, trying to find my comfort and thinking about what I want in my software. Before I got my NookColor, I read on my phone, and before that, I had a Palm/Compaq/HP PDA. Back then, I tended to read fiction on the PDA (due to terrible inefficiencies with annotations – these were the PalmPilot days) and reserved non-fiction for the paper books I owned. I also read a lot more library books than I do now.

So I never wrung my hands over reading ebooks, because I made a conscious effort to do my e-reading in a way that preserves my ability to think, ruminate, and write. I spent some time adjusting the software for my comfort. But I don’t see that as any different from setting up a reading nook, a den, or a writing desk. I’ve seen writers do something similar with their computer setups, selecting programs that simply show text: no menus, no fonts, no styles, a maximized workspace, so that they cannot see the other programs that beckon for their mouse clicks. It is unlikely that most consumers will make that effort to use their tools in such a precise manner. That is the very distinction between power users and regular consumers.

And this is what I mean by the Nook Tablet signaling the end of a book-centric reading demographic. No, I am not discounting genre readers. But I think the segmentation is between “literary”/“genre” readers and the mass of non-readers. It has always been thus. Think about the books on the NY Times bestseller list: I am sure any aficionado can suggest better alternatives within a particular genre. The most popular books are not always the best books.

However, I’m not worried about some abstract notion of quality; I do worry that book sellers and publishers who treat literature as a market (and it is their right to do so) will not exercise the selectivity that could result in good literature. I am not saying that popular books are all bad, or that good literary works can’t be popular. I am saying that shrewd businessmen who happen to have chosen to make their money in publishing (although one might argue that making money from books isn’t so shrewd) will make decisions based on the bottom line. This will lead to decisions that won’t make sense for book lovers, but might make sense if one is simply trying to capture dollars from non-readers.

So we will see sellers use the internet as best they can: data mining, marketing to niches, fine slicing of market segments – a continuation of market splintering. The mass market won’t be literary-minded people who enjoy reading for its own sake and perhaps have broad interests. I guess I am saying that there are fewer cross-genre readers than there are consumers simply satisfying their narrow interests across different media types. Hey, we can’t expect anything else from free market economics, geared as it is to finding the cheapest way to pull in the biggest income.

And so a book seller sees a need to build an ereader that is actually a general purpose computer (i.e. Nook Color and Nook Tablet.) I suppose the Nook Color is a book-centric tablet, but with each update, it’s gained multimedia functions. That is what is demanded and needed to compete with Amazon and Apple’s iPad. This point is underscored by popular e-book bloggers and publishing industry watchers. In fact, Nate Hoffelder of The Digital Reader prefers the Nook Tablet to the Kindle Fire because of the Nook’s media capabilities.

There is no conspiracy or malice. Just a bunch of smart people doing the best they can to make the most money for their companies. They wind up selecting the biggest market segments and catering their wares to them. Consumers will buy the products that they think will serve them best. And so we’ll see more e-books that are app-like, with embedded videos, music, links, and interactive figures. The concept of a linear, comprehensive narrative will be superseded by apps amenable to updates and upgrades. I’m not immune to this; I can see how attractive an interactive Cat in the Hat story app would be to a child. But then these aren’t “books” in the traditional sense, nor are they even “e-books”. They are book-based apps.

Is this bad? Not from a market standpoint. It’s probably where the money’s at. And I’d be the last person to begrudge another his ability to make money to feed his family. I’m not even decrying the fact that we lean on market research rather than wisdom in choosing the products we make and sell. It’s like choosing a web search engine (Bing? Google?), an app market ecosystem (Google’s? Amazon’s? Barnes and Noble’s? Apple’s?), a cloud storage service (Dropbox? Ubuntu One?), or an OS (Win? Linux? MacOS?). Each choice leads to a different array of probabilities and paths one can take. It also tells companies how they can behave in order to capture our choices and dollars. And because the momentum of the mass market is toward apps and general purpose computers – not ebook readers – that’s exactly where we will end up.

I have no desire to rehash arguments made by many others – in and out of publishing, published by big presses or small – about the good and the bad of e-books. Instead, I offer some observations from Teleread (e-books continue to show an increase in sales, and, as a form, books are undergoing changes – thank you, Chris Meadows and Paul Biba, for the links) and The Digital Reader.

****

Yesterday, I went to Porter Square Books to attend a reading by Tom Perrotta (The Leftovers). I am a fan of Perrotta’s (I have some reviews from Goodreads that I haven’t yet reproduced here, and I managed to repost my essay on Perrotta’s Little Children and The Abstinence Teacher). While self-contained (the excerpt was about two men, one of whom reaches out to the other to provide comfort), the reading did not seem too compelling to me. Instead, I found the book jacket description to be more interesting: a lot of people vanish (Rapture style). How do the people who are left behind cope, in the absence of an explanation as to why the vanishing happened?

There were not many questions about his books per se. There were two involving the profit motive: one person asked if it was any easier to get a second book published; another asked if he now writes with an eye to screen adaptations. For the latter, Perrotta noted that after Election, the movie, was released, Hollywood seemed excited by the prospects of his Joe College. The book disappointed that crowd in that it was not the slapstick, raunchy comedy people were expecting. As for Little Children, Perrotta would have marked it as one of the least likely books to be adapted (an ensemble piece, with a plot about a child molester). The director, however, really wanted it made.

To tie it into this post: one woman asked Perrotta what he thought about ebooks, whether he feels they provide an opportunity or a threat. Perrotta, as in his books, seemed to give a fair answer. He acknowledged that there are opportunities for authors: new authors can be published, while established authors will never go out of print. His tone, posture, and rushed ending to that statement suggested to me that while he understood the virtues of ebooks rationally, he did in fact feel threatened. He did not rail against ebooks. He realized that the medium is undergoing a transition; in the short term, he is satisfied that there is a place for books. His evidence? He gave his reading in a bookstore, which acts as a forum for readers and authors to interact. More emphasis was given to the fact that he is comfortable in the publishing world. He grew up reading words on paper, and that’s his comfort level. His point, it seems, is that paper book readers have a culture, and that e-book readers will eventually form a different sort of culture from the one he has known.

I think our current conception of e-books is actually limited, to some extent, by the adoption of the Kindle. The Kindle is a translation of paper to screen. A number of features mimic what people can do with paper (marking pages, writing notes) while improving on others (such as whole-book search and storing large collections of titles). But the e-ink technology (in its current black-and-white, slow-refresh state) lends itself to being treated like a book.

With the iPad and NookColor, we are beginning to see content reshaped to fit the color screen of a portable computer. The popularity of the Kindle may have stemmed from its similarity to the printed word. Sooner or later, e-books will diverge from this current, book-like presentation, turning into slick, interactive, multifaceted presentations (probably some hybrid wiki-page/HTML5/video/music extravaganza). We are already seeing that in the Dr. Seuss books being converted to iPad and Android apps. It is ironic that many have tried to expand on the book form (think of the Griffin and Sabine books, or the Dragonology series), only for apps to bypass it altogether.

I think what is lost in the attacks on and defenses of ebooks is the concept of technology creating culture. Neil Postman, Mark Helprin, and Nicholas Carr have made these points. Technology is neutral in the sense that humans can decide on its immediate use. We also have the ability to select among a great number of tools. However, the authors I cite here make compelling arguments that we are also shaped by our tools. We may not select the proper tool (if we are holding a hammer, it won’t help us with set-screws). And tools can limit how we approach a task (hence the cliché that when you have a hammer, everything looks like a nail). They take the argument a step further: technologies that alter language can literally alter how we think.

I don’t think it is controversial to say that humans are generally intellectually adaptable. Postman et al. argue that we are much more malleable than assumed, and to our detriment. Online activity in the mobile age – googling, clicking links, video-centric delivery, and short texts (shorthand, abbreviations, two-sentence paragraphs) – tends to promote shallow scanning. One might counter that, if a person is so inclined, he will delve deeper. Postman et al. counter: no, he won’t. The nature of Internet presentation, they argue, makes it less likely that people will ruminate, read deeply, and think in the silence of their own heads. It is easier to follow the next link.

Of the three, I think Postman gave us a framework for dealing with technology. In both Amusing Ourselves to Death and Conscientious Objections, he argues that new technology is here to stay (at the time, he was writing about the pervasiveness of television), and that we need to be aware that all such communication-altering technologies have the capacity to reshape the way we think. We must take care to exploit a technology’s virtues while limiting its disadvantages. In other words, control the technology lest it control us. What was interesting is that he argued that TV isn’t bad because it provides salacious entertainment. TV is most pernicious when it aspires to teach and to serve as a forum for public discourse.

Not just television, but effective television presentation, comes with visual excitement and change. This is the opposite of the arguments one can develop in excruciating detail in a book. Compare a book (better yet, many books) on global warming to an Al Gore movie or to inane 5-minute segments on television news. Postman would simply prefer that we realize a 5-minute segment is the worst way of dealing with complex arguments. It simply isn’t enough, especially given the scientific literature on the subject. What TV is suited for, Postman notes, is an entertaining 5-minute segment: something to make you laugh or cry and enjoy; something with impact, translatable into sensational imagery – sound alone is no longer enough. Instead, we are concluding that audio-visual presentations (whether on TV or in YouTube videos) are the main solution, rather than a portion of it. It isn’t that we do not know what the limits of the technology are; it’s that we do not ask whether we are using the right tool.

I agree with this assessment. Now, when I peruse textbooks written for college students (in neuroscience), I note all the missing pieces of information. Not just nuanced counterarguments, but complete series of compelling experimental evidence pointing to alternative theories. And that happens even in a 700-page textbook. Imagine how much is lost by reduction into sound bites (not compression, which would imply that the total information is there, merely recast in a more efficient notation). Television has shortened political debates into short oral bursts (hopefully with visuals), because its strength is in providing ever-changing stimulation. The Internet will reshape reading on a screen, emphasizing scanning, clicking, and instant look-up, not necessarily understanding or retention, since the information is always at hand. The new “smart” will be in constructing proper search terms.

I don’t think there is anything wrong with that, though. As Postman and Carr suggest: be aware of what is happening to you (although I am paraphrasing liberally; they devalue this type of intelligence, while I am willing to redefine what intelligence ought to be in this brave new world of ours). Maybe one can simply use the search engine to find the proper book.

As a final aside: here’s another take on what we can lose: scintillating intellectual conversation. I was browsing the stacks at Porter Square Books and saw that there is a new collection of essays from Christopher Hitchens. The jacket blurb made a pertinent point: Hitchens combines intelligence, wit, a huge store of knowledge, the ability to recall from this “offline” repository, and charm. That does sound like someone who would make a wonderful dinner companion, and I can certainly see how conversational flow is ruined if all of us are googling on our phones. But I sense a hint of elitism in that. For my part, I have a (I hope relatively idiosyncratic) collection of stories about science, quantum mechanics, Richard Feynman, mathematical gambling analysis, gadgets, statistical analysis, novels, World War II, microscopy techniques, and 19th-century European history running in my head. And that’s just a thin slice of what I know. Whether I am good company depends on the people I am with, how well I present my thoughts, and how receptive they are to them. The point, simply, is that Hitchens and I (and others) have chosen to remember different things. Maybe the cultural gatekeepers are just annoyed that so many people choose to remember things different from what they do?

Is curation important? I think so, but only in the sense that it plays to our virtues. We are not indexing machines like Google’s data centers. What we do remember are things associated with great emotional impact. That supports single-trial learning (so that, if we are lucky, we avoid in the future the things that hurt or almost killed us), but in this age it can also help us identify meaningful cultural objects. It may be reflected in the fact that we prefer that people tell us about the formative events that shaped their lives, rather than give a considered account of the sequence of happenings that let their lives unfold the way they did.

All this is a way of saying that I agree with Perotta that reading culture will change. Since I am comfortable with both paper and the digital screen, I do not feel the same loss that Perotta does. I know there are readers out there like me: those who feel at home in a library, a bookstore, or on bn.com/ebooks. I pack paper books and my NookColor for trips. I write marginalia in books I own, and I upload my notes to Evernote when I read e-books. But are we the most common sort of e-book readers? I have no idea; I am not sure what the dominant form of e-book reading culture will be.

There was a recent flap, documented by Roger Ebert, regarding reviews of Inception, the Christopher Nolan film. The principals were Ebert and New York Press critic Armond White. The movie was generally accepted as good, and the Internet mob took issue with the few critics who panned it. Ebert found himself defending the right to publish a negative review, provided that the review offered some insight that transcended a rating. The real issue is whether the critic was simply being contrarian, seeking to drive interest in his own musings. Ebert also made the point that the online mob mentality is driven less by interest in the artistic quality of a movie than by the base desire to belong to a tribe – in this case, a tribe of like opinions.

The short attention span promoted by web interfaces also feeds the need for quick verdicts. Ebert and the contrarian critic both made the point that “me too” comments are drivel; there is a need for sensible and intelligent commentary. However, Armond inflamed the discussion by saying that this commentary should only be supplied by gatekeeper critics. I think that is exactly the wrong place to draw the line.

For one, the two are talking about movie criticism. It isn’t rocket science; there is very little basis in fact, and most reviews worth reading involve interpretation. I read through some of Armond White’s reviews; they appeal to me because he engages with the film as it is. Sure, his verdict is clear, but he treats the film as something worth discussing. Even in a simple actioner, such as Angelina Jolie’s Salt, White manages to find the film’s political stance atrocious, never mind the plot holes and muddled action shots. The film has Jolie killing American CIA and FBI agents, who are portrayed as bumbling idiots. White is outraged that this point of view, such as it is, isn’t explored in any way beyond serving as the backdrop for fantastic fight sequences.

White picked the wrong fight, I think. Commentary is open to anyone who can see and can write. What a professional writer adds are facts about how the movie is constructed, access to the participants, and historical perspective and context. I have my doubts about the primacy of critics on that last item: perspective can come from anyone who has made an intensive study of film. No one can see every movie made. In a sense, the fact that critics must choose among films opens them up to the possibility that a dedicated amateur may actually know more than the critic in some limited sphere. This is the nature of the beast. Movies are made to be seen, and many people have access to them. I am not discounting the role of critics; I am suggesting that the difference between an amateur commenter and a professional critic is a matter of degree. The professional will in general have seen more films, read more, and talked to more actors and directors than the amateur. The professional will also generally have a better grasp of the evolution of the technical aspects of moviemaking, and of the philosophies governing how shots are framed, how actors and objects are blocked, and how edits are decided.

Despite the professional’s likely immense store of experience, it still would not surprise me to see dedicated amateurs provide professional-quality insight. One might think that, since I am a scientist, I would exclude a few favored domains from this idea that an amateur can accomplish something useful. That is not the case. The history of science is full of serious amateurs who took great care in framing testable hypotheses, designing pertinent experiments, and making careful observations and calculations from their data. Of course, the precision required in gathering scientific data has increased, due both to the quality of the equipment and to the wealth of scientific knowledge that must be integrated. These factors limit a modern dilettante’s ability to do science.

But access to the scientific literature remains, and in some cases has increased compared to even five years ago (think of open-access journals like PLoS One). There is much room for amateurs, and even scientists, to comment on fields outside their specialty. As a matter of fact, this is healthy, as it promotes awareness of the state of science and provides a shared basis for intellectual discourse.

What struck the wrong note, then, is that Armond’s dismissal of Internet commenters smacks of elitism rather than a defense of merit. Elitism assumes a position of superiority, while merit requires one to earn that privileged level. Only in form does Armond’s argument seem to defend intellectual discourse. I would hazard that his type of discourse is the antithesis of intellectualism, relying on argument from authority rather than on reason and rhetoric.

I have been a fan of the written Roger Ebert for some time. I always thought his written reviews conveyed a better sense of experiencing the reviewed movie than his capsules on Siskel & Ebert or Ebert & Roeper. Armond specifically decries this latter form of review: simple descriptors followed by a thumbs-up/thumbs-down verdict. Thus, when I came across a throwaway line in Jacques Barzun’s From Dawn to Decadence, about how modern culture is sliding toward its nadir because of a movement away from dramatic tension and catharsis, I disagreed with him. The main issue isn’t whether Ebert is qualified or not. The issue is that, in the course of making a television version of film criticism, the form imposes constraints that, in essence, “dumb down” the review.

Ebert, White, and Barzun all come across as thoughtful people who are excellent writers and who are passionate about their subjects. Something is lost when these experts go in front of television cameras. As Neil Postman points out in Amusing Ourselves to Death, television isn’t bad because of sinister TV and advertising executives. TV can be bad because the process that makes compelling, watchable television isn’t the same as the one that makes for a compelling book. Postman augments Marshall McLuhan’s statement that “the medium is the message” by clarifying that the medium is crucial in framing how the message is conveyed. TV and movies generally require cuts, motion, and changing camera angles. The setting needs to change frequently. Most importantly, speech cannot take the form of lectures; it must come in short phrases, captions for the attendant graphics and music. In short, this is the opposite of what occurs in textbooks or scholarly works – and even in magazine or newspaper articles.

Again, the point isn’t that TV has no redeeming value. The fact is that different media have different advantages and limitations, and those limitations would restrict even the most serious scholar who wished to elevate discourse. TV isn’t suited to intellectual discourse: how can five minutes of round-table discussion equal the depth of a few chapters of history, or of economic and political analysis? Thus, I found that White picked the wrong fight. It isn’t that there are better critics than Ebert; White’s issue, I think, is with the television format.

On a final note, related to how TV changes presentation, I will recount a conversation I had with a dancer. I had just met her, and since she told me she is a dancer, I asked her what she thought of shows like So You Think You Can Dance (one of my favorite shows). Her complaint was similar to that of just about every aficionado who sees his subject thrown up on the screen: television doesn’t convey the depth, the technique, or the nuances of the subject. She thought the three-minute dance segments did not convey the technical side of dance, one important component of which is that some pieces are long – stamina and attention are requirements. The voting system, driven by untrained audience members, skews results toward flashy choreography; hip-hop and modern pieces are favored (I’ve rarely seen a true classical piece on the show). This echoes criticisms I’ve read about American Idol, political bully pulpits, and science shows. Interestingly, the New York Times reported that Broadway singers, directors, and producers were finding that audiences no longer applaud unless singers end with a big finish. They blamed shows like American Idol, where all singers end on a high note. Again, what plays well on television isn’t what works in the theater, at a dance recital, or in a book.

The binary Ebert is not the one I am familiar with. The Ebert I found wrote thoughtfully about film. As with White, I get the feeling that Ebert thinks whether a film is “good” is beside the point; what is most interesting is whatever thought or emotion the movie evoked. Ebert engages the movie as it is, veering away from measuring movies against some artificial Platonic ideal of what a movie should be.
