Archive

Social commentary

The best novels focus on true-to-life characters, who swim and thrive against the tide of events. What has been interesting is how limited the purview of the modern literati is in identifying novels of note. Books that step outside the boundaries of wringing the profound out of mundane life in rural America or the big city tend to be scoffed at.

What I found so stunning and effective about Ramez Naam’s Nexus Trilogy: Nexus, Crux, and Apex, is how recognizable the motivations and actions of his characters are. First and foremost, the series is a thriller that explores ideas about technology that does not yet exist but could very well arrive in the next few decades. But the novels encapsulate, more so than white papers, policy articles from think tanks, or academic research, the human tensions of a new telepathy/mind-link/brain-control technology.

If one were to ask what humans would do with such new devices, one need look no further than Nexus for a realistic snapshot.

What made the novel so thought-provoking? Probably that Naam did not shy away from the abuses of the technology. Nexus, in this novel, is a nanoparticle computer network that one can inject into the brain. The idea is that the particles can monitor and influence neural networks. Coupled with wireless packet transmission, it effectively enables mind-to-mind linkage, and control.

Needless to say, the abuses are nefarious; body hijackings, slavery, murder, rape, drug-like stimulatory uses – all are in the novel. The last point is probably the flavor most consistent with why such devices would be made: therapeutic purposes.

Presumably, if these particles can localize to the brain (and possibly elsewhere in the body), the dream is to be able to perform fine-scale monitoring of aberrant body processes and deliver precise therapy. Development of the mind-link capability could well be driven by new approaches to treating mental illness. Probably the most profound use might be enabling new ways for families to communicate with autistic loved ones. Another key reason might be to enable the joining of minds to enhance performance; the simple case might be in sports or within an orchestra, but more likely, such direct networking would benefit the military, or allow groups of humans to be used as a massively powerful distributed computing network.

Although there have been great strides in brain-machine interfaces for vision, we are a ways away from being able to replace the eye.

However, my sense is that a true Nexus-like technology would be immensely effective at causing harm, as soon as the technology is released. It will probably be co-opted into tools for body control, torture, and rape, simply because it should be easier to cause paralysis and induce base emotions.

So, in these contexts, with immense potential for both benefit and abuse, is it worth pursuing this technology? Further, is it a meaningless question? The premise of human dignity tends to be a Western concept. In other cultures, the needs of the many outweigh the needs of the few; that type of culture tends to respect the group, perhaps at the expense of the individual. In that context, can anyone reasonably expect a lack of research into such technologies, based on the concept of individual rights? If anything, there are more countries that are ostensibly authoritarian than not; I would not be surprised if the technology arose precisely because a government wished to exert control, rather than from, say, the healthcare sector.

Naam has a distinct view; for one, his main characters (generally the viewpoints with which one assumes an author is most aligned) tend to be libertarian, of the USA variety. It’s the usual gun-lobby approach: the technology does not harm; humans do. There is a strong counterbalance to this viewpoint, but what we are left with, in the novel, is a technology that is released into the wild, with no oversight, dependent on most people doing “good”.

I’m not sure. Despite Naam’s ostensible viewpoint, I am left ambivalent. I’m not sure this technology should be developed, let alone released, considering the potential for private, corporate, and governmental abuse.

So what is the point of thinking about the Nexus Trilogy in the context of projecting what amounts to technology governance policy? Isn’t something like this best left to policy wonks?

Well, it goes back to my point: the best novels provoke thought. In this case, it isn’t so much the technology or how realistic the science is. The question remains: how will humans react to and interact with the device or circumstance?

 

It is precisely the intersection of humans and technology that we should focus on. The response of humanity to technology is not written on a blank slate. Technology is introduced in the context of, first, a few humans, and then society. We can draw from past examples to see how technology affects the economy. We can assess how technologies altered power relationships among different groups. These would, of course, be actual anthropological, archaeological, and historical studies.

Sometimes, however, a novel – even from genre fiction – that places realistic constraints on human reaction and motivations can cut through the noise and expose the heart of the problem.

 


I recently heard a fun episode of This American Life, called “Kid Politics”. Ira Glass presented three stories about children being forced to make grown-up choices. The second story is an in-studio interview with Dr. Roberta Johnson, geophysicist and Executive Director of the National Earth Science Teachers Association, and Erin Gustafson, a high-school student. The two represented a meeting of minds between a scientist presenting the best evidence demonstrating human-induced climate change and a student who, in her words, does not believe in climate change.

It is worth listening to; Ms. Gustafson is certainly articulate, and she is entitled to think what she wants. I simply note that Ms. Gustafson uses language suggesting she is engaged in a defense of beliefs rather than an exploration of scientific ideas.

Ira Glass, near the end of the interview, asks Dr. Johnson to present the best pieces of evidence arguing in favor of anthropogenic climate change. Dr. Johnson speaks of the analysis of ice cores, in which carbon dioxide levels can be measured and correlated with evidence of past temperatures. Ms. Gustafson points out that, apparently, in the 1200s there was a human record of a warm spell – I gathered it was somewhere in Europe, although the precise location and the extent of this unseasonably hot weather were not mentioned – despite low CO2 levels at the time.

Clearly, Ms. Gustafson has shown enough interest in the topic to find some facts or observations to counter a scientific conclusion. She then calls for scientists to show her all the evidence, after which she herself will get to decide. I suppose at this point I’m going to trespass into Kruger-Dunning territory and speak about expertise, evidence, and the use of logic.

In general, I do not think it is a good approach for scientists simply to argue from authority. I admit, this comes from a bias in my interest in writing about science for a lay audience. I focus on the methods and experimental design rather than the conclusions; my hope is that, by doing so, the authority inherent in the concept of “expertise” becomes self-evident. That is, I show you (not just tell you) what others (or I) have done in thinking about and investigating a problem. By extension, I hope I have informed myself sufficiently before I prepare some thoughts on the matter, shooting specifically for fresh metaphors and presentation. (As an aside, I suppose that this might be a mug’s game, given the findings of Kruger and Dunning.)

If a scientist has done his or her job, one is left with a set of “facts”. These facts populate any school textbook. But the facts are more than that: they can act, with a bit of thought and elaboration, as models. I dislike the distinction people make when they argue that we need to teach kids how to think and not a set of facts. I argue that learning “how to think” depends crucially on how well a student has been taught to deal with facts. These skills include using facts as assumptions in deductive reasoning, weighing whether a fact has solid evidence behind it, and using facts as if they were models.

Here’s my issue with how Ms. Gustafson, and other anti-science proponents (like anti-evolutionists), argue. Let’s say we were told that gas expands upon heating. One might take this as a given and immediately think of consequences. If these consequences are testable, then you’ve just designed an experiment. Inflate a balloon and tie it off. If temperature increases lead to volume increases, one might immerse the balloon in hot water to see if it grows larger. One might choose to examine the basis of thermal expansion of gas, and one will find that the experiments have been well documented since the 1700s (Charles’s Law). A reasonable extrapolation of this fact is that, if heating a gas increases its volume, then perhaps cooling a gas will lead to a contraction. One might have seen a filled balloon placed in liquid nitrogen (at −196 °C) stiffen, but it also shrivels up.
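The balloon reasoning above can be made quantitative with a few lines of code. This is just a back-of-the-envelope sketch of Charles’s Law at constant pressure (the function name and the example numbers are mine, chosen for the liquid-nitrogen case):

```python
def charles_law_volume(v1, t1_celsius, t2_celsius):
    """Predict a gas volume after a temperature change at constant
    pressure, using Charles's Law: V1/T1 = V2/T2, with T in kelvin."""
    t1 = t1_celsius + 273.15
    t2 = t2_celsius + 273.15
    return v1 * t2 / t1

# A 1.0 L balloon cooled from room temperature (20 °C) to
# liquid-nitrogen temperature (-196 °C) should shrink to roughly
# a quarter of its volume -- consistent with the shriveling we see.
print(charles_law_volume(1.0, 20.0, -196.0))  # roughly 0.26
```

Of course, a real balloon at −196 °C is below the boiling points of oxygen and nitrogen, so part of the shrinkage comes from the gas condensing outright; the ideal-gas prediction is only the first step of the chain of deductions.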

Depending on how well facts are presented, they can be organized within a coherent framework, as textbooks, scientific reviews, and introductions in peer-reviewed articles already do. My graduate advisor characterized this context-fitting as “provenance.” No idea is truly novel; even if one does arrive at an idea through inspiration, with no obvious antecedents, it is expected that the idea have a context. It isn’t that the idea has to follow from previous ideas. The point is to draw threads together and, if necessary, make new links to old ideas. The end point is a coherent framework for thinking about the new idea.

Of course, logic and internal consistency are no guarantee of truth; that is why a scientist does the experiment. What isn’t often emphasized about science is that it is as much about communication as it is about designing repeatable experiments. Although scientists tend to say, “Show me,” it turns out that they also like a story. It helps make the pill easier to swallow. The most successful scientists write convincingly; the art is in choosing the right arguments and precedents to pave the way for the acceptance of empirical results. This is especially important if the results are controversial.

The error Ms. Gustafson makes is thinking that by refuting one fact, she can refute an entire tapestry of scientific evidence and best models (i.e., “theory”). She points to one instance where carbon dioxide levels do not track with the expected temperature change. But in what context? Is it just the one time out of 1000 such points? I would hazard a guess that the frequency of divergence is higher than that, but unless the number of divergences is too high, one might reasonably suppose that the two correlate more often than not. (Causation is a different matter; correlation is not causation.)

But let us move on from that; a more elemental disagreement I have with Ms. Gustafson’s point is this: let’s say one agrees that carbon dioxide is a greenhouse gas. A simple model is that this gas (like other greenhouse gases such as water vapor, methane, and nitrous oxide) absorbs heat in the form of infrared radiation. Some of this energy is transferred into non-radiative processes. Eventually, light is re-emitted (also as infrared radiation) to bring the greenhouse molecule to a less energetic state. Whereas the incoming infrared light had a distinct unidirectional vector, radiation by the greenhouse molecule occurs in all directions. Thus some fraction of the light is reflected back towards the source while the rest essentially continues on its original path. If infrared light approaches Earth from space, then these gases act as a barrier, reflecting some light back into space. Absorption properties of molecules can be identified in a lab. We can extend these findings to ask: what would happen to infrared heat that is emitted from the surface of the planet?

A reasonable deduction might be that, just as at the edge of the atmosphere, greenhouse gases near the Earth’s surface also absorb and re-radiate a fraction of heat. Only this time, the heat remains near the Earth’s surface. One logical question is: how does this heat affect the bulk flow of air through the atmosphere? (An answer is that the heat may be absorbed by water, contributing to the melting of icebergs. Another, related answer is that the heat may drive evaporation and increase the kinetic energy of water vapor, providing energy to atmospheric air flows and ultimately to weather patterns.)
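The absorb-and-re-emit-isotropically argument can be sketched as a toy simulation. This is a deliberately crude, single-layer cartoon of my own devising, not a real radiative-transfer model: each surface-emitted photon either passes through the layer or is absorbed and re-emitted up or down with equal probability (the function name and parameters are mine):

```python
import random

def fraction_escaping(absorptivity, n_photons=100_000, seed=42):
    """Toy single-layer model: fraction of surface-emitted infrared
    photons that escape to space. An absorbed photon is re-emitted
    isotropically, so half the time it heads back toward the surface."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        if rng.random() >= absorptivity:
            escaped += 1   # passed straight through the layer
        elif rng.random() < 0.5:
            escaped += 1   # absorbed, then re-emitted upward
        # otherwise: re-emitted back toward the surface -- heat retained
    return escaped / n_photons

# With no absorbing gas, all outgoing heat escapes; with a perfectly
# absorbing layer, about half is turned back toward the surface.
print(fraction_escaping(0.0))  # 1.0
print(fraction_escaping(1.0))  # roughly 0.5
```

The point of the sketch is only that direction-randomizing re-emission, a property measurable in the lab, is what turns an absorbing gas into a one-way “barrier”; the real atmosphere has many such layers and many competing processes.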

For someone who ignores greenhouse-gas-induced global warming, dismissing the contribution of carbon dioxide isn’t just a simple erasure of a variable in some model. What the global warming denier is really asking is that the known physical properties of carbon dioxide be explained away or modified. Again, the point is that carbon dioxide has measurable properties. For it not to contribute in some way to “heat retention”, we must ask why the same molecule won’t absorb and re-emit infrared radiation in the atmosphere in the same way that it does in the lab. In other words, simply eliminating the variable would require us to explain why there are two different sets of physical laws that apply to carbon dioxide. In turn, this would require a lot of work to provide the context, or provenance, for the idea.

Yes, one might argue that scientists took a reductionist approach that somehow removed some other effector molecule when they measured carbon dioxide properties using pure samples. Interestingly enough, the composition of the atmosphere is well known. Not only that, one can easily obtain an actual “real-world” sample and measure its ability to absorb unidirectional infrared light and radiate in all directions. This isn’t to say that the thermodynamics of gases and their effects on the climate of Earth are simple. But it is going to take more than a simplistic question – say, positing some synergistic effect between carbon dioxide and some other greenhouse gas or some as-yet unidentified compound – before we actually modify the working model physicists and chemists have of the absorption and transfer of energy.

If you think it seems rather pat for a scientist to sit and discriminate among all these various counter-arguments, I am sorry to disabuse you of the notion that scientists weigh all facts equally. Ideally, the background of the debaters ought not to matter. Still, you will get scientists to weigh your criticisms more heavily if you show the context of the idea. The more relevant and cohesive your argument, the more seriously you will be taken. Otherwise, your presentation may do you the disservice of giving the appearance that you are simply guessing. That’s one problem with anti-science claimants: all too often it sounds like they are throwing as many criticisms as possible, hoping that one will stick.

Take evolution: if one suggests that mankind is not descended from primate ancestors, then one is saying that mankind was in fact created de novo. That is fine, in and of itself, but let’s fill out the context. Let’s not focus on the religious texts, but instead consider all the observations we would have to explain away.

If we were to go on and try to explain mankind as a special creation, how would we go about explaining mankind’s exceptionalism? Can we even show that we are exceptional? Our physiology is similar to that of other mammals. We even share physical features with other primates. Sure, we have large brains, among the largest brain-mass-to-body-mass ratios in the animal kingdom. Yet we differ in only about 4% of our genome compared to chimpanzees. Further, at a molecular level, we are hard pressed to find great differences. We simply work the same way as a lot of other creatures. We have many of the same proteins; despite the obvious differences between man and mouse, even weakly similar proteins of ours share about 70% sequence homology. It seems to me that at multiple levels (physiological, physical appearance, genomic), we are of the same mettle as other life on earth. Yes, we do differ from these other lifeforms, but it seems more logical to suggest that mankind is one type of life in a continuum of the possible lifeforms that can exist on Earth. It seems likely that whatever process led to such a variety of creatures must also have “created” man.

I hate to harp on this, but a fellow grad student and I had such arguments while we were both doing our thesis work. My friend is a smart guy, but he still makes the same mistake that anti-evolutionists make: assuming that by disproving natural selection, one has thereby provided some support for creationism. We argued about Darwin’s theory and whether it can properly be extended beyond the microscopic domain. He was willing to concede that evolution occurs at a microbial level – for “simple” organisms, evolution makes sense, since fewer genes mean less complexity, and therefore changes can be just as likely to be beneficial as deleterious.

I thought the opposite. If an organism is “simpler” – namely because it contains a smaller genome – it is even more crucial for that organism that a mutation be beneficial. A larger genome, from empirical data, generally contains more variants of a given protein. This in itself reflects the appropriation of existing genes and their products for new functions. Perhaps an increase in isoforms of a protein also suggests how mutations can occur without the organism directly suffering ill effects: there is a redundancy of protein and function. Also, my friend seems to regard fitness as a “winner takes all” sort of game – as in, the organism lives. I merely saw the “win” as an increase in the probability that the animal will have a chance to mate, not organismal longevity. Sure, this is a just-so story; I think his argument is better than the usual creationist claptrap, but only in the trivial sense that, yes, we need to take care not to over-interpret our data or models, and yes, scientific theories – although they are our best models – are temporary in the sense that we can revise them when better evidence comes along.

To go back to the way Ms. Gustafson and my friend argue: it behooves them to explain the exceptional circumstances by which we, or carbon dioxide, can act differently from our best model (i.e., theory) and yet conform to it most of the time.

Thus, despite Ms. Gustafson’s call for “all the evidence”, I was somehow left thinking no amount of evidence would persuade her. Part of the problem is that, like the religious who misapply ideas of meaning found in their bibles to the physical evidence generated by scientists, she misapplies her political views to provide the context through which she views scientific evidence about global warming. Whereas she should have used logic to deduce that global climate does not predict local weather, and scientific principles to determine whether global warming is part of a normal cycle for the Earth or is in fact due to circumstances like an increase in greenhouse gases, she probably thought of global warming in terms of regulations and taxes pushed, generally in the United States, by Democrats. Thus, Ms. Gustafson speaks, in Stephen Jay Gould’s term, from the magisteria of meaning (as defined by her political and religious beliefs) and not from the magisteria of science. In this case, she isn’t defending a theory about how the world works; her motivation is to fit the observations to her political and religious ideals.

Can we really separate the political from the scientific? If some scientist argues that there is a problem, it seems difficult to find ways to argue against them. My only suggestion is that Ms. Gustafson and others like her consider their arguments more carefully. Nitpicking specific examples is counter-productive; all theories can be criticized in this way. However, integrating a counter-example is not a straightforward process, especially if the simplistic criticism is at odds with some other firmer, more fundamental observation that even Ms. Gustafson has no problem accepting.

 

I have no desire to rehash arguments made by many others, in and out of publishing, with big or small presses, about the good and the bad of e-books. Instead, I offer some observations from Teleread (e-books continue to show an increase in sales, and, as a form, books are undergoing changes – thank you, Chris Meadows and Paul Biba for the links) and The Digital Reader.

****

Yesterday, I went to Porter Square Bookstore to attend a reading by Tom Perotta (The Leftovers). I am a fan of Perotta’s. (I had some reviews on Goodreads that I haven’t yet reproduced here; I did manage to repost my essay on Perotta’s Little Children and The Abstinence Teacher.) While the excerpt he read was self-contained (a scene about two men, one of whom reaches out to the other to provide comfort), it did not seem too compelling to me. Instead, I found the book jacket description more interesting: a lot of people vanish (Rapture style). How do the people who are left behind cope, in the absence of an explanation as to why the vanishing happened?

There were not many questions about his books per se. There were two involving the profit motive: one person asked if it was any easier to get a second book published. Another asked if he now writes with an eye to screen adaptations. For the latter, Perotta noted that after Election, the movie, was released, Hollywood seemed excited by the prospects of his Joe College. That book disappointed that crowd: it was not the slapstick, raunchy comedy people were expecting. As for Little Children, Perotta would have marked it as one of the least likely books to be adapted (an ensemble piece, with a plot about a child molester). The director, however, really wanted it made.

To tie it into this post: one woman asked Perotta what he thought about ebooks, whether he feels they provide an opportunity or sees them as a threat. Perotta, as in his books, seemed to give a fair answer. He acknowledged that there are opportunities for authors: new authors can be published, while established authors need never go out of print. His tone, posture, and rushed ending to that statement suggested to me that while he understood the virtues of ebooks rationally, he did in fact feel threatened. He did not rail against ebooks. He realized that the medium is undergoing a transition; in the short term, he is satisfied that there is a place for books. His evidence? He gave his reading in a bookstore, which acts as a forum for readers and authors to interact. More emphasis was given to the fact that he was comfortable in the publishing world. He grew up reading words on paper, and that’s his comfort level. His point, it seems, is that paper book readers have a culture, and that e-book readers will eventually form a different sort of culture from the one he has known.

I think our current conception of e-books is actually limited, to some extent, by the adoption of the Kindle. The Kindle is a translation of paper to screen. A number of features mimic what people can do with paper (marking pages, writing notes) while improving on others (such as whole-book search and storing large collections of titles). But the e-ink technology (in its current black-and-white, slow-refresh state) lends itself to being treated like a book.

With the iPad and NookColor, we are beginning to see content reshaped to fit the color screen of a portable computer. The popularity of the Kindle may have stemmed from its familiar resemblance to the printed word. Sooner or later, e-books will diverge from the current, book-like presentation, turning into slick, interactive, multifaceted presentations (probably some hybrid wiki-page/HTML5/video/music extravaganza). We are already seeing that in the Dr. Seuss books being converted to iPad and Android apps. The irony is that many have tried to expand on the book form in print (think of the Griffin and Sabine books, and the Dragonology series), only for it to be bypassed altogether.

I think what is lost in attacks on, and defenses of, ebooks is the concept of technology creating culture. Neil Postman, Mark Helprin, and Nicholas Carr have made these points. Technology is neutral in the sense that humans can decide on its immediate use. We also have the ability to select among a great number of tools. However, the authors I cite here make compelling arguments that we are also shaped by our tools. We may not select the proper tool (if we are holding a hammer, it won’t help us with set-screws). And tools can limit how we approach a task (hence the cliché: when you have a hammer, everything looks like a nail). They take the argument a step further: technologies that alter language can literally alter how we think.

I don’t think it is controversial to say that humans are generally intellectually adaptable. Postman et al. argue that we are much more malleable than assumed, and to our detriment. Online activity in the mobile age, googling, clicking links, video-centric delivery, and short texts (shorthand, abbreviations, two-sentence paragraphs) tend to promote shallow scanning. One might counter that, if a person is inclined, he will delve deeper. Postman et al. counter, no, they won’t. The nature of Internet presentation, they argue, will make it less likely for people to ruminate, to read deeply, and to think in the silence of their own heads. It is easier to follow the next link.

Of the three, I think Postman gave the best framework for dealing with technology. In both Amusing Ourselves to Death and Conscientious Objections, he argues that new technology is here to stay (at the time, he was writing about the pervasiveness of television), and that we need to be aware that all such communication-altering technologies have the capacity to reshape the way we think. We must take care to exploit their virtues while limiting their disadvantages. In other words, control the technology lest it control us. What was interesting is that he argued that TV isn’t bad because it provides salacious entertainment. TV is most pernicious when it aspires to teach and to serve as a forum for public discourse.

Not just television, but effective television presentation, comes with visual excitement and change. This is the opposite of the arguments one can develop in excruciating detail in a book. One can compare a book on global warming (even better, many books) to an Al Gore movie or to inane 5-minute segments on television news. Postman would simply prefer that we realize that a 5-minute segment is the worst way of dealing with complex arguments. It simply isn’t enough, especially given the scientific literature on the subject. What TV is suited for, Postman notes, is an entertaining 5-minute segment: something to make you laugh or cry and enjoy; something with impact, translatable into sensational imagery – sound is no longer enough. Instead, we are concluding that audio-visual presentations (whether on TV or in Youtube videos) comprise the main solution, rather than a portion of it. It isn’t that we do not know what the limits of technology are; it is that we do not ask if we are using the right tool.

I agree with this assessment. Now, when I peruse textbooks written for college students (in neuroscience), I note all the missing pieces of information. Not just nuanced counterarguments, but complete series of compelling experimental evidence that point to alternative theories. And that happens even in a 700-page textbook. Imagine how much can be lost by reduction into sound bites (not compression, since compression implies that the total information is still there, merely re-encoded in a more efficient notation). Television has shortened political debates into short oral bursts (hopefully with visuals), because its strength is in providing ever-changing stimulation. The Internet will reshape reading on a screen, emphasizing scanning, clicking, and instant look-up, not necessarily understanding or retention, since the information is always at hand. The new “smart” will be in constructing proper search terms.

I don’t think there is anything wrong with that, though. As Postman and Carr suggest: be aware of what is happening to you. (I am paraphrasing liberally; they devalue this type of intelligence, whereas I am willing to redefine what intelligence ought to be in this brave new world of ours.) Maybe one can simply use the search engine to find the proper book.

As a final aside: here’s another take on what we can lose. Scintillating intellectual conversation. I was browsing through the stacks at Porter Square Books and saw that there is a new collection of essays from Christopher Hitchens. The book jacket blurb seemed to have a pertinent statement: Hitchens combines intelligence, wit, a huge store of knowledge, the ability to recall from this “offline” repository, and charm. That description does sound like someone who would make a wonderful dinner companion. I can certainly see how conversational flow can be ruined if all of us are googling into our phones. But I sense a hint of elitism in that; for my part, I have a (I hope relatively idiosyncratic) collection of stories about science, quantum mechanics, Richard Feynman, mathematical gambling analysis, gadgets, statistical analysis, novels, World War II, microscopy techniques, and 19th-century European history running in my head. And that’s just a thin slice of what I know. Whether I am good company depends on the people I am with, how well I present my thoughts, and how receptive they are to them. I think the point is that, simply, Hitchens and I (and others) have chosen to remember different things. Maybe the cultural gatekeepers are just annoyed so many people choose to remember something different than they do?

Is curation important? I think so, but only in the sense that it plays to our virtues. We are not indexing machines like Google’s data centers. What we do remember are things associated with great emotional impact. That helps us perform single-trial learning (to, if we are lucky, avoid in the future things that hurt or almost killed us), but in this age, it can also help us identify meaningful cultural objects. It may be reflected in the fact that we prefer people to tell us of the formative events that shaped their lives, rather than a considered answer as to the sequence of life’s happenings that let their lives unfold the way they did.

All this is a way of saying that I agree with Perotta: reading culture will change. Since I am comfortable with both paper and digital screens, I do not feel the same loss that Perotta does. I know there are readers out there like me: those who feel comfortable in a library, a bookstore, or on bn.com/ebooks. I pack paper books and my NookColor for trips. I write marginalia in books I own, and I upload my notes to Evernote when I read e-books. But are we the most common sort of e-book readers? No idea; I am not sure what the dominant form of e-book reading culture will be.

What a strange book. The whole point of Paul Johnson’s Intellectuals is to trash intellectuals who idealize the pursuit of freedom (whether in behavior, in intellectual pursuits, or from society). Johnson admitted that it is unfair to use the private lives of individuals to judge the strength of their thoughts, but he nonetheless spent the entire book documenting the deficiencies of men who talked big and lived meanly. The quality of the men never matched the beauty of their vision, prose, or poetry.

The futility of such an exercise is noted early, in the chapter about Shelley. Johnson admits that this cad was a wastrel who had no compunction about writing mean letters detailing the failures of his parents while concurrently asking them for money. Shelley used people, seeing his family as nothing but a source of income and women as no more than a means to physical pleasure. Naturally, he thought himself liberal, dispensing with the archaic institution of monogamy. He expected his wife to accept his mistress sharing their apartment, though he graciously extended the same privilege to his wife (who apparently complained about the arrangement).

Regardless, all this is peripheral: Johnson thinks Shelley wrote beautifully, and his poetry moved Johnson. Johnson writes,

The truth, however, is fundamentally different and to anyone who reveres Shelley as a poet (as I do) it is deeply disturbing. It emerges from a variety of sources, one of the most important of which is Shelley’s own letters.

Great. But why should the gap between artistic accomplishment and the empty lives of artists be so surprising, in an age when starlets, athletes, politicians, authors, musicians, and entertainers behave as if they were competing for the favor of the Borgias? Johnson already conceded the point that he can appreciate the artistry, if not the artist.

There was one high point in the book, though. Johnson destroyed Karl Marx on both a personal and a professional level. In this instance, it seems that elements of Marx’s personality might have directly resulted in the shoddy intellectual quality of his work. Marx was a better short-form than long-form writer; the long form exposed his deficiencies as a researcher and investigator. Das Kapital contained a number of misuses of evidence. Marx did do a spectacular job of digging up dirt on his enemies, though.

In a coda, Johnson links 20th-century atrocities both to secular intellectuals ignoring atrocities committed in their name and to the social milieu they created that promoted nihilism (namely, in the excesses of Communist regimes). It seems to me a simpler case that these mass murderers were ambitious, ruthless, and disposed to murder even before they encountered post-modern philosophy. As much as I detest social relativism, post-modernism, and religious dogma, I can’t fault these ideas for causing such atrocities. I can, however, fault the men who, upon gaining the power to commit atrocities, cloak their acts in the trappings of a recognizable philosophy. To suggest that terrorists or dictators valued life until reading a book seems to be putting the cart before the horse.

In the end, I do agree with Johnson in that it is so disappointing that philosophers rarely reach the ideals they espouse. So what else is new?
