I recently finished The Myth of Autism by Michael Goldberg, M.D., who presents anecdotal observations supporting the idea that some cognitive deficits currently diagnosed under the umbrella of autism spectrum disorder are the result of pathology, not of a genetically based developmental defect. This is a crucial distinction, as a pathology suggests a disease state that is, in theory, treatable, even if we lack a cure now.

In the book, Dr. Goldberg goes further, suggesting that a broad class of autistic patients suffers from a neuroimmunological disorder and that only a small fraction suffers from classical autism (considered a developmental condition). He brings enough observations to bear on the idea that an aberrant immunological response may underlie cognitive deficits that I have begun looking at neuroimmunology-related research (partly out of personal interest, but also because I am in the middle of identifying a research agenda for a 1-D imaging cytometer my lab has developed).

From a presentation perspective, I was disappointed that Dr. Goldberg does not present a systematic study of his patients and treatments. Instead, he gives an appendix of patient testimonies. The omission is glaring, given his access to patient data and outcomes: symptoms (cognitive impairments), biomarker status (elevated immune and inflammatory responses), and long-term observation.

Again, I think his arguments are compelling, and he cites existing research (including a recently and formally retracted paper on a retrovirus as a cause of chronic fatigue syndrome*) that supports his thesis, which makes follow-up easy. Given the element of self-promotion and the lack of a clear scientific consensus, one can be forgiven for withholding judgment on the veracity of his claims until a deeper reading of the primary research. But one might do worse than to use this book as a guide into subsequent research.

*John Timmer, at Ars Technica, has written some articles (here and here) regarding the controversy surrounding the idea that a retrovirus causes chronic fatigue syndrome.

I have been thinking about ebooks lately; the upcoming Nook Tablet and Kindle Fire are the final nails in the coffin for the book publishing industry. Ebooks are simply a commodity, and one that produces less revenue than either music or video. Static ebooks, resembling ink-on-paper, will be superseded by a product that fully embraces the possibilities offered by tablet-based computers. What I am about to say sounds reactionary, perhaps even downright Luddite in today's world: the reader demographic I belong to is being discarded.

I think I am that rare beast (20-35 years of age, male) who has avoided action movies, sports books (well, not sports statistics, but I would argue those fall under economics and math), video games (ever since I received my bachelor's degree), and cars (I'm a big fan of public transport). When I was ten, I made a conscious decision to listen only to classical music (I have been a listener of WGBH radio and Classical Radio Boston since that time). What I bugged my parents most about was shuttling me to the library, at least until I was old enough to get there on my own. When I was a teenager, I took piano lessons from a jazz and blues teacher; however, I asked him if he wouldn't mind teaching me Beethoven instead. So yes, I admit I'm a strange duck. The point is that I'm probably not the mainstream market.

With regard to reading technology, I agree absolutely with people like Neil Postman and Nicholas Carr (among others), who argue that we are moving from an era of densely organized, linear narrative literacy into a mode of literacy designed for cursory scanning, link following, and 140-character phrasings. Although both writers took pains to explain things in neutral terms, one cannot help but see their disgust and despair at a world that, given the opportunity to choose between a rich internal, mental life and one based on satisfying emotional impulses, pitches headlong into the latter.

Both men phrased the argument in these terms: literacy, the act of constructing arguments by weaving together facts and rhetoric and placing them in written form, enabled the formation of intellect and wisdom. The act of writing onto paper rendered a permanence to thoughts that required the author to consider and respect each and every word. Once published, words cannot be easily retracted or revised. But the true innovation is that readers can refer back to previous statements made in the text, something that simply cannot be done had the author been limited to speaking. Thus the book enabled complex arguments through its capacity for cross-referencing. Complex arguments can be delivered as one coherent unit of thought.

One might argue that web-enabled documents should enhance the concept of books, providing ease of cross-referencing and access to primary sources. Generally, however, that is not the case. While reading text on a screen is about as efficient as reading on paper, reading comprehension decreases once hypertext links are introduced. It seems readers cannot help but follow them, especially to other articles, and comprehension of the text they were originally reading suffers. It also isn't clear that readers can distinguish between the multiple articles they read, so source attribution becomes problematic. That might not matter in everyday speech, but if one wishes to write informed op-ed pieces or scholarly works, one can see how it becomes a problem.*

* Source misattribution need not be limited to heavy internet users; I remember a flap involving Doris Kearns Goodwin, who was found to have plagiarized from another scholar in her book on the Kennedys. She accepted responsibility for her misuse of quotes, although her defense was carelessness, not malice: she lost track of her notes and mixed up passages she wrote with those written by others. I think, for a scholar, carelessness is a greater offense than stealing, but that's like arguing whether being killed in a hail of bullets is worse than being killed by a single gunshot to the head.

Despite these episodes, one can see that books lend themselves to being literal dividers between different "thoughts". On the web, HTML addresses serve that purpose, but who among us pays that much attention to them as identifiers? At any rate, taking care might compensate for these lapses, but apparently most people do not take care (hence the likely decrease in comprehension and the increase in confusing different authors). Online texts become a data slush.

Books engage readers unimodally, making it difficult to access other source materials. Follow-up probably requires some selection and discrimination by readers, to waste the least amount of time. One might save oneself a lot of time in a library or bookstore by simply thinking about what was written, identifying the most fruitful line of research, and then acting on it. This is probably the "virtuous cycle" engendered by books. Without the distraction of clicking on links or doing something else entirely (playing games, watching YouTube, making iTunes playlists), one's attention is captured by the writer and by one's own thinking processes. If you value choosing, more often than not, things that make you think, you might place a premium on identifying works of distinction for reading, constantly sorting books into "literary" and "genre" piles or marking non-fiction as "scholarly" or "polemical" (you know, like being an elitist snob?)

Neither Carr nor Postman feels that technology is neutral, because new technology brings the possibility of drastic change into an existing culture. This can certainly be tempered by exercising selectivity in adopting and using new tech. But we don't do that; Americans in particular have rarely felt the need to hinder the acceptance of technology, generally assuming that new machines and methods can simply be mapped onto existing routines. It is rare for humans to recognize the potential of the new to overwrite the old.

We also seem unable to accept that there are such things as intrinsic differences, and that describing those differences does not necessarily imply "goodness" or "badness". Instead, contrasts are turned into a line of battle, and one list must vanquish the other. I have noticed that when people talk about the pros and cons of ebooks, they focus on the differences between reading on a screen and reading black text on white paper. People talk about ink versus pixel density, smells, tangibility, marginalia, and so forth. I have always found these arguments silly. Both systems convey words quite well, and aesthetics is a secondary concern; the true difference is whether one is reading from a relatively invariant, single-purpose object or from a computer.

One might want to keep in mind the differences each engenders, and then take steps to choose the right tool for the job. For example, web pages lend themselves to displaying news. The style of brevity, the need for immediacy, and the capacity for multimedia presentation offer readers a variety of reports and primary audio/visual supporting materials. Interactive figures (Flash animations, Javascript, and so on) suit longer, magazine-like articles. The very nature of the link-heavy, multimedia extravaganza, however, detracts from the single-mindedness of books. In my experience reading on e-readers, computers, laptops, and tablets, I have not come across the "all-business" approach of a book, where a writer can simply tell his story, present evidence, furnish a descriptive table of contents, and get out of my way.

But there is one difference between my take on screen-based reading technologies and those of Postman and Carr. I have never had a problem with distraction when reading or writing on tablets or desktop PCs. One reason is that I spend time finding the best piece of software to let me read the way I want. Sure, I am particular about minutiae like margin width, font face, and font size, but once they are set, I do not fuss with settings. I'm more concerned with optimizing my "read-flow". On my Nook Color, I got rid of the stock software (well, I run CM7.1 off an SD card; I have a soft spot for B&N right now, considering that I think Amazon will drive them out of business) because the Nook's reading software does not hook into Evernote. Also, the Nook version of Evernote lagged behind the version in the general Android market (that has since been corrected, but I still can't share directly into it from the NC reader).

The base NC OS also does not allow widgets; one thing I love about Evernote for the Android market is the expanded widget, which shows previews of your three most recent notes. You can either add a new note or tap a preview to go directly to that note. With rich text formatting, I can now copy and paste interesting quotes from the e-reader software, change formatting, write a few thoughts, and go back to my e-book.

This might seem inefficient. It is. For all-in-one note taking, I like MoonReader. However, it does not support CSS for formatting ebooks properly. Well, it has a mode where it displays CSS using Android's native HTML renderer; unfortunately, the book is then essentially displayed as a web page, and the user can no longer highlight or annotate text. With that said, most ebooks can be read with basic paragraph indentation. For non-fiction, though, it's a non-starter: it can be hard to tell whether the author wrote something or set it off in a block quote. So I've gone back to Aldiko. Selecting text across pages is no longer a problem; I just copy and paste in two segments.

Why do I like this inefficiency? I've found that when it's easy to make notes, I made more of them than when I composed my notes carefully. I realized that using the internal highlighting function recreates the college-textbook phenomenon, where a large portion of the text is highlighted by the student, but without marginalia. I intentionally made it less convenient for myself by copying the relevant passage into another program. This introduces an immediate cost to my note taking, and every time I pay it, I have to disrupt my reading flow. As a result, I annotate ebooks the way I mark up actual paper books. I do bracket an important passage or two, but I write a summary, objections, citations to contrary arguments and publications, or questions in the margin. I try to read larger segments of the book, pick a salient quote, and write a small summary of the arguments.

I see this as adapting technology to my reading, as opposed to letting tools change the way I read. Yes, I spent a little time on metareading, finding my comfort level and thinking about what I want in my software. Before I got my NookColor, I read on my phone, and before that, on a Palm/Compaq/HP PDA. Back then, I tended to read fiction on the PDA (annotation was terribly inefficient in the PalmPilot days) and reserved non-fiction for books I owned. I also read a lot more library books than I do now.

So I never wrung my hands over reading ebooks, because I made a conscious effort to do my e-reading in a way that preserves my ability to think, ruminate, and write. I spent some time adjusting the software for my comfort, but I don't see that as any different from setting up a reading nook, a den, or a writing desk. I've seen writers do something similar with their computer setups, selecting programs that simply show text: no menus, no fonts, no styles, a maximized workspace so that they cannot see the other programs beckoning for their mouse clicks. It is unlikely that most consumers will make that effort to use their tools in such a precise manner. That is the very distinction between power users and regular consumers.

And this is what I mean by the Nook Tablet signaling the end of a book-centric reading demographic. No, I am not discounting genre readers. But I think the segmentation is between "literary"/"genre" readers and the mass of non-readers. It has always been thus. Think about the books on the NY Times bestseller list: I am sure any aficionado can suggest better alternatives within a particular genre. The most popular books are not always the best books.

However, I'm not worried about some abstract notion of quality; I do worry that booksellers and publishers who treat literature as a market (and it is their right to do so) will not exercise the selectivity that could result in good literature. I am not saying that popular books are all bad, or that good literary works can't be popular. I am saying that shrewd businessmen who happen to have chosen to make their money in publishing (although one might argue that making money from books isn't so shrewd) will make decisions based on the bottom line. This will lead to decisions that won't make sense for book lovers, but might make sense if one is simply trying to capture dollars from non-readers.

So we will see sellers use the internet as best they can: data mining, marketing to niches, fine slicing of market segments, i.e. a continuation of market splintering. The mass market won't be literary-minded people who enjoy reading for its own sake and have, perhaps, broad interests. I guess I am saying that there are fewer cross-genre readers than there are consumers simply satisfying their narrow interests across different media types. Hey, we can't expect anything else from free-market economics, geared to finding the cheapest way to pull in the biggest income.

And so a bookseller sees a need to build an e-reader that is actually a general-purpose computer (i.e. the Nook Color and Nook Tablet). I suppose the Nook Color is a book-centric tablet, but with each update it has gained multimedia functions. That is what is demanded and needed to compete with Amazon and Apple's iPad. This point is underscored by popular e-book bloggers and publishing industry watchers. In fact, Nate Hoffelder of The Digital Reader prefers the Nook Tablet to the Kindle Fire because of the Nook's media capabilities.

There is no conspiracy or malice here, just a bunch of smart people doing the best they can to make the most money for their companies. They wind up selecting the biggest market segments and catering their wares to them. Consumers will buy the products they think will serve them best. And so we'll see more e-books that are app-like, with embedded videos, music, links, and interactive figures. The concept of a linear, comprehensive narrative will be superseded by apps amenable to updates and upgrades. I'm not immune to this; I can see how attractive an interactive Cat in the Hat story app would be to a child. But these aren't "books" in the traditional sense, nor are they even "e-books". They are book-based apps.

Is this bad? Not from a market standpoint. It's probably where the money's at, and I'd be the last person to begrudge another his ability to make money to feed his family. I'm not even decrying the fact that we are leaning on market research rather than wisdom in choosing the products we make and sell. It's like choosing a web search engine (Bing? Google?), an app market ecosystem (Google's? Amazon's? Barnes and Noble's? Apple's?), a cloud storage service (Dropbox? Ubuntu One?), or an OS (Win? Linux? MacOS?). Each choice leads to a different array of probabilities and paths one can take. It also tells companies how they can behave in order to capture our choices and dollars. And because the momentum of the mass market is toward apps and general-purpose computers, not ebook readers, that's exactly where we will end up.

Adam Kirsch wrote a great essay on how the Loeb Classical Library allows readers to discover how Socrates’s contemporaries viewed him:

But it has always been a matter of debate whether the Socrates Plato writes about is a faithful reflection of the man who walked the streets of Athens.

And later:

Pursuing the figure of Socrates through the Loeb Classical Library leads, then, to troubling conclusions. There’s no reason to think that Xenophon’s dull moralist or Aristophanes’s comic foil is closer to the real Socrates than Plato’s philosopher — rather the contrary, since Plato was the closest to Socrates of any of them. But the three portraits are a reminder that we have no direct access to the real Socrates, whoever he was. We have only interpretations and texts…


I have no desire to rehash arguments made by many others, in and out of publishing, who have published with big or small presses, about the good and the bad of e-books. Instead, I offer some observations from Teleread (e-book sales continue to increase, and books as a form are undergoing changes; thank you, Chris Meadows and Paul Biba, for the links) and The Digital Reader.

****

Yesterday, I went to Porter Square Books to attend a reading by Tom Perrotta (The Leftovers). I am a fan of Perrotta's (I have some reviews from Goodreads that I haven't yet reproduced here, though I managed to repost my essay on Perrotta's Little Children and The Abstinence Teacher). While the passage he read was self-contained (it was about two men, one of whom reaches out to the other to provide comfort), it did not seem too compelling to me. Instead, I found the book jacket description more interesting: a lot of people vanish (Rapture style). How do the people left behind cope, in the absence of an explanation for why the vanishing happened?

There were not many questions about his books per se. There were two involving the profit motive: one person asked if it was any easier to get a second book published; another asked if he now writes with an eye to screen adaptations. For the latter, Perrotta noted that, after Election, the movie, was released, Hollywood seemed excited by the prospects for his Joe College. The book disappointed that crowd in that it was not the slapstick, raunchy comedy people were expecting. As for Little Children, Perrotta would have marked it as one of the least likely books to be adapted (an ensemble piece, with a plot about a child molester). The director, however, really wanted it made.

To tie it into this post: one woman asked Perrotta what he thought about ebooks, whether he feels they provide an opportunity or whether he sees them as a threat. Perrotta, as in his books, seemed to give a fair answer. He acknowledged that there are opportunities for authors: new authors can be published, while established authors will never go out of print. But his tone, posture, and rushed ending to that statement suggested to me that while he understood the virtues of ebooks rationally, he did in fact feel threatened. He did not rail against ebooks; he realized that the medium is undergoing a transition, and in the short term he is satisfied that there is a place for books. His evidence? He was giving his reading in a bookstore, which acts as a forum for readers and authors to interact. More emphasis was given to the fact that he is comfortable in the publishing world. He grew up reading words on paper, and that is his comfort level. His point, it seems, is that paper-book readers have a culture, and that e-book readers will eventually form a different sort of culture from the one he has known.

I think our current conception of e-books is actually limited, to some extent, by the adoption of the Kindle. The Kindle is a translation of paper to screen. A number of features mimic what people can do with paper (marking pages, writing notes) while improving on others (whole-book search, storing large collections of titles). But the e-ink technology (in its current black-and-white, slow-refresh state) lends itself to being treated like a book.

With the iPad and NookColor, we are beginning to see content reshaped to fit the color screen of a portable computer. The popularity of the Kindle may have stemmed from its familiarity, its closeness to the printed word. Sooner or later, e-books will diverge from this current, book-like presentation, turning into slick, interactive, multifaceted productions (probably some hybrid wiki-page/HTML5/video/music extravaganza). We are already seeing that in the Dr. Seuss books being converted to iPad and Android apps. It is ironic that many have tried to expand on the book form (think of the Griffin and Sabine books, or the Dragonology series) only to bypass it altogether.

I think what is lost in attacks on and defenses of ebooks is the concept of technology creating culture. Neil Postman, Mark Helprin, and Nicholas Carr have all made these points. Technology is neutral in the sense that humans can decide on its immediate use, and we have the ability to select among a great number of tools. However, the authors I cite here make compelling arguments that we are also shaped by our tools. We may not select the proper tool (if we are holding a hammer, it won't help us with set-screws), and tools can limit how we approach a task (hence the cliche that when you have a hammer, everything looks like a nail). They take the argument a step further: technologies that alter language can literally alter how we think.

I don't think it is controversial to say that humans are generally intellectually adaptable. Postman et al. argue that we are much more malleable than assumed, and to our detriment. Online activity in the mobile age (googling, clicking links, video-centric delivery, and short texts full of shorthand, abbreviations, and two-sentence paragraphs) tends to promote shallow scanning. One might counter that, if a person is so inclined, he will delve deeper. Postman et al. counter: no, he won't. The nature of Internet presentation, they argue, makes it less likely for people to ruminate, to read deeply, and to think in the silence of their own heads. It is easier to follow the next link.

Of the three, I think Postman gave the clearest framework for dealing with technology. In both Amusing Ourselves to Death and Conscientious Objections, he argues that new technology is here to stay (at the time, he was writing about the pervasiveness of television), and that we need to be aware that all such communication-altering technologies have the capacity to reshape the way we think. We must take care to exploit their virtues while limiting their disadvantages; in other words, control the technology lest it control us. What is interesting is his argument that TV isn't bad because it provides salacious entertainment. TV is most pernicious when it aspires to teach and to serve as a forum for public discourse.

Not just television, but effective television presentation, comes with visual excitement and change. This is the opposite of the arguments one can develop in excruciating detail in a book. One can compare a book (better yet, many books) on global warming to an Al Gore movie or to an inane 5-minute segment on television news. Postman would simply prefer that we realize a 5-minute segment is the worst way of dealing with complex arguments. It simply isn't enough, especially given the scientific literature on the subject. What TV is suited for, Postman notes, is an entertaining 5-minute segment: something to make you laugh or cry and enjoy; something with impact, translatable into sensational imagery (sound alone is no longer enough). Instead, we are concluding that audio-visual presentations (whether on TV or in YouTube videos) are the main solution, rather than a portion of it. It isn't that we do not know what the limits of the technology are; it is that we do not ask whether we are using the right tool.

I agree with this assessment. Now, when I peruse textbooks written for college students (in neuroscience), I note all the missing pieces of information: not just nuanced counterarguments, but complete series of compelling experimental evidence pointing to alternative theories. And that happens even in a 700-page textbook. Imagine how much can be lost by reduction into sound bites (not compression, since compression implies the total information is still there, merely re-encoded in a more efficient notation). Television has shortened political debates into short oral bursts (hopefully with visuals), because its strength is in providing ever-changing stimulation. The Internet will reshape reading on a screen, emphasizing scanning, clicking, and instant look-up, not necessarily understanding or retention, since the information is always at hand. The new "smart" will be in constructing proper search terms.

I don't think there is anything wrong with that, though. As Postman and Carr suggest: be aware of what is happening to you (I am paraphrasing liberally; they devalue this type of intelligence, while I am willing to redefine what intelligence ought to be in this brave new world of ours). Maybe one can simply use the search engine to find the proper book.

As a final aside, here's another take on what we can lose: scintillating intellectual conversation. I was browsing the stacks at Porter Square Books and saw that there is a new collection of essays from Christopher Hitchens. The book jacket blurb seemed to make a pertinent point: Hitchens combines intelligence, wit, a huge store of knowledge, the ability to recall from this "offline" repository, and charm. That description does sound like someone who would make a wonderful dinner companion, and I can certainly see how conversational flow can be ruined if all of us are googling into our phones. But I sense a hint of elitism in that; for my part, I have a (I hope relatively idiosyncratic) collection of stories about science, quantum mechanics, Richard Feynman, mathematical gambling analysis, gadgets, statistical analysis, novels, World War II, microscopy techniques, and 19th-century European history running in my head. And that's just a thin slice of what I know. Whether I am good company depends on the people I am with, how well I present my thoughts, and how receptive they are to them. I think the point is simply that Hitchens and I (and others) have chosen to remember different things. Maybe the cultural gatekeepers are just annoyed that so many people choose to remember something different than they do?

Is curation important? I think so, but only in the sense that it plays to our virtues. We are not indexing machines like Google's data containers. What we do remember are things associated with great emotional impact. That helps us perform single-trial learning (to, if we are lucky, avoid in the future things that hurt or nearly killed us), but in this age, it can also help us identify meaningful cultural objects. It may be reflected in the fact that we prefer that people tell us of the formative events that shaped their lives, rather than give a considered account of the sequence of happenings that let their lives unfold the way they did.

All this is a way of saying that I agree with Perrotta: reading culture will change. Since I am comfortable with both paper and the digital screen, I do not feel the same loss that Perrotta does. I know there are readers out there like me, those who feel at home in a library, a bookstore, or on bn.com/ebooks. I pack paper books and my NookColor for trips. I write marginalia in books I own, and I upload my notes to Evernote when I read e-books. But are we the most common sort of e-book reader? No idea; I am not sure what the dominant form of e-book reading culture will be.

Little Children and The Abstinence Teacher are two complex, sympathetic works. These are the only two Perrotta books I have read, but it is clear to me that he is a generous author, who is able to detail the complex thought chains lying below each of his characters’ surfaces. This generosity turns symbols into living, breathing people, enabling them to transcend simple, thematic opposition and actually interact with one another. The key point is that he does not treat the opposition as punching bags.

Little Children is the lesser work of the two, if only because the plot seems so stilted next to the personalities of the characters. The inclusion of a child molester in this story seems to serve no purpose other than to give Brad opportunities to get out of the house (as part of a neighborhood watch group) and to provide some tension near the end of the novel. It is too clumsy, given that Perrotta's skill is so evident in his descriptions of the molester, inspiring both repulsion and pity.

There is one misstep in characterization on the first page, when the women, except for our protagonist Sarah, are introduced as the mothers of so-and-so's child. It isn't symbolism: it is a neon sign stating that Sarah is the contrarian of the bunch, a lapsed feminist who longs to be defined by anything other than motherhood. For the most part, the other women, who serve more as Harpies than as a Greek chorus, are not fleshed out. There is one little vignette where the shrew's (Mary Ann's) unhappy home life is laid bare, but for the rest of the story they serve to remind Sarah of the destiny awaiting her. No conversation is more meaningful than where the offspring will go to preschool, what toys are being recalled, what TV shows one watched through heavy-lidded eyes.

That alone would drive one to drink, but Sarah chooses adultery instead. She was and is a mousy girl, one who wanted to but couldn't date the popular jock in high school or college; she achieves this juvenile ambition by eventually sleeping with Brad, a househusband who should be studying for his third attempt at passing the bar exam. The affair has great power within the context of the trapped lives both Sarah and Brad feel they lead. The excitement isn't so much in the illicitness of sneaking around behind their spouses as in the fact that they share a common appreciation of one another. Therein lies the trick in Perrotta's humanizing the two; certainly, I felt badly for Richard and Kathy, the spurned spouses. But I felt more sadness than anger at Sarah and Brad finding their escape in each other.

The humanization works because one can identify with the cause of the affair: the perception that one's spouse doesn't fully appreciate one as a partner. It is not a matter of reality; it is that one spouse feels put upon and feels the need to seek that appreciation elsewhere. Brad is the simple case: he is going through his mid-life crisis early. He has failed the bar exam twice, but he admits he entered law school on a whim. He watches teenage boys skateboarding and longs to join them; instead, he winds up with a bunch of cops and ex-cops in a football league. He is satisfied being a househusband, but of course his wife expects him to contribute financially. Her moral support for his attempting the bar exam has crossed from wishing him well into an expectation that he will fail and not pull his financial weight. Sarah's case is just as simple: her husband isn't interested in her. She wants to be significant. She is intelligent, but decides that the only way to distinguish herself from the pack of mothers is to flirt with Brad. The two hit it off.

It would have been cheap for Perrotta to distance the reader from Richard and Kathy. Instead, Perrotta turns them into people, each with flaws. Kathy is a harried woman, reaching the limit of her patience with her husband. Fairly or not, she feels too put upon. She works and so doesn't spend enough time with her son. Although she is following her dream of directing documentaries, it doesn't pay well. She has been understanding, a cheerleader for her husband despite his repeated failures. She is tired. Richard is more difficult to describe; he appreciated Sarah's intelligence when they first met and now provides financial stability for their family. But in the end, he too is tired and desires something less ordinary.

That is what I like about Perrotta's writing. Sure, he slings barbs at suburban life, but his characters are people like you or me. Under any number of circumstances, we could be Sarah, Richard, Kathy, or Brad. Perrotta's characters act in an understandable manner, despite our disapproval. Recently, I read Pinker's The Blank Slate, which helped crystallize some ideas about human emotional and cultural baggage for me. Perrotta's characters strike me as real because he describes so well the dissonance between the basic desires driving action (i.e. nature) and professed desires (the sum of education, environment, and upbringing).

One scene that illustrates this is when Brad notices that his son flat-out ignores him as soon as Mom (Kathy) comes home. That scene bundles together the flash of Brad's jealousy of the bond between son and mother, the fact that the boy and his mother essentially enter their own world and exclude him, and the fact that he might be feeling both unmanly (for being a househusband) and unrecognized, his efforts unacknowledged by his son and unappreciated by his wife. Everything about this scene rings of authenticity. Again, without declaring whether the perception is valid (although one will be either sympathetic to Brad or not), the sum of all these minor events builds up the case that Perrotta is interested in explaining (and thus looking past one's view of the adulterers), but not excusing, Brad's and Sarah's behavior.

I would guess the moral of the story is that communication only goes so far. Perhaps that is what love means: that a partner thinks enough of the other person to continue talking. If so, then Perrotta must think the world a loveless place.

Joe Posnanski has written another thoughtful piece on the divide between writers of a statistical bent and those who prefer the evidence of their eyes. I highly recommend it; Posnanski distills the arguments into one about stories. Do statistics ruin them? His answer is no; rather, one should use statistics to tell other stories, if not necessarily better ones. He approaches this by examining how one statistic, "Win Probability Added", helped him look at certain games with fresh eyes.

My only comment here is that I've noticed, on his site and others (such as Dave Berri's Wages of Wins Journal), that one difficulty in getting non-statisticians to look at numbers is that they tend to desire certainty. What they usually get from statisticians, economists, and scientists are reams of ambiguity. The problem comes not when someone is able to label Michael Jordan the greatest player of all time*; the problem comes when one is left trying to rank merely great players against each other.

* Interestingly enough, it turns out the post I linked to was one where Prof. Dave Berri was defending himself against a misperception. It seems writers such as Matthew Yglesias and King Kaufman had mistaken Prof. Berri's argument using his Wins Produced and WP48 statistics, thinking that Prof. Berri had written that other players were "more productive" than Jordan. To which Prof. Berri replied, "Did not," but he also offered some nuanced approaches to how one might look at the statistics. In summary, Prof. Berri focused on the difference in performance between Jordan and his contemporary peers.

The article I linked to about Michael Jordan shows that, when one compares numbers directly, care should be taken to place them in context. For example, Prof. Berri writes that, in the book Wages of Wins, he devoted a chapter to "The Jordan Legend." At one point, though, he writes that

 in 1995-96 … Jordan produced nearly 25 wins. This lofty total was eclipsed by David Robinson, a center for the San Antonio Spurs who produced 28 victories.

When we examine how many standard deviations each player is above the average at his position, we have evidence that Jordan had the better season. Robinson’s WP48 of 0.449 was 2.6 standard deviations above the average center. Jordan posted a WP48 of 0.386, but given that shooting guards have a relatively small variation in performance, MJ was actually 3.2 standard deviations better than the average player at his position. When we take into account the realities of NBA production, Jordan’s performance at guard is all the more incredible.

If one simply looked at the numbers, it would seem a conclusive argument that Robinson, having produced more "wins" than Jordan, must be the better player. The nuance comes when Prof. Berri places the numbers in context. Centers, working closer to the basket, ought to have more high-percentage shooting opportunities, rebounds, and blocks, and his metric of choice, WP48, takes these into consideration. When one looks at how far Robinson stood above his proper comparison group (i.e. other centers), one sees that his production, exceptional when set against other positions, is less remarkable relative to other centers. Jordan's performance, by contrast, when compared to other guards, shows him to be in a league of his own.

That argument was accomplished by taking absolute numbers (generated for all NBA players, at all positions) and placing them in context (comparing them to a specific set of averages, such as by position).
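For concreteness, here is a minimal sketch of that position-adjusted comparison. The WP48 values and the 2.6 and 3.2 standard-deviation figures come from the passage quoted above; the position means and spreads in the code are back-calculated placeholders (assuming a league-average WP48 of roughly 0.100), not numbers taken from the book.

```python
# A minimal sketch of the position-adjusted comparison described above.
# The position means and standard deviations are illustrative values,
# chosen so the z-scores match the 2.6 and 3.2 quoted by Prof. Berri.

def z_score(player_wp48, position_mean, position_sd):
    """How many standard deviations a player sits above his position's average."""
    return (player_wp48 - position_mean) / position_sd

robinson = z_score(0.449, position_mean=0.100, position_sd=0.134)  # centers vary widely
jordan   = z_score(0.386, position_mean=0.100, position_sd=0.089)  # guards vary less

print(f"Robinson: {robinson:.1f} SD above the average center")   # ~2.6
print(f"Jordan:   {jordan:.1f} SD above the average guard")      # ~3.2
```

The point of the exercise is simply that the same raw WP48 means different things at different positions once the spread of that position is taken into account.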

This is where logic, math, and intuition can get you. I don't think most people would have trouble understanding how Prof. Berri constructed his arguments. He tells you where his numbers came from and why they might conflict with "conventional wisdom," and in this case, the way he structured his analysis resolved the difference (it isn't always the case that he confirms conventional wisdom; see his discussions of Kobe Bryant).

However, I would like to focus on the fact that Prof. Berri's difficulties came when his statistics generated larger numbers for players not named Michael Jordan. (I will refer people to a recent post listing a top 50 of NBA players on the Wages of Wins Journal.*)

* May increase blood pressure.

In most people’s minds, that clearly leads to a contradiction: how can this guy, with smaller numbers, be better than the other guy? Another way of putting this is: differences in numbers always matter, and they matter in the way “intuition” tells us.

In this context, it is understandable why people give such significance to .300 over .298. One is larger than the other, and it's a round number to boot. Yet over 500 at-bats, the difference between a .300 hitter and a .298 hitter translates to one hit. For most people who work with numbers, such a difference is non-existent. However, if one were to perform "rare-event" screening, such as for cells in the bloodstream marked with a probe that "lights up" for cancer cells, then a difference of one or two might matter. In this case, the context is that, over a million cells, one might expect to see, by chance, five or so false positives in a person without cancer; in a person with cancer, that number may jump to eight or ten.
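Two quick back-of-the-envelope calculations make both points. The screening counts are the illustrative figures from the paragraph above, and the Poisson framing is my own hypothetical way of putting them side by side, not anything from a particular study.

```python
# 1. Batting average: over 500 at-bats, .300 vs .298 is a single hit.
at_bats = 500
hits_300 = round(0.300 * at_bats)   # 150
hits_298 = round(0.298 * at_bats)   # 149
print(f".300 vs .298 over {at_bats} at-bats: {hits_300 - hits_298} hit")

# 2. Rare-event screening: with ~5 false positives per million cells
#    expected by chance, seeing 9 or more events is fairly unlikely,
#    so a difference of a few counts carries real information.
from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) for a Poisson-distributed count with mean lam."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

print(f"P(>= 9 events | expect 5 by chance) = {poisson_tail(9, 5):.3f}")
```

In one setting a difference of one count is noise; in the other, the same size of difference is close to the whole signal.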

For another example, try Bill Simmons's ranking of the top 100 basketball players in his book, The Book of Basketball. Frankly, a lot of the descriptions, justifications, arguments, and yes, statistics that Simmons cites look similar. My point here, though, is that in his mind, Simmons's ranking scheme matters. The 11th best player of all time lost something by not being in the top 10, but he is still better off than the 12th best player. Again, as someone who works with numbers, I think it might make more sense to simply class players into cohorts (a small sketch of this grouping follows the footnote below). The interpretation is that, at some level, any group of 5 (or even 10) players ranked near one another are practically interchangeable in the practice of their craft. The differences between two teams of such players matter only to people forced to make predictions, like sportswriters and bettors. With that said, if one is playing GM, it is absolutely valid to put a team of these best players together based on some aesthetic consideration. It is just as valid to simply go down a list and pick the top five players as ordered by some statistic.* If two people pick their teams in a similar fashion, then it is likely a crap shoot as to which will be the better team in any one-off series. Over time (like an 82-game season), such differences may become magnified. Even then, the win difference between the two teams may be 2 or 3.

* Although some statistics are better at accounting for variance than others.
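Here is the small sketch of "classing players into cohorts" promised above. The names and ratings are invented purely for illustration; the only point is that the spread inside a cohort is small next to the gap between cohorts.

```python
# Chunk a ranked list of players into cohorts of five and compare the
# spread within each cohort to the gap between cohorts. All numbers are
# made up for illustration.

players = [("Player %02d" % i, rating) for i, rating in
           enumerate([29.1, 28.8, 28.5, 28.4, 28.2,    # cohort 1
                      26.0, 25.8, 25.7, 25.5, 25.1],   # cohort 2
                     start=1)]

def cohorts(ranked, size=5):
    """Yield consecutive groups of `size` players from a ranked list."""
    for i in range(0, len(ranked), size):
        yield ranked[i:i + size]

for n, group in enumerate(cohorts(players), start=1):
    ratings = [r for _, r in group]
    print(f"Cohort {n}: spread within group = {max(ratings) - min(ratings):.1f}")

gap = players[4][1] - players[5][1]   # worst of cohort 1 vs best of cohort 2
print(f"Gap between cohorts: {gap:.1f}")
```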

How this leads back to Posnanski is as follows. In a lot of cases, he does not simply rank numbers; he is, in part, a writer and storyteller. The numbers are not the point; the numbers illustrate. Visually, there isn't always a glaring difference between them, especially when one looks at the top performances.

Most often, the tie-breaker comes down to the story, or rather, to what Posnanski wishes to demonstrate; he'll find other reasons to value one performance over another. As for the Posnanski post I mentioned, had it ended differently, I don't think the piece would have made as good a story, even if it still highlighted his argument well.

This is exciting: Lev Grossman's sequel to The Magicians, called The Magician King, will be out tomorrow. There is a small write-up at The Brooklyn Paper. I was looking for news about the novel when I came across an old New Yorker interview, which has a few words about the sequel. I had also written some thoughts about The Magicians. I guess my essay seemed down on the novel, but I really liked it.

In a previous post, I wrote about financial models. The point is that a scientific model generally simplifies. While simplification gives models their power, one must also take care to assess whether adapting or transplanting a model to new fields is valid. Hence some of the disconnects between economic models and the financial tools based on those models.

Here's another illustration. I was talking with my friend about his thesis. R. is interested in building a model of the olfactory bulb. This structure is interesting in that it is anatomically well defined, with three layers. The top layer contains neuropil structures called glomeruli, which contain the axon projections from the primary sensory neurons and the dendritic branches of neurons in the bulb. Both these "main" neurons and so-called interneurons form connections within this layer. Since this is where raw signals from the nose arrive, it is called the input layer. Together, these cells form a network that reshapes the responses into new neural activity patterns, which are relayed to deeper olfactory processing areas of the brain.

The middle layer contains the cell bodies of the olfactory bulb output neurons. As mentioned, these cells, called mitral or tufted cells (usually termed M/T cells), send a main dendrite to the glomerulus. Each cell also sends secondary dendrites laterally, within the middle layer. The third layer, the granule cell layer, contains interneurons that form connections between the laterally spread dendrites in the middle layer. This forms a second point within the olfactory bulb where the raw input from the nose can be reshaped, repatterned, and repackaged for subsequent processing.

OK: my friend spoke of his troubles. He needed to convert the sensory neuron activity coming from the nose, which differs for different smells, into inputs for his model. The important features seem to be when the activity begins (onset latency), how long it lasts (duration), and how intense it is (basically, how often the neuron "fires" an action potential). There are some other subtleties, naturally. Each smell evokes activity in a great many olfactory neurons, each of which may respond with a different set of characteristics. The idea is to build the model so that the responses of the bulb's output neurons can be calculated, given the set of parameters (i.e. the input activity patterns). Ultimately, these input patterns can be related to the actual behavior that helped shape them (such as the sniffing an animal engages in as it homes in on some odorant).

His trouble came in integrating the Hodgkin-Huxley model of the action potential (which is derived from physical/thermodynamic first principles) and determining how this model would generate action potential "spikes" in a way that mimics what the olfactory bulb neurons would do, given the pattern of input activity and the two layers of interneuronal influence within the bulb. It amounts to a set of nested differential equations: the action potentials vary over time, while the degree of influence from the various interneurons also changes in time. That's a real cluster-eff.
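To give a flavor of the first-principles ingredient he starts from, here is a minimal, single-compartment Hodgkin-Huxley integration using the standard textbook parameters and a simple Euler step. This is only a sketch; it contains none of the glomerular or granule-cell network structure his actual model has to add on top.

```python
# Minimal single-compartment Hodgkin-Huxley neuron (standard textbook
# parameters, forward-Euler integration). Network interactions are omitted.
import numpy as np

# Membrane capacitance (uF/cm^2) and maximal conductances (mS/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
# Reversal potentials (mV)
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, t_max = 0.01, 50.0                       # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
spikes, above = 0, False

for step in range(int(t_max / dt)):
    t = step * dt
    I_ext = 10.0 if 5.0 <= t <= 45.0 else 0.0    # injected current (uA/cm^2)

    # Ionic currents
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)

    # Euler updates for voltage and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)

    # Crude spike counter: count upward crossings of 0 mV
    if V > 0.0 and not above:
        spikes, above = spikes + 1, True
    elif V < 0.0:
        above = False

print(f"{spikes} spikes in {t_max} ms at I_ext = 10 uA/cm^2")
```

The hard part he described is not this equation, but coupling many such units through the two interneuron layers while the input itself varies in time.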

I thought I had a brilliant idea (and I still think it's nice): he could simply build a phase space describing all the possible arrangements of his input patterns. Each point in this abstract descriptive space can be correlated with a set of output profiles (i.e. how the bulb neurons eventually respond). In the end, he could identify the bulb response most likely to result from a given set of input patterns.
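A rough sketch of what that descriptive lookup might look like: tabulate combinations of the input parameters (onset latency, duration, firing rate) against output profiles, then answer a new query with the nearest tabulated entry. Everything here, including the placeholder simulate_bulb_output function, is hypothetical and stands in for whatever simulation or data would actually fill the table.

```python
# Hypothetical "phase space" lookup: tabulate input-pattern parameters
# against output profiles, then predict by nearest neighbor.
import itertools
import numpy as np

def simulate_bulb_output(latency_ms, duration_ms, rate_hz):
    """Placeholder for whatever produces an output profile (a full
    simulation or experimental data); the formula here is arbitrary."""
    return np.array([rate_hz * duration_ms / 1000.0, latency_ms + 5.0])

# Grid over the input-pattern parameters (onset latency, duration, rate)
latencies = np.linspace(0, 100, 5)     # ms
durations = np.linspace(50, 500, 5)    # ms
rates     = np.linspace(5, 80, 5)      # Hz

# Tabulate the phase space: every parameter combination -> output profile
table = {}
for lat, dur, rate in itertools.product(latencies, durations, rates):
    table[(lat, dur, rate)] = simulate_bulb_output(lat, dur, rate)

def predict(latency_ms, duration_ms, rate_hz):
    """Return the stored output profile of the nearest tabulated input.
    (A real version would normalize the parameter scales first.)"""
    query = np.array([latency_ms, duration_ms, rate_hz])
    nearest = min(table, key=lambda k: np.linalg.norm(np.array(k) - query))
    return table[nearest]

print(predict(30.0, 200.0, 40.0))
```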

The problem is that this is a descriptive model. The Hodgkin-Huxley approach has the advantage of being an actual theoretical model. Once it is in place, he can literally predict, down to the number of spikes and when they fire, the output of the olfactory bulb.

So yes, that, in a nutshell, is the difference between data mining and something derived from first principles. While one might be able to infer the same conclusions from a descriptive model, the theoretical model is easier to work with when extending it slightly beyond what has already been observed. As Justin Fox warns, such extensions can be perilous if one does not take care to check their validity.

I hadn't quite planned on reading about the rise of mathematical financial theory and the efficient market hypothesis, but that is what I did.

As is my wont, I will digress and say that a prime theme of Moneyball is not that statistics are better than visual pattern recognition: it is that where markets exist, so do arbitrage opportunities. Lewis's style is to group his subjects into opposing camps, to the detriment of his story, so the tension between scouts and stats geeks dominates the book. It makes for a more interesting book, if you like people stories.

The Moneyball story isn't simply that OBP is a good statistic; it was an undervalued metric, in the sense that players with high OBP weren't paid highly compared to, say, batters with high home-run totals and batting averages. Whether Billy Beane was the first to "discover" OBP (he wasn't) is incidental to the observation that no one was actively making use of that information. While GMs at the time were starting to identify other metrics, no one put their money where their mouths were: high-OBP players were not paid a premium. Because of that pricing difference (OBP contributes strongly to runs scored and thus to wins, but GMs did not pay well for it), one could buy OBP talent on the cheap. Now that arbitrage opportunity has disappeared, as teams with money (read: the Red Sox and Yankees) have bid up the price. High OBP commands a premium, so what worked before (a winning strategy on the cheap) no longer works. It was the combination of fiscal constraints and incorrect pricing that gave Beane his edge. The fact that there was a better stat is beside the point; the fact that there was an arbitrage opportunity is absolutely the point.

This brings us to financial markets. If prices for stocks in a company were set by supply and demand, then rational buyers and sellers would essentially agree on a fair price: the seller controls the product (i.e. the stock) and can name a price, while buyers need not purchase the stock if they find the deal poor. In other words, opposing rational interests create a balance between something being charged too much or too little.

Is this price the correct price?

From that simple question, much of mathematical economics was developed to help investors, fund managers, brokers, and bankers identify the worth of the various products they buy and sell today. The most successful of these theories is that markets are efficient: prices in a financial market such as the New York Stock Exchange are not only the optimum prices for sellers and buyers but also reflect a conclusion about the value of the product. That is, the price correctly values the company whose stock is being sold. There are different forms of this efficient market theory, differing in which "information" is assumed to be accounted for in the price. The weak form holds that stock prices reflect all past price information; the semi-strong form, that all publicly available information is quickly incorporated into the price of a stock in a large financial market; and the strong form, that even private (i.e. inside) information is accounted for in the price.

This might seem strange, given that (a) we just saw a financial market meltdown because finance-sector personnel did not evaluate sub-prime mortgage bonds correctly, (b) such bubbles existed both before and after we developed complicated performance metrics (Dutch tulip mania and the dot-com bubble), and (c) there are plenty of shenanigans involving insider trading.

At any rate, one difference I will focus on is that economic scientists (i.e. economists, a breed we should separate from the operators in the financial markets), like most scientists, seek general explanations. Because their tool of the trade is mathematics, economists prefer to derive their conclusions from first principles. Generally, statistical analysis is thought of as a way of either testing a theory or helping guide its development. Statistical models are empirical and ad hoc: they depend on the technique one uses and on how one "scores" the observations, and they are, as a rule, not good at describing things that have not yet been seen. A good theory, by contrast, is a framework for distilling some "essence," a less complex principle that governs the events which produced the observations. Usually, the goal is to isolate the few variables that presumably give rise to a phenomenon. These distinctions are not so firm in practice, of course. Good observations are needed to provide theorists with curves to fit mathematically, and even good theories fall apart (again, theory is still grounded in observation; boundary conditions are a key area where theories fail).

What does all this have to do with financial markets and efficient markets? While we have evidence of inefficient markets, those events may have been rare or the result of a confluence of exacerbating factors. However, one thing scientists would pay heed to is that pricing differences were proven to exist mathematically, derived from the same set of equations used to describe market efficiency. Joseph Stiglitz showed that the so-called strong form of an efficient stock market cannot hold, since information gathering adds value and has a cost. The summary of his conclusion is this: if markets were perfect and all agents had perfect information, then everyone would have to agree on the price. If that were true, there would be no trading (or rather, no speculating), since no one would price things differently. When people are privy to different information, prices can differ, and that, in turn, must lead to arbitrage opportunities (no matter how small). Thus the "strong form" of market efficiency cannot exist.

I was talking with a friend who has an MBA. He wasn't too keen on hearing that the efficient market hypothesis may not be entirely sound when I described Justin Fox's book, The Myth of the Rational Market, to him. I was approaching things from a scientific perspective; I know that models are simplifications, and even the best of them can be found inadequate. And this is what I want to focus on: although a model may not describe everything exactly, that's fine. It does not detract from the model's usefulness.

From Fox's book, and also William Poundstone's Fortune's Formula, the reader sees some difficulties with efficient market theory. For one, the theory was originally posited to explain why prices, in the very short term (daily), vary around some mean. Sure, over time the overall price increases, but at every instant one can see prices tick up and down by a very small fraction of the price. This is the random walk, first mathematically described in the doctoral thesis of Louis Bachelier. One bit of genius is that Holbrook Working pointed out that these random price fluctuations may in fact indicate that the market has worked properly and efficiently to set a proper price; otherwise, we would see huge price movements reflecting the buying and selling of stock on new information. In other words, the price of a stock constitutes the mean around which we see a "natural" variation.
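As a small illustration of that picture, here is a toy random walk: the price ticks up or down by a tiny random fraction each day, so short-term changes look like noise around the prevailing price. The drift and volatility numbers are purely illustrative, not calibrated to any real security.

```python
# Toy geometric random walk for a stock price. Each day the price moves
# by a small random percentage; the parameters are illustrative only.
import random

random.seed(1)
price = 100.0
daily_drift = 0.0002        # slow upward trend
daily_volatility = 0.01     # ~1% typical daily move

prices = [price]
for day in range(250):      # roughly one trading year
    tick = random.gauss(daily_drift, daily_volatility)
    price *= (1.0 + tick)
    prices.append(price)

changes = [(b - a) / a for a, b in zip(prices, prices[1:])]
print(f"final price: {prices[-1]:.2f}")
print(f"mean daily change: {sum(changes) / len(changes):+.4%}")
print(f"largest single-day move: {max(abs(c) for c in changes):.2%}")
```

The daily changes hover around a barely perceptible drift, which is exactly the "natural variation around the mean" reading that Working gave to Bachelier's result.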

And from that, much followed. Both Poundstone and Fox talk at length about pricing differences. In some sense, market efficiency, although implying both speed and precision, did not address the rate of information propagation. Eugene Fama suggested that information spread in a market is near instantaneous (as in, all prices are set and reset constantly at their proper levels). In the theory's original form, I think this instantaneous rate resulted from a mathematical trick. Bachelier was able to "forecast" into the near, near future, showing that a stock price can tick up or down. His work was extended to many such instants by a further trick: by assuming that stock transactions can be updated instantly and without cost, one can build up a trajectory of many near-instants by constantly updating one's portfolio. The near, near future can then be any arbitrary future moment.

Again, my point here is not that efficient market theory is wrong and must be discarded. I was fascinated by the description of counterexamples and by the possibility that some of the assumptions used to build the mathematical framework may need revision.

My boss and I were talking about the direction of our research. He thought that models of cell signaling pathways were lacking in rigor (by which he means a mathematical grounding). Having a physics background, he scoffed at the idea that biology is a hard science, because biological models are mostly empirical and do not "fall out" of first principles (i.e. assumptions, postulates, and deductive reasoning). I, being the biologist, tried to defend the field. Biology, like any system, is complex, yet there are some simple ideas that explain a lot (for instance, evolution and genetic heritability). The concept of the action potential, in neurons, can in fact be derived from physical principles (it is simply the movement of ions down an electrochemical gradient, which can be derived from thermodynamics); indeed, neurons can be modeled as a set of circuits. For example, one recent bit of work my supervisor and I published, using UV absorption as a way to measure nucleic acid and protein mass in cells, is based on simple physical properties (the distinct intrinsic light absorption of the two molecules), which can be described by elementary physical and mathematical models.

However, the descriptions of how networks of neurons work, of how such physical phenomena give rise to animal behavior and thought, and in turn of how individuals act in concert with others to form a societal organism, are wildly complex. Further, there can be multiple principles at work, none of which are necessarily derivable from a common set of ur-assumptions. Newton's laws of motion, for example, can be derived from Einstein's theory of relativity. But basic ideas about human behavior (such as those leading to pricing correctness in market efficiency and game theory), about how humans interact (as described by network theory), and about how something as seemingly nebulous and human-dependent as "information" can be captured by Boolean algebra and a mathematical treatment of circuits do not reduce to one another in that way.

I should be clear: I am simply noting that some fields are closer to being modeled by precise, mathematical rules than others. Reductionism works; even the process of trying to identify key features underlying natural phenomena is helpful. However, one should also keep in mind that wildly successful theories may change, as we obtain better tools and make more accurate measurements.

I think an important point that Fox makes, then, is that we do have a number of observations suggesting that markets are not entirely efficient. There is price momentum (a tendency for stock prices to continue moving in a particular direction), there is a significant amount of evidence that humans do not always act rationally (they tend to overvalue what they own and discount what they do not), and there are clear signs that herd mentality sometimes takes over (a la price momentum or bubbles). Fox also points out something rather important: even as economists identify inefficiencies in the market, the inefficiencies seem to disappear once they become known. Part of this could be statistical quirk: by chance, one expects to see patterns in the noise of large, complex systems. Another part is that, once known, the information is in fact incorporated into future stock prices. This places economists in a bind. If the effect is false, one is justified in dismissing it as noise or an artifact of improper statistical analysis. If the effect is real, then the apparent mispricing reflects a genuine market inefficiency; yet the effect's disappearance suggests that, once known, the price corrected, just as efficient market theorists predicted.

As one can imagine, there are opposing camps of thought.

Further compounding the difficulty is the fact that it has been hard to integrate non-rational agents into traditional market theory. Current theory treats pricing as an equilibrium: information and rational agents pull and push prices this way and that, but ultimately the disturbances are minor, and the overall price of the stock is in fact the proper, true price. Huge disturbances are interpreted as movements of the equilibrium point, but they must arise from external forces (that is, from effects not modeled within the efficient market framework – which leads to an inelegance of the variety that mathematicians and physicists dislike). As the number of contingencies increases, one might as well resort to a statistically based, empirical model. Which brings us back to the original point of how well we understand the phenomenon.

On the other hand, no one who wishes to modify efficient market theory has successfully integrated the idea of the irrational agent. The appeal of such a model is that pricing changes – correct or incorrect – would follow from the actions of "irrational" agents. We would no longer be assuming a correct price and treating everything else as deviation from it; we could, presumably, derive the current price by building the systematic errors made by agents into the model. Even huge deviations from proper prices (i.e. bubbles, undervaluations, and perhaps even the rate at which information is incorporated) would then be predicted by the model. However, such a model remains just out of reach. In other words, opponents of the efficient market do not yet have a complete and consistent system to replace and improve on the existing one. By default, efficient market theory is what continues to be taught in business schools.

My interest in the Fox and Poundstone books is precisely in how difficult it is to incorporate new ideas when an existing one is in place. It is this intellectual inertia that gives rise to the concept of memes as ideas that take on a life of their own (in that ideas exist for their own reproductive sake) and to the Kuhnian paradigm shifts that have to occur in science. My specific interest has always been in how non-scientists deal with new ideas. If scientists themselves are setting up opposing camps, what must laymen be doing when faced with something they do not understand?

 

Michael Lewis seems to specialize in telling stories about misanalysis in cowboys-and-Indians terms. It is clearly a style that gets him into flaps (I know of at least one writer – Buzz Bissinger – who is antagonized by whatever Lewis is trying to sell). I see The Big Short as similar to Moneyball: Lewis has an affinity for people who see something different from the conventional wisdom, and who are curious enough to explore why that is.

The Big Short is not unlike Moneyball in that Lewis clearly takes a side, even though there is really none to take. It certainly is easy to jeer at the big banks, as Lewis finds a cast of characters who, in essence, foresaw the economic meltdown of 2007-2008, led by the tanking of the subprime mortgage market. But they are not heroes; they simply saw something different. And in exploiting their observation, the tools they used were every bit as devastating as the subprime mortgage loans in fomenting the crisis – far more devastating than analytical models of baseball player productivity ever were.

I tried describing this book to a friend by saying that I can simply define several terms, and one cannot help but draw moral conclusions about the participants. The terms are subprime mortgages, subprime mortgage bonds, collateralized debt obligations, and credit default swaps.

Subprime mortgages – Normal loans (i.e. mortgages) earn banks money through the interest charged on the loan, and the interest rate usually tracks a benchmark called the prime rate. Strictly speaking, "subprime" refers to the borrower, whose credit falls below prime quality, not to the rate; in practice, these loans were marketed with low introductory interest rates, which made them attractive to borrowers because they would pay less interest (at the beginning). The banks earn money by eventually raising the interest rate, usually quite a bit above the prime rate. The situation is akin to credit card teaser rates: 0-5% interest for the first 6 months, followed by a jump to 15-23% on the remaining balance. Because of the low teaser interest, the loan seems more affordable than it really is to people who cannot pay for it. Some less than ethical banks in fact targeted people who did not have steady incomes, or who had poor credit ratings, for these loans. The flip side is that these loans could have done some good. Banks were given an incentive to lend more money so that more US citizens could own homes, and the only way they would give money to people at higher risk of not paying it back was, ironically enough, to charge them even more for the privilege of borrowing. Whether banks should have relaxed their standards by so much is a serious question, since there was political pressure for them to do so. But the issue was not examined properly, certainly not by banks that simply signed off on a loan and then sold it (i.e. originate and sell) to some other bank, which assumed the risk of the borrower defaulting. Banks make money by selling these loans, hence the motivation for disreputable banks to lend money to people who cannot afford it.
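A rough sketch of why the teaser rate matters, using the standard amortization formula; the loan size and the two rates are invented, and I am ignoring the detail that a real reset applies to the remaining balance over the remaining term.

# Monthly payment on a fixed-rate amortizing loan.
def monthly_payment(principal, annual_rate, years):
    n = years * 12
    r = annual_rate / 12
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

loan = 300_000                              # hypothetical loan amount
teaser = monthly_payment(loan, 0.02, 30)    # 2% teaser rate
reset = monthly_payment(loan, 0.10, 30)     # rate after the reset
print(f"teaser payment: ${teaser:,.0f}/mo, post-reset: ${reset:,.0f}/mo")

The payment roughly doubles or triples at the reset, which is exactly the affordability illusion being sold.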

Mortgage bonds – Other banks thought mortgage bonds were an attractive financial product because the bonds can dilute the risk of the bondholder losing money. As I understand it, in very simple terms, mortgage loans can be packaged into a bond with many subdivisions (called tranches). What an investor sees is not a set of mortgages but a bond with a return, and that income stream comes from the interest paid to service the loans. More importantly, the purchaser of the bond (in theory) isn't lending all the money to a single borrower. The bond doesn't cut things so fine; in a sense, it doesn't matter which borrower is paying the interest to the bond buyer. It could be one borrower paying thousands of dollars; it could be a few pennies each from a million borrowers. In theory, one borrower defaulting won't affect the bond buyer's expected income. Theoretically. The important point is that a mortgage bond is simply a collection of many mortgages. Some mortgage loans are given to fairly responsible people, some to shady characters; since we are really talking about subprime mortgage bonds, one might expect more shady characters than not. In general, investment banks have an incentive to create these bonds: they can charge fees when they sell them (i.e. a service charge).
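A toy simulation of that diversification argument, under the crucial assumption that borrowers default independently of one another; the default probability and payment size are made up.

import random

random.seed(0)
DEFAULT_PROB = 0.05     # assumed chance that any one borrower stops paying
PAYMENT = 100           # interest each paying borrower contributes

def pool_income(n_loans):
    # Income from a pool: each loan pays unless it (independently) defaults.
    return sum(PAYMENT for _ in range(n_loans) if random.random() > DEFAULT_PROB)

# One loan is all-or-nothing; a big pool's income hugs the expected value.
for n in (1, 10, 1000):
    trials = [pool_income(n) / (n * PAYMENT) for _ in range(1000)]
    print(f"{n:>5} loans: income fraction ranges "
          f"{min(trials):.2f} to {max(trials):.2f}")

The whole edifice rests on that independence assumption; if borrowers default together, the pooling buys nothing.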

Collateralized debt obligation – The CDO is a financial instrument that is basically a bond of bonds. Just as a mortgage bond (presumably) dilutes the risk of borrowers defaulting, the CDO supposedly dilutes the risk of multiple bonds going bad. The idea is that not all mortgages are equal (that is, not all risk is equal), and thus not all bonds are expected to go bad at once. In another bit of financial genius, one might realize that it makes some sense to split bonds up before repackaging them into a CDO. Again, this goes back to the assumption that a bond contains many different levels of risk (in mortgage bonds, every borrower is presumably different). So what if one takes the less risky parts of many bonds (a part, again, being a tranche) and packages them together? Then what if one takes the next riskiest tranche across all these mortgage bonds and packages those into another CDO? What if one does this all the way down to the riskiest sections of the bonds (and remember, the higher interest charged to the borrowers is what underlies the revenue stream that makes this an attractive investment)? If one properly weighs the risk (which is basically the cost) against the benefit (the income), then one might see how, at some point, one might actually buy even the CDO that contains the riskiest slices of bonds: risk is directly proportional to the returns one might expect. What this also means is that, for a while, the buyers of the mortgage bonds were actually other banks – so they could package them into these meta-bonds (CDOs). Naturally, this is "adding value"; the investment banks that create the CDO can charge a fee as the middleman.
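Here is a sketch of the tranche waterfall, and of what happens when defaults stop being independent because an economy-wide shock pushes every borrower toward default at once. The tranche boundaries, base default rate, and shock size are all invented for illustration.

import random

random.seed(1)
N_LOANS = 100
BASE_DEFAULT = 0.05

def pool_loss(common_shock):
    # Each loan defaults with a base probability plus a shared shock --
    # the shock is the correlation that undid the "diversified" tranches.
    p = min(1.0, BASE_DEFAULT + common_shock)
    return sum(1 for _ in range(N_LOANS) if random.random() < p) / N_LOANS

def tranche_losses(loss):
    # Simple waterfall: the junior tranche absorbs the first 5% of losses,
    # the mezzanine the next 15%, and the senior tranche everything beyond.
    junior = min(loss, 0.05) / 0.05
    mezz = min(max(loss - 0.05, 0.0), 0.15) / 0.15
    senior = max(loss - 0.20, 0.0) / 0.80
    return junior, mezz, senior

for shock in (0.0, 0.30):
    loss = pool_loss(shock)
    j, m, s = tranche_losses(loss)
    print(f"shock={shock:.2f}: pool loss {loss:.0%}, "
          f"junior {j:.0%}, mezzanine {m:.0%}, senior {s:.0%} wiped out")

With no shock, only the junior slice gets dented; with a shared shock, the supposedly safe senior slice starts taking losses too.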

Credit default swap – An insurance policy, which anyone can purchase – even someone who is not actually "exposed" to the risk. As I read it, it's as if I bought an insurance policy against your house burning down. Of course, I would hope to choose wisely; I might select a house sitting next to a forest with a history of fires, during a drought, in southern California. And if I'm doing that, the insurer had better not be selling the policy cheaply (from my perspective, of course, I'm looking for exactly the underwriter that has mispriced the risk: a low premium relative to the probability of the worst happening and the payout that would follow). So it is a balancing game; the expectation is that the worst will not happen, but if it does, the insurers had better have charged a high enough premium to have enough money stockpiled to pay out.
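In expected-value terms, the buyer's bet looks something like the following sketch; the notional, premium, and default probability are made-up numbers, and I am ignoring the timing of payments over the life of the contract.

# A CDS buyer pays a running premium and collects the notional if the
# reference asset defaults. The bet is attractive exactly when the quoted
# premium implies a default probability lower than the buyer's estimate.
notional = 10_000_000              # hypothetical amount insured
annual_premium = 0.02 * notional   # assumed quote: 200 basis points a year
p_default = 0.20                   # buyer's own estimate of annual default risk

expected_payout = p_default * notional
expected_cost = annual_premium     # premium paid over one year
print(f"expected payout: ${expected_payout:,.0f}, premium: ${expected_cost:,.0f}")
print("positive expected value for the buyer" if expected_payout > expected_cost
      else "the insurer priced it adequately")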

Some questions arise. If the whole mortgage bond market was driven by high returns based on the high interest rates charged to the borrowers, why did so few people associate those high interest rates with the higher risk involved? Again, the high interest rate is charged as a penalty because the lender feels the borrower might not be able to pay. In this context, why would any bank not ask for income information for some mortgages ("Alt-A" mortgages)? Why would banks lend money with a 0% teaser, then jack the rate up to 10%, and make these loans to applicants with no steady jobs?

While default poses an obvious problem (the borrower can't pay), paying too soon is also a problem. A smart borrower, who is able, would refinance his home to get out from under the usurious rates. This poses a problem for the bond buyers: the original mortgage gets paid off early, which means the total interest collected by the bond buyer – the total income – will be low. Why would banks assume they could get 15 or 30 years' worth of returns on these teaser deals? So there are two strikes against the mortgage bond: borrowers can prepay, or borrowers can default. And what if the borrower can't refinance? The original reason he got this type of subprime loan is that he was unlikely to be approved for a normal loan. What makes anyone think that a large fraction of these borrowers will stabilize and become candidates for a normal loan through refinancing once the teaser rate period ends? That is the other risk posed to the bond investor.
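The prepayment problem is easy to put in numbers with the same toy amortization math; again, the loan size, rate, and refinancing date are invented.

def total_interest(principal, annual_rate, years, months_held):
    # Interest actually collected if the borrower makes scheduled payments
    # and then pays off the remaining balance after months_held.
    n, r = years * 12, annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -n)
    balance, interest = principal, 0.0
    for _ in range(min(months_held, n)):
        interest += balance * r
        balance = balance * (1 + r) - payment
    return interest

loan = 300_000
print(f"held 30 years: ${total_interest(loan, 0.10, 30, 360):,.0f} in interest")
print(f"refinanced at 24 months: ${total_interest(loan, 0.10, 30, 24):,.0f}")

Most of the income the bond buyer is counting on simply never arrives if the borrower escapes early.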

The same comments can be made of a CDO. Did investors properly evaluate the downside of owning these bonds? Lewis's answer is an emphatic no. In all the definitions above, the products were sold because the seller could charge fees. The middlemen made all the money here; the buyers were playing a game of hot potato.

Lewis cannot contain his disgust at the shenanigans being played here. He reserves his hardest forehead slap for the synthetic CDO. At some point, a few investors got wise and started betting against whoever bought CDOs. Some of these investors looked as deeply as they could and found that the tranches did not represent independent risks, which is the supposed advantage of packaging these loans and bonds together. These short sellers assumed that at some point (perhaps at the end of the teaser rate period?) the loans would go bad, and then the bonds, and then the CDOs. So they bought insurance against the CDOs failing; they paid a premium now, hoping that things would get so bad that the insurers would have to pay out on the policy. Not only that, the default swap allows the buyer to insure very specific things; it is almost like a CDO in this regard. Instead of insuring against a whole mortgage bond defaulting, one can target specific tranches. So one might simply buy a credit default swap on the riskiest tranche and work one's way up to the less risky tranches. It's a reverse CDO. Actually, it is exactly a CDO: some wit thought it made perfect sense to simply turn the CDS around and sell it as a CDO. Since this CDO was constructed from the CDS (as opposed to original "research"?), it is termed synthetic.

*****************

Too Big to Fail picks up where The Big Short ends. Bear Stearns had fallen, and the spectre of ruin was playing out through the entire financial market. Part of the problem had to do with the so-called assets held by investment banks and investors: billion-dollar bonds that were about to go bad. While some banks might be able to write off the loss, the people who place their accounts and money with those banks do not know that. One reason the meltdown seemed unstoppable was that many people were asking for their money back at once. And why can't a bank ever pay back everybody's money at once? Because banks use the money to invest (buy assets, make loans); that is why we all get interest paid on our deposits. How much of the money they can use for investments is subject to regulation. Investment banks like Goldman Sachs, JP Morgan, Merrill Lynch, Bear Stearns, and Lehman Brothers were leveraged to the hilt (their money was everywhere but in a vault). But it almost doesn't matter: unless the reserve requirement is set at 100%, no bank can ever hope to withstand a full run.
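The arithmetic of a run on a fractionally reserved bank is brutally simple; the deposit base and reserve ratio below are arbitrary numbers, not figures from the book.

deposits = 1_000_000_000    # total customer deposits (hypothetical)
reserve_ratio = 0.10        # fraction actually kept on hand
reserves = deposits * reserve_ratio

for withdrawal_demand in (0.05, 0.10, 0.30, 1.00):
    demanded = deposits * withdrawal_demand
    status = "covered" if demanded <= reserves else "bank fails (or must dump assets)"
    print(f"{withdrawal_demand:.0%} of deposits demanded: {status}")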

And scale matters. A bunch of us ordinary customers making a run may not matter over the short term. The most damaging thing a bank can suffer is a few big clients – other banks, hedge funds, corporations – deciding to take their business away. Much of the money used by investment banks, for example, comes from hedge funds with trading accounts, and these are billion-dollar accounts. Even JP Morgan, which at one point during the meltdown was liquid enough to have some $180 billion in actual cash, could not have withstood a sustained run: inside of a week it was down to $40 billion, with the possibility that after the weekend it would be bankrupt. Since everyone was panicking and asking for their money back, the whole thing fed on itself in a self-reinforcing cycle. Banks that weren't in trouble soon were.

That is the essence of Too Big to Fail. I joked to my wife that, at various points, things were "getting exciting" – like when Lehman Brothers was about to go bankrupt and the fear and panic spread anew. It seems there was very little the Treasury Secretary, Henry Paulson, and the Chairman of the Federal Reserve, Ben Bernanke, could do. Every measure caused a small rally, followed by a new cycle of panic and withdrawals. It truly was like watching dominoes fall, one after another, after Bear Stearns. The tension lay in the fact that the situation did look hopeless. The contagion in this case was irrational fear. Every investor who feared for his money decided to pull it out, yet the very act of withdrawal was one more chip at the banking edifice. Banks had to pull back their own investments to generate the money to fund the withdrawals, and once they started doing that, everyone was essentially stuck calling in their markers: you need to pay back your loans, and not being able to do so means you are bankrupt. Whatever one wishes to call it – a self-reinforcing cycle, a self-fulfilling prophecy – it is difficult to see how the cycle could have stopped without an enormous amount of money.

Strangely enough, it wasn't just the money; it was the way it was doled out. A string attached to the $700 billion bailout was that the investment banks were forced to become bank holding companies, which are subject to more regulation. Merrill Lynch was forced to merge with Bank of America, becoming the investment arm of BoA. Morgan Stanley and Goldman Sachs had also considered such mergers, but they received an out: they were converted directly into bank holding companies. The plus side is that all these banks now had access to the discount window the Fed offers to bank holding companies; Morgan Stanley and Goldman Sachs could borrow money from the Fed. This is important. It meant the investment banks no longer had to sell assets or call in their markers to raise the cash to pay off withdrawals.

Essentially, the guarantors of the investment banks became the American taxpayers. This did stop the meltdown from proceeding.

However, there still looms the spectre of actual mortgage defaults, on which the banks will have to pay out credit default swaps. Banks will lose money, and it is unclear how they will handle the disposition of the properties.

****

Too Big to Fail was simply a story. It wasn't meant to be an analysis of the methods used by the Treasury Department to stave off financial collapse; instead, it contains reconstructed scenes of the various players, placed into a narrative history. I realized that I do not like narrative histories. I much prefer dense argument and the presentation of primary data. Most importantly, I wanted analysis, but that is not where this book's strength lies. Too Big to Fail documents the what, not the why and how.

***

While it is easy to see how writers dealing with the meltdown can develop strong opinions, I think the problem Lewis has with the banks transcends simple disgust. One undercurrent of his previous book, Moneyball, which I also found in The Big Short, is that he can't understand why people are so lacking in curiosity.

In Moneyball, he juxtaposed baseball statisticians and scouts. But the fact that the insurgents happened to be statisticians is almost beside the point; it could have been anyone who wanted to challenge an existing way of doing things, one that had come to border on the ritualistic.

In The Big Short, the problem is more glaring. We are now talking about people's life savings and retirement funds, and the ability of local and global commerce to function. Billions of dollars. Fund managers were buying up assets, trusting in Moody's ratings – no analysis, or at best the kind available to just about anybody. Aren't Wall St. traders privy to better information and resources? Another cause of the meltdown, then, was plain laziness and incompetence: no due diligence was done, even with billions on the line.

Exploiting informational advantages may also have contributed to the problem. For example, if one understood how various banks and ratings agencies valued various assets, that could give some traders an edge. If done with public information, it is simply being aware. Keep in mind, however, that it is no secret that Moody's analysts are underpaid relative to the rest of Wall St., and that analysts have been known to defect from Moody's to the investment banks, carrying working knowledge of the algorithms Moody's used to generate ratings. It is possible that Moody's valuation formulas are known for certain assets. What is interesting is that this should be a double-edged sword: investors may be able to exploit the knowledge against someone who lacks it, but at the same time it should give those same banks pause about placing their trust solely in ratings agencies.

It is not only incompetence; there is also the spectre of malfeasance. I became really interested in investment banking, and have Charles Ellis's The Partnership and Suzanne McGee's Chasing Goldman Sachs on my to-read list. McGee's book came out in 2010, and in her foreword she writes that Goldman is being investigated for fraud. Remember how default swaps were turned around and repackaged as CDOs? At some point, it should have dawned on someone that it is strange for one party to want to bet against the riskiest CDOs (or specific, worst-of-the-worst tranches of different CDOs), only to have the same instrument sold as an asset (AAA rated, no less!) to someone else. At the least, the bank is ill-serving its clients. Since Goldman is considered the best and brightest, this is unlikely to be a mistake; it looks intentional. Goldman made money on both ends, creating and selling both the CDSs and the CDOs. It's one thing for an investor to come and ask for specific instruments to be created (which is what happened with the creation of the CDSs), but it would be another if Goldman knew the assets had a high likelihood of default and hid that information in order to sell the CDOs. So yes, what seems at the very least… unsporting… is actually rotten.

Lewis's story does comment on how investors like Steve Eisman and Michael Burry had trouble tracking down information about the loans underlying the bonds and CDOs. It is not out of the realm of possibility that someone was suppressing that information – whether Goldman was party to it remains to be seen.