
Thoughts

The best novels focus on true-to-life characters who swim and thrive against the tide of events. What has been interesting is how limited the purview of the modern literati is when it comes to identifying novels of note. Books that step outside the boundaries of wringing the profound out of mundane life in rural America or the big city tend to be scoffed at.

What I found so stunning and effective about Ramez Naam's Nexus Trilogy – Nexus, Crux, and Apex – is how recognizable the motivations and actions of his characters are. First and foremost, the series is a thriller that explores a technology that does not yet exist but could very well arrive in the next few decades. But the novels encapsulate, more so than white papers, policy articles from think tanks, or academic research, the human tensions of a new telepathy/mind-link/brain-control technology.

If one were to ask what humans would do with such new devices, one need look no further than Nexus to get a realistic snapshot.

What made the novel so thought-provoking? Probably that Naam did not shy away from the abuses of the technology. Nexus, in this novel, is a nanoparticle computer network that one can inject into the brain. The idea is that the particles can monitor and influence neural networks. Coupled with wireless packet transmission, it effectively enables mind-to-mind linkage and control.

Needless to say, the abuses are nefarious: body hijackings, slavery, murder, rape, drug-like stimulatory uses – all are in the novel. The last point is probably the flavor most consistent with why such devices would be made in the first place: therapeutic purposes.

Presumably, if these particles can localize to the brain (and possibly elsewhere in the body), the dream is to perform fine-scale monitoring of aberrant body processes and deliver precise therapy. The mind-link capability could drive new approaches to treating mental illness. Probably the most profound use might be enabling new ways for families to communicate with autistic loved ones. Another key reason might be to join minds to enhance performance; the simple case might be in sports or within an orchestra, but more likely, such direct networking would benefit the military, or allow groups of humans to be used as a massively powerful distributed computing network.

Although there have been great strides in brain-machine interfaces for vision, we are a ways away from being able to replace the eye.

However, my sense is that a true Nexus-like technology would be immensely effective at causing harm as soon as it is released. It would probably be co-opted into tools for body control, torture, and rape, simply because causing paralysis and inducing base emotions should be among the easiest things to do with it.

So, in these contexts, with its immense potential for both benefit and abuse, is it worth pursuing this technology? Further, is that even a meaningful question? The premise of human dignity tends to be a Western concept. In other cultures, the needs of the many outweigh the needs of the few; that type of culture tends to respect the group, perhaps at the expense of the individual. In that context, can anyone reasonably expect research into such technologies to be forgone on the basis of individual rights? If anything, there are more countries that are ostensibly authoritarian than not; I would not be surprised if the technology arose precisely because a government wished to exert control, rather than from, say, the healthcare sector.

Naam has a distinct view. For one, his main characters – generally the viewpoints with which one assumes an author is most aligned – tend toward libertarianism of the American variety. It's the usual gun-lobby argument: the technology does not harm people; people do. There is a strong counterbalance to this viewpoint in the novels, but what we are left with is a technology released into the wild, with no oversight, dependent on most people doing "good."

I'm not sure. Despite Naam's ostensible viewpoint, I am left ambivalent. I'm not sure this technology should be developed, let alone released, considering the potential for private, corporate, and governmental abuse.

So what is the point of thinking about the Nexus Trilogy in the context of projecting what amounts to technology governance policy? Isn’t something like this best left to policy wonks?

Well, it goes back to my point: the best novels provoke thought. In this case, it isn't so much the technology or how realistic the science is. The question remains: how will humans react to and interact with the device or circumstance?

 

It is precisely the intersection of humans and technology that we should focus on. The response of humanity to technology is not written on a blank slate. Technology is introduced in the context of, first, a few humans, and then society. We can draw from past examples to see how technology affects the economy. We can assess how technologies altered power relationships among different groups. These would, of course, be actual anthropological, archaeological, and historical studies.

Sometimes, however, a novel – even from genre fiction – that places realistic constraints on human reaction and motivations can cut through the noise and expose the heart of the problem.

 


I recently heard a fun episode of This American Life called "Kid Politics". Ira Glass presented three stories about children being forced to make grown-up choices. The second story is an in-studio interview with Dr. Roberta Johnson, geophysicist and Executive Director of the National Earth Science Teachers Association, and Erin Gustafson, a high-school student. The two represented a meeting of minds between a scientist presenting the best evidence demonstrating human-induced climate change and a student who, in her words, does not believe in climate change.

It is worth listening to; Ms. Gustafson is certainly articulate, and she is entitled to think what she wants. I simply note that Ms. Gustafson uses language suggesting she is engaged in a defense of beliefs rather than an exploration of scientific ideas.

Ira Glass, near the end of the interview, asks Dr. Johnson to present the best pieces of evidence arguing in favor of anthropogenic climate change. Dr. Johnson speaks of the analysis of ice cores, in which carbon dioxide levels can be measured and correlated with evidence of temperature. Ms. Gustafson points out that, apparently, in the 1200s there was a human record of a warm spell – I gathered it was somewhere in Europe, although the precise location and the extent of this unseasonably hot weather were not mentioned – despite low CO2 levels at the time.

Clearly, Ms. Gustafson has shown enough interest in the topic to find some facts or observations to counter a scientific conclusion. She then calls for scientists to show her all the evidence, after which she herself will get to decide. I suppose at this point I'm going to trespass into Kruger-Dunning territory and speak about expertise, evidence, and the use of logic.

In general, I do not think it is a good approach for scientists to simply argue from authority. I admit this comes from a bias of mine: my interest in writing about science for a lay audience. I focus on methods and experimental design rather than conclusions; my hope is that, by doing so, the authority inherent in the concept of "expertise" becomes self-evident. That is, I show you (not just tell you) what others (or I) have done in thinking about and investigating a problem. By extension, I hope to inform myself sufficiently before preparing my thoughts on the matter, shooting specifically for fresh metaphors and presentation. (As an aside, I suppose this might be a mug's game, given the findings of Kruger and Dunning.)

If a scientist has done his or her job, one is left with a set of "facts." These facts populate any school textbook. But the facts are more than that: with a bit of thought and elaboration, they can act as models. I dislike the distinction people make when they argue that we need to teach kids how to think rather than a set of facts. I argue that learning "how to think" depends crucially on how well a student has been taught to deal with facts: using them as assumptions in deductive reasoning, weighing whether a fact has solid evidence behind it, and treating facts as if they were models.

Here's my issue with how Ms. Gustafson, and other anti-science proponents (like anti-evolutionists), argue. Let's say we were told that gas expands upon heating. One might take this as a given and immediately think of consequences. If these consequences are testable, then you've just designed an experiment. Inflate a balloon and tie it off. If temperature increases lead to volume increases, one might immerse the balloon in hot water to see if it grows larger. One might choose to examine the basis of thermal expansion of gas, and find that the experiments have been well documented since the 1700s (Charles's Law). A reasonable extrapolation of this fact is that, if heating a gas increases its volume, then perhaps cooling a gas will lead to a contraction. One might have seen a filled balloon placed in liquid nitrogen (at -196 °C) stiffen – but it also shrivels up.
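
To make the deduction concrete, here is a minimal sketch in Python of the prediction Charles's Law lets you make before ever dunking the balloon; the starting volume and temperatures are hypothetical, chosen only for illustration.

```python
# Charles's Law: at constant pressure, V1 / T1 = V2 / T2 (temperatures in kelvin).
# All starting values below are hypothetical, chosen only to illustrate the
# prediction one would then test with the balloon experiment.

def predicted_volume(v1_liters, t1_kelvin, t2_kelvin):
    """Predict the gas volume after a temperature change at constant pressure."""
    return v1_liters * (t2_kelvin / t1_kelvin)

ROOM_TEMP = 293.15        # about 20 deg C
HOT_WATER = 353.15        # about 80 deg C
LIQUID_NITROGEN = 77.15   # about -196 deg C

v_room = 1.0  # a 1-liter balloon at room temperature

print(predicted_volume(v_room, ROOM_TEMP, HOT_WATER))        # ~1.2 L: the balloon grows
print(predicted_volume(v_room, ROOM_TEMP, LIQUID_NITROGEN))  # ~0.26 L: the balloon shrivels
```

(The real balloon in liquid nitrogen shrinks even more than the ideal-gas prediction, because part of the air condenses at that temperature.)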

Depending on how well facts are presented, they can be organized within a coherent framework, as textbooks, scientific reviews, and introductions in peer-reviewed articles already do. My graduate advisor characterized this context-fitting as "provenance." No idea is truly novel; even if one does arrive at an idea through inspiration and no obvious antecedents, it is expected that the idea have a context. It isn't that the idea has to follow from previous ideas. The point is to draw threads together and, if necessary, make new links to old ideas. The end point is a coherent framework for thinking about the new idea.

Of course, logic and internal consistency are no guarantee of truth; that is why a scientist does the experiment. What hasn't really been emphasized about science is that it is as much about communication as it is about designing repeatable experiments. Although scientists tend to say, "Show me," it turns out that they also like a story. It helps make the pill easier to swallow. The most successful scientists write convincingly; the art is in choosing the right arguments and precedents to pave the way for the acceptance of empirical results. This is especially important if the results are controversial.

The error Ms. Gustafson makes is thinking that by refuting one fact, she can refute an entire tapestry of scientific evidence and best models (i.e. "theory"). She points to one instance where carbon dioxide levels do not track the expected temperature change. But in what context? Is it just one point out of 1,000? I would hazard a guess that the frequency of divergence is higher than that, but unless the number of divergences is too high, one might reasonably suppose that the two correlate more often than not. (Causation is a different matter; correlation is not causation.)

But let us move on from that; a more elemental disagreement I have with Ms. Gustafson's point is this: let's say one agrees that carbon dioxide is a greenhouse gas. A simple model is that this gas (and other greenhouse gases such as water vapor, methane, and nitrous oxide) absorbs heat in the form of infrared radiation. Some of this energy is transferred into non-radiative processes. Eventually, light is re-emitted (also as infrared radiation) to bring the greenhouse molecule to a less energetic state. Whereas the incoming infrared light had a distinct, unidirectional vector, re-radiation by the greenhouse molecule occurs in all directions. Thus some fraction of the light is sent back towards the source while the rest essentially continues on its original path. If infrared light approaches Earth from space, then these gases act as a barrier, reflecting some light back into space. Absorption properties of molecules can be identified in a lab. We can extend these findings to ask: what happens to infrared heat emitted from the surface of the planet?

A reasonable deduction is that, just as near the edge of the atmosphere, greenhouse gases near the Earth's surface also absorb and re-radiate a fraction of the heat. Only this time, the heat remains near the Earth's surface. One logical question is: how does this heat affect the bulk flow of air through the atmosphere? (One answer is that the heat may be absorbed by water, contributing to the melting of icebergs. A related answer is that the heat may drive evaporation and increase the kinetic energy of water vapor, providing energy to atmospheric air flows and ultimately to weather patterns.)
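
A back-of-the-envelope version of this reasoning is the textbook single-layer energy-balance model. The Python sketch below is only an illustration of the logic – an atmosphere that absorbs a fraction of outgoing infrared warms the surface – and uses standard round-number values for solar input and albedo, not anything from the interview.

```python
# Toy single-layer ("gray atmosphere") energy-balance model of the greenhouse effect.
# Illustrative only: standard textbook constants, no feedbacks, no dynamics.

SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0  # incoming sunlight at the top of the atmosphere, W m^-2
ALBEDO = 0.3             # fraction of sunlight reflected straight back to space

def surface_temperature(ir_absorptivity):
    """Equilibrium surface temperature (K) when the atmosphere absorbs the given
    fraction of the infrared radiation emitted by the surface."""
    absorbed_sunlight = (1 - ALBEDO) * SOLAR_CONSTANT / 4   # averaged over the sphere
    t_no_greenhouse = (absorbed_sunlight / SIGMA) ** 0.25   # about 255 K
    return t_no_greenhouse * (2 / (2 - ir_absorptivity)) ** 0.25

print(surface_temperature(0.0))   # ~255 K: no infrared absorption, a frozen planet
print(surface_temperature(0.78))  # ~288 K: close to the observed mean surface temperature
```

Raising the infrared absorptivity in this toy model raises the equilibrium surface temperature; that is the whole argument in miniature.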

For someone who dismisses greenhouse-gas-induced global warming, removing the contribution of carbon dioxide isn't just the simple erasure of a variable in some model. What the global warming denier is really asking is that the known physical properties of carbon dioxide be explained away or modified. Again, the point is that carbon dioxide has measurable properties. For it not to contribute in some way to "heat retention," we must explain why the same molecule would not absorb and re-emit infrared radiation in the atmosphere the way it does in the lab. In other words, simply eliminating the variable would require us to explain why two different sets of physical laws apply to carbon dioxide. In turn, this would require a lot of work to provide context – the provenance of the idea.

Yes, one might argue that scientists took a reductionist approach that somehow removed some other effector molecule when they measured carbon dioxide's properties using pure samples. Interestingly enough, the composition of the atmosphere is well known. Not only that, one can easily obtain an actual "real-world" sample and measure its ability to absorb unidirectional infrared light and re-radiate in all directions. This isn't to say that the thermodynamics of gases and their effects on Earth's climate are simple. But it is going to take more than a simplistic question – such as positing some synergistic effect between carbon dioxide and another greenhouse gas, or some as-yet unidentified compound – to make physicists and chemists modify their working model of how energy is absorbed and transferred.

If it seems rather pat for a scientist to sit and discriminate among all these various counter-arguments, I am sorry to disabuse you of the notion that scientists weigh all claims equally. Ideally, the background of the debaters ought not to matter, and scientists will weigh your criticisms more heavily if you show the context of your idea. The more relevant and cohesive your argument, the more seriously you will be taken. Otherwise, your presentation may do you the disservice of suggesting you are simply guessing. That's one problem with anti-science claimants: all too often it sounds like they are throwing out as many criticisms as possible, hoping to get lucky and have one stick.

Take evolution: if one suggests that mankind is not descended from primates, then one is saying that mankind was in fact created de novo. That is fine, in and of itself, but let's fill out the context. Let's not focus on the religious texts, but instead consider all the observations we would have to explain away.

If we were to go on and try to explain mankind as a special creation, how would we explain mankind's exceptionalism? Can we even show that we are exceptional? Our physiology is similar to that of other mammals. We share physical features with other primates. Sure, we have large brains, among the largest brain-mass-to-body-mass ratios in the animal kingdom. Yet our genome differs from the chimpanzee's by only about 4%. Further, at a molecular level, we are hard pressed to find great differences. We simply work the same way as a lot of other creatures. We share many of the same proteins; despite the obvious differences between man and mouse, even weakly similar proteins still show about 70% sequence homology. It seems to me that at multiple levels – physiological, anatomical, and genomic – we are made of the same stuff as other life on Earth. Yes, we do differ from these other lifeforms, but it seems more logical to suggest that mankind is one type of life in a continuum of possible lifeforms that can exist on Earth. It seems likely that whatever process led to such a variety of creatures also "created" man.

I hate to harp on this, but a fellow grad student and I had such arguments while we were both doing our thesis work. My friend is a smart guy, but he still makes the same mistake that anti-evolutionists make: assuming that by disproving natural selection, one has thereby provided some support for creationism. We argued about Darwin's theory and whether it can properly be extended beyond the microscopic domain. He was willing to concede that evolution occurs at the microbial level – for "simple" organisms, evolution makes sense, since fewer genes mean less complexity, and therefore changes can be as likely to be beneficial as deleterious.

I thought the opposite. If an organism is "simpler" – namely because it contains a smaller genome – it is even more crucial that a mutation be beneficial. A larger genome, from empirical data, generally contains more variants of a given protein. This in itself reflects the appropriation of existing genes and their products for new functions, and an increase in isoforms of a protein also suggests how mutations can occur without the organism directly suffering ill effects: there is redundancy of protein and function. Also, my friend seems to regard fitness as a "winner takes all" sort of game – as in, the organism lives. I merely saw the "win" as an increase in the probability that the animal will have a chance to mate, not as organismal longevity. Sure, this is a just-so story; I think his argument is better than the usual creationist claptrap, but only in the trivial sense that, yes, we need to take care not to over-interpret our data or models, and yes, scientific theories – although they are our best models – are temporary in the sense that we revise them when better evidence comes along.

To go back to the way Ms. Gustafson and my friend argue: it behooves them to explain the exceptional circumstances under which we, or carbon dioxide, can act differently from our best model (i.e. theory) and yet conform to it most of the time.

Thus, despite Ms. Gustafson's call for "all the evidence," I was left thinking that no amount of evidence will persuade her. Part of the problem is that, like the religious who misapply the ideas of meaning found in their bibles to the physical evidence generated by scientists, she misapplies her political views to provide the context through which she views scientific evidence about global warming. She should have used logic to deduce that global climate does not predict local weather, and scientific principles to determine whether global warming is part of a normal cycle for the Earth or is in fact due to circumstances like an increase in greenhouse gases. Instead, she probably thought of global warming in terms of regulations and taxes pushed, in the United States, mostly by Democrats. Thus Ms. Gustafson speaks, in Stephen Jay Gould's term, from the magisterium of meaning (as defined by her political and religious beliefs) and not from the magisterium of science. In this case, she isn't defending a theory about how the world works; her motivation is to fit the observations to her political and religious ideals.

Can we really separate the political from the scientific? If scientists argue that there is a problem, it seems difficult to find ways to argue against them. My only suggestion is that Ms. Gustafson and others like her consider their arguments more carefully. Nitpicking specific examples is counter-productive; all theories can be criticized in this way. Integrating a counter-example is not a straightforward process, especially if the simplistic criticism is at odds with some other firmer, more fundamental observation that even Ms. Gustafson has no problem accepting.

 

I hadn't quite planned on reading about the rise of mathematical financial theory and the efficient market hypothesis, but that is what I did.

As is my wont, I will digress and say that a prime theme of Moneyball is not that statistics are better than visual pattern recognition: it is that where markets exist, so do arbitrage opportunities. Lewis's writing style is to group his subjects into opposing camps, to the detriment of his story, so the tension between scouts and stats geeks dominates the book. It makes for a more interesting book, if you like people stories.

The Moneyball story isn't simply that OBP is a good statistic; it was an undervalued metric, in the sense that players with high OBP weren't paid highly compared to, say, batters with high home-run totals and batting averages. Whether Billy Beane was the first to "discover" OBP (he wasn't) is incidental to the observation that no one was actively making use of that information. While GMs at the time were starting to identify other metrics, no one put their money where their mouths were: high-OBP players were not paid a premium. Because of that pricing difference (OBP contributes strongly to runs scored and thus wins, but GMs did not pay well for it), one could buy OBP talent on the cheap. Now that arbitrage opportunity has disappeared, as teams with money (read: Red Sox and Yankees) have bid up the price. High OBP now commands a premium, and what worked before (a winning strategy on the cheap) no longer works. It was a combination of fiscal constraints and incorrect pricing that gave Beane an edge. The fact that there was a better stat is beside the point; the fact that there was an arbitrage opportunity is absolutely the point.
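
A crude way to see the arbitrage, sketched in Python with purely invented salaries and stats (nothing here comes from the book): compare what a unit of on-base percentage costs when it arrives with home-run power versus when it does not.

```python
# Invented numbers, for illustration only: if OBP is what actually produces runs
# but the market pays for home runs, the high-OBP, low-power player is the bargain.

players = [
    # (label, on-base percentage, home runs, salary in millions) - all hypothetical
    ("power hitter", 0.330, 40, 12.0),
    ("walk machine", 0.400, 12, 2.5),
]

for label, obp, hr, salary in players:
    print(f"{label}: {hr} HR, OBP {obp:.3f}, ${salary:.1f}M "
          f"-> ${salary / obp:.2f}M per unit of OBP")

# The gap in price-per-OBP is the arbitrage window; once other teams start paying
# for OBP directly, the gap closes and the cheap strategy stops working.
```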

This brings us to financial markets. If prices for stocks in a company were set by supply and demand, then rational buyers and sellers essentially agree on a fair price: the seller has control of the product (i.e. the stock) and can name a price, while buyers need not purchase the stock if they find the deal poor. In other words, opposing rational interests create a balance between charging too much and too little.

Is this price the correct price?

From a simple question, much of mathematical economics was developed to help investors, fund managers, brokers, and bankers identify the worth of the various products they buy and sell today. The most successful of these theories is that markets are efficient: prices in a financial market such as the New York Stock Exchange are not only the optimum price for sellers and buyers, but also reflect a conclusion about the value of the product. That is, the price correctly values the company whose stock is being sold. There are different forms of this efficient market theory; they differ in which "information" is assumed to be accounted for in the price. The weak form suggests that stock prices reflect all information contained in past prices. The semi-strong form holds that new publicly available information is quickly accounted for in the price of a stock in a large financial market. The strong form holds that even private (i.e. inside) information is accounted for in the price.

This might seem strange, given that a) we just saw a financial market meltdown because finance-sector personnel did not evaluate sub-prime mortgage bonds correctly, b) such bubbles existed both before and after we developed complicated performance metrics (Dutch tulip mania and the dot-com bubble), and c) there are enough shenanigans involving insider trading.

At any rate, one difference that I will focus on is that economic scientists (i.e. economists, a breed we should separate from the operators in the financial market), like most scientists, seek general explanations. Because their tool of the trade is mathematics, economists prefer to derive their conclusions from first principles. Generally, statistical analysis is thought of as a way of either testing a theory or helping guide the development of one. Statistical models are empirical and ad hoc: they depend on the technique one uses and on how one "scores" the observations, and they are, as a rule, not good at describing what has not yet been observed. A good theory is a framework for distilling some "essence" – a less complex principle that governs the events that give rise to the observations. Usually, the goal is to isolate the few variables that presumably give rise to a phenomenon. These distinctions are not so firm in practice, of course. Good observations are needed to provide theorists with curves to fit, mathematically. And even good theories fall apart (again, they are still grounded in observations – boundary conditions are a key area where theories fail).

What does all this have to do with financial markets and efficient markets? While we have evidence of inefficient markets, these events may have been rare or the result of a confluence of exacerbating factors. However, one thing scientists would pay heed to is that pricing differences were proven to exist mathematically, derived from the same set of equations used to describe market efficiency. Joseph Stiglitz proved that there cannot be a so-called strong form of an efficient stock market, since information gathering adds value and has a cost. The summary of his conclusion is that if markets were perfect and all agents had perfect information, then everyone would have to agree on the price. If that were true, there would be no trading (or rather, no speculating), since no one would price things differently. When people are privy to different information, prices can differ. That, in turn, must lead to arbitrage opportunities (no matter how small). Thus the "strong form" of market efficiency cannot exist.

I was talking with a friend who has an MBA. He wasn't too keen on hearing that the efficient market hypothesis may not be entirely sound when I described Justin Fox's book, The Myth of the Rational Market, to him. I was approaching things from a scientific perspective; I know that models are simplifications. Even the best of them can be found inadequate. And this is what I want to focus on: although a model may not describe everything exactly, that's fine. It does not detract from its usefulness.

From Fox's book, and also William Poundstone's Fortune's Formula, the reader sees some difficulties with efficient market theory. For one, the theory was originally posited to explain why prices, in the very short term (daily), varied around some mean. Sure, over time the overall price increases, but at every iota of time one can see prices tick up and down by a very small fraction of the price. This is the random walk, first mathematically described in the doctoral thesis of Louis Bachelier. One bit of genius is that Holbrook Working pointed out that these random price fluctuations may in fact indicate that the market has worked properly and efficiently to set a proper price. Otherwise, we would see huge price movements reflecting the buying and selling of stock on new information. In other words, the price of a stock constitutes the mean around which we see a "natural" variation.
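
For what it's worth, the random-walk picture is easy to simulate; the Python sketch below uses an arbitrary starting price and tick size, purely for illustration, and is not meant to reproduce Bachelier's mathematics.

```python
# A toy random walk of the Bachelier sort: at each instant the price ticks up or
# down by a small random amount around its current level.  Starting price and
# tick size are arbitrary.
import random

def random_walk(start_price, steps, tick=0.05):
    prices = [start_price]
    for _ in range(steps):
        prices.append(prices[-1] + random.choice([-tick, +tick]))
    return prices

walk = random_walk(start_price=100.0, steps=1000)
print(walk[0], walk[-1])   # the end point wanders only a little from the start;
                           # the path is all small, directionless fluctuations
```

If the market is doing its job, the level the ticks wander around is the "proper" price; the ticks themselves carry no exploitable pattern.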

And from that, much followed. Both Poundstone and Fox talk at length about pricing differences. In some sense, market efficiency, although implying both speed and precision, did not address the rate of information propagation. Eugene Fama suggested that information spread in a market is near instantaneous (as in, all prices are set and reset constantly at the proper level). In the theory's original form, I think this instantaneous rate resulted from a mathematical trick. Bachelier was able to "forecast" into the very near future, showing that the stock price can tick up or down. His work was extended to many instants by a brilliant assumption: if stock transactions can be updated instantly and without cost, one can build up a trajectory of many near-instants by constantly updating one's portfolio. The very near future can now be any arbitrary future moment.

Again, my point here is not that efficient market theory is wrong and must be discarded. I was fascinated by the description of counterexamples and the possibility that some of the assumptions used to build up the mathematical framework may need revision.

My boss and I were talking about the direction of our research. He thought that models of cell signaling pathways were lacking in rigor (by which he means a mathematical grounding). Having a physics background, he scoffed at the idea that biology is a hard science, because biological models are mostly empirical and do not "fall out" from first principles (i.e. from assumptions, postulates, and deductive reasoning). I, being the biologist, tried to defend my field. Biology, like any sort of system, is complex. There are some simple ideas that can help explain a lot (for instance, evolution and genetic heritability). The concept of the action potential in neurons can in fact be derived from physical principles (it is simply the movement of ions down an electrochemical gradient, which can be derived from thermodynamics); indeed, neurons can be modeled as a set of circuits. For example, one recent bit of work my supervisor and I published, using UV absorption as a way to measure nucleic acid and protein mass in cells, is based on simple physical properties (the different intrinsic absorption of light by the two molecules), which can be described by elementary physical and mathematical models.
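
To illustrate the kind of elementary model I mean – and this is only a sketch of the general principle, not the method from our paper – absorbance at two wavelengths gives two linear (Beer-Lambert) equations in the two unknown concentrations, since nucleic acid absorbs most strongly near 260 nm and protein near 280 nm. The extinction coefficients below are placeholders.

```python
# Two-wavelength unmixing via Beer-Lambert (A = extinction * concentration * path).
# The extinction coefficients are hypothetical placeholders, for illustration only.
import numpy as np

# rows: wavelengths (260 nm, 280 nm); columns: species (nucleic acid, protein)
extinction = np.array([
    [20.0, 0.6],   # hypothetical absorptivities at 260 nm
    [10.0, 1.0],   # hypothetical absorptivities at 280 nm
])
path_length_cm = 1.0

measured_absorbance = np.array([0.45, 0.25])  # example A260, A280 readings

# Solve (extinction * path) @ concentrations = absorbance
concentrations = np.linalg.solve(extinction * path_length_cm, measured_absorbance)
print("nucleic acid, protein (arbitrary concentration units):", concentrations)
```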

However, the description of how networks of neurons work, how such physical phenomena give rise to animal behavior and thought, and in turn how individuals act in concert with others to form a societal organism, is wildly complex. Further, there can be multiple principles at work, none of which are necessarily derivable from a common set of ur-assumptions. Newton's laws of motion can be derived from Einstein's theory of relativity, but there is no comparable common root linking basic ideas about human behavior (such as those behind pricing correctness in market efficiency and game theory), ideas about how humans interact (as described by network theory), and the fact that something as seemingly nebulous and human-dependent as "information" can be described by Boolean algebra and a mathematical treatment of circuits.

I should be clear: I am simply noting that some fields are closer to being modeled by precise, mathematical rules than others. Reductionism works; even the process of trying to identify key features underlying natural phenomena is helpful. However, one should also keep in mind that wildly successful theories may change, as we obtain better tools and make more accurate measurements.

I think an important point that Fox makes, then, is that we do have a number of observations suggesting that markets are not entirely efficient. For example, there is price momentum (a tendency for stock prices to continue moving in a particular direction), there is a significant amount of evidence that humans do not always act rationally (they tend to overvalue their own property and discount things they do not own), and there are clear signs that herd mentality sometimes takes over (a la price momentum or bubbles). Fox also points out something rather important: even as economists identify inefficiencies in the market, the inefficiencies seem to disappear once they become known. Part of it could be statistical quirks: by chance, one might expect to see patterns in the noise of large, complex systems. Another part is that, once known, the information is in fact integrated into future stock prices. This places economists in a bind: if the effect is false, one might be justified in ignoring it as noise or as a mirage of improper statistical analysis. However, if the effect is real, then it clearly suggests that the appearance of price incorrectness reflects market inefficiency. At the same time, the effect's disappearance also suggests that, once known, the market price corrected itself, just as efficient market theorists predicted.

As one can imagine, there are opposing camps of thought.

Further compounding the difficulty is the fact that it has been hard to integrate non-rational agents into traditional market theory. Current theory treats pricing as an equilibrium, consistent with the idea that information and rational agents pull and push prices this way and that; ultimately the disturbances are minor, and the overall price of the stock is in fact the proper, true price. Huge disturbances are interpreted as movements of the equilibrium point, but they must arise from external forces (that is, from effects not modeled within the efficient market framework – which leads to an inelegance of the sort mathematicians and physicists dislike). As the number of contingencies increases, one might as well resort to a statistically based, empirical model. Which brings us back to the original point of how well we understand the phenomenon.

On the other hand, no one who wishes to modify efficient market theory has successfully integrated the idea of the irrational agent. The appeal of such a model is that pricing changes – correct or incorrect – would be based on the actions of "irrational" agents. We would no longer be looking at an assumption of a correct price and deviations from it. We could, presumably, derive the current price by adding into the model the systematic errors agents make. Even huge deviations from proper prices (i.e. bubbles, undervaluations, and perhaps even the rate of information incorporation) would then be predicted by the model. However, such a model remains just out of reach. In other words, efficient market opponents do not yet have a complete and consistent system to replace and improve on the existing one. By default, the efficient market is what continues to be taught in business schools.

My interest in the Fox and Poundstone books is precisely in how difficult it is to incorporate new ideas when an existing one is in place. It is this intellectual inertia that gives rise to the concept of memes as ideas that take on a life of their own (in that ideas exist for their own reproductive sake) and to the Kuhnian paradigm shifts that have to occur in science. My specific interest has always been in how non-scientists deal with new ideas. If scientists themselves set up opposing camps, what must laymen be doing when faced with something they do not understand?

 

My life (I am an American male) does not revolve around sports. I do follow the Boston Bruins, but they are never must-see TV for me – even when they are in the playoffs. Sorry, I prefer reading, making sure my house is in order, and spending time with family and friends.

My interest in sports runs along mathematical lines; I am more interested in statistical analysis and model building than in the games themselves (especially for baseball). That and drinking beer while watching games.

So it is strange that I read just about everything Joe Posnanski writes. He writes about baseball, and without exception I  read his pieces about living and long-dead ball players whom I have (mostly) never seen.

This piece is particularly good. The way I would describe Posnanski is that he is about nuance. Nick Hornby isn't the first to notice that males tend to love ranking things; Bill Simmons and Chuck Klosterman have made similar points in their own entertaining ways. Posnanski, in addition to offering his own rankings, offers a number of observations that temper the rankings. In other words, the separation between two players may not be as large as the gulf implied by, for example, a "first" and "second" ranking. This is interesting and somewhat in contrast to the approach of most sports columnists.

At any rate, here's the nuance: Ryan and Suzuki are the best at what they do, but they don't rank among the best baseball players ever. I won't repeat Posnanski's arguments here, but he's not out to trash either guy. He's simply trying to work through and present an informed opinion and analysis. The pair can be considered exceptional players along one dimension: Ryan struck out more batters than anyone; Suzuki is a hit machine. But because of other inefficiencies in their games, they do not help their teams as much as one might think (in terms of preventing runs for Ryan and driving the offense for Suzuki).

The greater point is this: I think Posnanski is among the best writers in explaining numbers to an audience. In all seriousness, I want that talent in describing science to non-scientists. When Posnanski gets rolling on presenting statistical arguments for baseball excellence, I applaud the effort because he is able to note all the ways in which these “binary answers” have many shades of gray. When Posnanski talks numbers, I don’t see a difference between him and a scientist who is trying to explain ideas to laymen. And of course his writing talent makes you want more. Or at least it makes me want to read more.

Charles Stross is one of my favorite science fiction authors. He also writes his own blog, and it's fascinating. Not only does he blog about IT, current events, and gadgets (here and here), occasionally he'll write about the business of publishing and the creative process. He'll also toss out bits of research he's been doing and package them into really interesting thought experiments. Currently, he's writing a series on "Books I Will Not Write". I would like to bring attention to two of these:

The Crimson Permanent Assurance in Space and Floating in the Sea of Time. The setups for these ideas are soooooo interesting. I wish he would write them.

The neatest thing is that Stross is never a one-trick pony. He has ideas – like pirates who essentially turn into hedge-fund-managers-cum-Blackwater/Brinks-security-for-hire types – which make sense even beyond the trappings of the gee-whiz technology he presents. While he has big ideas, he builds from the bottom up. He doesn't start off with a perfect society (a la Star Trek). He thinks about humans first: how they are a bunch of selfish little shits, how there are going to be new technologies, and how the assholes will find ways to exploit lesser assholes with those new technologies. A very human story, and so, generally, his hard sci-fi is recognizable under the trappings of the genre. His characters have regular motives and feelings, even if they are teleporting across universes or uploading their minds into supercomputers.

And his rejected ideas are so thought-provoking! I hope somebody picks up on these ideas, although I would much prefer Stross to write these books.

 

 

While I thought the movie Sideways was funny enough, it wasn't a movie I would enjoy rewatching; I detested Paul Giamatti's and Thomas Haden Church's characters, Miles and Jack, respectively. The one standout scene, for me, isn't when Miles talks about how much he likes pinot noir – which is just a self-pitying comparison between himself and the grape. (That is, the care and cultivation needed for that grape to reach its full potential as a wine is the same care that a woman would need to give to him. Really. The effort expended on the grape is less aggravating, since the grape isn't boorish and doesn't talk back. Why should anyone, even his mother, spend that much attention on him?)

No, the scene that made me feel some sympathy toward Miles is his guzzling his prized bottle of wine (a 1961 Château Cheval Blanc) from a styrofoam cup in a fast food restaurant, after he finds out his ex-wife is pregnant. I believe that was the occasion he had been saving the bottle for: the two of them, still married, finding out they were expecting (or, nowadays, probably waiting until she gave birth and finished breastfeeding). In a nutshell, one can see that maybe the wife didn't share all his interests, and that he had spent far too much time indulging his own passion while not sparing any for her. It is sad, and seems a common affliction.

I am not the first to point out the number of books and movies that focus on unattractive, compulsive, abusive, jerky men who luck into wonderful relationships with walking sex fantasies with hearts of gold and infinite patience. The writers are writing about their own desires, and these writers are all white, middle-aged men who, if we assume that these movies and books express their ideas about relationships, do not work at building friendships. These men sound like assholes.

And so we finally come to Juliet, Naked, the story of Annie, Duncan, and Tucker. Annie is an intelligent woman stuck in a dead-end relationship with Duncan. Duncan is obsessed with a musician, Tucker, who disappeared during a tour and was never seen or heard from again. However, a core of diehard fans keeps paying tribute to Tucker in the form of a website and forum, trading bootlegs and speculating about why Tucker turned away from the life of a rock star. They share stories about pilgrimages to locations deemed an important part of Tucker's life.

As one imagines, the problem is not merely the compulsive behavior of these men – the microscopic examination of every shred of public evidence of Tucker's life. A bigger problem is how these men feel Tucker owes them access to his life, to the point where fans try to intrude on it.

However, I think only a small part of the novel deals with fan behavior; Hornby is gracious enough to recognize that some fans look weird and obsessive because most other people make them out to be weird. It was expected, until the advent of web-based tools that let artists easily engage in self-promotion, that artists keep their distance from their audience. I would suppose that artists would prefer that fans not talk back, and certainly not break into the homes of people connected to them.

At any rate, spending a vacation touring suburbs and bathrooms in the Midwest suggests that Duncan is a pathetic, infantile man who cannot move on. Of course, it also says something about Annie, and her situation is even worse because she won't, or can't, leave Duncan, despite his problems being abundantly clear to her.

Things change in Duncan's and Annie's lives when she opens mail intended for Duncan. The package contains a disc of unreleased material: essentially a draft of Juliet, a record Tucker had released, dubbed Juliet, Naked. Annie listens to it and concludes that the produced version is much better. This differs from Duncan's view, and eventually their relationship breaks under the strain. Annie also writes out her thoughts about Naked on the Tucker fan site, managing to catch Tucker's attention.

There are some interesting ideas here, mostly about how much easier it is to cultivate a relationship with someone who doesn't reciprocate (in this case, Tucker). By traveling the same tour path as Tucker, by interpreting his music, and by doing everything short of treating Tucker like an actual person, Duncan and his compatriots can indulge in pop-psychology analysis of Tucker, of his drive and motivation. In short, fans like Duncan can project their own desires onto Tucker.

From my reading, Hornby's books tend to examine the many different ways men engage in these one-sided relationships. It is much easier being a fan of a soccer team, ranking musicians, and generally being self-absorbed. Again, the idea of being a fan is to establish one's identity relative to the object of one's obsession. It isn't so much admiration as a mirror. The men interpret the art or the game or the players as they like (and it is their right), but it never seems as if they consider that the artist or the players may have their own views.

A major part of the work is about interpretation and how much control an author has over the impact of his works. There is one good bit with Duncan towards the end of the novel. He meets Tucker and sees that Tucker isn't the person Duncan had in mind. We also find out that these obsessives have staked a lot of their worship on the wrong information. They live on rumors about Tucker's underground gigs, his supposed influence on the production or writing of songs, and sightings of a person who isn't even Tucker. So the real Tucker doesn't conform to Duncan's idea of the man. We also find out that Tucker himself feels he can no longer recognize the person he was.

Tucker, it turns out, left music because he felt that his anguish over losing the love of Juliet, made tangible by his writing the songs on the record Juliet, was fake. He realized, while on tour, that he might actually love his new infant daughter more, and that this relationship may be more meaningful than young love. Perversely, he felt this realization distanced him from his own music, because whatever he wrote from then on would be fiction. The music would no longer be authentic.

Duncan's moment comes after this revelation. He argues the less creepy and more meaningful point that while an artist has his own motivations in creating a work of art, he cannot control how others perceive the piece or what meanings they take from it. Art inspires, but it is a mistake to think there is an exact science to what feelings others take away. The important thing is that people take something from the piece, even if the artist loses touch with his own work.

This is the very argument I would use to justify my writing these musings about books I read. I was wrong to describe this blog as a series of book reviews. It is a collection of thoughts about books; ideally, I connect these ideas to themes from other books I have read and, I hope, to relatively novel thoughts of my own based on my experience.

I would take the Duncan argument a step further; an artist shouldn’t feel inauthentic if his motivations and passions change. The sculpture, book, song, painting, photograph, or whatever, probably came from an authentic place, at the time the piece was created. If life happens and the artist feels differently later, what’s wrong with that? Why can’t he grow or regress? Why not author something new?

One final note: I suppose the Duncan argument runs a bit close to the post-modernist's "textual analysis" justification. Everything is open for debate; meaning is in the eye of the beholder; there is no primary interpretation – thus ignoring the author's own ideas, as if he had no idea why he created the work of art. This is a philosophical difference I have not reconciled myself with. I think that art should have some meaning or motivation. This comes from my finding art an absolute waste of time when the artist has no point of view. If his point of view is that he wants to say everything about everything, where symbols mean all things to all people, he says nothing at all. What I want from art is a particular thought or feeling, something that convinces me that the author/painter/musician had something specific in mind. I don't want to go to an art show to look into a mirror, where I leave with only what I brought. I want to hear what the artist has to say, and think about it, and agree or disagree with it. With that said, of course individual interpretation has value; it's just that I prefer it when the artist treats me with respect and has enough confidence in his own ideas to be specific. The catch is that this requires the artist to use his vernacular to establish a framework for interpretation. That is, there is a so-called primary interpretation – a true meaning – even if only at a very skeletal level. Isn't that the point of language and, more generally, communication? Why write, speak, or draw if the audience simply edits things on the fly to fit their own preconceptions? I do feel that interpretation is and should be constrained, and I do not respect artists who abdicate this basic responsibility.

I started reading Daniel Wolff's How Lincoln Learned to Read because it is about education. My wife and I have two boys, a five-year-old and a 16-month-old. I've been thinking a great deal about their education. My wife and I are both scientists. We feel that while this line of work is intellectually rewarding, the road is hard; to reach the top, one has to make sacrifices. My wife and I are more interested in making sure that the boys grow up to make a comfortable living.

I would consider myself a lifelong student. I spent 12 years in primary and secondary school, then four and a half years of collegiate learning (the extra semester came because I spent a year-long exchange in Germany and decided to take more courses to receive at least an International Studies minor). [This was followed] by 10 years of doctoral and post-doctoral training.

I have had the opportunity to learn in many settings. The modes of learning included both defined coursework and independent study. I did fine with both, although I think I had the advantage of being extremely interested in just about everything – so much so that on occasion I welcomed the structure imposed by instructors and their syllabi, which plotted out a course of study I might not have bothered with on my own.

My graduate and post-doctoral work focused on olfaction, specifically the physiology of olfaction as assessed using optical indicators of neural activity. Basically, I recorded responses in the brain's "smell processing" pathways.

I didn't have any particular affinity for the sense of smell. I applied to a neuroscience graduate program because I wanted to understand how the brain worked. It mattered not one whit whether the system was smell, taste, hearing, vision, or touch. I had a general question that I wanted to answer, and the specifics did not matter to me.

More recently, and this has some bearing on the book by Daniel Wolff, I had to find a second post-doctoral position because I decided that my skills were not marketable. I guess you can argue that I failed to convince human resources that I could be productive for their companies. However, it is also the case that biotechnology companies are not looking for a neurophysiologist who records in vivo neural responses using conventional microscopy. Instead, they look for electrophysiologists, scientists who do imaging in cell cultures, or those who do deep-tissue scanning using fMRI, CT, or PET.

Regardless, I couldn't make myself fit into their bucket, and they weren't willing to accommodate someone with my skill set who could possibly bring something unique to their company.

The point is that I consider myself a professional student. Since I started my new post-doc in flow cytometry, I am learning new techniques, a new system, and getting to know the intimate lives of single cells. The strategy is simple: my boss has certain ideas he wants implemented. He left it to me to work out the specifics. So I am currently identifying attributes of my technique (UV spectroscopy), understanding the life cycle of a cell, identifying subcellular organelles, and learning how to make them stick to slides so I can look at them under a microscope. Most of these things I can find in published literature. The key thing is that I look for ways to combine my [new] technique ([involving] UV microscopy) and established ways of looking at cells.

In my previous post-doc, I had to determine the best way to preserve a “cranial window” through which I can look at brain responses, in the same animal, over a period of months. I adapted and extended previous work, and brought some newer techniques, to help me accomplish this goal. I also established a method to mimic natural breathing patterns in anesthetized mice, and so I had to learn LabView and MatLab to write software to control various devices and to analyze data.

So yes, I have some experience with learning new things.

And for the life of me I cannot think of a so-called best way for my boys to learn.

Thus I became interested in books like How Lincoln Learned to Read. It seems, from my reading, that the book confirmed a few things about how kids learn – namely, that it may not be clear until much later what exactly kids learned in school. Daniel Wolff took a snapshot of how 12 famous Americans were educated, selecting one child from each era and writing about the formative years of each. Of course, these people may have become famous despite, and not because of, their education. One could tell that all of the young people were driven – driven to achieve greatness, or driven to just do whatever it is they became famous for.

In a way, formal education was not all that important for each child. Some did in fact thirst for knowledge for its own sake. Wolff shaped his descriptions in terms of both pragmatic outcome and post-hoc analysis: we know where they ended up, so it becomes a way for us to interpret and identify the steps that led the children to their destinies. What he also noted, however, was that the kids had the balls to chase the learning they needed. There is certainly an individualist streak, strongly evident in Lincoln's and [Henry] Ford's backgrounds; [both boys avoided farm work and] both were considered lazy for their time. [This was true given the focus of their times on the importance of farm work.] [Lincoln and Ford] were not layabouts: Lincoln read, and Ford tinkered. They learned what they wanted to know.

And even when the children were forced into limited opportunities for learning – as with Thomectony, Abigail Adams, and Sojourner Truth – they did not let that education define them. They, as they should, got what they needed from books or their teachers, or even from their everyday observations. They did not let the limits of their so-called education prevent them from achieving their ends.

Of all the characters, I felt the strongest affinity for W.E.B. Du Bois, especially the view he took, in his younger days, of education as uplift. A learned man is a rational man. The emphasis on books and abstract learning is the hallmark of civilization. Through reason and the sheer force of intellect and action, others cannot help but look past surface appearance to admire the person beneath. Education, simply, gives men and women choices, so they do not have to sell their work cheaply.

But [the mass education] system seems broken to me. I have thought for a long time now – and I hope this doesn't turn readers off – that education, and especially higher education, is not suited for everyone. I don't think I am being elitist; I just happen to think that higher education is about as useful to [most] people as knowing carpentry is to a plumber. It might be [handy] to know, but one certainly doesn't require it to succeed in the job at hand. I will elaborate some more.

I've concluded that a university education is something that prepares students… to do research. For scientific research, the goal is to identify mechanisms underlying observed phenomena. The use may in fact not be obvious – this is more of the knowledge-for-the-sake-of-knowledge [mode]. To me, it seems a destructive idea to [use] a university degree as a form of uplift. It cheapens education to the point that one thinks a degree is a commodity to be bought. This could not be further from the truth: to pay for an education should mean that one has decided that the resources of a university help expand one's knowledge, whether by working with specific machines and tools or with specific professors. This is active knowledge-seeking.

The alternative is to think of college as a paid-for experience, at the end of which one is conferred a piece of paper that acts as a passport to a job. In this mode, I can see how students and parents may impinge on faculty, outraged that the instructor dared to fail the lazy student. [I detest students who blame their own lack of academic success on a teacher who did not engage their interest], when in fact all they have paid for is access. The rest is up to the student to provide.

And so I was left with this: the subjects Wolff discusses had the common attribute of being presenters. These were all men and women who grew famous as politicians, orators, writers, or entrepreneurs. In a sense, each of these people excelled at internalizing and then espousing what they knew. Rachel Carson wrote Romantic-era-style visions of her bucolic home, an ideal that was nowhere near the [reality] of her growing up near a glue factory. Elvis Presley spent time in school, sure, but he certainly did not shy away from joining quartets or cutting demos. Ben Franklin, Helen Keller, and Andrew Jackson learned a fair bit about finding popular topics to write or talk about.

This theme, whether Wolff intended it or not, has dovetailed with my own thinking about the type of education I want my sons to have. I think, most of all, they need to read proficiently. Not in the cheap post-modern way, where all language simply reflects one’s preconceived notions and thoughts. No, I mean to read, to understand, and to internalize the writer’s point of view – to engage him honestly. The second thing would be to then tell an audience what they understood, and how they would extend or refute the idea. I think this second point needs to be emphasized explicitly. By telling, one learns.

It is a bit of a cliché, in graduate school, that the best way to learn is to be forced to teach someone else. Only [by] actually thinking about the audience will one truly begin to understand. I have found out the hard way that this is true. One should aspire not merely to understand something, but to be able to help a second person understand what he [now professes] to know. I realized I have been implementing this in a soft way: I keep asking my older son questions, helping him develop the details of his stories. I am always amazed and gratified when he can put together sentences with subclauses, laying out a proper sequence of events.

Part of it is just joy in hearing him talk, in seeing him learn. [I] am glad to see that there are precedents for such [a] type of education.

Update 4/9/2010: Ick. I know blog posts are meant to be fast first drafts, but I just can’t stand seeing obvious mistakes and not correcting them. I placed the edits in brackets. (I’m not sure if the strikethroughs I see on other blogs are real corrections or just another tool for conveying snide comments – I think usually the latter. So I am going with brackets.)

First, a digression (and I haven’t even gotten to the official topic sentence for the post that pertains to the title!): I am currently reading The Numbers Game by Michael Blastland and Andrew Dilnot. The book is in the mode of what I’d want to write myself: a guide for helping non-mathematicians, non-economists, and non-scientists (and perhaps those very people) deal with numbers. I’ve written on this blog (and commented a number of times on Dave Berri’s Wages of Wins blog) about how sports fans and journalists misunderstand and misinterpret sports productivity measures. The greater theme is that I think there is a lack of perspective in how laymen incorporate scientific information into their worldviews. The book I’d write would deal with this topic, and it is the one that Blastland and Dilnot wrote.

A lot of the book presents numbers within a context. Actually, Blastland and Dilnot exhort readers to build the proper context around numbers in order to make them more manageable. This is especially salient in the opening chapters about large and small numbers. In some sense, a number like 800,000,000 might not be so large if it represents the amount overspent by Britain’s National Health Service – assuming the budget for that agency runs in the $80 billion range. As another example, the well-memed “six degrees of separation” might imply that members of a peer group are only about five intermediaries away from anyone, but that number may as well be infinite if you are linked to the President of the United States by your knowing a neighbor who knows a councilman who knows the mayor who knows a state rep who knows the senator who knows the President. The impact of your linkage to the President, at a personal level, is clearly small.
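To make the first example concrete, here is a minimal sketch in plain Python. The figures are just the rough ones quoted above (an 800 million overspend against a budget assumed to be about $80 billion), not audited values:

```python
# A minimal sketch of "putting a number in context". The figures are the
# rough ones from the post above, not audited values.
overspend = 800_000_000
budget = 80_000_000_000  # assumed ~$80 billion total budget

share = overspend / budget
print(f"The overspend is {share:.1%} of the budget")
# prints: The overspend is 1.0% of the budget
```

Seen on its own, 800,000,000 is a terrifying number; seen as one percent of the budget it has to be judged against, it is merely an overspend.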

At any rate, there is another chapter on “chance.” The example that Blastland and Dilnot use is that of cancer clusters. Most humans have some innate sense of how “random” ought to look. If one throws rice up in the air and lets it land on flat ground, one would expect some parts of the ground to contain more rice than others. This is value neutral, and no one is bothered by the appearance of such random clusters; we do not suspect anything sinister behind them. But replace rice with cancer incidence, and the interpretation changes. No longer do we accept that a cluster might just be the chance co-occurrence of many events that results in a higher number of cancer patients. There must be some environmental cause that led to the cancer cases. Never mind that the number of cases may not take into account length of habitation (what if all the cancer cases were in people who had recently moved into the town? The case for environmental factors falls apart), the types of cancers, or the genetic background of the patients.
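To see how readily chance alone produces “clusters,” here is a small sketch of the rice-throwing thought experiment in plain Python. The grid size and point count are arbitrary choices for illustration:

```python
import random
from collections import Counter

# Scatter points uniformly at random over a 10 x 10 grid of equal cells
# and see how uneven the per-cell counts end up.
random.seed(1)

GRID = 10
POINTS = 200

counts = Counter()
for _ in range(POINTS):
    cell = (random.randrange(GRID), random.randrange(GRID))
    counts[cell] += 1

expected = POINTS / GRID ** 2
busiest_cell, busiest_count = counts.most_common(1)[0]
print(f"Expected points per cell: {expected:.1f}")
print(f"Busiest cell {busiest_cell} holds {busiest_count} points")
```

Even though every grain is placed independently and uniformly, the busiest cell typically holds several times the average count – a “cluster” with no cause behind it.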

The specific example happens to involve a cell phone mast that was built in Wishaw, England. Citizens in the area, upon finding out they were in a “cancer cluster”, were outraged and angry enough to knock the mast down. Obviously, the citizens keyed in on the mast as the cause of the cancer. Of course, the personal involvement of the townspeople tends to skew their perception, and a dispassionate observer might be needed to ask simply, “If the cell phone mast was responsible for cancer in this town, shouldn’t all cell phone masts be at the center of cancer clusters?”

The reaction of the townspeople to the Wishaw cancer cases is illustrative of the same symptoms shown, in a less consequential way, by sports fans and journalists who base their conclusions about athletic productivity on so-called observational “evidence” rather than on controlled, rigorous studies. The dispassionate observer who asks whether all cell phone towers should be at the center of clusters would try to overlay the distribution of towers on a map of cancer cases. He might slice the cancer cases further, trying to isolate cancers that have a higher likelihood of being caused by electromagnetic fields. He tries to address the hypothesis that cell phone towers cause cancer. The Wishaw denizens, in contrast, didn’t bother to look past the idea that the tower caused their cancer. This highlights the difference between the so-called statistical approach and the eyeball approach to evaluating athletic performance: the first method is valid for an entire population of athletes, while the second may or may not be valid even for the few athletes used to make the observation. A huge part of science is making sure that the metrics being used are actual, useful indicators of the observed system.
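To make the dispassionate observer’s reasoning concrete, here is a hedged sketch in plain Python using entirely hypothetical town-level numbers: compare the cancer rate in towns with a mast to the rate in towns without one, then shuffle the mast labels many times to see how often chance alone produces a gap at least as large (a simple permutation test).

```python
import random

random.seed(2)

# Entirely hypothetical data: (has_mast, cancer_cases, population) per town.
towns = [
    (True, 12, 8_000), (True, 9, 7_500), (True, 15, 9_000),
    (False, 10, 8_200), (False, 14, 9_500), (False, 8, 7_000),
    (False, 11, 8_800), (False, 13, 9_100),
]

def rate_difference(data):
    """Cancer rate per 1,000 people in mast towns minus the rate elsewhere."""
    def rate(group):
        return 1000 * sum(c for c, _ in group) / sum(p for _, p in group)
    mast = [(c, p) for has, c, p in data if has]
    other = [(c, p) for has, c, p in data if not has]
    return rate(mast) - rate(other)

observed = rate_difference(towns)

# Permutation test: shuffle the mast labels and count how often chance alone
# yields a rate difference at least as large as the observed one.
labels = [has for has, _, _ in towns]
TRIALS = 10_000
extreme = 0
for _ in range(TRIALS):
    random.shuffle(labels)
    shuffled = [(lab, c, p) for lab, (_, c, p) in zip(labels, towns)]
    if rate_difference(shuffled) >= observed:
        extreme += 1

print(f"Observed difference: {observed:.2f} extra cases per 1,000 people")
print(f"Fraction of random shuffles at least that extreme: {extreme / TRIALS:.2f}")
```

The specific numbers are made up; the point is the shape of the reasoning – state the hypothesis, pick a metric, and then ask whether chance alone could plausibly produce what was observed.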

This brings me to the Science review of The Trauma Myth. A key part of why humans go wacky over cancer clusters and not rice clusters is that cancers are more personal; it becomes more difficult for humans to let go. Case in point: one of the criticisms leveled at investigators of the Wishaw cancer cluster was that they took away hope. I suppose what the critics meant was that the certainty of cause and effect was lost. The Trauma Myth sounds like an interesting book. It takes a view contrary to “conventional wisdom”. Its author, Susan Clancy, provides some evidence to suggest that young victims, at the time of their abuse by pedophiles, might not regard the episode as traumatic, because they did not have enough experience to classify it as such. Of course the children were discomforted and hurt, but they did not quite understand what exactly was wrong. The problem wasn’t in how the children felt; the problem would be if how they felt interfered with their coming forward to report the crime.

Clancy’s book then aims to address how best to guide abused children to come forward, report the crime, and receive the help they need. Apparently, conventional wisdom holds that there is a single reaction to sexual abuse: trauma. Clancy might have oversold how much this assumption affected the number of children who came forward, but the reviewer notes that it is entirely unfair to portray Clancy as somehow sympathetic to pedophiles.

And yet that is what Clancy is accused of. That laymen cannot seem to take such criticism of conventional wisdom as constructive and useful is problematic, but the problem is not limited to laymen: even her colleagues have assumed the worst about her work.

It is cynical, but I am glad my research is not in such an emotionally charged field. Of course, I have seen strong personalities argue over arcane points, and rather vehemently, but in no case could any researcher be accused of abetting pedophiles and murderers.

The obvious lesson here is that science gives voice to even the wildest of ideas. The objectivity that science enjoys is based exclusively on the gathering of evidence. That’s it. The framing of the question, the choice of methods, and the drawing of conclusions are all subject to biases and politics. However, all scientists expect that once a method is selected, the data were in fact obtained exactly as stated and in the most complete and rigorous way possible. This is what allows one scientist to look at another’s work and criticize it. The reviewer of The Trauma Myth noted that Clancy did not dwell on this idea, which is a shame. Intellectual honesty can often be at odds with political expediency or comfort. Regardless of whether Clancy is correct, laymen and her colleagues would do well to focus on those subjects, however many or few, who had not reported these sexual assaults.

The reviewer noted that the main point of The Trauma Myth is that sex crimes are underreported, possibly because children are confused by not having felt traumatized (and thus somehow conclude that they were not victims). I hope for their sakes that Clancy, her colleagues, and her current opponents can work to ensure that all victims of child abuse can come forward and obtain justice against the perpetrators.

Margret Guthrie of The Scientist gave a favorable review to Newton and the Counterfeiter. It sounds like a wonderful little vignette of the great mathe-magician’s life.

The philosopher eventually assembled such a compelling case against Chaloner — from testimony by witnesses, informants, and even the wives and mistresses of the criminal’s associates — that he was able to bring him up on charges of counterfeiting the King’s coin, a treasonable offence, in 1698.

On Thomas Levenson’s writing, she notes

[His] pace and timing rival those of the best crime story authors. He has written a real page-turner, perfect for a long afternoon’s engagement with the hammock or whiling away a long airport layover.

The journal Nature has published a review of Italo Calvino’s Cosmicomics. The book is a re-release, and Alan Lightman recommends it. It is a set of short stories with cosmological themes, whimsical enough that in one story a mollusc imagines it has a mustache. The anthology compares favorably with Primo Levi’s The Periodic Table. I am now interested in reading Cosmicomics. I have yet to read Levi’s book, although it is collecting dust on my bookshelf.
