
In a previous post, I wrote about financial models. The point is that a scientific model generally simplifies. While simplification gives models their power, one must also take care to assess whether adapting or transplanting a model to a new field is valid. Hence some of the disconnects between economic models and the financial tools based on them.

Here’s another illustration. I was talking with my friend about his thesis. R. is interested in building a model of the olfactory bulb. This structure is interesting; it is anatomically well defined into three layers. The top layer contains neuropil structures called glomeruli. Glomeruli contain the axon projections from the primary sensory neurons and the dendritic branches of neurons in the bulb. Both these “main” neurons and so-called interneurons form connections within this layer. Since this is where raw signals from the nose arrive, it is called the input layer. Together, these cells form a network that reshapes the responses into new neural activity patterns, which are relayed to deeper olfactory processing areas of the brain.

The middle layer contains the cell bodies of the olfactory bulb’s output neurons. These cells, called mitral or tufted cells (usually termed M/T cells), each send a main dendrite up to a glomerulus. Each cell also sends secondary dendrites laterally, within the middle layer. The third layer, the granule cell layer, contains interneurons that form connections between the laterally spread dendrites in the middle layer. This forms a second point within the olfactory bulb where the raw input from the nose can be reshaped, repatterned, and repackaged for subsequent processing.

OK: my friend spoke of his troubles. He needed to represent the sensory neuron activity (from the nose), which differs for different smells. The features that seem important are when the activity begins (onset latency), how long it lasts (duration), and how intense it is (basically, how often the neuron “fires” an action potential). There are some other subtleties, naturally. Each smell evokes activity in a great many olfactory neurons, some of which respond with a different set of characteristics. The idea is to build the model so that the responses of the bulb’s output neurons can be calculated, given the set of parameters (i.e. the input activity patterns). Ultimately, these input patterns can be related to the actual behavior that helped shape them (such as the sniffing an animal might engage in as it homes in on some odor).
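To make those three parameters concrete, here is a toy sketch of how one might encode a single sensory neuron's response as a spike train. The function name, units, and the Poisson assumption are mine, not R.'s actual model:

```python
import random

def spike_train(onset_ms, duration_ms, rate_hz, seed=0):
    """Generate a Poisson spike train (spike times in ms) for one sensory
    neuron described by onset latency, response duration, and firing rate."""
    rng = random.Random(seed)
    spikes = []
    t = onset_ms
    end = onset_ms + duration_ms
    while True:
        # exponential inter-spike intervals give a Poisson process;
        # rate is converted from Hz (1/s) to events per ms
        t += rng.expovariate(rate_hz / 1000.0)
        if t >= end:
            break
        spikes.append(t)
    return spikes
```

An odor would then be a collection of such trains, one per responding neuron, each with its own latency, duration, and rate.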

His trouble came with integrating the Hodgkin-Huxley model of the action potential (which is basically derived from physical/thermodynamic first principles): determining how this model would generate action potential “spikes” in a way that mimics what the olfactory bulb neurons would do, given the pattern of input activity and the two layers of interneuronal influence within the bulb. It seemed like a set of nested differential equations; that is, the action potentials varied over time, with the degree of influence from the various interneurons also changing in time. That’s a real cluster-eff.
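For a sense of what even a single Hodgkin-Huxley neuron involves, here is a minimal forward-Euler sketch of the standard squid-axon equations (standard textbook parameters; this is just one isolated cell, with none of the interneuronal coupling that made R.'s problem hard):

```python
import math

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley equations
    (classic squid-axon parameters; units: mV, ms, uA/cm^2)."""
    # voltage-dependent channel rate functions (in 1/ms)
    a_m = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    b_m = 4.0 * math.exp(-(V + 65) / 18)
    a_h = 0.07 * math.exp(-(V + 65) / 20)
    b_h = 1 / (1 + math.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    b_n = 0.125 * math.exp(-(V + 65) / 80)
    # membrane currents: sodium, potassium, leak
    I_Na = 120.0 * m**3 * h * (V - 50.0)
    I_K = 36.0 * n**4 * (V + 77.0)
    I_L = 0.3 * (V + 54.387)
    dV = (I_ext - I_Na - I_K - I_L) / 1.0  # C_m = 1 uF/cm^2
    return (V + dt * dV,
            m + dt * (a_m * (1 - m) - b_m * m),
            h + dt * (a_h * (1 - h) - b_h * h),
            n + dt * (a_n * (1 - n) - b_n * n))

def count_spikes(I_ext, t_ms=100.0, dt=0.01):
    """Count upward zero-crossings of V for a constant input current."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting state
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        V, m, h, n = hh_step(V, m, h, n, I_ext, dt)
        if V > 0 and not above:
            spikes += 1
            above = True
        elif V < -30:
            above = False
    return spikes
```

The nested structure he described would couple many of these: the input currents to each cell would themselves be functions of the other cells' recent spiking, via the two interneuron layers.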

I thought I had a brilliant idea (and I still think it’s nice). I suggested that he could simply build a phase space to describe all the possible arrangements of his input patterns. Each point in this abstract descriptive space could be correlated to a set of output profiles (i.e. how the bulb neurons eventually respond). He could, in the end, identify the bulb response most likely to result from a given set of input patterns.
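The lookup idea can be sketched very simply. Assuming the input patterns are reduced to parameter tuples (latency, duration, rate), a descriptive model is just a table of observed input-output pairs plus a nearest-neighbor query; the table contents here are hypothetical:

```python
def nearest_response(lookup, query):
    """Given a table mapping input-parameter tuples (latency, duration, rate)
    to observed output profiles, return the profile whose parameters lie
    closest to the query point (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    key = min(lookup, key=lambda k: dist2(k, query))
    return lookup[key]

# hypothetical table: two measured input patterns and their outputs
table = {
    (10, 100, 20): "burst then silence",
    (50, 300, 5): "slow sustained firing",
}
```

Note what this buys and what it doesn't: interpolation between observed points, but no mechanism, which is exactly the weakness discussed next.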

The problem is that this is a descriptive model. The Hodgkin-Huxley model would have the advantage of being an actual, theoretical model. Once in place, it could literally predict, down to the number of spikes and when they fire, the output of the olfactory bulb.

So yes, that, in a nutshell, is the difference between data-mining and deriving from first principles. While one might be able to infer the same conclusions from a descriptive model, the theoretical model might be easier to work with when extending it slightly beyond what scientists have actually observed. As Justin Fox warns, such extensions can be perilous if one does not take care to check their validity.

I hadn’t quite planned on reading about the rise of mathematical financial theory and efficient market hypothesis,  but that is what I did.

As is my wont, I will digress and say that a prime theme of Moneyball is not that statistics are better than visual pattern recognition: it is that when markets exist, so do arbitrage opportunities. Lewis’s writing style is to group his subjects into opposing camps, to the detriment of his story. So the tension between scouts and stats geeks dominates the book. It’s a more interesting book, if you like people stories.

The Moneyball story isn’t simply that OBP is a good statistic; it was an undervalued metric, in the sense that players with high OBP weren’t paid highly compared to, say, batters with high home-run totals and batting averages. Whether Billy Beane was the first to “discover” OBP (he wasn’t) is incidental to the observation that no one was actively making use of that information. While GMs at the time were starting to identify other metrics, no one put their money where their mouths were: high-OBP players were not paid a premium. Because of that pricing difference (OBP contributes strongly to runs scored and thus wins, but GMs did not pay well for it), one might be able to buy OBP talent on the cheap. Now that arbitrage opportunity has disappeared, as teams with money (read: Red Sox and Yankees) have bid up the price, and high OBP commands a premium. Thus what worked before (a winning strategy on the cheap) no longer works now. It is a combination of fiscal constraints and incorrect pricing that gave Beane an edge. The fact that there was a better stat is beside the point; the fact that there was an arbitrage opportunity is absolutely the point.
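The "buy undervalued on-base skill" screen amounts to a one-line ranking. All the player names and numbers below are invented purely for illustration:

```python
def cheapest_on_base(players):
    """Rank players by on-base percentage per million dollars of salary:
    a crude screen for undervalued on-base skill."""
    return sorted(players, key=lambda p: p["obp"] / p["salary_m"], reverse=True)

# invented example roster: salary in millions, OBP as a fraction
roster = [
    {"name": "A", "obp": 0.360, "salary_m": 8.0},  # good OBP, priced high
    {"name": "B", "obp": 0.340, "salary_m": 1.0},  # nearly as good, cheap
    {"name": "C", "obp": 0.300, "salary_m": 2.0},
]
```

Player B tops this ranking: he delivers almost as much on-base skill as A at an eighth of the price, which is the arbitrage in miniature.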

This brings us to financial markets. If prices for a company’s stock were set by supply and demand, then rational buyers and sellers would essentially agree on a fair price: the seller has control of the product (i.e. the stock) and can name a price, while buyers need not purchase the stock if they find the deal poor. In other words, opposing rational interests create a balance between charging too much and too little.

Is this price the correct price?

From this simple question, much of mathematical economics was developed to help investors, fund managers, brokers and bankers identify the worth of the various products they buy and sell today. The most successful of these theories is that markets are efficient: prices in a financial market such as the New York Stock Exchange are not only the optimum price for sellers and buyers, but reflect a conclusion about the value of the product. That is, the price correctly values the company whose stock is being sold. There are different forms of this efficient market theory; they differ in which “information” is assumed to be accounted for in the price. The weak form suggests that stock prices reflect all the information contained in past prices. The semi-strong form holds that all publicly available information, old and new, is accounted for in the price of a stock in a large financial market. The strong form holds that even private (i.e. inside) information is accounted for in the price.

This might seem strange, given that a) we just saw a financial market meltdown because finance-sector personnel did not evaluate sub-prime mortgage bonds correctly, b) such bubbles existed before and even after we developed complicated performance metrics (Dutch tulip mania and the dot-com bubble), and c) there are enough shenanigans involving insider trading.

At any rate, one difference I will focus on is that economic scientists (i.e. economists, a breed we should separate from the operators in the financial market), like most scientists, seek general explanations. Because their tool of the trade is mathematics, economists prefer to derive their conclusions from first principles. Generally, statistical analysis is thought of as a way of either testing a theory or helping guide its development. Statistical models are empirical and ad hoc. They depend on the technique one uses and how one “scores” the observations, and they are, as a rule, not good at describing things that were unseen. A good theory is a framework for distilling some “essence,” a less complex principle that governs the events which led to the “observations.” Usually, the goal is to isolate the few variables that presumably give rise to a phenomenon. These distinctions are not so firm in practice, of course. Good observations are needed to provide theorists with curves to fit, mathematically. And even good theories fall apart (again, a theory is still based on observations; boundary conditions are a key area where theories fail).

What does all this have to do with financial markets and efficient markets? While we have evidence of inefficient markets, those events may have been rare or the result of a confluence of exacerbating factors. However, one thing scientists would pay heed to is that pricing differences were proven to exist, mathematically, derived from the same set of equations used to describe market efficiency. Joseph Stiglitz proved that there can’t be a strong form of an efficient stock market, since information gathering in fact adds value and has a cost. The summary of his conclusion: if markets were perfect and all agents had perfect information, then everyone would have to agree on the price. If that were true, there would be no trading (or rather, speculating), since no one would price things differently. When people are privy to different information, it may lead to pricing differences. That, in turn, must lead to arbitrage opportunities (no matter how small). Thus the “strong form” of market efficiency cannot exist.

I was talking with a friend who has an MBA. He wasn’t too keen on hearing that the efficient market hypothesis may not be entirely proper, when I described Justin Fox’s book, The Myth of the Rational Market, to him. I was approaching things from a scientific perspective; I know that models are simplifications. Even the best of them can be found inadequate. And this is what I want to focus on: although models may not describe everything exactly, that’s fine. It does not detract from their usefulness.

From Fox’s book, and also William Poundstone’s Fortune’s Formula, the reader sees some difficulties with the efficient market theory. For one, the theory was originally posited to explain why prices, in the very short term (daily), varied around some mean. Sure, over time the overall price increases, but at every iota of time one can see prices tick up and down by a very small fraction of the price. This is known as the random walk, first mathematically described in the doctoral thesis of Louis Bachelier. One bit of genius: Holbrook Working pointed out that these random price fluctuations may in fact indicate that the market has worked properly and efficiently to set a proper price. Otherwise, we would see huge price movements reflecting the buying and selling of stock due to new information. In other words, the price of a stock constitutes the mean around which we see a “natural” variation.
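The short-term picture is easy to simulate. Here is a coin-flip sketch of a Bachelier-style walk: at each instant the price ticks up or down by a tiny, equally likely amount, so it wanders around its starting value rather than jumping:

```python
import random

def random_walk_prices(p0=100.0, steps=1000, tick=0.01, seed=42):
    """Simulate a random walk: at each instant the price moves up or
    down by one small tick with equal probability."""
    rng = random.Random(seed)
    prices = [p0]
    for _ in range(steps):
        prices.append(prices[-1] + rng.choice((-tick, tick)))
    return prices
```

Working's point, in these terms: a path made of many small, unpredictable ticks around a mean is what an efficiently priced stock should look like, since any predictable drift would already have been traded away.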

And from that, much followed. Both Poundstone and Fox talk at length about pricing differences. In some sense, market efficiency, although implying both speed and precision, did not address the rate of information propagation. Eugene Fama suggested that information spread in a market is near instantaneous (as in, all prices are set and reset constantly at the proper level). In the theory’s original form, I think this instantaneous rate resulted from a mathematical trick. Bachelier was able to “forecast” into the near, near future, showing the stock price can tick up or down. His work was extended across many instants by a brilliant assumption: if stock transactions can be updated instantly and without cost, one can build up a trajectory of many near instants by constantly updating one’s portfolio. The near, near future can now be any arbitrary future moment.

Again, my point here is not that the efficient market theory is wrong and must be discarded. I was fascinated by the description of counterexamples and the possibility that some of the assumptions that helped build up the mathematical framework may need revision.

My boss and I were talking about the direction of our research. He thought that models of cell signaling pathways were lacking in rigor (by which he means a mathematical grounding). He, having a physics background, scoffed at the idea that biology is a hard science, because biological models are mostly empirical and do not “fall out” from first principles (i.e. from assumptions, postulates, and deductive reasoning). I, being the biologist, tried defending my field. Biology, like any sort of system, is complex. There are some simple ideas that can help explain a lot (for instance, evolution and genetic heritability). The concept of the action potential, in neurons, can in fact be derived from physical principles (it is simply the movement of ions down an electrochemical gradient, which can be derived from thermodynamics). In fact, neurons can be modeled as a set of circuits. For example, one recent bit of work my supervisor and I published, using UV absorption to measure nucleic acid and protein mass in cells, is based on simple physical properties (the different, intrinsic absorption of the two molecules to light), which can be described by elementary physical-mathematical models.
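The two-molecule absorption measurement reduces to a small linear-algebra problem: with Beer-Lambert behavior, absorbance at each wavelength is a weighted sum of the two concentrations, so measuring at two wavelengths gives a 2x2 system. The coefficient values below are placeholders, not our calibrated ones:

```python
def unmix_two_components(A260, A280, e_na=(1.0, 0.55), e_prot=(0.57, 1.0)):
    """Solve the Beer-Lambert mixture equations at two wavelengths:
        A260 = e_na[0]*c_na + e_prot[0]*c_prot
        A280 = e_na[1]*c_na + e_prot[1]*c_prot
    for the two concentrations, via Cramer's rule on the 2x2 system.
    The extinction coefficients here are illustrative placeholders."""
    det = e_na[0] * e_prot[1] - e_prot[0] * e_na[1]
    c_na = (A260 * e_prot[1] - e_prot[0] * A280) / det
    c_prot = (e_na[0] * A280 - A260 * e_na[1]) / det
    return c_na, c_prot
```

This is the sense in which the method rests on elementary physics: once the two molecules' intrinsic absorptions are known, recovering the amounts is pure algebra.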

However, the description of how networks of neurons may work, how such physical phenomena can give rise to animal behavior and thoughts, and in turn how individuals may act in concert with others and form a societal organism, is wildly complex. Further, there can be multiple principles at work, none of which are necessarily derivable or deducible from a common set of ur-assumptions. Newton’s laws of motion, for example, can be derived from Einstein’s theory of relativity. But basic ideas about human behavior (such as those leading to pricing correctness in market efficiency and game theory), about how humans may interact (as described by network theory), and about how something as seemingly nebulous and human-dependent as “information” can be described by Boolean algebra and a mathematical treatment of circuits, do not share any such common foundation.

I should be clear: I am simply noting that some fields are closer to being modeled by precise, mathematical rules than others. Reductionism works; even the process of trying to identify key features underlying natural phenomena is helpful. However, one should also keep in mind that wildly successful theories may change, as we obtain better tools and make more accurate measurements.

I think an important point Fox makes, then, is that we do have a number of observations suggesting that markets are not entirely efficient. For example, there is price momentum (a tendency for stock prices to continue moving in a particular direction); there is a significant amount of evidence suggesting that humans do not always act rationally (they tend to overvalue their own property but discount things they do not own); and there are clear signs that, sometimes, herd mentality results (a la price momentum or bubbles). Fox also points out something rather important: even as economists identify inefficiencies in the market, those inefficiencies seem to disappear once known. Part of this could be statistical quirks: by chance, one might expect to see patterns in the noise of large, complex systems. Another part is that, once known, the information is in fact integrated into future stock prices. This places economists in a bind. If the effect is false, one might be justified in ignoring it as noise or a mirage of improper statistical analysis. But if the effect is real, then it clearly suggests that the appearance of incorrect pricing reflects market inefficiency; at the same time, the effect’s disappearance suggests that, once known, the market price corrected, just as efficient market theorists predicted.
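Momentum, at least, has a simple statistical signature one could check for. A crude version (not the tests Fox's economists actually ran) is the lag-1 autocorrelation of price changes: a pure random walk predicts a value near zero, while momentum shows up as a positive value:

```python
def lag1_autocorr(prices):
    """Lag-1 autocorrelation of price changes. Positive values hint at
    momentum (moves tend to continue); a random walk predicts ~0."""
    rets = [b - a for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    dev = [r - mean for r in rets]
    num = sum(a * b for a, b in zip(dev, dev[1:]))
    den = sum(d * d for d in dev)
    return num / den
```

The bind described above applies here too: publish a reliably positive autocorrelation and traders will bet on it until it vanishes.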

As one can imagine, there are opposing camps of thought.

Further compounding the difficulty is the fact that it has been hard to integrate non-rational agents into traditional market theory. Current theory treats pricing as an equilibrium, consistent with the idea that information and rational agents pull and push prices this way and that, but ultimately the disturbances are minor and the overall price of the stock is in fact the proper, true price. Huge disturbances are interpreted as movements of the equilibrium point, but they must arise from external forces (that is, from effects not modeled within the efficient market framework, which leads to an inelegance of the variety that mathematicians and physicists dislike). As the number of such contingencies increases, one might as well resort to a statistically based, empirical model. Which brings us back to the original point of how well we understand the phenomenon.

On the other hand, no one who wishes to modify efficient market theory has successfully integrated the idea of the irrational agent. The appeal of doing so is that pricing changes, correct or incorrect, would be based on the actions of “irrational” agents. We would no longer be looking at an assumption of a correct price plus deviations from that price; we could, presumably, derive the current price by adding into the model the systematic errors agents make. Even huge deviations from proper prices (i.e. bubbles, undervaluations, and perhaps even the rate of information incorporation) would then be predicted by the model. However, such a model remains just out of reach. In other words, efficient market opponents do not yet have a complete and consistent system to replace and improve on the existing one. By default, efficient market theory is what continues to be taught in business schools.

My interest in the Fox and Poundstone books is precisely in how difficult it is to incorporate new ideas when an existing one is in place. It is this intellectual inertia that gives rise to the concept of memes as ideas that take on a life of their own (in that ideas exist for their own reproductive sake) and to the Kuhnian paradigm shifts that have to occur in science. My specific interest has always been in how non-scientists deal with new ideas. If scientists themselves are setting up opposing camps, what must laymen be doing when faced with something they do not understand?

 
