When to talk…

I swear I never meant for this blog to focus so much on sports. But Dave Berri has a post that dovetails neatly with some thoughts I have regarding experts, expertise, and how the public should handle them. I think it can be interesting to approach science issues from the side, rather than head on. Specifically, three authors (Berri, Malcolm Gladwell, and Steven Pinker), all of whom I admire, have had a minor verbal tussle about the issue of expertise.

First, a digression. I was already going to comment on the interface between experts and laymen. The original impulse came about because I just finished reading Trust Us, We’re Experts! by Sheldon Rampton and John Stauber. Like other books of this ilk, it spends many chapters recounting the failures of authority figures and the exploitation of those failings by people who follow the profit motive to an extreme degree. Although the title hints at a broadside against the arrogance of scientists, the book is really about the appropriation of the authority, rigor, and analysis of science to sell things. Its targets are mainly PR companies and the corporations that hire them. There are also a few choice words for scientists who become corporate flacks.

The book is weakest in its presentation, mostly because the authors avoid analyzing how one can tell good science from bad. The presentation leans on linkages between instances of corporate malfeasance; there is no analysis or data on how many companies engage PR firms in this way. There is no comparison of the amount of research from company scientists versus independent ones. The authors focus on the motives of corporate employees but somehow ignore the possibility of bias within the academy. There is no attempt to identify if and when corporate research can be solid. In broad brush strokes, then, chemists who discover compounds with therapeutic potential are suspect; the same people working in academia (and presumably not capitalizing on the finding financially) can be trusted.

This is actually a huge problem in the book: one of the techniques that Rampton and Stauber document is name-calling (good old-fashioned “going negative”; ironically enough, the PR firms would simply label all opposition “junk science”) directed at research and scientists who publish findings contrary to whatever the corporations happen to be pushing. But by avoiding the main issue of identifying good and bad science, the two merely stitch together examples of corporate and public-relations collusion. Now, the evidence they present is good; they hoist PR and corporate employees by their own petards, quoting from interviews, articles written for PR workers, and internal memos. But the ultimate point is that Rampton and Stauber simply tarnish corporate research because the scientists work for corporations. I believe this is a weak argument and ultimately useless. Consider one example: what if two groups with different ideologies present contrary findings? If the so-called ‘profit motive’ applies equally to both, or to neither, then readers will have lost the major tool that Rampton and Stauber push in this book. And as I will show, the situation is not always as stark as corporate shills versus academics, or creationists versus biologists. There is enough research of varied quality, published by ‘honest actors’, to cause plenty of head-scratching about how solid a scientific finding is.

Let’s be clear, though. Of course the follow-the-money strategy is straightforward and, I would think, more likely than not correct. But that cannot be the only analysis one does; if the thesis is that PR firms use name-calling as a major tactic in discrediting good, rational, scientific research, it seems bad form to use funding source as a way to argue that investigators funded by corporations do bad research. It’s just another instance of name-calling. I expected more analysis, so that we could move away from that.

And that’s the unfortunate thing about a book like this; why wouldn’t I want a book that causes outrage? Why, in essence, am I asking for an intellectually “pure” book, one that deals with corporate strong-arm tactics in a more methodical, scientific way? Doesn’t this smack of political posturing, where somehow the result matters less than the means – and no, I do not mean that the ends justify the means. I am just pointing out that there might be multiple ways of doing something (like taking route A vs. B, or cutting costs by choosing between vendor C and vendor D). Workplace politics might elevate these mundane differences into managerial warfare. Why should I care what the politics are, so long as they lead to a desirable end result?

One problem with a book like Trust Us is that it appeals to emotions with rhetoric, without a corresponding appeal to logic. I think including analytical rigor is important, as it provides the tools for lasting impact. As written, the book (published in 2000) provides catchy examples of corporate malfeasance. The most basic motif is as follows: activists use studies that, for example, correlate lung cancer with smoking in order to drive legislation to decrease smoking. Corporations and interested parties attack by calling this bad science, by calling the researchers irresponsible, by calling the activists socialist control freaks who wish to moralize on an issue that is really a matter of personal choice. They have a considerable war chest for this sort of thing. Frankly, if that’s what Rampton and Stauber are worried about, then their focus should have been on the herd mentality of people, not the fact that PR firms use negative ads.

But that is only one weapon; the other is the recruitment or outright purchase of favorable scientific articles. The canonical example is the set of studies published by scientists who work for tobacco companies, studies that refute the claims of independent investigators. But Rampton and Stauber simply point out that these favorable findings come from researchers who are paid by Philip Morris. That’s nice, but how is this different from the name-calling Philip Morris engages in? The real issue is how one goes about identifying what bad research is.

They do throw a sop to analytical tools at the end of the book. The discussion is cursory; the focus is again on helping the reader dissociate the emotional rhetoric from the arguments (such as they are). The appeal is that the analysis is simple: just question the motives of the spokesmen and experts. Worst of all, their discussion of the difficulties of science gives the impression that the whole enterprise is a bit of a crapshoot anyway. They point out that peer review is a recent phenomenon, that grant disbursal depends upon critiques from competing scientists, and that the statistically significant differences reported are, more often than not, mundane rather than dramatic. Their discussion of p-values makes scientific conclusions sound like so much guesswork, rather than the end result of hard work. Day-to-day science isn’t as bad as the pair portray it.
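The p-value point deserves a bit more care than the book gives it, and it cuts both ways: a tiny p-value tells you a difference is unlikely to be chance, not that it is large. A minimal sketch, using simulated data and illustrative numbers of my own choosing (nothing here comes from the book), shows a difference that is statistically significant yet practically mundane:

```python
import math
import random

random.seed(0)

# Two simulated "treatments" whose true means differ by a tiny amount.
n = 100_000
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.03, 1.0) for _ in range(n)]  # effect size of only 0.03 SD

mean_a = sum(a) / n
mean_b = sum(b) / n

# Two-sample z-test (the standard deviations are 1 by construction).
se = math.sqrt(1.0 / n + 1.0 / n)
z = (mean_b - mean_a) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"difference in means: {mean_b - mean_a:.4f}")
print(f"p-value: {p:.2e}")
```

With enough samples, a practically trivial difference produces an extremely small p-value; "significant" and "dramatic" are simply different claims, which is the distinction the book blurs.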

It is no small trick to take a broad question (“How does the brain work?”), break it down into a model (“Let us use the olfactory system as a ‘brain-network lite'”), identify a technique that can answer a specific question (“I wonder if the intensity of a smell is related to the amount of neural activity in the olfactory system? We expect to see more synaptic transmission from the primary neurons that detect ‘smells.'”), do different experiments to get at this single question, analyze the data, and write up the results.

Forget the fact that different scientists have different abilities to ask and answer scientific questions; nature doesn’t often give a clear answer. So yes, it is hard to get conclusive statements. To confound the issue further, even good research can have flaws: unclear experimental design, incorrect analysis, and distressingly minor differences between control and test conditions. Which leads us to the question: what exactly does good research look like?

I am not going to answer this now; frankly, I can’t. This blog will, eventually, attempt to deal with this very issue by presenting papers and research that I read about, in addition to book reviews. But my point here is that Rampton and Stauber didn’t address the issue either. The very end of the book is a populist appeal, one that emphasizes “common sense” over jargon and statistics. They even appeal to our civic duty, urging us to become more politically active and to associate with (my term, not theirs) “lay-experts”. At some point, however, even well-informed non-scientists and non-experts must turn to experts for original research. Rather than disregard that research, then, one must learn to parse the scientific literature and gain a comfort level with it.

It took a while, but we return to the Gladwell-Pinker-Berri flap. The setup is simple: Berri is a sports economist who specializes in creating models that predict athletic performance. He has tackled multi-player games (basketball and American football), which, presumably, would lead to complex models, or perhaps something computationally intractable. Surprisingly, he found that neither was the case. The important point this time is that he was able to show that where quarterbacks are selected in the NFL draft doesn’t fit with their subsequent performance (assessed using Berri and Simmons’s QB Score metric). Gladwell wrote an essay that presented Berri and Simmons’s argument favorably. Pinker made a short comment refuting it, saying that QBs drafted high do perform better.

Both Pinker’s review and Gladwell’s response seemed snippy to me. But what I found interesting was that while Pinker questioned Gladwell’s ability as an analyst (while paying Gladwell the backhanded compliment that he is a rather gifted essayist – but not a researcher or analyst), Gladwell, in turn, questioned the background of Pinker’s sources. I think Gladwell’s highlighting the faults in the arguments was sufficient, as Pinker’s sources are somewhat weak. It really wasn’t necessary to impugn their backgrounds.

This is ironic, as Pinker raises some peripheral issues regarding Gladwell’s suitability to review the research and observations of experts. Just as with Gladwell, I think Pinker gave a reasonable counter-argument to Gladwell’s generally gung-ho and favorable presentation of his subjects. For example, there is a flip side to imperfect predictors: while they may not be useful for picking the most suitable candidates, they help to remove the worst ones from the pool in a cost-effective way. That’s an interesting point, and I think one “system” that scientists can use to study it is… sports (because of the wealth of performance data).

There really is no need to trash an expositor just because he is a better essayist than a scientist, for instance. Isn’t Gladwell in fact an expert in conveying novel research to the public (and effectively)?

In this case, I think both the “expert” and the “lay person” gave a good accounting of their (intellectual) problems with the other. However, they both engaged in what amounted to look-at-the-source “analysis” (Pinker says Gladwell doesn’t know what he writes about; Gladwell trashes Pinker’s football sources for things they did that are unrelated to football). The only thing the ad hominem attacks achieved was to raise the blood pressure of both participants.
