I have this idea that I need to get out of my head. I need to preface it by making the following disclaimers. I am an atheist. I have no interest in ‘proving’ to any of the faithful that they are wrong. I am not asking anyone to reconsider his position, theological or philosophical. I simply wish to tell a story and ask a question, which admittedly is really an observation.

Here goes.

I think every religion seeks to place man within the cosmos, and this position embodies a contradiction. Generally, religions claim mankind is insignificant, needing the grace of a deity to make its life meaningful. The contradiction is that a deity would deign to spend time helping the ants find salvation.

In this context, and specifically in the Judeo-Christian tradition, I made up the following. From my reading of the Bible, God sounds like a son of a bitch. Granted, I was more interested in the Old Testament, but there are enough militant statements in both the Old and New Testaments speaking to man’s insignificance and need for God’s grace. God provides meaning to life. Fulfilling His desire gives purpose to man.

So, the story I concocted is that evolution need not be the nemesis of religion. In fact, it would fit in magnificently (in a literary sense; it is based on my interpretation of the character of the Almighty). One image I hear from Christians is that we are akin to children playing in the mud. Kids playing in the mud certainly can’t clean each other well. You need someone not in the muck. The analogous situation is that having any creature less than God grant purpose and meaning is a profane idea. If God does not exist, the false idol/prophet/holy man dispensing advice and using philosophy to give us purpose is like having one dirty child clean another. We need an omniscient being to act as a cosmic referee, as it were.

In this context, evolution – whose central principles are supported by a wealth of evidence from molecular biology research, fossil records, genome comparisons, the selection of antibiotic-resistant microbes, dog breeders, and orchid growers, for example – simply underscores how insignificant we would be without God.

For example, we know that there is a record of life going back at least 500 million years. The human genome has clear similarities to those of apes, monkeys, dogs, mice, cats, lizards, birds, and fish. There is also a library of ape fossils in which one can see resemblance to human bone structures. After showing us that mankind is a part of the world around him, wouldn’t it make sense to show mankind that possessing God’s grace (whether it be salvation, purpose, or insight) allows man to transcend his status as an animal? Doesn’t such a story imbue mankind with a distinction conferred entirely by God – that without Him we are in fact just a bunch of great apes?

My observation/question to the religion-inclined is, “Can you show me where in the Bible this story is refuted?”

Since I did base this (OK, maybe I am guilty of trying to needle Christians) on a literary reading of the Bible, can you really argue that my interpretation is so far off base? What would you use as evidence – some passages from the Bible that I would be unlikely to use? How would you decide among all the existing creation myths? What are your criteria for dismissing Zeus and the Olympians but not Jesus, or Allah, or Yahweh, or the Buddha?

The nice people at Ars Technica wrote about a Science paper published today. Through the use of precise optical clocks, researchers were able to show the effects of relativity for objects in motion and at different heights above a massive object (i.e. Earth). Traditionally, the effects become “obvious” and large only when objects move at near light-speed. It is interesting, then, to see that macroscopic objects (like huge clocks and, by extension, things and people) also experience relativity, albeit with inconsequential effects. The researchers showed that moving a clock at about 22 mph makes it tick measurably slower, while raising a clock about 1 ft off the ground makes it tick measurably faster (a clock higher in a gravitational field runs fast relative to one below it). Both are good reads.
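To get a feel for the magnitudes involved, here is a back-of-the-envelope sketch (mine, not from the paper) using the standard first-order formulas for velocity and gravitational time dilation:

```python
# Back-of-the-envelope estimates of the clock shifts (first-order formulas).
# The numbers are illustrative; they are not taken from the paper itself.

C = 299_792_458.0   # speed of light, m/s
G = 9.81            # gravitational acceleration near Earth's surface, m/s^2

def velocity_shift(v):
    """Fractional slowdown of a clock moving at speed v (valid for v << c)."""
    return v**2 / (2 * C**2)

def height_shift(h):
    """Fractional speedup of a clock raised by height h above the ground."""
    return G * h / C**2

v = 22 * 0.44704   # 22 mph in m/s (~9.8 m/s)
h = 0.3048         # 1 ft in meters

print(f"moving at 22 mph: {velocity_shift(v):.2e} fractional slowdown")
print(f"raised by 1 ft:   {height_shift(h):.2e} fractional speedup")
# Both shifts are parts in 10^16 to 10^17 -- visible only to optical clocks
# stable to better than one part in 10^16.
```

Run it and both effects come out around 10^-16 to 10^-17, which is why it took clocks this precise to see them at everyday speeds and heights.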

Something close to home: The Scientist and the New York Times report that Nobel laureate Linda Buck recently retracted two papers. This follows the retraction of a Nature paper from 2001. All three papers featured work done by a post-doc, Zhihua Zou. The retracted papers have no bearing on the work Buck and Richard Axel performed in identifying the family of G-protein-coupled odorant receptors, which won them the Prize.

Rather than focusing on the work in the retracted papers, I would like to explain why, fortunately enough, the retractions do not substantially alter our view of how olfaction works.

First and foremost, independent researchers, using independent means, have found results similar to those presented in the retracted papers. The main points of the three retracted papers are:

1) Using a genetically encoded neuronal tracer, the paper purported to show that sensory neurons expressing the same olfactory receptor connect to neurons that in turn wire to the same brain regions responsible for olfactory processing (Zou et al., 2001).

2) That a marker of neuronal activity, c-fos expression, showed that activity patterns in the olfactory cortex are reproducible across animals and typical for a given smell (a smell-evoking molecule is termed an “odorant”) (Zou et al., 2005).

3) That mixtures of smells activate neurons that respond to the components individually; the pattern of activation is a summation of the patterns evoked by single odor components (Zou and Buck, 2006).

The peripheral olfactory system can be described as follows. The primary sensory neurons are situated in the nasal cavity and are responsible for detecting odor molecules. These neurons form connections with neurons of the olfactory bulb (OB). In turn, OB neurons project to “higher olfactory centers”, which include the piriform cortex.

For the first retracted point, there already exists research showing that connections into the piriform cortex from the olfactory bulb are both convergent and divergent. This can be shown by labeling small groups of neurons in the olfactory bulb and then watching where the labels wind up. In the piriform cortex, one can see the label over large areas even if it started out in a confined area of the olfactory bulb, thus showing divergence. Conversely, when using labels that travel from the cortex back to the bulb, a small location in the piriform cortex can be seen to receive projections from all over the bulb (i.e. convergence).

The specific detail offered by the Buck group is that the neurons connecting to the piriform cortex share a common origin. That is, the sensory neurons in the nose connect to olfactory bulb neurons that in turn connect to clusters of physically nearby neurons within the piriform cortex. The groupings at this level suggest that the piriform cortex could be built from many such groups of neurons. Thus, when an odor molecule activates receptor neurons in the nose, clusters of activity could eventually be found in the piriform cortex. These spatial patterns may result from the sums of all the receptor neurons that were activated (both points were covered in Zou et al., 2005 and Zou and Buck, 2006). The spatial organization may reflect a (still unclear) advantage or need for neural processing.

That is a very rough sketch of some basic ideas in mammalian olfaction. As noted, tracing experiments performed in separate labs with different methods show that there is some structure in where neurons form projections. What is at issue is whether one can make specific statements linking a response and/or connection back to particular neurons in the nose.

One should note that the retractions from the Buck lab do not (yet) indicate misconduct – i.e. doctoring and faking of data. The problems could have arisen in analysis. Indeed, Illig and Haberly, in 2003, using the same c-fos methodology to indicate activity, found that the piriform cortex had widespread activation in response to odorant exposure. A “pattern” of activity was absent. Using both electrophysiological recordings and optical indicators of neural activity, similarly widespread activity was also observed in mice and rats. Even in zebrafish, widespread activity within the olfactory cortex analogue was observed. The clustering seen by the Buck group could have arisen by chance clustering of the c-fos signal and could have been enhanced by the analytical techniques they used. I do not know how the data were analyzed to produce the published results, but there are mistakes one can commit without any malfeasance intended.

A note on the methods: there are some significant differences in the way activity is reported with c-fos when compared to electrophysiological and optical recordings. Expression of c-fos is linked to calcium influx in activated neurons. The usual method of evoking this response, to create enough signal against the background, is to expose the animal to a single odorant for 30-60 minutes. In contrast, the other techniques show responses lasting less than a second (and at millisecond precision) in response to exposure to smells. Further, the chain of events leading from activated neuron to c-fos expression is unknown. For example, how many electrical impulses (i.e. action potentials) in a neuron correspond to a given level of c-fos expression? While useful as a gross measure to identify areas of interest, the c-fos technique ultimately lacks some of the advantages researchers need to make definitive statements about smell processing at the time scales actually relevant to brain function.

That is an important distinction: techniques similar to those Linda Buck’s lab used have worked in other labs. The key point is that we can no longer rely on the tracing results that purported to show connections across three structures in the olfactory system. Although we no longer know the specific identities of the connected neurons, the general principle of convergent/divergent connections remains. As for her other conclusions in subsequent papers, there is enough evidence to suggest that the organization of activity in the piriform cortex occurs in the timing of neural responses and not necessarily by their physical locations. Further, the technique her group used to assess activity has disadvantages at millisecond time scales (which is the regime where neurons work). Thus the findings themselves, although retracted, do not alter at a deep level what researchers think about olfaction.

The Scientist has published some advice for training post-docs. More emphasis needs to be placed on what a career in science entails. Often, the key motivation for doing science is that experiments are fun. However, having fun at the bench isn’t the same as doing science.

Being a scientist means: looking for gaps in the existing literature, stringing together these gaps to build a research program (i.e. a grant proposal), writing grant proposals, managing money, managing time, learning to interact with colleagues, building working relationships (or at least acquainting and introducing oneself) with researchers outside the lab, exposing oneself to new science (be selective!), doing the bench work, and analyzing data.

If you are a graduate student, then your job is to turn data into figures. Doing so will train you to think about how best to communicate a finding. I would argue that, even if you have an “n of 1”, you should start making the graphs, tables, curves, and so on. Have the framework in place to receive data; a minimal sketch of what that might look like follows.
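As an illustration only – the dose-response example and all the names here are hypothetical, not from the original post – here is one way to set up a figure skeleton in Python before most of the data exist:

```python
# A minimal sketch of building the figure before the data arrive: the axes,
# labels, and styling are fixed up front, and the plotting function is
# reused as each new replicate comes in. All names are hypothetical.
import matplotlib.pyplot as plt

def plot_dose_response(ax, concentrations, responses, label):
    """Plot one replicate; call again as each new n arrives."""
    ax.plot(concentrations, responses, marker="o", label=label)

fig, ax = plt.subplots()
ax.set_xscale("log")
ax.set_xlabel("Odorant concentration (M)")
ax.set_ylabel("Response amplitude (a.u.)")
ax.set_title("Dose-response (placeholder)")

# Even with an n of 1, start filling in the frame:
plot_dose_response(ax, [1e-8, 1e-7, 1e-6], [0.1, 0.4, 0.9], label="animal 1")
ax.legend()
plt.show()
```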

This is the corollary to displaying your hypothesis in a prominent location and considering whether it needs reworking.

Essentially, focus on telling people what you are doing, why, and what you have found so far.

In doing this, you will naturally look into the literature to fill gaps in your knowledge and also to find novel experiments to try.

This set of observations is not meant to be authoritative. It is simply something (new) for you to try if you haven’t already done so. If you want to add to this list, let me know. I can link back or just update this post.

An interesting article in The Scientist describes how David Pendlebury, a citation analyst at Thomson Reuters, built a simple model to identify researchers who are candidates for winning a Nobel Prize. He used a simple citation-and-recognition model (the more someone is cited and the more prizes they have won, the more impact they are thought to have). The article is short and fun, and although the reporter Bob Grant notes the correct predictions, it would have been nice to see how often the model missed.
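For concreteness, a toy version of such a scoring model might look like the sketch below. To be clear, the fields and weights are my inventions for illustration; this is not Pendlebury’s actual method:

```python
# A hypothetical citation-and-recognition score along the lines the article
# describes: more citations and more major prizes -> higher presumed impact.
from dataclasses import dataclass

@dataclass
class Researcher:
    name: str
    total_citations: int
    major_prizes: int  # e.g. prizes that often precede a Nobel

def nobel_candidate_score(r: Researcher,
                          citation_weight: float = 1.0,
                          prize_weight: float = 5000.0) -> float:
    """Higher score = stronger candidate, under this toy model."""
    return citation_weight * r.total_citations + prize_weight * r.major_prizes

candidates = [
    Researcher("A", total_citations=120_000, major_prizes=2),
    Researcher("B", total_citations=80_000, major_prizes=0),
]
for r in sorted(candidates, key=nobel_candidate_score, reverse=True):
    print(r.name, nobel_candidate_score(r))
```

Checking how often a model like this misses – not just when it hits – is exactly the evaluation the article leaves out.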

There is a study, published on Aug. 10 in the British Medical Journal, examining whether articles published online with comments elicited any responses from the authors. There is a thoughtful blog post from The Scholarly Kitchen about this study. Article authors generally didn’t respond to critics, even when the critics were serious and wrote substantively on the matter. Interestingly, when the authors did respond, the critics were satisfied with the responses less than half the time. The editors of the journal generally accepted the authors’ rebuttals, and of course, it should be said that they did accept the manuscripts for publication in the first place.

The lead author of the BMJ study, Peter Gøtzsche, suggested that the editors 1) may not be qualified enough to review the criticisms and rebuttals and 2) have a vested interest in maintaining the reputation of the journal (i.e. defending the stance that the science they accept is high-quality). I have this additional snarky observation: the critics may be especially unreceptive to the authors’ original paper and rebuttal because the original criticism stemmed from contradictions with the critics’ own work.

Just a thought.

I have a few other thoughts, and I would argue that we not judge the editors too harshly for, perhaps, mistakenly accepting “bad” science or not understanding the nature of the criticisms. There are actually acceptable reasons why some controversial papers get published. I have some insight into the editorial process; no juicy gossip or behind-the-scenes look at nefarious machinations, mind you.

I had applied for a scientific editor position at Neuron almost two years ago. There were two open positions and I came in, at most, “third” (and dammit, it was my dream job!). For the first stage of the application process, I had to review two papers from each of seven categories, published in Neuron over the year prior to the application deadline. I was to choose an example of a good paper and a weak paper (from the set of already peer-reviewed and published articles). I was also supposed to write about the neuroscience field, identifying authors whom I would invite to write a review (and I had to specify the topic) and where the next big thing would be (and who the leading lights are). It was great fun to write, although I left myself only a week to do it (and of course I was working in the lab all that time).

But I digress. Of the seven fields, I can honestly say that I am an expert in, at most, two. And even then, that’s a stretch, because the fields were more general than a particular sensory system (I worked in olfaction) or even a technique (quantitative microscopy, epifluorescence, and 2-photon laser microscopy). As one might guess, and as I found out later, this is the norm for the editors there. The editors all have different backgrounds, and at Neuron no one is asked to specialize. So every editor will be asked to triage manuscripts outside of their expertise and background. Presumably, this has the advantage of ensuring that editors remain aware of developments over a wide swathe of neuroscience.

I can’t speak for other journals, but at Neuron the editors have final acceptance/rejection authority. They decide whether the article is sent out for review in the first place. In using the peer reviews, they of course defer to the expertise of the reviewers, but the editors’ job is to be a disinterested party in unknotting the various interests reviewers and authors have. The decision to accept a manuscript for publication is also determined by the amorphous concept of making a significant advance in the field.

There are several ways of looking at this. Perhaps the researchers themselves ought to understand where their field is going and so are best placed to assess where the cutting-edge research is. Or researchers have a vested interest in “selling” their research as hot, regardless of actual scientific worth. Or the editors are in no way prepared to decide what constitutes a significant advance, as they no longer have direct experience with the difficulties and intractabilities of various experiments and models. Or the editors are in fact best placed to see what is a significant advance, by virtue of seeing so many good and bad manuscripts with overlapping topics from various competing scientists.

I am inclined to go with the fourth idea: that good editors can observe developing trends from manuscript submissions. When I looked over a year’s worth of articles from one journal, I too could see blocks of papers with similar topics (or at least similar keywords).

I think the stewardship/peer-review system works, although I am not opposed to the more open style of publication offered by the Public Library of Science (PLoS) journals. The latter focus more on technical soundness; the reviewers try to make sure that the experiments support the conclusions, as is the norm. However, no editors are in place to reject papers for a perceived lack of significance. The idea is that scientists will eventually cite a paper heavily – or ignore it – depending on its actual value; it is assumed that the cream will rise. Again, I have nothing against these different modes of publishing.

During my interview with Neuron’s scientific editors, we discussed our reasons for wanting to become editors and problems that may arise during the adjustment phase. One potential downside is that an editor will no longer be recognized as an authority on any subject, and rightly so: the editor is no longer in the trenches and won’t be adding new techniques to his repertoire. However, I defended the idea that editors simply replace one expertise with another. As I said above, a good editor becomes an expert in spotting over-populated and under-served topics. The nature of the beast is that editors see many, many similar manuscripts. They have the luxury of establishing a baseline level of quality and significance.

Well, I didn’t get the job, but I remain sympathetic to their roles. I think there is a need for the so-called gate-keeper role. The fact that someone took the time to place a science manuscript in the context of all the work that has recently been done lends an imprimatur of worth. Of course editors do not get it right all the time. But one can at least count on Neuron, or Nature, or Science, or Journal of Neuroscience publishing papers that presumably compared favorably to some cohort of papers. That takes judgment, and the editors read the manuscripts that you may not have time for.

My point here is that editors can misjudge the value of a piece of science, but that is no reason to think they add nothing of value. They do not necessarily have to defend their choices, at least not at the level of single papers. Remember, just as the editors may have idiosyncrasies, so do the readers of the articles, and scientists themselves differ in intellectual sharpness, shall we say. But over time, if editors consistently “get it wrong”, it would in fact become obvious. The room for subjective assessments of value only goes so far. Techniques converge at some point, even if the systems scientists work on differ. Each experiment generates a control for comparison; anyone wishing to extend work generally tries to reproduce results, to show that they are doing something right; and citation levels serve as spot checks on the soundness of the science. At some point, missing experiments or graphs, scientists complaining about articles whose results cannot be replicated, or sparse citations become problematic for a journal trying to maintain its luster. And scientists can start to ignore the offending journal by submitting to competing journals.

But, at its most basic, one wouldn’t expect an editor to be aware of the details raised by critics. Simply put, the details are probably only important to the investigators and deal with “procedural points” (as an example, do you really care that an animal “sniffs” rapidly not to gather more odor molecules to increase signal-to-noise, but rather to attenuate background smells – in order to increase signal-to-noise? Or that this is imposed by behavioral modulation rather than centrifugal modulation of the olfactory bulb? See Verhagen et al., 2007). I guess it is fair to say that editors are “big picture” people. With that said, perhaps there is some way the editors can facilitate the discourse that occurs in the comment sections that are now de rigueur.

First, a digression (and I haven’t even gotten to the official topic sentence for the post that pertains to the title!): I am currently reading The Numbers Game by Michael Blastland and Andrew Dilnot. The book is something in the mode of what I’d want to write: a guide for helping non-mathematicians, non-economists, and non-scientists (and perhaps those very people) deal with numbers. I’ve written in this blog (and commented a number of times on Dave Berri’s Wages of Wins blog) about how sports fans and journalists misunderstand and misinterpret sports productivity measures. The greater theme is that I think there is a lack of perspective in how laymen incorporate scientific information into their own worldviews. The book I’d write would deal with this topic, and this is essentially the book that Blastland and Dilnot wrote.

A lot of the book presents numbers within a context. Actually, Blastland and Dilnot exhort readers to develop and build the proper context around numbers in order to make them more manageable. This is especially salient in the opening chapters about large and small numbers. In some sense, a number like 800,000,000 might not be so large if it represents the amount overspent by Britain’s National Health Service – assuming the budget for this agency is in the £80 billion range, the overspend is a mere 1%. As another example, the well-memed “six degrees of separation” might imply that members of a peer group are actually about five intermediaries away, but that number may as well be infinite if you are linked to the President of the United States by knowing a neighbor who knows a councilman who knows the mayor who knows a state rep who knows the senator who knows the President. The impact of your linkage to the President, at a personal level, is clearly small.
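The arithmetic behind that first example is worth making explicit – a few lines, using the figures quoted above (the budget is an assumed order of magnitude, not an audited number):

```python
# Putting the NHS overspend in context: a big absolute number, a small relative one.
overspend = 800_000_000         # reported overspend
budget = 80_000_000_000         # assumed total budget (order of magnitude)
print(f"overspend = {overspend / budget:.1%} of budget")  # -> 1.0%
```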

At any rate, there is another chapter on “chance”. The example that Blastland and Dilnot use is one of cancer clusters. Most humans have some innate sense of how “random” ought to look. If one throws rice up in the air and lets it land on flat ground, one might imagine that some parts of the ground will hold more rice than others. This is value-neutral, and no one disputes the appearance of random clusters; or rather, we do not think anything sinister lies behind them. But replace rice with “cancer incidence” and the interpretation changes. No longer do humans accept that a cluster might just mean the chance co-occurrence of many events resulting in a higher number of cancer patients. There must be some environmental cause that led to the cancer cases. Never mind that the number of cases may not take into account length of habitation (what if all the cancer cases were people who had recently moved into the town? The case for environmental factors falls apart), the types of cancers, or the genetic backgrounds of the patients.
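The rice intuition is easy to check in a few lines of code. This minimal simulation (my sketch, not from the book) scatters points uniformly at random over a grid and then counts how many cells look like “hot spots” anyway:

```python
# Uniformly random "rice grains" still produce apparent clusters.
import random
from collections import Counter

random.seed(1)
N_POINTS, GRID = 1000, 10  # 1000 grains over a 10x10 grid of cells

counts = Counter(
    (random.randrange(GRID), random.randrange(GRID))
    for _ in range(N_POINTS)
)

mean = N_POINTS / GRID**2  # 10 grains per cell on average
hot = [(cell, n) for cell, n in counts.items() if n >= 1.5 * mean]
print(f"expected per cell: {mean:.0f}")
print(f"cells with 1.5x the expected count or more: {len(hot)}")
# A handful of cells look like "clusters" even though the process is
# perfectly uniform -- swap grains for cancer cases and the same chance
# clustering invites a causal story.
```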

The specific example happens to involve a cell phone mast that was built in Wishaw, England. Citizens in the area were outraged and angry enough to knock down the mast when they found out they were in a “cancer cluster”. Obviously, the citizens keyed in on the mast as the cause of the cancer. Of course, the personal involvement of the townspeople tends to skew their perception, and a dispassionate observer might be needed to ask simply, “If the cell phone mast was responsible for cancer in this town, shouldn’t all cell phone masts be at the centers of cancer clusters?”

The reaction of the townspeople to the Wishaw cancer cases is illustrative of the same symptoms shown, in a less significant way, by sports fans and journalists who base their conclusions about athletic productivity on so-called observational “evidence” and not on controlled, rigorous studies. The dispassionate observer who asks whether all cell phone towers should be at the centers of clusters would try to overlay the distribution of towers onto a map of cancer cases. He might slice the cancer cases further, trying to isolate cancers that have a higher likelihood of being caused by electromagnetic fields. He tries to address the hypothesis that cell phone towers cause cancer. The Wishaw denizens, in contrast, didn’t bother to look past the idea that the tower caused their cancer. This highlights the difference between the so-called statistical and eyeball approaches to evaluating athletic performance. The first method is valid for an entire population of athletes, while the second may or may not be valid for even the few athletes used to make the observation. A huge part of science is making sure that the metrics being used are actual, useful indicators of the observed system.

This brings me to the Science review of The Trauma Myth. A key reason why humans go wacky over cancer clusters and not rice clusters is that cancers are more personal. It becomes more difficult for humans to let go. Case in point: one of the criticisms leveled at investigators of the Wishaw cancer cluster was that they took away hope. I suppose what the critics meant was that the certainty of cause and effect was lost. The Trauma Myth sounds like an interesting book. It takes a view contrary to “conventional wisdom”. The author, Susan Clancy, provides some evidence to suggest that young victims, at the time of their abuse by pedophiles, might not look upon the episode as traumatic because they did not have enough experience to classify it as such. Of course the children were discomforted and hurt, but they did not quite understand what exactly was wrong. The problem wasn’t in how the children felt; the problem would be if how they felt interfered with their coming forward to report the crime.

Clancy’s book then aims to address how best to guide these abused children to come forward, report the crime, and receive the help they need. Apparently, conventional wisdom holds that there is a single reaction to sexual abuse: trauma. Clancy might have oversold how much this affected the number of children who came forward, but the reviewer notes that it is entirely unfair to portray Clancy as somehow being sympathetic to pedophiles.

And yet that is what Clancy is accused of. That laymen cannot seem to countenance criticism as constructive and useful is problematic, but the problem is not limited to laymen. Even her colleagues have thought the worst of her work.

It is cynical, but I am glad my research is not in such an emotionally charged field. Of course, I have seen strong personalities argue over arcane points, and rather vehemently, but in no case could any researcher be accused of abetting pedophiles and murderers.

The obvious lesson here is that science gives voice to even the wildest of ideas. The objectivity that science enjoys is based exclusively on the gathering of evidence. That’s it. The framing of the question, the choice of methods, and how one draws conclusions are all subject to biases and politics. However, all scientists expect that, once a method is selected, the data were in fact obtained exactly as stated and in the most complete and rigorous way possible. This is what allows one scientist to look at another’s work and criticize it. The reviewer of The Trauma Myth noted that Clancy did not dwell on this idea, which is a shame. Intellectual honesty can often be at odds with political expediency or comfort. It seems that laymen and Clancy’s colleagues would do well to focus on those subjects, however many or few, who had not reported these sexual assaults, regardless of whether Clancy is correct.

The reviewer noted that the main point of The Trauma Myth is that sex crimes are underreported, possibly because children were confused by the fact that they had not felt traumatized (and thus somehow thought that they were not victims). I hope for their sakes that Clancy, her colleagues, and her current opponents can work to ensure that all victims of child abuse can come forward and obtain justice against the perpetrators.
