
A study published Aug. 10 in the British Medical Journal examined whether articles published online with comments elicited any responses from the authors. (The Scholarly Kitchen has a thoughtful blog post about this study.) Authors generally didn’t respond to critics, even when the criticism was serious and substantive. Interestingly, when the authors did respond, the critics were satisfied with the responses less than half the time. The journal’s editors generally accepted the authors’ rebuttals, and of course, it should be said that they did accept the manuscripts for publication in the first place.

The lead author of the BMJ study, Peter Gøtzsche, suggested that the editors 1) may not be qualified to review the criticisms and rebuttals and 2) have a vested interest in maintaining the reputation of the journal (i.e., in defending the stance that the science they accept is high-quality). I have an additional, snarkier observation: the critics may be especially unreceptive to the authors’ original paper and rebuttal because the original criticism stemmed from contradictions with the critics’ own work.

Just a thought.

I have a few other thoughts, and I would argue that we not judge the editors too harshly for, perhaps, mistakenly accepting “bad” science or not fully understanding the nature of the criticisms. There are acceptable reasons why some controversial papers get published. I have some insight into the editorial process; no juicy gossip or behind-the-scenes look at nefarious machinations, mind you.

I applied for a scientific editor position at Neuron almost two years ago. There were two open positions, and I came in, at best, “third” (and dammit, it was my dream job!). For the first stage of the application process, I had to review two papers from each of seven categories, published in Neuron over the year prior to the application deadline. I was to choose an example of a good paper and a weak paper (from the set of already peer-reviewed and published articles). I was also asked to write about the neuroscience field, identifying authors whom I would invite to write a review (specifying the topic), where the next big thing would be, and who the leading lights are. It was great fun to write, although I left myself only a week to do it (while working in the lab the whole time, of course).

But I digress. Of the seven fields, I can honestly say I am an expert in at most two, and even that is a stretch, because the fields were more general than a particular sensory system (I worked in olfaction) or a technique (quantitative microscopy, epifluorescence, and 2-photon laser microscopy). As one might guess, and as I found out later, this is the norm for the editors there. The editors all have different backgrounds, and at Neuron no one is asked to specialize. So every editor is asked to triage manuscripts outside their expertise and background. Presumably, this has the advantage of ensuring that editors remain aware of developments across a wide swathe of neuroscience.

I can’t speak for other journals, but at Neuron the editors have final acceptance/rejection authority. They decide whether an article is sent for review in the first place. During peer review, they of course defer to the expertise of the reviewers, but the editors’ job is to act as a disinterested party, unknotting the various interests reviewers and authors have. The decision to accept a manuscript for publication, though, is also determined by the amorphous concept of making a significant advance in the field.

There are several ways of looking at this: 1) perhaps the researchers themselves understand best where their field is going and so are best placed to assess what counts as cutting-edge research; 2) researchers have a vested interest in “selling” their research as hot, regardless of actual scientific worth; 3) the editors are in no way prepared to decide what constitutes a significant advance, as they no longer have direct experience with the difficulties and intractabilities of various experiments and models; or 4) the editors are in fact best placed to see what is a significant advance, by virtue of seeing so many good and bad manuscripts on overlapping topics from various competing scientists.

I am inclined to go with the fourth idea: good editors can observe developing trends from manuscript submissions. Looking over a year’s worth of articles from one journal, I too could see blocks of papers with similar topics (or at least similar keywords).

I think the stewardship/peer-review system works, although I am not opposed to the more open style of publication exemplified by the Public Library of Science (PLoS) journals. The latter focus more on technical soundness: the reviewers try to make sure that the experiments support the conclusions, as is the norm. However, no editors are in place to reject papers for a lack of perceived significance. The idea is that scientists will eventually cite a paper heavily, or ignore it, depending on its actual value; it is assumed that the cream will rise. Again, I have nothing against these different modes of publishing.

During my interview with Neuron’s scientific editors, we discussed our reasons for wanting to become editors and problems that might arise during the adjustment phase. One potential downside is that an editor will no longer be recognized as an authority on any subject, and rightly so: editors are no longer in the trenches and won’t be adding new techniques to their repertoire. However, I defended the idea that editors simply replace one expertise with another. As I said above, a good editor becomes an expert at spotting over-populated and under-served topics. The nature of the beast is that they see many, many similar manuscripts, so they have the luxury of establishing a baseline level of quality and significance.

Well, I didn’t get the job, but I remain sympathetic to editors’ roles. I think there is a need for the so-called gatekeeper role. The fact that someone took the time to place a science manuscript in the context of all the work that has recently been done lends an imprimatur of worth. Of course editors do not get it right all the time. But one can at least count on Neuron, Nature, Science, or the Journal of Neuroscience publishing papers that presumably compared favorably to some cohort of papers. That takes judgment, and the editors read the manuscripts that you may not have time for.

My point here is that editors can misjudge the value of a piece of science, but that is no reason to think they add nothing of value. They do not necessarily have to defend their choices, at least not at the level of single papers. Remember, just as the editors themselves have idiosyncrasies, so do the readers of the articles; the scientists themselves also differ in intellectual sharpness, shall we say. But over time, if editors consistently “get it wrong,” it will become obvious. The room for subjective assessments of value only goes so far. Techniques converge at some point, even if the systems scientists work on differ. Each experiment generates a control for comparison; anyone wishing to extend work generally tries to reproduce the original results, to show that they are doing something right; and citation levels serve as spot checks on the soundness of the science. At some point, missing experiments or graphs, complaints from other scientists about results that cannot be replicated, or sparse citations become problematic for a journal trying to maintain its luster. And scientists can start to ignore the offending journal by submitting to competing journals.

But, at its most basic, one wouldn’t expect an editor to be aware of the details raised by critics. The details are probably important only to the investigators and deal with “procedural points.” (As an example, do you really care whether an animal “sniffs” rapidly not to gather more odor molecules to increase signal-to-noise, but rather to attenuate background smells, likewise increasing signal-to-noise? Or whether this is imposed by behavioral modulation rather than centrifugal modulation of the olfactory bulb? See Verhagen et al., 2007.) I guess it is fair to say that editors are “big picture” people. With that said, perhaps there is some way the editors can facilitate the discourse in the comments sections that are now de rigueur.
