Peer review favors vanilla

Mar 01 2011 · Filed under Forget What You've Read!

In high school, I had a history teacher who would berate any student who gave a ‘vanilla’ response in class – code for a ‘middle of the road’ or ‘safe to all ears’ answer.  As Mr. Toy explained, while vanilla is the most consumed ice-cream flavor in the western world, and by that measure the most ‘popular,’ it isn’t the best-loved flavor – people feel much more strongly about chocolate, butter pecan, and strawberry – it’s simply the one that’s most passable to the greatest number of people.

Peer review can, at times, reward papers for being merely ‘vanilla,’ particularly at the high-impact commercial journals.  Last year, we had a paper at one of the top journals rejected on a 4:2 split (four in favor, two against – the latter added in the 2nd and 3rd rounds).  We've since had 1:1 (reject), 1:2 (reject), 2:1 (reject).  It's never going to be the case that there isn't someone out there who doesn't hate our work.*  We take a very definite theoretical stand, which means we elicit bimodal responses at every single journal we try to publish in, no matter the impact factor.  Does that mean that our work is somehow less worthy of publication than papers that receive a chorus of middling votes?

[Oh, you already know what I think!]

But the issue isn't personal - we're certainly not the only lab to face this problem.  The question is: Is science supposed to be controversy-free?  Or, dare I say, vanilla?  Are commercial journals really just out to preserve the status quo?

*Um, did that triple-negation work?  I've tried rereading it three times and I'm still not sure.  Late night.  At least Charlie Sheen would approve.  ("You can't process me with a normal brain.")

See also: Graphene.

19 responses so far

  • A. Marina Fournier says:

    Vanilla didn't actually come to the States until Jefferson IIRC managed to bring it over from France, who had imported it from Mexico. Before its advent, lemon was the most popular ice/ice cream flavor in Europe and the States. I seem to recall that chocolate in the form we enjoy now wasn't yet available--the chocolate upper-class ladies drank before their tea in the morning was rather bitter still. Vanilla helped tame it some...

    Aside from your papers, which I am SURE deserve publication, there's vanilla and there's VANILLA!!! Once I had Tahitian vanilla, I haven't gone back to mere supermarket fare.

    It would be a terrible thing if science were controversy-free: however would you stumble upon something more effective, or newer, or (I ran out of ideas) if debate didn't occur and make someone think they could either do a better job or prove the opposite (whether or not that would be possible, for any given controversy)? In debate and dissent, ideas flourish or evolve.

    We've already seen, in the last administration, how some would like to eliminate controversy from scientific agencies, by censorship. Endangered polar bears, anyone?

  • Matt Hall says:

    Interesting post. I guess your perspective depends on whether you want the peer-review process to admit not only unusual flavours (like bacon and egg - one of Heston Blumenthal's) but also probably-horrible ones (bile and eggshell).

    Is a possible solution for you to eschew traditional outlets and seek an open platform (such as arXiv or PLoS ONE)? You would still be subjected to peer review, of course, but perhaps by peers with a less traditional outlook. It's hard to say without knowing what sort of objections reviewers had to your work - maybe they were right to reject it!

    I like the idea that we can publish our own work today, and have it peer-reviewed in public. For privateers like me, there's no need to measure impact if I have readers, but I know it's different for academics; altmetrics.org FTW!

    /matt

    • melodye says:

      In my case, I think it's because doing language work in psychology is unfortunately politicized. Comparatively little language research gets published in commercial journals for this reason; it's not particular to my lab. My lab does face a peculiar problem though, in that we publish work on developmental subjects in psychology but use learning models (from the animal learning literature) and response conflict models (from neuroscience) that many developmentalists aren't familiar with. So our problem is one part political, two parts lack of expertise on the part of reviewers.

      I've actually written a series (1, 2, 3) on some of the problems I see with peer review in psychology; this was just a bit of cheeky fun 🙂 I'm a big fan of Axel Boldt's essay "Extending Arxiv.org to Achieve Open Peer Review and Publishing."

      • Matt Hall says:

        Thanks for the great link - I hadn't seen that article. I love the idea of reviews being public. I always give my name and contact details when I review papers, and I try to give feedback that I wouldn't mind anyone else reading. But what one person loves the next will hate, so if it was the only way we'd lose a lot of great reviewers. And I wouldn't want to see review become some sort of open-air blog discussion... 😉

  • Vanilla is also an excellent base flavour on which to layer other toppings - it complements, but doesn't overpower.

    I'm sure there's an analogy in there somewhere...

  • Mark says:

    The triple-negation worked perfectly for me. I raced right through it the first time and got exactly what you meant. And when I reread it twice slowly it made perfect sense.

    Now does that make it right? ::shrug:: Prescriptivists be damned! What matters is that it makes sense.

  • drugmonkey says:

    Absent more detail it is hard to know if there is a "problem" or not. Perhaps you are shooting too high or targeting the wrong journals.

    Sure, if you are in a field in which there is vigorous debate over theoretical approach, methods or other critical aspects, then you are going to have to fight to publish. That's good and you should take pride that you are in a field that makes you advance your best work each and every time.

    There are a *lot* of journals out there. Sometimes you have to suck up a pretty dismal impact factor just to start laying down your arguments in the peer reviewed literature. Sometimes it is necessary to build toward your high impact, high influence, ultimate eleventy publication in incremental steps. Once your ideas, methods or whatever have started to gain traction, it becomes harder to dismiss your next manuscript on several key grounds. This can bring the editors around to your side over time...

    If you can't get published *anywhere*, well....

    • melodye says:

      Absolutely, good advice. This is what I've been tending towards more and more. It's certainly not the case that we can't publish anywhere - we do publish in fantastic journals, it just takes a lot of time and effort to publish. I've been thinking we should hold out on the commercial journals until we have a longer history of publishing our learning models elsewhere (at the moment, it's still very innovative - which is part of the appeal, but also part of the problem).

  • Daniel says:

    My experience so far is that reviewers give positive comments but the editor rejects the paper for being "vanilla"... Could it be that a little butter pecan or rum raisin /my personal fav/ sometimes attracts the editor precisely because of the polarized review comments?

  • Pascale says:

    Strawberry and chocolate are OK, but Imagine Whirled Peace and Cherry Garcia get labeled "too speculative" in the discussion, even though it's what any sensible person is going to think about when they read the damn paper!

    I love vanilla, but I always want it topped with a bit of hot fudge to keep things interesting.

  • Jon Brock says:

    To be honest, I think it's more a case that different ideas go in and out of fashion. Flavour of the month if you like. If you're submitting something that's fashionable then you (a) get favourable reviews because a majority of reviewers are likely to agree with you; and (b) get favourable decisions from editors because your work appears cutting edge (even though everyone else is doing the same thing). If you're doing truly ground-breaking work then by definition it will be unfashionable.

    That said, I do think that the whole peer review process would be improved massively if journals introduced a rejoinder system. So the journal administrator would collect the reviews, send them to the author, who gets say 500 words to address them. If all the reviews are negative or all positive then it won't make a difference and will add at most 2 minutes to the editor's time. But if you get one reviewer who completely misses the point or makes basic factual errors (which happens to us all the time) then the editor can be made aware of that. This would potentially save the article having to be resubmitted and re-reviewed elsewhere (possibly with the same reviewer making the same errors) and wasting everyone's time. Reviewers might get an education (as a reviewer, I hate the thought that I might get things badly wrong, and if the editor doesn't allow resubmission then you don't get any feedback). And it might go some way to addressing your vanilla problem.

    • melodye says:

      So - we have gotten papers re-reviewed at top journals under appeal. The problem is, it's not clear to me that editors at commercial journals know how to arbitrate these disputes. At the top journals, there are literally no experts in the sub-field we're working in, and it becomes a he-said vs she-said, which is never good. At one journal, we got one reviewer thrown out, only to have the journal solicit one of the reviewer's colleagues as a replacement! Kind of disastrous.

      In theory though, I agree that the rejoinder system is a good one. I just wish there were a way to make it work better...

  • AK says:

    I'm unhappy with the scientific establishment for many reasons, and this is one of them. IMO a reviewer who downgrades a paper because it doesn't agree with their opinion is behaving either dishonestly or unscientifically. Granted, it might get a more rigorous review of its references and conclusions, but if the i's are dotted and the t's crossed, no reviewer should reject it because they disagree with the conclusions or paradigm.

    • @AK Of course, reviewers are harder on papers that go against the grain of what the literature (in the reviewer's opinion) says. And nobody disputes that this should be true. Pretty much everyone agrees you should be harder on a paper giving evidence of ESP because ESP is so unlikely to actually be true (based on what else we know).

      But that's fine, because the truth is that getting a paper published (despite how much we all complain about it) is relatively easy. Getting the paper *read* is hard. Most papers are never cited even once. If you can't convince 2-4 reviewers that your paper is theoretically interesting and empirically strong, what makes you think anyone will care about it once it's published?

      Just sayin'

      Let's also point out that some papers get bad reviews because they're bad papers. But almost no author, on getting such a review, concludes that the paper was a bad paper. They inevitably conclude it was a bad review. So evidence that there are people who think they're getting bad reviews is not evidence that there are, in fact, bad reviews. I'm not sure what such evidence would look like, but we don't have it.

      • dan says:

        @gameswithwords: actually, evidence of bad reviews is easy to obtain, if one uses formal methods in one's work.

        if one uses a formal method that makes a formal prediction, and if the reviewers are not formally competent such that they make comments or draw conclusions that reveal their formal incomprehension of the formal methods, then one soon has plenty of formal evidence of incompetent reviews.

        since most psychologists have no formal training in anything that even resembles engineering, such as, for example, computation, and since they seem to feel that a little math and interpretive dance will see them through anyway, it happens all the time.

        • Interesting, I've always heard the opposite: it's really easy to get a computational paper through the review system precisely because most reviewers can't really evaluate it and will give the benefit of the doubt.

          In any case, the incredible success enjoyed by a lot of the computational folk (Tenenbaum, Goodman and Jaeger are just the first three to come to mind) would suggest that employing formal methods does not necessarily doom one's research program.

  • Avery Andrews says:

    Since it is fairly unlikely that six reviewers will agree that anything is good, this might just be the easiest way for the editors of these journals to look selective in a measurable way without doing any real work (I'm horrible at making editorial-type judgement decisions, and this looks like an easy and 'objective' way to avoid doing it).

  • Alex says:

    Never get involved in a land war in Asia. Or in peer review with true believers.
