What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?


The science-associated blogosphere and Twitterverse were abuzz today with the news of a Gotcha! story published in today's Science, the premier science publication from the American Association for the Advancement of Science. Reporter John Bohannon, working for Science, fabricated a completely fictitious research paper detailing the purported "anti-cancer properties of a substance extracted from a lichen" and, over the course of 10 months, submitted it under an assumed name to no fewer than 304 Open Access journals all over the world. He notes:

... it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

Nevertheless, 157 journals, out of the 255 that provided a decision to the author's nom de guerre, accepted the paper. As Bohannon indicates:

Acceptance was the norm, not the exception. The paper was accepted by journals hosted by industry titans Sage and Elsevier (Note: Bohannon also mentions Wolters Kluwer in the report). The paper was accepted by journals published by prestigious academic institutions such as Kobe University in Japan. It was accepted by scholarly society journals. It was even accepted by journals for which the paper's topic was utterly inappropriate, such as the Journal of Experimental & Clinical Assisted Reproduction.

This operation, termed a 'sting' in Bohannon's story, ostensibly tested the weaknesses of the peer-review system in Open Access publishing, especially the poor quality control it exercises. Bohannon chose only those journals which adhered to the standard Open Access model: the author pays if the paper is published. When a journal accepted either the original or a superficially revised version (retaining all the fatal flaws), Bohannon sent an email requesting to withdraw the paper, citing a 'serious flaw' in the experiment which 'invalidates the conclusion'. Bohannon notes that about 60% of the final decisions appeared to have been made with no apparent sign of any peer review; that the acceptance rate was 70% after review, with only 12% of reviews identifying any scientific flaws; and that about half of the papers flagged as flawed were nevertheless accepted by editorial discretion despite bad reviews.
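The headline figures are easy to verify from the counts given in the report. A minimal sketch (the raw counts are taken directly from Bohannon's article; the computed percentage is my own back-of-the-envelope check):

```python
# Counts quoted from Bohannon's Science report (October 2013).
submitted = 304   # journals that received the fake paper
decided = 255     # journals that returned a final decision
accepted = 157    # journals that accepted the paper

acceptance_rate = accepted / decided
print(f"Accepted by {accepted} of {decided} deciding journals "
      f"({acceptance_rate:.0%})")
# prints: Accepted by 157 of 255 deciding journals (62%)
```

In other words, among journals that actually reached a decision, acceptance of a deliberately meaningless paper was indeed "the norm, not the exception."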

As noted by some scientists and by Open Access publishers like Hindawi, whose journals rejected the submission, the poor quality control exposed by this sting is not directly attributable to the Open Access model. A scientific journal that doesn't perform peer review, or does a shoddy job of it, is critically detrimental to the overall ethos of scientific publishing, and it actively undermines the process and credibility of scientific research and the communication of its observations, regardless of whether the journal is Open Access or Pay-for-Play.

And that is one of the major criticisms of this report. Michael B. Eisen, UC Berkeley Professor and co-founder of the Public Library of Science (PLoS; incidentally, its premier Open Access journal PLOS ONE was one of the few to flag the ethical flaws in the submission, as well as reject it), wrote in his blog today:

... it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than the study didn’t do the control of submitting the same paper to subscription-based publishers [...] We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper [...] Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity...

I agree. This report cannot support any kind of comparison between Open Access and subscription-based journals. The shock-and-horror comes only if one places Open Access journals a priori on a hallowed pedestal for no good reason. For me, one aspect of the deplorable picture revealed stood out in particular, the question: Are all Open Access journals created equal? The answer would seem to be an obvious 'No', especially given the outcome of this sting. But that raises the follow-up question: if this had indeed been a serious and genuine paper, would the author (in this case, Bohannon) have sought out obscure OA journals to publish it?

As I commented on Prof. Eisen's blog, rather than criticizing the Open Access model, the most obvious way to ameliorate this kind of situation seems to be to institute a measure of quality assessment for Open Access journals. I am not an expert in the publishing business, but surely some kind of reasonable and workable metric can be devised, in the same way Thomson Reuters did all those years ago for Pay-for-Play journals? Dr. Eva Amsen of the Faculty of 1000 (and an erstwhile blog colleague at Nature Blogs) pointed out in reply that a simple solution would be to quality-control peer review itself via an Open Peer Review process. She wrote:

... This same issue of Science features an interview with Vitek Tracz, about F1000Research’s open peer review system. We include all peer reviewer names and their comments with all papers, so you can see exactly who looked at a paper and what they said.

Prof. Eisen, a passionate proponent of the Open Access system and someone who has long been trying to reform the scientific publishing industry from within, agrees that more than a "repudiation [of the Open Access model] for enabling fraud", what this report reveals is the disturbing lesson that the Peer Review system, as it currently exists, is broken. He wrote:

... the lesson people should take home from this story not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. [...] there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn’t work. [...] There are deep problems with science publishing. But the way to fix this is not to curtain open access publishing. It is to fix peer review.

I couldn't agree more. Even those who swear by peer review must acknowledge that the system, as it exists now, is not a magic wand that can separate the wheat from the chaff at a simple touch. I mean, look at the thriving Elsevier journal Homeopathy, allegedly peer reviewed... Has that ever stemmed the bilge it churns out on a regular basis?

But the other question that really, really bothers me is more fundamental. As Bohannon notes, "about one-third of the journals targeted in this sting are based in India — overtly or as revealed by the location of editors and bank accounts — making it the world's largest base for open-access publishing; and among the India-based journals in my sample, 64 accepted the fatally flawed paper and only 15 rejected it."

Yikes! How and when did India become this haven for dubious, low-quality Open Access publishing? (For context, see this interactive map of the sting.)

[Image: one hemisphere of the globe from Science's interactive map of the sting, used for illustrative purposes only. RED: fake paper ACCEPTED by a journal; GREEN: fake paper REJECTED. Image and image content: ©Science.]


6 Responses to “What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?”

  1. Stevan Harnad

    PRE-GREEN FEE-BASED FOOL'S GOLD VS. POST-GREEN NO-FAULT FAIR GOLD

    To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

    But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: fee-based OA publishing (fee-based "Gold OA") is premature, as are plans by universities and research funders to pay its costs:

    Funds are short and 80% of journals (including virtually all the top, "must-have" journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

    What is needed now is for universities and funders to mandate OA self-archiving (of authors' final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online ("Green OA").

    That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

    The natural way to charge for the service of peer review then will be on a "no-fault basis," with the author's institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

    That post-Green, no-fault Gold will be Fair Gold. Today's pre-Green (fee-based) Gold is Fool's Gold.

    None of this applies to no-fee Gold.

    Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., that do not charge the author for publication but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume -- without testing it -- that non-OA journals would have come out unscathed had they been included in the sting.

    But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

    Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record -- and without paying an extra penny.

    But the most important message is that OA is not identical with Gold OA (fee-based or not); hence conclusions about the peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA -- which, with Green OA, are identical to those of non-OA.

    For some peer-review stings of non-OA journals, see below:

    Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

    Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

    Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (Ed.) (2004), Peer Review: A Critical Inquiry (pp. 235-242). Rowman & Littlefield.

  2. Porelbiencomun

    You ask, "How and when did India become this haven for dubious, low quality Open-Access publishing?" This reminds me of an incident in February of this year when I was making some minor edits to the article titled "Buddhism and psychology" on Wikipedia. I found an article in the Indian Journal of Psychiatry, an India-based psychiatric journal that is indexed in PubMed, that appeared to be largely based on the aforementioned Wikipedia article but did not cite the Wikipedia article as would be expected under Wikipedia's Creative Commons Attribution-ShareAlike License. (The citation of the published article is: Aich, Tapas Kumar (2013). "Buddha philosophy and western psychology". Indian Journal of Psychiatry 55 (6): 165–170. doi:10.4103/0019-5545.105517.) I emailed the editor of the Indian Journal of Psychiatry and I mentioned that this article published in their journal was basically adapted from a Wikipedia article and that the Wikipedia article should be cited. But no change has been made to the published article. I see this as an example of dubious scholarly publishing practices in India. My notes on this incident can be found on the Talk page for the aforementioned article on Wikipedia: http://en.wikipedia.org/wiki/Talk:Buddhism_and_psychology#New_article_in_the_Indian_Journal_of_Psychiatry_based_on_this_Wikipedia_article.3F

  3. Martin Haspelmath

    So suddenly "peer review is broken"? Why would it ever have worked? I think the solution to the problem is obvious: scientific publication should not only be open access, but "Platinum OA", i.e. the author pays no fee, and there is no subscription either. The costs are covered by scientific institutions that are motivated to increase their reputation and are willing to pay for it. If you pay for your reputation, then you have a strong incentive to publish good work, and this will lead to good peer review. If you publish to make money, by contrast, there's primarily an incentive to publish more, rather than to publish better work. I wrote about this recently in a Frontiers journal: http://www.frontiersin.org/Behavioral_Neuroscience/10.3389/fnbeh.2013.00057/full
