
What Science’s “Sting Operation” reveals – reblog

This is a re-blog of "What Science's 'Sting Operation' Reveals" by Kausik Datta in Scilogs.

What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?

4 October 2013 by Kausik Datta

The science-associated blogosphere and Twitterverse were abuzz today with the news of a Gotcha! story published in today's Science, the premier science publication from the American Association for the Advancement of Science. Reporter John Bohannon, working for Science, fabricated a completely fictitious research paper detailing the purported "anti-cancer properties of a substance extracted from a lichen" and, over the course of 10 months, submitted it under an assumed name to no fewer than 304 Open Access journals all over the world. He notes:

… it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

Nevertheless, 157 journals, out of the 255 that provided a decision to the author's nom de guerre, accepted the paper. As Bohannon indicates:

Acceptance was the norm, not the exception. The paper was accepted by journals hosted by industry titans Sage and Elsevier (Note: Bohannon also mentions Wolters Kluwer in the report). The paper was accepted by journals published by prestigious academic institutions such as Kobe University in Japan. It was accepted by scholarly society journals. It was even accepted by journals for which the paper’s topic was utterly inappropriate, such as the Journal of Experimental & Clinical Assisted Reproduction.

This operation, termed a 'sting' in Bohannon's story, ostensibly tested the weaknesses, especially the poor quality control, of the peer review exercised in the Open Access publishing process. Bohannon chose only those journals which adhered to the standard Open Access model, in which the author pays if the paper is published. When a journal accepted either the original version or a superficially revised one (retaining all the fatal flaws), Bohannon sent an email requesting to withdraw the paper, citing a 'serious flaw' in the experiment which 'invalidates the conclusion'. Bohannon notes that about 60% of the final decisions appeared to have been made with no sign of any peer review; that among the journals that did review the paper, the acceptance rate was 70%; that only about 12% of the reviews identified any scientific flaws; and that about half of the papers so flagged were nevertheless accepted at editorial discretion despite the bad reviews.
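To make those jumbled percentages concrete, here is a minimal back-of-the-envelope sketch (in Python, chosen for illustration since the post contains no code) that recomputes the headline rate from the counts quoted above. The 60%, 70%, and 12% figures are taken as stated in Bohannon's report rather than rederived, since the underlying denominators are not given here.

```python
# A back-of-the-envelope check of the acceptance figures quoted from
# Bohannon's report. Only the counts stated in this post are used.

submissions = 304  # Open Access journals that received the fake paper
decisions = 255    # journals that returned a final decision
accepted = 157     # journals that accepted the paper

overall_rate = accepted / decisions
print(f"Acceptance rate among journals that decided: {overall_rate:.0%}")
# -> Acceptance rate among journals that decided: 62%

# Per the report, roughly 60% of those decisions showed no sign of peer
# review, i.e. on the order of 0.60 * 255 ≈ 153 decisions made unreviewed.
no_review_estimate = round(0.60 * decisions)
print(f"Estimated decisions with no sign of review: ~{no_review_estimate}")
```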

As noted by some scientists and Open Access publishers like Hindawi, whose journals rejected the submission, the poor quality control evinced by this sting is not directly attributable to the Open Access model. A scientific journal that doesn't perform peer review, or does a shoddy job of it, is critically detrimental to the overall ethos of scientific publishing and actively undermines the process and credibility of scientific research and the communication of the observations thereof, regardless of whether the journal is Open Access or Pay-for-Play.

And that is one of the major criticisms of this report. Michael B. Eisen, UC Berkeley professor and co-founder of the Public Library of Science (PLoS; incidentally, the premier Open Access journal PLOS ONE was one of the few to flag the ethical flaws in, as well as reject, the submission), wrote in his blog today:

… it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than the study didn’t do the control of submitting the same paper to subscription-based publishers […] We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper […] Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity…

I agree. This report cannot support any kind of comparison between Open Access and subscription-based journals. The shock-and-horror comes only if one places Open Access journals, a priori, on a hallowed pedestal for no good reason. For me, one aspect of the deplorable picture revealed stood out in particular: are all Open Access journals created equal? The answer would seem to be an obvious 'No', especially given the outcome of this sting. But that raises the follow-up question: if this had indeed been a serious and genuine paper, would the author (in this case, Bohannon) have sought out obscure OA journals to publish it?

As I commented on Prof. Eisen's blog, rather than criticizing the Open Access model, the most obvious way to ameliorate this kind of situation seems to be to institute a measure of quality assessment for Open Access journals. I am not an expert in the publishing business, but surely some kind of reasonable and workable metric can be devised, in the same way Thomson Reuters did all those years ago with the Impact Factor for Pay-for-Play journals? Dr. Eva Amsen of the Faculty of 1000 (and an erstwhile blog colleague at Nature Blogs) pointed out in reply that a simple solution would be to apply quality control to peer review via an Open Peer Review process. She wrote:

… This same issue of Science features an interview with Vitek Tracz, about F1000Research’s open peer review system. We include all peer reviewer names and their comments with all papers, so you can see exactly who looked at a paper and what they said.

Prof. Eisen, a passionate proponent of the Open Access system and someone who has been trying for a long time to reform the scientific publishing industry from within, agrees that more than a "repudiation [of the Open Access model] for enabling fraud", what this report reveals is the disturbing lesson that the peer review system, as it currently exists, is broken. He wrote:

… the lesson people should take home from this story is not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. […] there has been a lot of smoke lately about the "reproducibility" problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn't work. […] There are deep problems with science publishing. But the way to fix this is not to curtail open access publishing. It is to fix peer review.

I couldn't agree more. Even those who swear by peer review must acknowledge that the peer review system, as it exists now, is not a magic wand that can separate the grain from the chaff with a simple touch. I mean, look at the thriving Elsevier journal Homeopathy, allegedly peer reviewed… Has peer review ever stemmed the bilge it churns out on a regular basis?

But the other question that really, really bothers me is more fundamental. As Bohannon notes, "about one-third of the journals targeted in this sting are based in India — overtly or as revealed by the location of editors and bank accounts — making it the world's largest base for open-access publishing; and among the India-based journals in my sample, 64 accepted the fatally flawed paper and only 15 rejected it."

Yikes! How and when did India become this haven for dubious, low-quality Open-Access publishing? (For context, see this interactive map of the sting.)
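Those two counts imply a strikingly high acceptance rate. A quick back-of-the-envelope computation, again a Python sketch using only the numbers quoted above:

```python
# Acceptance rate implied by the India-based counts quoted from Bohannon's
# report: 64 acceptances vs. 15 rejections among journals that decided.

accepted = 64
rejected = 15

rate = accepted / (accepted + rejected)
print(f"Acceptance rate among India-based journals that decided: {rate:.0%}")
# -> Acceptance rate among India-based journals that decided: 81%
```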
