What Science’s “Sting Operation” reveals – reblog

This is a re-blog of “What Science’s “Sting Operation” Reveals” by Kausik Datta in Scilogs.

What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?

4 October 2013 by Kausik Datta,

The science-associated blogosphere and Twitterverse were abuzz today with the news of a Gotcha! story published in today’s Science, the premier science publication from the American Association for the Advancement of Science. Reporter John Bohannon, working for Science, fabricated a completely fictitious research paper detailing the purported “anti-cancer properties of a substance extracted from a lichen”, and submitted it under an assumed name to no fewer than 304 Open Access journals all over the world, over the course of 10 months. He notes:

… it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

Nevertheless, 157 journals, out of the 255 that provided a decision to the author’s nom de guerre, accepted the paper. As Bohannon indicates:

Acceptance was the norm, not the exception. The paper was accepted by journals hosted by industry titans Sage and Elsevier (Note: Bohannon also mentions Wolters Kluwer in the report). The paper was accepted by journals published by prestigious academic institutions such as Kobe University in Japan. It was accepted by scholarly society journals. It was even accepted by journals for which the paper’s topic was utterly inappropriate, such as the Journal of Experimental & Clinical Assisted Reproduction.

This operation, termed a ‘sting’ in Bohannon’s story, ostensibly tested the weaknesses, especially the poor quality control, of the peer review system in the Open Access publishing process. Bohannon chose only those journals which adhered to the standard Open Access model, in which the author pays if the paper is published. When a journal accepted either the original or a superficially revised version (one that retained all the fatal flaws), Bohannon sent an email requesting to withdraw the paper, citing a ‘serious flaw’ in the experiment which ‘invalidates the conclusion’. Bohannon notes that about 60% of the final decisions appeared to have been made with no apparent sign of any peer review; that the acceptance rate was 70% after review; and that only 12% of the reviews identified any scientific flaws, with about half of those papers nevertheless accepted by editorial discretion despite bad reviews.

As noted by some scientists and by Open Access publishers like Hindawi, whose journals rejected the submission, the poor quality control evinced by this sting is not directly attributable to the Open Access model. A scientific journal that doesn’t perform peer review, or does a shoddy job of it, is critically detrimental to the overall ethos of scientific publishing; it actively undermines the process and credibility of scientific research and the communication of its observations, regardless of whether the journal is Open Access or Pay-for-Play.

And that is one of the major criticisms of this report. Wrote Michael B. Eisen, UC Berkeley Professor and co-founder of the Public Library of Science (PLoS; incidentally, the premier Open Access journal PLOS One was one of the few to flag the ethical flaws in, as well as reject, the submission), in his blog today:

… it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than the study didn’t do the control of submitting the same paper to subscription-based publishers […] We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper […] Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity…

I agree. This report cannot support any kind of comparison between Open Access and subscription-based journals. The shock-and-horror comes only if one places Open Access journals a priori on a hallowed pedestal for no good reason. For me, one aspect of the deplorable picture revealed stood out in particular – the question: Are all Open Access journals created equal? The answer would seem to be an obvious ‘No’, especially given the outcome of this sting. But that raises the follow-up question: if this had indeed been a serious and genuine paper, would the author (in this case, Bohannon) have sought out obscure OA journals to publish it?

As I commented on Prof. Eisen’s blog, rather than criticizing the Open Access model, the most obvious solution to ameliorate this kind of situation seems to be to institute a measure of quality assessment for Open Access journals. I am not an expert in the publishing business, but surely some kind of reasonable and workable metric can be worked out in the same way Thomson Reuters did all those years ago for Pay-for-Play journals? Dr. Eva Amsen of the Faculty of 1000 (and an erstwhile blog colleague at Nature Blogs) pointed out in reply that a simple solution would be to quality control for peer review via an Open Peer Review process. She wrote:

… This same issue of Science features an interview with Vitek Tracz, about F1000Research’s open peer review system. We include all peer reviewer names and their comments with all papers, so you can see exactly who looked at a paper and what they said.

Prof. Eisen, a passionate proponent of the Open Access system and someone who has been trying for a long time to reform the scientific publishing industry from within, agrees that more than a “repudiation [of the Open Access model] for enabling fraud”, what this report reveals is the disturbing lesson that the peer review system, as it currently exists, is broken. He wrote:

… the lesson people should take home from this story not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. […] there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn’t work. […] There are deep problems with science publishing. But the way to fix this is not to curtail open access publishing. It is to fix peer review.

I couldn’t agree more. Even those who swear by peer review must acknowledge that the peer review system, as it exists now, is not a magic wand that can separate the grain from the chaff with a simple touch. I mean, look at the thriving Elsevier journal Homeopathy, allegedly peer reviewed… Has that ever stemmed the bilge it churns out on a regular basis?

But the other question that really, really bothers me is more fundamental. As Bohannon notes, “about one-third of the journals targeted in this sting are based in India — overtly or as revealed by the location of editors and bank accounts — making it the world’s largest base for open-access publishing; and among the India-based journals in my sample, 64 accepted the fatally flawed paper and only 15 rejected it.”

Yikes! How and when did India become this haven for dubious, low quality Open-Access publishing? (For the context, see this interactive map of the sting.)

Researchers of Tomorrow

Image: British Library and St Pancras station with Euston Road on the right, London.

It appears that few doctoral students explore new technologies in their research or understand the range of information available to them, according to a report commissioned by the British Library and JISC (a body for technology in higher education in the United Kingdom). The report can be viewed here.

“Researchers of Tomorrow”, published on 28 June, surveyed more than 17,000 doctoral students (in the United Kingdom) over a three-year period, following 60 of them in depth and focusing in particular on those born between 1982 and 1994, the so-called Generation Y.

The report states that despite being technologically savvy, Generation Y doctoral students know very little about the variety and authenticity of research information available in new formats, such as online databases, electronic journals and repositories, and few know how to access this information.

They also have little understanding of open access and copyright. Many believe their supervisors would not approve of citing open access documents, and only 26 per cent know that funders and foundations are beginning to expect open access to the research they support.

Julie Carpenter, one of the report’s co-authors and director of the consultancy Education for Change, says the findings suggest that doctoral students have been neglected and have experienced a sense of isolation.

Institutional support – in terms of library provision, information about the research environment, and training – is not working, and there needs to be a “paradigm shift” in the way the sector supports and engages with doctoral students, she said.

“There is a disconnect between strategic organisations such as JISC, [which] have been keen to say that these wonderful tools should be used, promoting sharing and moving research into the electronic age, and what happens within the institutions themselves,” Carpenter added.

Risk aversion

This is reflected in another of the study’s findings: although Generation Y students use some online tools such as bookmarks and RSS, very few employ collaborative technologies such as wikis, blogs and Twitter in their research, despite using these tools in their personal lives.

Debbie McVitty, postgraduate research and policy representative at the National Union of Students (UK) and a member of the study’s advisory group, attributes the risk aversion partly to the pressure on doctoral students to complete their studies rather than to produce good research.

“The people who will be early adopters [of technologies] are probably those, such as professors, who are more established in their positions and can afford the luxury of being more experimental,” she said.

“Getting an academic job can be quite difficult – so you don’t want to take any risks.”

Alongside library staff and university administrators, supervisors need to play a better role in informing students, with support tailored to their fields of study, said McVitty.

The report also found a “surprising dependence” among doctoral students on other people’s conclusions rather than on original sources.

According to the survey, in four out of five cases doctoral students seek out published books and papers when searching for information to support their research, rather than “primary” material such as samples, archives and databases.

Students should also be collecting data and doing original research in addition to exploring these secondary sources, Carpenter commented, but this finding may identify a trend that, if verified, would have “very serious consequences”.