
Sunday, 7 June 2015

My collapse of confidence in Frontiers journals



Frontiers journals have become a conspicuous presence in academic publishing since they started in 2007 with the advent of Frontiers in Neuroscience. When they were first launched, I, like many people, was suspicious. This was an Open Access (OA) online journal where authors paid to publish, raising questions about the academic rigour of the process. However, it was clear that the publishers had a number of innovative ideas that were attractive to authors, with a nice online interface and a collaborative review process that made engagement with reviewers more of a discussion than a battle with anonymous critics. Like many other online OA journals, the editorial decision to publish was based purely on an objective appraisal of the soundness of the study, not on a subjective evaluation of importance, novelty or interest. As word got round that respectable scientists were acting as editors, reviewers and authors of papers in Frontiers, people started to view it as a good way of achieving fast and relatively painless publication, with all the benefits of having the work openly available and accessible to all.
The publishing model has been highly successful. In 2007, there were 45 papers published in Frontiers in Neuroscience, whereas in 2014 it was 3,012 (data from Scopus search for source title Frontiers in Neuroscience, which includes Frontiers journals in Human Neuroscience, Cellular Neuroscience, Molecular Neuroscience, Behavioral Neuroscience, Systems Neuroscience, Integrative Neuroscience, Synaptic Neuroscience, Aging Neuroscience, Evolutionary Neuroscience and Computational Neuroscience). If all papers attracted the author fee of US$1900 (£1243) for a regular article, this would bring in £3.7 million in 2014: the actual income would be less than this because some articles are cheaper, but it's clear that the income is in any case substantial, especially since the journal is online and there are no print costs. But this is just the tip of the iceberg. Frontiers has expanded massively since 2007 to include a wide range of disciplines. A Scopus search for articles with journal title that includes "Frontiers in" found over 54,000 articles since 2006, with 10,555 published in 2014.
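The back-of-envelope estimate above can be reproduced directly from the figures quoted in the text (a sketch using only the post's own numbers; the exchange rate is simply the one implied by US$1900 = £1243):

```python
# Upper-bound estimate of 2014 fee income for the Frontiers in Neuroscience
# family, assuming every paper paid the full regular-article fee.
papers_2014 = 3012   # Scopus count quoted in the post
fee_gbp = 1243       # regular-article fee of US$1900, quoted as £1243

upper_bound_gbp = papers_2014 * fee_gbp
print(f"£{upper_bound_gbp:,} (~£{upper_bound_gbp / 1e6:.1f} million)")
# -> £3,743,916 (~£3.7 million)
```

As the post notes, this is an upper bound, since some article types carry lower fees.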
With success, however, have come growing rumbles of discontent. Questions are being raised about the quality of editing and reviewing in Frontiers. My first inkling of this came when a colleague told me he would not review for Frontiers because his name was published with the article. This wasn't because he wanted confidentiality; rather he was concerned that it would appear he had given approval for the article, when in fact he had major reservations.
Then, there have been some very public criticisms of editorial practices at Frontiers. The first was associated with the retraction of a paper that claimed climate denialism was associated with a more general tendency to advocate conspiracy theories. Papers on this subject are always controversial and this one was no exception, attracting complaints to the editor. The overall impression from the account in Retraction Watch was that the editor caved in to legal threats, thereby letting climate change deniers muzzle academic freedom of speech. This led to the resignation of one Frontiers editor**.
Next, there was a case that posed the opposite problem: the scientific establishment were outraged that a paper on HIV denial had been published, and argued that it should be retracted. The journal editor decided that the paper should not be retracted, but instead rebranded it as Opinion – see Retraction Watch account here.
Most recently, in May 2015 there was a massive upset when editors of the journals Frontiers in Medicine and Frontiers in Cardiovascular Medicine mounted a protest at the way the publisher was bypassing their editorial oversight, allocating papers to associate editors who could accept them without the knowledge of the editor-in-chief. They published a manifesto of editorial independence, and 31 of them were subsequently sacked by the publisher.
All of these events have chipped away at my confidence in Frontiers journals, but it collapsed completely when someone on Twitter pointed me to this article entitled "First time description of dismantling phenomenon" by Laurence Barrer and Guy Giminez from Aix Marseille Université, France. I had not realised that Frontiers in Psychology had a subsection on Psychoanalysis and Neuropsychoanalysis, but indeed it does, and here was a paper proposing a psychoanalytic account of autism. The abstract states: "The authors of this paper want to demonstrate that dismantling is the main defense mechanism in autism, bringing about de-consensus of senses." Although the authors claim to be adopting a scientific method for testing a hypothesis, it is unclear what would constitute disproof. Their evidence consists of interpreting known autistic characteristics, such as fascination with light, in psychoanalytic terms. The source of dismantling is attributed to the death drive. This reads like the worst kind of pseudoscience, with fancy terminology and concepts being used to provide evidence for a point of view which is more like a religious belief than a testable idea. I wondered who was responsible for accepting this paper. The Editor was Valeria Vianello Dri, Head of Child and Adolescent Neuropsychiatry Units in Trento, Italy. No information on her biography is provided on the Frontiers website. She lists four publications: these are all on autism genetics. All are multi-authored and she is not first or last author on any of these*. A Google search confirmed she has an interest in psychoanalysis, but I could find no further information to indicate that she had any real experience of publishing scientific papers. There were three reviewers: the first two had no publications listed on their Frontiers profiles; the third had a private profile, but a Google search on his name turned up a CV, which did not include any peer-reviewed publications.
So it seems that Frontiers has opened the door for a branch of pseudoscience to set up its own little circle of editors, reviewers and authors, who can play at publishing peer-reviewed science. I'm not saying all people with an interest in psychoanalysis should be banished: if they do proper science, they can publish that in regular journals without needing this kind of specialist outlet. But this section of Frontiers is a disastrous development; there is no evidence of scientific rigour, yet the journal gives credibility to a pernicious movement that is particularly strong in France and Argentina, which regards psychoanalysis as the preferred treatment for autism. Many experts have pointed out that this approach is not evidence-based, but worse still, in some of its manifestations it amounts to maltreatment. What next, one wonders? Frontiers in Homeopathy?
Like the protesting editors of Frontiers in Medicine, I think the combined evidence indicates that Frontiers has allowed the profit motive to dominate. They should be warned, however, that once they lose a reputation for publishing decent science, they are doomed. I've already heard of a member of a grants review panel commenting that a candidate's articles in Frontiers should be disregarded. Unless these journals can recover a reputation for solid science with proper editing and peer review, they will find themselves shunned.


*The Frontiers biography suggests she is last author on a paper in 2008, but the author list proved to be incomplete.
** Correction: Shortly after I posted this, Stephan Lewandowsky wrote to say that there were 3 editors who resigned over the RF retraction, plus another who voiced intense criticism.

Sunday, 17 May 2015

Will traditional science journals disappear?


The Royal Society has been celebrating the 350th anniversary of Philosophical Transactions, the world's first scientific journal, by holding a series of meetings on the future of scholarly scientific publishing. I followed the whole event on social media, and was able to attend in person for one day. One of the sessions followed a Dragon's Den format, with speakers having 100 seconds to convince three dragons – Onora O'Neill, Ben Goldacre and Anita de Waard – of the fund-worthiness of a new idea for science communication. Most were light-hearted, and there was a general mood of merriment, but the session got me thinking about what kind of future I would like to see. What I came up with was radically different from our current publishing model.

Most of the components of my dream system are not new, but I've combined them into a format that I think could work. The overall idea had its origins in a blogpost I wrote in 2011, and has points in common with David Colquhoun's submission to the dragons, in that it would adopt a web-based platform run by scientists themselves. This is what already happens with the arXiv for the physical sciences and bioRxiv for biological sciences. However, my 'consensual communication' model has some important differences. Here are the steps I envisage an author going through:
1. An initial protocol is uploaded before a study is done, consisting only of an introduction, a detailed methods section and an analysis plan, with the authors anonymised. An editor then assigns reviewers to evaluate it. This aspect of the model draws on features of registered reports, as implemented in the neuroscience journal, Cortex. There are two key scientific advantages to this approach: first, reviewers are able to improve the research design, rather than criticise studies after they have been done. Second, there is a record of what the research plan was, which can then be compared to what was actually done. This does not confine the researcher to the plan, but it does make transparent the difference between planned and exploratory analyses.
2. The authors get a chance to revise the protocol in response to the reviews, and the editor judges whether the study is of an adequate standard, and if necessary solicits another round of review. When there is agreement that the study is as good as it can get, the protocol is posted as a preprint on the web, together with the non-anonymised peer reviews. At this point the identity of authors is revealed.
3. There are then two optional extra stages that could be incorporated:
a) The researcher can solicit collaborators for the study. This addresses two issues raised at the Royal Society meeting – first, many studies are underpowered; duplicating a study across several centres could help in cases where there are logistic problems in getting adequate sample sizes to give a clear answer to a research question. Second, collaborative working generally enhances reproducibility of findings.
b)  It would make sense for funding, if required, to be solicited at this point – in contrast to the current system where funders evaluate proposals that are often only sketchily described. Although funders currently review grant proposals, there is seldom any opportunity to incorporate their feedback – indeed, very often a single critical comment can kill a proposal.
4. The study is then completed, written up in full, and reviewed by the editor. Provided the authors have followed the protocol, no further review is required. The final version is deposited with the original preprint, together with the data, materials and analysis scripts.
5. Post-publication discussion of the study is then encouraged by enabling comments.
What might a panel of dragons make of this? I anticipate several questions.
Who would pay for it? Well, if arXiv is anything to go by, costs of this kind of operation are modest compared with conventional publishing. They would consist of maintaining the web-based platform, and covering the costs of editors. The open access journal PeerJ has developed an efficient e-publishing operation and charges $99 per author per submission. I anticipate a similar charge to authors would be sufficient to cover costs.
Wouldn't this give an incentive to researchers to submit poorly thought-through studies? There are two answers to that. First, half of the publication charge to authors would be required at the point of initial submission. Although this would not be large (e.g. £50) it should be high enough to deter frivolous or careless submissions. Second, because the complete trail of a submission, from pre-print to final report, would be public, there would be an incentive to preserve a reputation for competence by not submitting sloppy work.
Who would agree to be a reviewer under such a model? Why would anyone want to put their skills into improving someone else's work for no reward? I propose there could be several incentives for reviewers. First, it would be more rewarding to provide comments that improve the science, rather than just criticising what has already been done. Second, as a more concrete reward, reviewers could have submission fees waived for their own papers. Third, reviews would be public and non-anonymised, and so the reviewer's contribution to a study would be apparent. Finally, and most radically, where the editor judges that a reviewer has made a substantial intellectual contribution to a study, they could have the option of having this recognised in authorship.
Why would anyone who wasn't a troll want to comment post-publication? We can get some insights into how to optimise comments from the model of the NIH-funded platform PubMed Commons. They do not allow anonymous comments, and require that commenters have themselves authored a paper that is listed on PubMed. Commenters could also be offered incentives such as a reduction of submission costs to the platform. To this one could add ideas from commercial platforms such as eBay, where sellers are rated by customers, so you can evaluate their reputation. It should be possible to devise some kind of star rating – both for the paper being commented on, and for the person making the comment. This could provide motivation for good commenters and make it easier to identify the high quality papers and comments.
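The mechanics of such a rating system could be very simple. As a minimal sketch (all names here are hypothetical, and nothing in it reflects an existing platform's API), papers and commenters could share one reputation store, with a star average computed per identity:

```python
from collections import defaultdict

class RatingBook:
    """Toy star-rating store: readers rate papers and commenters 1-5."""

    def __init__(self):
        # target id (e.g. "paper:123" or "commenter:alice") -> list of ratings
        self.scores = defaultdict(list)

    def rate(self, target_id, stars):
        if not 1 <= stars <= 5:
            raise ValueError("stars must be between 1 and 5")
        self.scores[target_id].append(stars)

    def reputation(self, target_id):
        # Mean star rating, or None if the target has never been rated
        votes = self.scores[target_id]
        return sum(votes) / len(votes) if votes else None

book = RatingBook()
book.rate("paper:123", 5)
book.rate("paper:123", 3)
book.rate("commenter:alice", 4)
print(book.reputation("paper:123"))  # -> 4.0
```

A real deployment would of course need safeguards the sketch ignores – verified identities, one vote per rater, and weighting by the rater's own reputation – but the core bookkeeping is no more complex than this.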
I'm sure that any dragon from the publishing world would swallow me up in flames for these suggestions, as I am in effect suggesting a model that would take commercial publishers out of the loop. However, it seems worth serious consideration, given the enormous sums that could be saved by universities and funders by going it alone.  But the benefits would not just be financial; I think we could greatly improve science by changing the point in the research process when reviewer input occurs, and by fostering a more open and collaborative style of publishing.


This article was first published on the Guardian Science Headquarters blog on 12 May 2015

Monday, 20 April 2015

How long does a scientific paper need to be?





©CartoonStock.com

There was an interesting exchange last week on PubMed Commons between Maurice Smith, senior author of a paper on motor learning, and Bjorn Brembs, a neurobiologist at the University of Regensburg. The main thrust of Brembs' critique was that the paper, which was presented as surprising, novel and original, failed adequately to cite the prior literature. I was impressed that Smith engaged seriously with the criticism, writing a reasoned defence of the choice of material in the literature review, and noting that the claims of over-hyped statements were based on selective citation. What really caught my attention was the following statement in his rebuttal: "We can reassure the reader that it was very painful to cut down the discussion, introduction, and citations to conform to Nature Neuroscience’s strict and rather arbitrary limits. We would personally be in favor of expanding these limits, or doing away with them entirely, but this is not our choice to make."
As it happens, this comment really struck home with me, as I had been internally grumbling about this very issue after a weekend of serious reading of background papers for a grant proposal I am preparing. I repeatedly found evidence that length limits were having a detrimental effect on scientific reporting. I think there are three problems here.
1. The first is exemplified by the debate around the motor learning paper. I don't know this area well enough to evaluate whether omissions in the literature review were serious, but I am all too familiar with papers in my own area where a brief introduction skates over the surface of past work. One feels that length limits play a big part in this, but there is also another dimension: to some editors and reviewers, a paper that starts by documenting how the research builds on prior work is at risk of being seen as merely 'incremental', rather than 'groundbreaking'. I was once explicitly told by an editor that too high a proportion of my references were more than five years old. This obsession with novelty is in danger of encouraging scientists to devalue serious scholarship as they zoom off to find the latest hot topic.
2. In many journals, key details of methods are relegated to a supplement, or worse still, omitted altogether. I know that many people rejoiced when the Journal of Neuroscience declared it would no longer publish supplementary material: I thought it was a terrible decision. In most of the papers I read, the methodological detail is key to evaluating the science, and if we only get the cover story of the research, we can be seriously misled. Yes, it can be tedious to wade through supplementary material, but if it is not available, how do we know the work is sound?
3. The final issue concerns readability. One justification for strict length limits is that it is supposed to benefit readers if the authors write succinctly, without rambling on for pages and pages. And we know that the longer the paper, the fewer people will even begin to read it, let alone get to the end. So, in principle, length limits should help. But in practice they often achieve the opposite effect, especially if we have papers reporting several experiments and using complex methods. For instance, I recently read a paper that reported, all within the space of a single Results section about 2000 words long, (a) a genetic association analysis; (b) replications of the association analysis on five independent samples; (c) a study of methylation patterns; (d) a gene expression study in mice; and (e) a gene expression study in human brains. The authors had done their best to squeeze in all essential detail, though some was relegated to supplemental material, but the net result was that I came away feeling as if I had been hit around the head by a baseball bat. My sense was that the appropriate format for reporting such a study would have been a monograph, where each component of the study could be given a chapter, but of course, that would not have the kudos of a publication in a high impact journal, and arguably fewer people would read it.
Now that journals are becoming online-only, a major reason for imposing length limits – cost of physical production and distribution of a paper journal – is far less relevant. Yes, we should encourage authors to be succinct, but not so succinct that scientific communication is compromised.