
Sunday, 17 May 2015

Will traditional science journals disappear?


The Royal Society has been celebrating the 350th anniversary of Philosophical Transactions, the world's first scientific journal, by holding a series of meetings on the future of scholarly scientific publishing. I followed the whole event on social media, and was able to attend in person for one day. One of the sessions followed a Dragon's Den format, with speakers having 100 seconds to convince three dragons – Onora O'Neill, Ben Goldacre and Anita de Waard – of the fund-worthiness of a new idea for science communication. Most were light-hearted, and there was a general mood of merriment, but the session got me thinking about what kind of future I would like to see. What I came up with was radically different from our current publishing model.

Most of the components of my dream system are not new, but I've combined them into a format that I think could work. The overall idea had its origins in a blogpost I wrote in 2011, and has points in common with David Colquhoun's submission to the dragons, in that it would adopt a web-based platform run by scientists themselves. This is what already happens with arXiv for the physical sciences and bioRxiv for the biological sciences. However, my 'consensual communication' model has some important differences. Here are the steps I envisage an author going through:
1. An initial protocol is uploaded before a study is done, consisting only of an introduction, a detailed methods section, and an analysis plan, with the authors anonymised. An editor then assigns reviewers to evaluate it. This aspect of the model draws on features of registered reports, as implemented in the neuroscience journal Cortex. There are two key scientific advantages to this approach: first, reviewers can improve the research design, rather than criticise studies after they have been done. Second, there is a record of what the research plan was, which can then be compared to what was actually done. This does not confine the researcher to the plan, but it does make transparent the difference between planned and exploratory analyses.
2. The authors get a chance to revise the protocol in response to the reviews, and the editor judges whether the study is of an adequate standard, and if necessary solicits another round of review. When there is agreement that the study is as good as it can get, the protocol is posted as a preprint on the web, together with the non-anonymised peer reviews. At this point the identity of authors is revealed.
3. There are then two optional extra stages that could be incorporated:
a) The researcher can solicit collaborators for the study. This addresses two issues raised at the Royal Society meeting: first, many studies are underpowered; duplicating a study across several centres could help where there are logistical problems in getting sample sizes large enough to give a clear answer to a research question. Second, collaborative working generally enhances the reproducibility of findings.
b) It would make sense for funding, if required, to be solicited at this point – in contrast to the current system, where funders evaluate proposals that are often only sketchily described. Although funders currently review grant proposals, there is seldom any opportunity to incorporate their feedback – indeed, very often a single critical comment can kill a proposal.
4. The study is then completed, written up in full, and reviewed by the editor. Provided the authors have followed the protocol, no further review is required. The final version is deposited with the original preprint, together with the data, materials and analysis scripts.
5. Post-publication discussion of the study is then encouraged by enabling comments.
What might a panel of dragons make of this? I anticipate several questions.
Who would pay for it? Well, if arXiv is anything to go by, costs of this kind of operation are modest compared with conventional publishing. They would consist of maintaining the web-based platform, and covering the costs of editors. The open access journal PeerJ has developed an efficient e-publishing operation and charges $99 per author per submission. I anticipate a similar charge to authors would be sufficient to cover costs.
Wouldn't this give an incentive to researchers to submit poorly thought-through studies? There are two answers to that. First, half of the publication charge to authors would be required at the point of initial submission. Although this would not be large (e.g. £50) it should be high enough to deter frivolous or careless submissions. Second, because the complete trail of a submission, from pre-print to final report, would be public, there would be an incentive to preserve a reputation for competence by not submitting sloppy work.
Who would agree to be a reviewer under such a model? Why would anyone want to put their skills into improving someone else's work for no reward? I propose there could be several incentives for reviewers. First, it would be more rewarding to provide comments that improve the science, rather than just criticising what has already been done. Second, as a more concrete reward, reviewers could have submission fees waived for their own papers. Third, reviews would be public and non-anonymised, and so the reviewer's contribution to a study would be apparent. Finally, and most radically, where the editor judges that a reviewer has made a substantial intellectual contribution to a study, they could have the option of having this recognised in authorship.
Why would anyone who wasn't a troll want to comment post-publication? We can get some insights into how to optimise comments from the model of the NIH-funded platform PubMed Commons. They do not allow anonymous comments, and require that commenters have themselves authored a paper that is listed on PubMed. Commenters could also be offered incentives such as a reduction of submission costs to the platform. To this one could add ideas from commercial platforms such as eBay, where sellers are rated by customers, so you can evaluate their reputation. It should be possible to devise some kind of star rating – both for the paper being commented on, and for the person making the comment. This could provide motivation for good commenters and make it easier to identify the high-quality papers and comments.
I'm sure that any dragon from the publishing world would swallow me up in flames for these suggestions, as I am in effect proposing a model that would take commercial publishers out of the loop. However, it seems worth serious consideration, given the enormous sums that universities and funders could save by going it alone. But the benefits would not just be financial; I think we could greatly improve science by changing the point in the research process at which reviewer input occurs, and by fostering a more open and collaborative style of publishing.


This article was first published on the Guardian Science Headquarters blog on 12 May 2015

Friday, 5 April 2013

A short rant about numbered journal references

Well, I had not planned a blogpost this morning, but I am goaded to do so by growing frustration as I attempted to read a review that was published in Trends in Neurosciences. The authors were proposing a novel theory in a controversial area, and came out with definitive statements, such as “Imaging studies have revealed the involvement of brain regions including the basal ganglia and cerebellum”. Quite appropriately, they supported such assertions with references. But the journal format meant that these references were identified only by numbers, so I had to flip to the reference list at the back of the paper to find out what was being referred to. The paper had over 70 references, and in the course of reading it, I reckon I did this about 20 times. I would have done it more, but it was distracting and I had to put down my mug of tea each time while I hunted for the relevant page. If I forgot the number on the way (a not unusual occurrence at my age), I had to go through this loop again.

Needless to say, TINS is not alone in adopting this referencing format: it is especially common in journals that have strict length limits on papers. I recently had a paper accepted for such a journal, and I found that any mention of the authors of a referenced work was frowned upon, and I was required to rewrite so that the only identifier of a reference was the number. This often meant turning a short active sentence (Jones [54] showed that X...) into a longer passive one (X was found by another study ... [54]).

Why does this matter? Well, if the reader knows the topic pretty well, the referencing will indicate whether the author knows the relevant literature and whether the evidence that is cited is balanced or involves cherry-picking. References also give clues as to where to allocate one's attention. If I see a set of statements associated with a list of familiar names, I can read more casually because I already know the subject matter; if there is reference to novel material, I'll focus in more depth. Maybe this is just an obsessive tendency in me, but I want to know what is being referred to, and I find that numbered referencing impairs the readability of an article. Journals may save a bit of space by using numbers rather than names, but as more and more articles move to online-only publication, this is not a major consideration, and what is gained in space is at the expense of the reader experience. Hrrmph!

P.S. 13.52 5/4/13
Thanks to Neil Martin (@ThatNeilMartin) for drawing my attention to this v. relevant article. Might explain why psychology journals, whose editors understand how cognition works, tend to prefer the non-numbered system.
Clauss, M., Müller, D. W. H., & Codron, D. (2013). Source references and the scientist's mind-map: Harvard vs. Vancouver style. Journal of Scholarly Publishing, 44(3), 274-282. DOI: 10.3138/jsp.44.3.005
Copy of the article available from the 1st author: mclauss@vetclinics.uzh.ch