Saturday, 1 October 2016
On the incomprehensibility of much neurogenetics research
Together with some colleagues, I am carrying out an analysis of methodological issues such as statistical power in papers in top neuroscience journals. Our focus is on papers that compare brain and/or behaviour measures in people who vary on common genetic variants.
I'm learning a lot by being forced to read research outside my area, but I'm struck by how difficult many of these papers are to follow. I'm neither a statistician nor a geneticist, but I have a nodding acquaintance with both disciplines, as well as with neuroscience, yet in many cases I find myself struggling to make sense of what researchers did and what they found. Some papers have taken hours of reading and re-reading just to extract the key information we are seeking for our analysis, i.e. what was the largest association reported.
This is worrying for the field, because the number of people competent to review such papers will be extremely small. Good editors will, of course, try to cover all bases by finding reviewers with complementary skill sets, but this can be hard, and people will be understandably reluctant to review a highly complex paper that contains a lot of material beyond their expertise. I remember a top geneticist on Twitter a while ago lamenting that when reviewing papers they often had to just take the statistics on trust, because they had gone beyond the comprehension of all but a small set of people. The same is true, I suspect, for neuroscience. Put the two disciplines together and you have a big problem.
I'm not sure what the solution is. Making raw data available may help, in that it allows people to check analyses using more familiar methods, but that is very time-consuming and only for the most dedicated reviewer.
Do others agree we have a problem, or is it inevitable that as things get more complex the number of people who can understand scientific papers will contract to a very small set?
Labels:
complexity,
genetics,
neurogenetics,
neuroscience,
reviewing,
science,
statistics
Saturday, 11 June 2016
Editorial integrity: Publishers on the front line
Thanks to some
live tweeting by Anna Sharman (@sharmanedit), I've become aware that the 13th Conference of the European Association of Science Editors (EASE) is taking
place in Strasbourg this weekend.
The topic is
"Scientific integrity: editors on the front line", and the programme
acknowledges Elsevier, who presumably have contributed funding for the
conference.
It therefore
seems timely to give a brief update of developments following three blogposts I
wrote during February-March 2015, documenting some peculiar editorial behaviour
at four journals: Research in Autism Spectrum Disorders (RASD: Elsevier),
Research in Developmental Disabilities (RIDD: Elsevier), Developmental Neurorehabilitation
(DN: Informa Healthcare) and Journal of Developmental and Physical Disabilities
(JDPD: Springer).
To do the story
full justice, you need to read these blogposts, but in brief, blogpost 1
described how Johnny Matson, the then editor of both RASD and RIDD had
published numerous articles in his own journal, and engaged in frequent
self-citation, leading to his receiving a 'highly cited' badge from Thomson
Reuters. In the comments on that blogpost, another intriguing factor emerged,
which was Matson's tendency to accept papers with little or no review. This was
denied by Elsevier, despite clear evidence of very short acceptance lags that
were incompatible with review.
Blogpost 2 was
prompted by Matson defending himself against accusations of self-citation by
pointing out that he published in journals that he did not edit. I checked
this out and found he had numerous papers in two other journals: DN and JDPD, and that the median lag between
a paper of his being submitted and accepted in DN was one day. (JDPD does not provide
data on publication lags). I therefore looked at the editors of those journals,
and found that they themselves were publishing remarkable numbers of papers in RASD and
RIDD, again with extremely short publication lags. A trio of editors and editorial
board members (Jeff Sigafoos, Giulio Lancioni and Mark O'Reilly) co-authored no fewer than 140 papers in RASD and RIDD between
2010 and 2014, typically with acceptance times of less than 2 weeks. Some of
the papers in RIDD were not even in the topic area of developmental
disabilities, but covered neurological conditions acquired in adulthood.
In blogpost 3, I
turned the focus on to the publisher of RASD and RIDD, Elsevier, to query why
they had not done anything about such irregular editorial practices. I did a
further analysis of publication lags in RIDD, showing that they had dropped
precipitately between 2008 and 2012, and that there was a small band of authors
whose prolific papers were published there at amazing speed. I provided all the
statistical data to support my case, including interactive spreadsheets that
made it easy to determine which editors and authors had been benefiting from
the slack editorial standards at these journals.
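The core of that lag analysis is simple enough to sketch. Here is a minimal illustration in Python of the kind of calculation involved; the received/accepted dates below are invented for the example, not drawn from the actual RIDD spreadsheets:

```python
from datetime import date
from statistics import median

# Hypothetical (invented) received/accepted date pairs for a handful of papers.
papers = [
    (date(2012, 3, 1), date(2012, 3, 2)),    # accepted the next day
    (date(2012, 4, 10), date(2012, 4, 18)),
    (date(2012, 5, 5), date(2012, 5, 12)),
    (date(2012, 6, 1), date(2012, 9, 15)),   # a more plausible review period
]

# Publication lag = days from receipt to acceptance.
lags = [(accepted - received).days for received, accepted in papers]

print("Median lag (days):", median(lags))                           # 7.5
print("Accepted in under 2 weeks:", sum(lag < 14 for lag in lags))  # 3 of 4
```

Run over a journal's full set of receipt and acceptance dates, a calculation like this immediately shows how many papers could not plausibly have been peer reviewed.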
There was some
interesting fall-out from all of this. The second blogpost drew fire from
supporters of the editors I had "outed", accusing me of bad behaviour
and threatening to complain to my university. Since everything I had said was
backed by evidence, this did not concern me. I received heartfelt messages of support from people who were appalled that a particular approach to autism intervention had been promoted by this group of editors, who were in effect using their status to gain the veneer of scientific credibility for work which was not in fact peer-reviewed. I was also contacted by several
academics telling me that everyone knew this had been going on for years, but nobody had done anything; this level of passivity was surprising given that many were angry that authors had reaped benefits from their staggeringly high publication
rate, while those who were outside the charmed circle were left behind. I was urged to go further and raise my concerns with the universities
employing those who were capitalising on, or engaging in, lax editorial
behaviour. I do, however, have an extremely demanding job, and I hoped that I had
done enough by shining a light on dubious practices and providing the full
datasets as evidence. However, I now wonder if I should have been more proactive.
I wrote to
express my concerns to publishers of all four journals, and had my
correspondence acknowledged. But then? Well, not a lot.
It's clear that
Elsevier has taken some action. Indeed, my first blogpost was prompted by
Michelle Dawson noting on Twitter that the editorial boards of RASD and RIDD
had mysteriously disappeared from the online journals. She had previously noted
Matson's pattern of mega-self-citation, and some months previously, on realising
that I was listed as a member of the editorial board of RASD, I had written
directly to him, with a copy to the publisher, to express concern. Elsevier did not acknowledge
my letter, but it is possible that the changes they had started making to the
editorial boards were linked to my concerns.
The first direct
response I had from Elsevier was some weeks after my final blogpost, when they
explained that they were looking into the situation regarding unreviewed
papers, but that this was a huge job and would take a long time. They were
presumably disinclined to rely on the files that I had deposited on the Open
Science Framework, which show the identity, submission date and acceptance date
of every paper in RASD and RIDD. They
did appoint new editors and a small group of associate editors for both
journals, all with good track records for integrity.
I have heard on
the grapevine that they are now evaluating articles published in those journals
that have been identified as not having undergone peer review; some of those approached to do these
evaluations have mentioned this to me. It's rather unclear how this is going to
work, given that, across the two journals, there are nearly 1000 papers where
the available data indicate a lag from receipt to acceptance of under 2 weeks.
I guess we should be glad that at least the publisher is taking some action,
albeit at a snail's pace, but I am dubious as to whether there will be any retractions.
Meanwhile,
Developmental Neurorehabilitation changed publisher around the time I was
writing, and is now under the care of Taylor and Francis. I wrote to the publisher
explaining my concerns and received a polite reply, but then heard no more. I
note that the Editor in Chief is now Wendy Machalicek, who previously co-edited
the journal with Russell Lang. Lang's doctoral advisor was Mark O'Reilly,
editor of JDPD, and one of the prolific trio who featured in blogpost 2. Lang
himself co-authored 24 papers in RASD and 13 in RIDD, and 35 of these 39 papers
were accepted within 2 weeks of receipt. Machalicek has published 11 papers in
RASD and 5 in RIDD, and 12 of these 16 papers were accepted within 2 weeks of
receipt. She also did her doctorate in
O'Reilly's department, and several of her papers are co-authored with him. In an editorial last year, Lang and
Machalicek announced changes to the journal, some of which seem to be
prompted by a desire to make the reviewing process more rigorous under the new
publisher. However, one change is of particular interest: the scope of the
journal will be broadened to consider "developmental disability from a
lifespan perspective; wherein, it is acknowledged that development occurs
throughout a person's life and a range of injuries, diseases and other
impairments can cause delayed or abnormal development at any stage of
life." That will be good news for Giulio Lancioni, who was previously
publishing papers on coma patients, amyotrophic lateral sclerosis, and
Alzheimer's disease in RIDD. He and his collaborators – Jeff Sigafoos, Mark
O'Reilly, as well as Russell Lang and Johnny Matson – are all current members
of the editorial board of the journal.
It seems to be
business as usual at the Springer title, Journal of Developmental and Physical
Disabilities. Mark O'Reilly is still the editor, with Lang and Sigafoos as
associate editors; Lancioni, Machalicek and Matson are all on the editorial
board. Springer's willingness to turn a blind eye to editors playing the system becomes clear when
one sees that a recent title, "Review Journal of Autism and Developmental
Disorders" has as Editor-in-Chief no less a personage than Johnny Matson.
And, surprise, surprise, the editorial board includes Lang, Sigafoos and
Lancioni.
One of the
overarching problems I uncovered when navigating my way around this situation
was that there is no effective route for a whistleblower who has uncovered
evidence of dubious behaviour by editors. Elsevier has developed a Publishing Ethics Resource Kit but
it is designed to help editors dealing with ethical issues that arise with
authors and reviewers. The general advice if you encounter an ethical problem
is to contact the editor. The Committee on Publication Ethics also issues
guidance, but it is an advisory body with no powers. One would hope that
publishers would act with integrity when a serious problem with an editor is
revealed, but if my experience is anything to go by, they are extremely
reluctant to act and will weave very large carpets to brush the problems under.
Labels:
editors,
Elsevier,
publishers,
publishing ethics,
reviewing,
Springer,
Taylor and Francis
Sunday, 7 June 2015
My collapse of confidence in Frontiers journals
Frontiers journals have become a conspicuous presence in academic
publishing since they started
in 2007 with the advent of Frontiers in Neuroscience. When they were first
launched, I, like many people, was suspicious. This was an Open Access (OA) online
journal where authors paid to publish, raising questions about the academic
rigour of the process. However, it was clear that the publishers had a number
of innovative ideas that were attractive to authors, with a nice online
interface and a collaborative
review process that made engagement with reviewers more of a discussion
than a battle with anonymous critics. Like many other online OA journals, the
editorial decision to publish was based purely on an objective appraisal of the
soundness of the study, not on a subjective evaluation of importance, novelty
or interest. As word got round that respectable scientists were acting as
editors, reviewers and authors of papers in Frontiers, people started to view it
as a good way of achieving fast and relatively painless publication, with all
the benefits of having the work openly available and accessible to all.
The publishing model has been highly successful. In 2007, there were 45
papers published in Frontiers in Neuroscience, whereas in 2014 it was 3,012
(data from Scopus search for source title Frontiers in Neuroscience, which includes
Frontiers journals in Human Neuroscience, Cellular Neuroscience, Molecular
Neuroscience, Behavioral Neuroscience, Systems Neuroscience, Integrative
Neuroscience, Synaptic Neuroscience, Aging Neuroscience, Evolutionary
Neuroscience and Computational Neuroscience). If all papers attracted the author
fee of US$1900 (£1243) for a regular article, this would bring in £3.7 million
in 2014: the actual income would be less because some articles
are cheaper, but it's clear that the income is in any case substantial,
especially since the journal is online and there are no print costs. But this
is just the tip of the iceberg. Frontiers has expanded massively since 2007 to
include a wide range of disciplines. A
Scopus search for articles with journal title that includes "Frontiers
in" found over 54,000 articles since 2006, with 10,555 published in 2014.
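The back-of-envelope revenue figure above is easy to reproduce. A minimal sketch, using the paper count and fee quoted in the text, and the (acknowledged) over-simplification that every paper paid the full regular-article fee:

```python
# Upper-bound estimate of 2014 fee income for Frontiers in Neuroscience,
# assuming all 3,012 papers paid the full regular-article fee.
papers_2014 = 3012
fee_gbp = 1243  # US$1900 converted at the rate quoted in the post

upper_bound = papers_2014 * fee_gbp
print(f"Upper bound: £{upper_bound:,}")  # £3,743,916, i.e. roughly £3.7 million
```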
With success, however, have come growing rumbles of discontent. Questions
are being raised about the quality of editing and reviewing in Frontiers. My first inkling of this came when a colleague told
me he would not review for Frontiers because his name was published with the
article. This wasn't because he wanted confidentiality; rather he was concerned
that it would appear he had given approval for the article, when in fact he had
major reservations.
Then, there have been some very public criticisms of editorial practices at
Frontiers. The first was associated with the retraction of a paper that claimed
climate denialism was associated with a more general tendency to advocate
conspiracy theories. Papers on this subject are always controversial and this
one was no exception, attracting complaints to the editor. The overall
impression from the
account in Retraction Watch was that the editor caved in to legal threats, thereby
letting critics of climate change muzzle academic freedom of speech. This led
to the
resignation of one Frontiers editor**.
Next, there was a case that posed the opposite problem: the scientific
establishment were outraged that a paper on HIV denial had been published, and
argued that it should be retracted. The journal editor decided that the paper
should not be retracted, but instead rebranded it as Opinion – see Retraction
Watch account here.
Most recently, in May 2015 there was a massive upset when editors of the
journals Frontiers in Medicine and Frontiers in Cardiovascular Medicine mounted
a protest at the way the publisher was bypassing their editorial oversight and
allocating papers to associate editors who could accept them without the
knowledge of the editor in chief. The editors protested and published a
manifesto of editorial independence, leading to 31
of them being sacked by the publisher.
All of these events have chipped away at my confidence in Frontiers
journals, but it collapsed completely when someone on Twitter pointed
me to this article entitled "First
time description of dismantling phenomenon" by Laurence Barrer and Guy
Giminez from Aix Marseille Université, France. I had not realised that
Frontiers in Psychology had a subsection on Psychoanalysis and
Neuropsychoanalysis, but indeed it does, and here was a paper proposing a
psychoanalytic account of autism. The abstract states: "The authors of
this paper want to demonstrate that dismantling is the main defense mechanism
in autism, bringing about de-consensus of senses." Although the authors
claim to be adopting a scientific method for testing a hypothesis, it is
unclear what would constitute disproof. Their evidence consists of interpreting
known autistic characteristics, such as fascination with light, in
psychoanalytic terms. The source of dismantling is attributed to the death
drive. This reads like the worst kind of pseudoscience, with fancy terminology
and concepts being used to provide evidence for a point of view which is more
like a religious belief than a testable idea. I wondered who was responsible
for accepting this paper. The Editor was
Valeria Vianello Dri, Head of Child and Adolescent Neuropsychiatry Units in
Trento, Italy. No information on her biography is provided on the Frontiers
website. She lists four publications: these are all on autism genetics. All are
multi-authored and she is not first or last author on any of these*. A Google search
confirmed she has an interest
in psychoanalysis but I could find no further information to indicate that
she had any real experience of publishing scientific papers. There were three
reviewers: the first two had no publications listed on their Frontiers profiles;
the third had a private profile, but a Google search on his name turned up a CV
that did not include any peer-reviewed publications.
So it seems that Frontiers has opened the door to a branch of pseudoscience
to set up its own little circle of editors, reviewers and authors, who can play
at publishing peer-reviewed science. I'm not saying all people with an interest
in psychoanalysis should be banished: if they do proper science, they can
publish that in regular journals without needing this kind of specialist
outlet. But this section of Frontiers is a disastrous development; there is no
evidence of scientific rigour, yet the journal gives credibility to a pernicious
movement that is particularly strong in France and Argentina, which regards psychoanalysis
as the preferred treatment for autism. Many experts have pointed out that
this approach is not evidence-based, but worse still, in some of its
manifestations it
amounts to maltreatment. What next,
one wonders? Frontiers in homeopathy?
Like the protesting editors of Frontiers in Medicine, I think the combined
evidence is that Frontiers has allowed the profit motive to dominate. They should
be warned, however, that once they lose a reputation for publishing decent
science, they are doomed. I've already heard it said that someone on a grants
review panel commented that a candidate's articles in Frontiers should be
disregarded. Unless these journals can recover a reputation for solid science
with proper editing and peer review, they will find themselves shunned.
*The Frontiers biography suggests she is last author on a
paper in 2008, but the author list proved to be incomplete.
** Correction: Shortly after I posted this, Stephan Lewandowsky wrote to say that there were 3 editors who resigned over the RF retraction, plus another one voicing intense criticism
Labels:
autism,
Frontiers,
journals,
psychoanalysis,
publishing,
reviewing
Friday, 3 January 2014
A New Year's letter to academic publishers
My relationships with journals are rather like a bad marriage: a mixture of dependency and hatred. Part of the problem is that journal editors and academics often have a rather different view of the process. Scientific journals could not survive without academics. We do the research, often spending several years of our lives to produce a piece of work that is then distilled into one short paper, which the fond author invariably regards as a fascinating contribution to the field. But when we try to place our work in a journal, we find that it's a buyer's market: most journals are overwhelmed with more submitted papers than they can cope with, and rejection rates are high. So there is a total mismatch: we set out naively dreaming of journals leaping at the opportunity to secure our best work, only to be met with coldness and rejection. As in the best Barbara Cartland novels, for a lucky few, persistence is ultimately rewarded, and the stony-hearted editor is won over. But many potential authors fall by the wayside long before that point.
But times are changing. We are moving from a traditional "dead tree technology" model, where journals have to be expensively printed and distributed, to electronic-only media. These not only cost less to produce, but also avoid the length limits that traditionally have forced journals to be so highly selective. Alongside the technological changes, there has been rapid growth of the Open Access movement. The main motivations behind this movement were idealistic (making science available to all) and economic (escaping the stranglehold of expensive library subscriptions to closed-access journals). It's early days, but I am starting to sense that there's another consequence of the shift, which is that, as the field opens up, publishers are starting to change how they approach authors: less as supplicants, and more as customers.
In the past, the top journals had no incentive to be accommodating to authors. There were too many of us chasing scarce page space. But there are now some new boys on the open access block, and some of them have recognised that if they want to attract people to publish with them, they should listen to what authors want. And if they want academics to continue to referee papers for no reward, then they had better treat them well too.
This really is not too hard to do. I have two main gripes with journals, a big one and a little one. The big one concerns my time. The older I get, the less patient I am with organizations that behave as if I have all the time in the world to do the small bureaucratic chores that they wish to impose on me. For instance, many journals specify pointless formatting requirements for an initial submission. I really, really resent jumping through arbitrary hoops when the world is full of interesting things I could be doing. And cutting my toenails is considerably more interesting than reformatting references.
I recently encountered a journal whose website required you to enter details (name/address/email) of all authors in order to submit a pre-submission enquiry. Surely the whole point of a pre-submission enquiry is to save time, so you can get a quick decision on whether it's likely to be worth your while battling with the submission portal! There's also the horror of journals that require signatures from all authors at the point when you submit a manuscript: seems a harmless enough requirement, except that authors are often widely dispersed - on maternity leave or sailing the Atlantic - by the time the paper is submitted. The idea is to avoid fraud, of course, but like so many ethics regulations, the main effect of this requirement is to encourage honest, law-abiding people to take up forgery.
Oh, and then there are the 'invitations to review' (makes it sound so enticing, like being invited to a party), which require you to login in order to register your response – which for me invariably means selecting the option that I have forgotten my password, then looking at email to find how to update the password, meanwhile getting distracted by other email messages so I forget what I was doing, and eventually returning to the site to find it wants me now to change the password and enter mandatory contact details before it will accept my response. Well, no. I'm usually a good citizen but I'm afraid I've just stopped responding to those.
You'd think the advent of electronic submission would make life easier, but in fact it can just open up a whole new world of tiny, fiddly things that you are required to do before your paper is submitted. Each individual thing is usually fairly trivial, but they do add up. So, for instance, if you'd like your authors to suggest referees, please allow them to paste in a list. DO NOT require them to cut and paste title, forename, initial, surname, email and institution into your horrible little boxes for each of six potential referees. It all takes TIME. And we have more important things in life to be getting on with. Including doing the science that allows us to get to the point of writing a paper.
Even worse, some of the requirements of journals are just historical artefacts with no more rationale than male nipples. Here's a splendid post by Kate Jeffery which in fact was the impetus for this blogpost. I thought of Kate when, having carefully constructed a single manuscript document including figures, as instructed by the Instructions for Authors, I got to the submission portal to be strictly told that ON NO ACCOUNT must the figures be included in the main manuscript. Instead, they had to be separated, not only from the manuscript, but also from their captions (which had to be put as a list at the end of the manuscript). This makes sense ONCE THE PAPER IS ACCEPTED, when it needs to be typeset. But not at the point of initial submission, when the paper's fate is undecided: it may well be rejected, and if not, it will certainly require revision. And meanwhile, you have referees tearing their hair out trying to link up the text, the Figures and their captions.
The smaller gripe is just about treating people with respect. I do have a preference for journal editors whose correspondence indicates that they are a human being and not an automaton. I've moaned about this before, in an old post describing a taxonomy of journal editors, but my feeling is that in the three years since I wrote that, things have got worse rather than better. Publishers and editors may think they make their referees happy by writing and telling them how useful their review of a paper has been – but the opposite effect is created if it is clear that this is a form letter that goes to all referees, however hopeless. It is really better to be ignored than to be sent an insincere, meaningless email - it just implies that the sender thinks you are stupid enough to be taken in by it.
So my message to publishers in 2014 is really very simple. The market is getting competitive and if you want to attract authors to send their best work to you, and referees to keep reviewing for you, you need to become more sensitive to our needs. Two journals that appear to be trying hard are eLife and PeerJ, who avoid most of the bad practices I have outlined. I am hoping their example will cause others to up their game. We are mostly very simple souls who are not hard to please, but we hate having our time wasted, and we do like being treated like human beings.
But times are changing. We are moving from a traditional "dead tree technology" model, where journals have to be expensively printed and distributed, to electronic-only media. These not only cost less to produce, but also avoid the length limits that traditionally have forced journals to be so highly selective. Alongside the technological changes, there has been rapid growth of the Open Access movement. The main motivations behind this movement were idealistic (making science available to all) and economic (escaping the stranglehold of expensive library subscriptions to closed-access journals). It's early days, but I am starting to sense that there's another consequence of the shift, which is that, as the field opens up, publishers are starting to change how they approach authors: less as supplicants, and more as customers.
In the past, the top journals had no incentive to be accommodating to authors. There were too many of us chasing scarce page space. But there are now some new boys on the open access block, and some of them have recognised that if they want to attract people to publish with them, they should listen to what authors want. And if they want academics to continue to referee papers for no reward, then they had better treat them well too.
This really is not too hard to do. I have two main gripes with journals, a big one and a little one. The big one concerns my time. The older I get, the less patient I am with organizations that behave as if I have all the time in the world to do the small bureaucratic chores that they wish to impose on me. For instance, many journals specify pointless formatting requirements for an initial submission. I really, really resent jumping through arbitrary hoops when the world is full of interesting things I could be doing. And cutting my toenails is considerably more interesting than reformatting references.
I recently encountered a journal whose website required you to enter details (name/address/email) of all authors in order to submit a pre-submission enquiry. Surely the whole point of a pre-submission enquiry is to save time, so you can get a quick decision on whether it's likely to be worth your while battling with the submission portal! There's also the horror of journals that require signatures from all authors at the point when you submit a manuscript: seems a harmless enough requirement, except that authors are often widely dispersed - on maternity leave or sailing the Atlantic - by the time the paper is submitted. The idea is to avoid fraud, of course, but like so many ethics regulations, the main effect of this requirement is to encourage honest, law-abiding people to take up forgery.
Oh, and then there are the 'invitations to review' (makes it sound so enticing, like being invited to a party), which require you to login in order to register your response – which for me invariably means selecting the option that I have forgotten my password, then looking at email to find how to update the password, meanwhile getting distracted by other email messages so I forget what I was doing, and eventually returning to the site to find it wants me now to change the password and enter mandatory contact details before it will accept my response. Well, no. I'm usually a good citizen but I'm afraid I've just stopped responding to those.
You'd think the advent of electronic submission would make life easier, but in fact it can just open up a whole new world of tiny, fiddly things that you are required to do before your paper is submitted. Each individual thing is usually fairly trivial, but they do add up. So, for instance, if you'd like your authors to suggest referees, please allow them to paste in a list. DO NOT require them to cut and paste title, forename, initial, surname, email and institution into your horrible little boxes for each of six potential referees. It all takes TIME. And we have more important things in life to be getting on with. Including doing the science that allows us to get the point of writing a paper.
Even worse, some of the requirements of journals are just historical artefacts with no more rationale than male nipples. Here's a splendid post by Kate Jeffery which in fact was the impetus for this blogpost. I thought of Kate when, having carefully constructed a single manuscript document including figures, as instructed by the Instructions for Authors, I got to the submission portal to be strictly told that ON NO ACCOUNT must the figures be included in the main manuscript. Instead, they had to be separated, not only from the manuscript, but also from their captions (which had to be put as a list at the end of the manuscript). This makes sense ONCE THE PAPER IS ACCEPTED, when it needs to be typeset. But not at the point of initial submission, when the paper's fate is undecided: it may well be rejected, and if not, it will certainly require revision. And meanwhile, you have referees tearing their hair out trying to link up the text, the Figures and their captions.
The smaller gripe is just about treating people with respect. I do have a preference for journal editors whose correspondence indicates that they are a human being and not an automaton. I've moaned about this before, in an old post describing a taxonomy of journal editors, but my feeling is that in the three years since I wrote that, things have got worse rather than better. Publishers and editors may think they make their referees happy by writing and telling them how useful their review of a paper has been – but the opposite effect is created if it is clear that this is a form letter that goes to all referees, however hopeless. It is really better to be ignored than to be sent an insincere, meaningless email – it just implies that the sender thinks you are stupid enough to be taken in by it.
So my message to publishers in 2014 is really very simple. The market is getting competitive and if you want to attract authors to send their best work to you, and referees to keep reviewing for you, you need to become more sensitive to our needs. Two journals that appear to be trying hard are eLife and PeerJ, who avoid most of the bad practices I have outlined. I am hoping their example will cause others to up their game. We are mostly very simple souls who are not hard to please, but we hate having our time wasted, and we do like being treated like human beings.
Saturday, 7 January 2012
Time for academics to withdraw free labour
[Cartoon © www.CartoonStock.com]
Jack is a sheep farmer. He gets some government subsidies,
and also works long hours to keep his sheep happy and healthy. When his beasts
are ready for slaughter, he offers them to an abattoir. The abattoir is very
choosy and may reject Jack’s sheep, which is a disaster for him, as there is no
other route to the market. If he is lucky the abattoir will accept the animals,
slaughter them and sell them, at a large profit, to the supermarket. Jack does
not see any of this money. The populace struggle to afford the price of meat,
but the government has no control over this. When Jack feels like a nice piece
of lamb, he buys it from the supermarket. Meanwhile, Jack provides his services
for free as an inspector of other farmers’ animals.
Crazy story, right? But that’s the model that academic
publishing follows. Academics work their butts off to get research funding,
often from government. They then do the research and write up and submit it for
publication. They run the gauntlet of picky reviewers and editors to get the
work accepted for publication. Once it is published, it appears in a journal
which is sold on to academic institutions for large profits. Post publication,
the academic often has to pay a cost equivalent to several hardback books to
get a formatted electronic copy of the article. Meanwhile, the journals justify
this by arguing they have extensive costs. But in fact, it is the academic
community that does the bulk of the work for free, acting as editors and peer
reviewers. Increasingly, they are expected also to do copy editing and graphic
design, tasks that were previously undertaken by professional journal
staff.
It has taken many years for the torpid academic community to
wake up to this ludicrous situation, but things are slowly starting to change. In
some fields, academics are starting to take things into their own hands and cut
commercial publishers out of the loop, but this is still the exception rather
than the rule. A more widely adopted innovation has been Open Access
publishing. On the one hand, electronic publishing has made it possible for
journal papers to be posted online and made freely accessible. On the other,
major funders, notably NIH in the USA
and the Wellcome Trust in the UK,
have insisted that researchers whom they fund must make their published work
Open Access. Obviously, something has to give: the publishers are not going to
do their work for nothing. But the system does work, with a combination of new
journals that are Open Access from the start, and older ones agreeing to make selected
articles Open Access, in both cases for a fee. In general, the funders agree to
pay the charge.
This week, however, a story broke suggesting that the
traditional publishers are trying to fight back and force NIH to backtrack on
its Open Access policy. Things hotted up with this post from Michael Eisen,
who noted that one major publisher, Elsevier, has been lobbying a NY
Congresswoman, Carolyn Maloney, to persuade her to support a bill that would
limit Open Access publishing. Harvard
University gave a detailed
response to the bill, which can be found here.
I want my response to this story to go beyond just
tut-tutting and shaking my head. Academics do have some power here. We provide
the articles for Elsevier journals, and we do a lot of unpaid work reviewing
and editing for them. None of us wants to restrict our opportunities for
publishing, but these days there are a lot of outlets available. When deciding
where to submit a paper, I suspect that most academics, like me, take little
notice of who the publisher of a journal is. I focus more on whether the
journal has a good
editor, my prior experience of publication lags, and whether Open Access is
available. But as from now, I shall include publisher in the criteria I adopt,
and avoid Elsevier as far as I can. Also, if asked to review for a journal,
I’ll check if it is in the Elsevier stable, using this handy
website, and if so, I’ll explain why I’m not prepared to review. I suggest
that if you are as annoyed as I am by this story, you do likewise, and refuse
to engage with Elsevier journals.
Addendum, 10th January 2012
Some people on Twitter have asked if people should be paid
for the work they do as author/editor/reviewer. Definitely not. It would just make matters
worse, because publishers would factor in these costs and charge even more for
journals.
No, I just want a change in the model whereby publishers
make enormous and undeserved profits from academics. There are various ways
this could be done.
1. The publishers could charge less: currently if you try
and download a single journal article, you are charged around £20, even though
the production costs are minimal.
2. Retain the current model but remove commercial publishers
from the loop, with publication of research limited to learned societies,
universities, funders.
3. Retain the current model but make all journals Open
Access, with the funder or university paying a one-off publication fee.
4. More radically, move to a system such as arxiv, which I
discussed here.
On the whole, academics are an interesting bunch. We’re not
all that interested in money, but we are skilled and can produce things of
commercial value. It’s a golden opportunity for someone who does want to make
money to step in and make a profit. Publishers like Elsevier would have been fine
if they hadn’t been so greedy and had charged modest sums for their product.
Instead, they pushed costs as high as the market could bear, making
huge profits, while at the same time giving authors less and less.
(Copy-editors have become an endangered species). Instead of facilitating
scientific communication, they have put obstacles in the way. But part of the blame
lies with the academic community, who have been far too passive. We should have tackled this years ago before it got out of hand.
Labels:
Elsevier,
open access,
publishing,
Research Works Act,
reviewing