Sunday, 1 February 2015

Journals without editors: What is going on?



Last October, I was surprised to see a tweet from @autismcrisis (Michelle Dawson) saying "Belatedly noticed: @deevybee is on the editorial board of Johnny Matson's RASD?! Well I'm speechless. Wow."
Many of those reading this tweet were puzzled as to what this was all about, but I was aware that Michelle had earlier drawn attention to an odd feature of the journal Research in Autism Spectrum Disorders (RASD): an unusually high proportion of the papers in the journal were authored by the Editor, Johnny Matson*.
People were even more puzzled by my reaction, which was to express surprise and to thank Michelle for telling me. Surely, you might think, you'd know if you were on the Editorial Board of a journal. But, actually, that's not always the case. I had no recollection of having joined the Editorial Board, but my memory for the past is not good. I did remember having some correspondence with Johnny Matson around 2008, about a nice article he'd written concerning problems with some of the 'gold standard' diagnostic instruments (see this blogpost for more on this). I suspected that I'd agreed to serve on the Editorial Board in the course of this email exchange, but I had no details of this on file. Journals vary considerably in how far they treat their Editorial Board as emblematic, and how far they actually make use of the board in decision-making. I had barely had any interaction with Research in Autism Spectrum Disorders over the years, and had remained sufficiently unaware of my Editorial Board role to omit this from my curriculum vitae. But there I was, as Michelle had noted, listed as a member of the Editorial Board on the journal website.
I realised that I had to take action for two reasons. First, RASD was an Elsevier journal. I had been convinced by the arguments of Tim Gowers to sign the Cost of Knowledge statement, pledging not to support Elsevier, in the light of their exorbitant pricing of journals. In 2012 I had resigned from the editorial boards of other Elsevier journals. But I'd been unaware that I was on the Editorial Board of RASD, so I had not written to them. Second, I was interested in Michelle's concerns about editorial practices at the journal. I thought I had better check her claims out for myself. What I found was disturbing.  Matson had been editor of Research in Developmental Disabilities (RIDD) since the journal's inception in 1987, and he then took up editorship of the sister journal, Research in Autism Spectrum Disorders when it started in 2007. As can be seen in Figure 1, his publication rate shot up around 2007, and an analysis using Web of Science reveals that many of these papers were published in RASD or RIDD. Indeed, Matson is an author on over ten percent of the papers published in RASD since 2007.
Figure 1: Publications identified from Web of Science**



Around 2007, Matson's papers also started being highly cited, as can be seen in Figure 2.

Figure 2: Matson citations from Web of Science**
On Web of Science he currently has an H-index of 59: those of you who know about these things will recognise this as an impressive value that would usually be indicative of a scientist who does highly influential work. His University webpage proudly displays the 'Highly Cited' badge from Thomson Reuters. However, there is something unusual about Matson's citation profile: just over half of the citations are self-citations. I did some comparisons with other top scientists in the field of autism/intellectual disabilities, and it is clear that Matson is an outlier in terms of his rate of self-citation (see Figure 3).
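For readers curious about the arithmetic, a self-citation rate of this kind is simply the proportion of citing papers on which the cited author also appears. Here is a minimal sketch of the calculation; the author names and citing-paper data are invented for illustration, not real Web of Science records.

```python
# Hypothetical sketch of a self-citation calculation. Each entry is the
# author set of one paper that cites the target author; the data are
# invented for illustration, not real Web of Science records.

def self_citation_rate(author, citing_author_sets):
    """Proportion of citations in which the cited author is also a citing author."""
    if not citing_author_sets:
        return 0.0
    self_cites = sum(author in authors for authors in citing_author_sets)
    return self_cites / len(citing_author_sets)

# Invented example: the target author appears on 2 of the 4 citing papers.
cites = [{"Smith A", "Jones B"}, {"Smith A"}, {"Brown C"}, {"Lee D"}]
print(self_citation_rate("Smith A", cites))  # 0.5
```

In practice the fiddly part is not the division but disambiguating author names in the exported records, which is why the searches described in the footnotes restrict by topic and address.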
Figure 3: Comparison of self-citation rate for autism/ID researchers***

I wrote to Matson to query my editorial board status and to explain why I wished to resign, and received a curt reply confirming that I had agreed to be on the Editorial Board and saying that he would remove me. Nevertheless, my name remained on the Editorial Board list on the journal website for a while. But then, a few weeks ago, there was a new development: for both RIDD and RASD, the pages on the journal website showing information about the Editors and Editorial Board disappeared. What, I wonder, is going on?

* (update 22 Mar 2015): Michelle's tweets on this topic can be accessed from the sidebar on her blogpost
**Search terms were AUTHOR: (matson j*) AND TOPIC: (autism OR intellectual)
***Here I simply selected well-known researchers whose names were distinctive enough to make a search unchallenging. Where necessary to disambiguate authors I added the same Topic search term as for Matson  

Update 5th February 2015
In the past few days there have been some new developments. 
On 2nd Feb, Alicia Wise (@wisealic) of Elsevier responded to a concerned tweet from @DavidPriceUCL to say
 Hi, David - colleagues have been speaking to community about new Editors-in-Chief; appointments will be announced shortly. 
Then on 4th Feb, Michelle Dawson (@autismcrisis) tweeted: "RASD finally has (cryptic) editorial board info for March 2015 issue compare to Feb 2015 issue"
You have to download the pdfs to see the information. The Feb issue lists Matson as editor and gives the full editorial board. The March issue deletes the editorial board but retains Matson as editor. We might, from Alicia Wise's tweet, have expected the opposite.

I have, of course, no idea if these changes relate to anything in this blogpost.

Update 7th February 2015
Michelle Dawson pointed out on Twitter that back in June 2013 she suggested that someone should look at how far the impact factor of RASD and RIDD is affected by Matson's self-citations. I doubt there'd be much effect on RIDD, where Matson's papers constitute only around 1% of articles, but I did the sums for RASD. Readers who have access to Web of Science can see the historical impact factor data here. I simply recomputed the data after searching for papers in a 2-year period with the search term NOT Matson J* as author. I had to read the N citation data from the bar plot you get from citation reports in Web of Science, so precision is not guaranteed, but here's what it looks like:
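For anyone wanting to replicate the sums: the two-year impact factor for a given year is the number of citations received in that year by items published in the two preceding years, divided by the number of such items. A minimal sketch of the recalculation, with and without one author's papers, is below; the paper records are invented, not real RASD data.

```python
# Sketch of an impact-factor recalculation, with and without one author's
# papers. The records below are invented, not real RASD data.

def impact_factor(papers, year, exclude_author=None):
    """IF(year) = citations in `year` to papers from the two preceding years,
    divided by the number of such papers."""
    window = [p for p in papers
              if p["year"] in (year - 1, year - 2)
              and (exclude_author is None or exclude_author not in p["authors"])]
    if not window:
        return 0.0
    citations = sum(p["cites_in"].get(year, 0) for p in window)
    return citations / len(window)

# Invented example: excluding the prolific author lowers the figure.
papers = [
    {"year": 2011, "authors": ["Matson J"], "cites_in": {2013: 10}},
    {"year": 2012, "authors": ["Other A"], "cites_in": {2013: 2}},
]
print(impact_factor(papers, 2013))                             # 6.0
print(impact_factor(papers, 2013, exclude_author="Matson J"))  # 2.0
```

Note that excluding an author removes their papers from both the citation count and the item count, which is what the NOT Matson J* search achieves.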


P.S. I am hearing on the grapevine that people have had papers accepted in RASD and RIDD without reviewing, but nobody seems willing to say that publicly. If there are any brave souls out there who are prepared to speak out, please can you do so via Comments. Thanks. 

Update 14th February 2015

In the comments below, Michael Osuch, Publishing Director for Neuroscience & Psychology journals at Elsevier, has added some notes of clarification. He states: "Under Dr Matson’s editorship of both RIDD and RASD all accepted papers were reviewed, and papers on which Dr Matson was an author were handled by one of the former Associate Editors. Dr Matson and his team of Associate Editors stepped down at the end of 2014."
The issue of Matson's papers being handled by an Associate Editor is key. When discussing this point, some people have argued that editors should never publish in their own journal. I disagree, and think it is reasonable provided (a) it is a relatively rare occurrence and (b) the paper is handled by an Associate Editor (AE) who ensures that the paper is subjected to a rigorous review. In such cases it is crucial that the AE is someone who will not be unduly influenced by their relationship with the editor.
In this regard it is worrying to see that two people who were, until 2015, the first listed AEs on both RASD and RIDD are relatively junior with close links to Matson. Thompson Davis III is an associate professor in Matson's department with expertise in autism and anxiety disorders. To identify his publications on Web of Science I searched for AUTHOR: (Davis T*) AND TOPIC: (autism OR anxiety) AND ADDRESS: (Louisiana State University). This brought up 41 publications. Only a few of these are published in RASD or RIDD (five papers from 2007 to 2011), but he has eleven co-authored papers with Matson. Jill Fodstad's CV indicates she completed the Clinical Psychology program at Louisiana State University, where it seems that Matson encouraged students to co-author papers with him (this is evident from an analysis of his coauthors in RASD/RIDD). She is now based at Indiana State University. Fifty-five of her 59 publications are co-authored with Matson, and 30 of them are published in RASD or RIDD.
Good editorial practice would dictate that neither of these AEs should be assigned papers authored by Matson because the unequal power relationship with him would make it extremely difficult to give a dispassionate appraisal of the work.
Of course, I do not know which AEs did handle Matson's papers, but I would be surprised if an experienced senior editor would have accepted his manuscripts without challenging the high rate of self-citation.

Update 15th February 2015
An email from a colleague reads as follows re another Associate Editor at RASD, Jeff Sigafoos:
"I have heard of several colleagues having papers accepted extremely quickly at Developmental Neurorehabilitation.  The editor at that time was Jeff Sigafoos, but I think he has since stepped down from that position.
Anyway, I looked up Jeff Sigafoos website...look at his publication list: http://www.victoria.ac.nz/education/about/staff/publications-jeff-sigafoos
...an amazing number of papers in RIDD and RASD.  "
I also checked the submission/acceptance lag for Sigafoos' papers in RIDD. For the first 20 I found, 7 were accepted within one day and a total of 14 within one week. This does rather question the claim by Michael Osuch that Matson acted as sole referee 'in a minority of cases'.
I should add that Developmental Neurorehabilitation is not an Elsevier journal: it is published by Informa Healthcare.
Finally, you might ask whether anyone is hurt by this. I think the answer is that it is damaging to other people who published in any of these journals in good faith, and who now will have the validity of their work questioned because of inadequate peer review.

Thursday, 18 December 2014

Dividing up the pie in relation to REF2014

OK, I've only had an hour to look at REF results, so this will be brief, but I'm far less interested in league tables than in the question of how the REF results will translate into funding for different departments in my subject area, psychology.

I should start by thanking HEFCE, who are a model of efficiency and transparency: I was able to download a complete table of REF outcomes from their website here.

What I did was to create a table with just the Overall results for Unit of Assessment 4, which is Psychology, Psychiatry and Neuroscience (i.e. a bigger and more diverse grouping than for the previous RAE). These Overall results combine information from Outputs (65%), Impact (20%) and Environment (15%). I excluded institutions in Scotland, Wales and Northern Ireland.

Most of the commentary on the REF focuses on the so-called 'quality' rankings. These represent the average rating for an institution on a 4-point scale. Funding, however, will depend on the 'power' - i.e. the quality rankings multiplied by the number of 'full-time equivalent' staff entered in the REF. Not surprisingly, bigger departments get more money. The key things we don't yet know are (a) how much funding there will be, and (b) what formula will be used to translate the star ratings into funding.

With regard to (b), in the previous exercise, the RAE, you got one point for 2*, three points for 3* and seven points for 4*. It is anticipated that this time there will be no credit for 2* and little or no credit for 3*. I've simply computed the sums according to two scenarios: the original RAE formula, and a formula in which only 4* counts. From these scores one can readily compute what percentage of available funding will go to each institution. The figures are below. Readers may find it of interest to look at this table in relation to my earlier blogpost on The Matthew Effect and REF2014.
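The computation just described can be sketched as follows. The weights for the two scenarios come from the text above, but the department names, FTE counts and quality profiles are invented for illustration, not actual REF2014 results.

```python
# Sketch of the funding-share calculation: power = FTE x weighted quality
# profile, and each institution's share is its power as a percentage of the
# total. Departments below are invented, not actual REF2014 results.

RAE_WEIGHTS = {"4*": 7, "3*": 3, "2*": 1}   # previous RAE formula
FOURSTAR_WEIGHTS = {"4*": 1}                # only 4* counts

def funding_shares(departments, weights):
    """Return each department's % share of available funding.

    `departments` maps name -> (FTE, profile), where profile gives the
    percentage of the submission at each star level."""
    power = {name: fte * sum(weights.get(star, 0) * pct / 100
                             for star, pct in profile.items())
             for name, (fte, profile) in departments.items()}
    total = sum(power.values())
    return {name: 100 * p / total for name, p in power.items()}

# Invented example: two departments with different sizes and profiles.
depts = {
    "Dept A": (50, {"4*": 40, "3*": 40, "2*": 20}),
    "Dept B": (20, {"4*": 60, "3*": 30, "2*": 10}),
}
print(funding_shares(depts, RAE_WEIGHTS))
print(funding_shares(depts, FOURSTAR_WEIGHTS))
```

Because power is quality multiplied by FTE, a large department with a middling profile can out-earn a small department with a better one, which is the point about size made in the postscript.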

Unit of Assessment 4:
Table showing % of subject funding for each institution depending on funding formula

Institution    RAE formula    4*-only formula
University College London 16.1 18.9
King's College London 13.3 14.5
University of Oxford 6.6 8.5
University of Cambridge 4.7 5.7
University of Bristol 3.6 3.8
University of Manchester 3.5 3.7
Newcastle University 3.0 3.4
University of Nottingham 2.7 2.6
Imperial College London 2.6 2.9
University of Birmingham 2.4 2.7
University of Sussex 2.3 2.4
University of Leeds 2.0 1.5
University of Reading 1.8 1.6
Birkbeck College 1.8 2.2
University of Sheffield 1.7 1.7
University of Southampton 1.7 1.8
University of Exeter 1.6 1.6
University of Liverpool 1.6 1.6
University of York 1.5 1.6
University of Leicester 1.5 1.0
Goldsmiths' College 1.4 1.0
Royal Holloway 1.4 1.5
University of Kent 1.4 1.0
University of Plymouth 1.3 0.8
University of Essex 1.1 1.1
University of Durham 1.1 0.9
University of Warwick 1.1 1.0
Lancaster University 1.0 0.8
City University London 0.9 0.5
Nottingham Trent University 0.9 0.7
Brunel University London 0.8 0.6
University of Hull 0.8 0.4
University of Surrey 0.8 0.5
University of Portsmouth 0.7 0.5
University of Northumbria 0.7 0.5
University of East Anglia 0.6 0.5
University of East London 0.6 0.5
University of Central Lancs 0.5 0.3
Roehampton University 0.5 0.3
Coventry University 0.5 0.3
Oxford Brookes University 0.4 0.2
Keele University 0.4 0.2
University of Westminster 0.4 0.1
Bournemouth University 0.4 0.1
Middlesex University 0.4 0.1
Anglia Ruskin University 0.4 0.1
Edge Hill University 0.3 0.2
University of Derby 0.3 0.2
University of Hertfordshire 0.3 0.1
Staffordshire University 0.3 0.2
University of Lincoln 0.3 0.2
University of Chester 0.3 0.2
Liverpool John Moores 0.3 0.1
University of Greenwich 0.3 0.1
Leeds Beckett University 0.2 0.0
Kingston University 0.2 0.1
London South Bank 0.2 0.1
University of Worcester 0.2 0.0
Liverpool Hope University 0.2 0.0
York St John University 0.1 0.1
University of Winchester 0.1 0.0
University of Chichester 0.1 0.0
University of Bolton 0.1 0.0
University of Northampton 0.0 0.0
Newman University 0.0 0.0


P.S. 11.20 a.m. For those who have excitedly tweeted from UCL and KCL about how they are top of the league, please note that, as I have argued previously, the principal determinant of the % projected funding is the number of FTE staff entered. In this case the correlation is .995.
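The correlation mentioned here is an ordinary Pearson correlation between FTE entered and projected funding share. A minimal sketch of the calculation, with invented numbers standing in for the real REF figures:

```python
# Pearson correlation between staff FTE and projected funding share.
# The paired values below are invented placeholders, not the REF data.
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: share rises almost linearly with FTE, giving r near 1.
fte = [10, 25, 40, 80, 120]
share = [0.9, 2.1, 3.5, 7.2, 10.8]
print(round(pearson(fte, share), 3))
```

A correlation of .995 means FTE alone accounts for about 99% of the variance in projected funding share, which is why the 'quality' league tables tell you so little extra.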
Monday, 8 December 2014

Why evaluating scientists by grant income is stupid




©CartoonStock.com

As Fergus Millar noted in a letter to the Times last year, “in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding”.
This topsy-turvy logic has become evident in some universities, with blatant demands for staff in science subjects to match a specified quota of grant income or face redundancy. David Colquhoun’s blog is a gold-mine of information about those universities that have adopted such policies. He notes that if you are a senior figure based in the Institute of Psychiatry in London, or the medical school at Imperial College London, you are expected to bring in an average of at least £200K of grant income per annum. Warwick Medical School has a rather less ambitious threshold of £90K per annum for principal investigators and £150K per annum for co-investigators1.
So what’s wrong with that? It might be argued that in times of financial stringency, Universities may need to cut staff to meet their costs, and this criterion is at least objective. The problem is that it is stupid. It damages the wellbeing of staff, the reputation of the University, and the advancement of science.
Effect on staff 
The argument about wellbeing of staff is a no-brainer, and one might have expected that those in medical schools would be particularly sensitive to the impact of job insecurity on the mental and physical health of those they employ. Sadly, those who run these institutions seem blithely unconcerned about this and instead impress upon researchers that their skills are valued only if they translate into money. This kind of stress does not only impact on those who are destined to be handed their P45 but also on those around them. Even if you’re not worried about your own job, it is hard to be cheerfully productive when surrounded by colleagues in states of high distress. I’ve argued previously that universities should be evaluated on staff satisfaction as well as student satisfaction: this is not just about the ethics of proper treatment of one’s fellow human beings, it is also common-sense that if you want highly skilled people to do a good job, you need to make them feel valued and provide them with a secure working environment. 
Effect on the University
The focus on research income seems driven by two considerations: a desire to bring in money, and to achieve status by being seen to bring in money. But how logical is this? Many people seem to perceive a large grant as some kind of ‘prize’, a perception reinforced by the tendency of the Times Higher Education and others to refer to ‘grant-winners’. Yet funders do not give large grants as gestures of approval: the money is not some kind of windfall. With rare exceptions of infrastructure grants, the money is given to cover the cost of doing research. Even now that Full Economic Costing (FEC) is attached to research council grants, it covers no more than 80% of the costs to universities of hosting the research. Undoubtedly, the money accrued through FEC gives institutions leeway to develop infrastructure and other beneficial resources, but it is not a freebie, and big grants cost money to implement.
So we come to the effect of research funding on a University’s reputation. I assume this is a major driver behind the policies of places like Warwick, given that it is one component of the league tables that are so popular in today’s competitive culture. But, as some institutions learn to their costs, a high ranking in such tables may count for naught if a reputation for cavalier treatment of staff makes it difficult to recruit and retain the best people. 
Effect on science
The last point concerns the corrosive effect on science if the incentive structure encourages people to apply for numerous large grants. It sidelines people who want to do careful, thoughtful research in favour of those who take on more than they can cope with. There is already a glut of waste in science, with many researchers having a backlog of unpublished work which they don’t have time to write up because they are busy writing the next grant. Four years ago I argued that we should focus on what people do with research funding rather than how much they have. On this basis, someone who achieved a great deal with modest funding would be valued more highly than someone who failed to publish many of the results from a large grant. I cannot express it better than John Ioannidis, who in a recent paper put forward a number of suggestions for improving the reproducibility of research. This was his suggested modification to our system of research incentives:
“….obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities.”
 
1If his web entry is to be believed, then Warwick’s Dean of Medicine, Professor Peter Winstanley, falls a long way short of this threshold, having brought in only £75K of grant income over a period of 7 years. He won’t be made redundant though, as those with administrative responsibilities are protected.

Ioannidis, J. P. A. (2014). How to make more published research true. PLoS Medicine, 11(10): e1001747. doi:10.1371/journal.pmed.1001747