the publication referenced in the graph above, I concluded that there were disproportionate amounts of research, and research funding, going to ASD relative to other neurodevelopmental disorders.
Now, I don’t want to knock autism research. ASD is an intriguing condition which can have major effects on the lives of affected individuals and their families. It was great to see the recent publication of a study by Jonathan Green and his colleagues showing that a parent-based treatment with autistic toddlers could produce a long-lasting reduction in severity of symptoms. Conducting a rigorous study of this size is hugely difficult and only possible with substantial research funding.
But I do wonder why there is such a skew in interest towards autism, when many children have other developmental disorders that have long-term impacts. Where are all the enthusiastic young researchers who want to work on developmental language disorders? Why is it that children with general learning disabilities (intellectual retardation) are so often excluded from research, or relegated to serving as a control group against which ASD is assessed?
Together with colleagues Becky Clark, Gina Conti-Ramsden, Maggie Snowling, and Courtenay Norbury, I started the RALLI campaign in 2012 to raise awareness of children’s language impairments, mainly focused on a YouTube channel where we post videos providing brief summaries of key information, with links to more detailed evidence. This year we also completed a study that brought together a multidisciplinary, multinational panel of experts with the goal of producing consensus statements on criteria and terminology for children’s language disorders – leading to one published paper and another currently in preprint stage. We hope that increased consistency in how we define and refer to developmental language disorders will lead to improved recognition.
We still have a long way to go in raising awareness. I doubt we will ever achieve a level of interest to parallel that of autism, and I suspect this is because autism does not appear just to involve cognitive deficits, but rather a qualitatively different way of thinking and interacting with the world. But I would urge those considering pursuing research in this field to think more broadly and recognise that there are many fascinating conditions about which we still know very little. Finding ways to understand and eventually ameliorate language problems or learning disabilities could help a huge number of children, and we need more of our brightest and best students to recognise this potential.
Thursday, 18 December 2014
Dividing up the pie in relation to REF2014
OK, I've only had an hour to look at REF results, so this will be brief, but I'm far less interested in league tables than in the question of how the REF results will translate into funding for different departments in my subject area, psychology.
I should start by thanking HEFCE, who are a model of efficiency and transparency: I was able to download a complete table of REF outcomes from their website here.
What I did was to create a table with just the Overall results for Unit of Assessment 4, which is Psychology, Psychiatry and Neuroscience (i.e. a bigger and more diverse grouping than for the previous RAE). These Overall results combine information from Outputs (65%), Impact (20%) and Environment (15%). I excluded institutions in Scotland, Wales and Northern Ireland.
Most of the commentary on the REF focuses on the so-called 'quality' rankings. These represent the average rating for an institution on a 4-point scale. Funding, however, will depend on the 'power' - i.e. the quality rankings multiplied by the number of 'full-time equivalent' staff entered in the REF. Not surprisingly, bigger departments get more money. The key things we don't yet know are (a) how much funding there will be, and (b) what formula will be used to translate the star ratings into funding.
With regard to (b), in the previous exercise, the RAE, you got one point for 2*, three points for 3* and seven points for 4*. It is anticipated that this time there will be no credit for 2* and little or no credit for 3*. I've simply computed the sums according to two scenarios: original RAE, and formula where only 4* counts. From these scores one can readily compute what percentage of available funding will go to each institution. The figures are below. Readers may find it of interest to look at this table in relation to my earlier blogpost on The Matthew Effect and REF2014.
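The arithmetic behind these projections can be sketched in a few lines of Python. The institutions, FTE counts, and star-rating profiles below are invented for illustration; only the two weighting schemes (the RAE-style 7/3/1 points and a 4*-only formula) come from the text.

```python
# Sketch: converting star-rating profiles into projected funding shares.
# The departments and profiles below are made-up illustrations, not REF data.

# Each entry: (institution, FTE, % of work rated 4*, 3*, 2*, 1*/unclassified)
profiles = [
    ("Uni A", 80.0, 45, 40, 13, 2),
    ("Uni B", 30.0, 30, 45, 20, 5),
    ("Uni C", 12.0, 15, 40, 35, 10),
]

def power(fte, p4, p3, p2, weights):
    """Quality-weighted volume: star percentages x weights, scaled by FTE."""
    w4, w3, w2 = weights
    return fte * (p4 * w4 + p3 * w3 + p2 * w2) / 100

def shares(weights):
    """Percentage of the available pot going to each institution."""
    totals = [power(fte, p4, p3, p2, weights)
              for _, fte, p4, p3, p2, _ in profiles]
    grand = sum(totals)
    return {name: round(100 * t / grand, 1)
            for (name, *_), t in zip(profiles, totals)}

rae_shares = shares((7, 3, 1))   # RAE2008-style weighting: 4*=7, 3*=3, 2*=1
four_star = shares((1, 0, 0))    # scenario where only 4* earns credit

print(rae_shares)
print(four_star)
```

Note how the department with the highest proportion of 4* work gains share under the 4*-only formula, which is the pattern visible in the table below for the largest submissions.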
Unit of Assessment 4:
Table showing % of subject funding for each institution depending on funding formula
| Institution | RAE formula | 4* only |
|---|---|---|
| University College London | 16.1 | 18.9 |
| King's College London | 13.3 | 14.5 |
| University of Oxford | 6.6 | 8.5 |
| University of Cambridge | 4.7 | 5.7 |
| University of Bristol | 3.6 | 3.8 |
| University of Manchester | 3.5 | 3.7 |
| Newcastle University | 3.0 | 3.4 |
| University of Nottingham | 2.7 | 2.6 |
| Imperial College London | 2.6 | 2.9 |
| University of Birmingham | 2.4 | 2.7 |
| University of Sussex | 2.3 | 2.4 |
| University of Leeds | 2.0 | 1.5 |
| University of Reading | 1.8 | 1.6 |
| Birkbeck College | 1.8 | 2.2 |
| University of Sheffield | 1.7 | 1.7 |
| University of Southampton | 1.7 | 1.8 |
| University of Exeter | 1.6 | 1.6 |
| University of Liverpool | 1.6 | 1.6 |
| University of York | 1.5 | 1.6 |
| University of Leicester | 1.5 | 1.0 |
| Goldsmiths' College | 1.4 | 1.0 |
| Royal Holloway | 1.4 | 1.5 |
| University of Kent | 1.4 | 1.0 |
| University of Plymouth | 1.3 | 0.8 |
| University of Essex | 1.1 | 1.1 |
| University of Durham | 1.1 | 0.9 |
| University of Warwick | 1.1 | 1.0 |
| Lancaster University | 1.0 | 0.8 |
| City University London | 0.9 | 0.5 |
| Nottingham Trent University | 0.9 | 0.7 |
| Brunel University London | 0.8 | 0.6 |
| University of Hull | 0.8 | 0.4 |
| University of Surrey | 0.8 | 0.5 |
| University of Portsmouth | 0.7 | 0.5 |
| University of Northumbria | 0.7 | 0.5 |
| University of East Anglia | 0.6 | 0.5 |
| University of East London | 0.6 | 0.5 |
| University of Central Lancs | 0.5 | 0.3 |
| Roehampton University | 0.5 | 0.3 |
| Coventry University | 0.5 | 0.3 |
| Oxford Brookes University | 0.4 | 0.2 |
| Keele University | 0.4 | 0.2 |
| University of Westminster | 0.4 | 0.1 |
| Bournemouth University | 0.4 | 0.1 |
| Middlesex University | 0.4 | 0.1 |
| Anglia Ruskin University | 0.4 | 0.1 |
| Edge Hill University | 0.3 | 0.2 |
| University of Derby | 0.3 | 0.2 |
| University of Hertfordshire | 0.3 | 0.1 |
| Staffordshire University | 0.3 | 0.2 |
| University of Lincoln | 0.3 | 0.2 |
| University of Chester | 0.3 | 0.2 |
| Liverpool John Moores | 0.3 | 0.1 |
| University of Greenwich | 0.3 | 0.1 |
| Leeds Beckett University | 0.2 | 0.0 |
| Kingston University | 0.2 | 0.1 |
| London South Bank | 0.2 | 0.1 |
| University of Worcester | 0.2 | 0.0 |
| Liverpool Hope University | 0.2 | 0.0 |
| York St John University | 0.1 | 0.1 |
| University of Winchester | 0.1 | 0.0 |
| University of Chichester | 0.1 | 0.0 |
| University of Bolton | 0.1 | 0.0 |
| University of Northampton | 0.0 | 0.0 |
| Newman University | 0.0 | 0.0 |
P.S. 11.20 a.m. For those who have excitedly tweeted from UCL and KCL about how they are top of the league, please note that, as I have argued previously, the principal determinant of the % projected funding is the number of FTE staff entered. In this case the correlation is .995.
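That near-perfect correlation is unsurprising: the 'power' score is essentially FTE multiplied by a quality factor that varies over a much narrower range than department size does. A toy illustration, with invented numbers rather than the actual REF returns:

```python
# Toy illustration of why projected funding share tracks headcount so
# closely: 'power' = FTE x quality, and quality varies far less than FTE.
# Numbers are invented, not the actual REF returns.
import random

random.seed(1)
n = 60
fte = [random.uniform(5, 100) for _ in range(n)]         # staff entered
quality = [random.uniform(2.4, 3.2) for _ in range(n)]   # mean star rating
power = [f * q for f, q in zip(fte, quality)]            # funding driver

def pearson(xs, ys):
    """Pearson correlation, written out to stay stdlib-only."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(fte, power)
print(round(r, 3))  # close to 1, as in the REF figures
```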
Labels: neuroscience, psychiatry, psychology, REF2014, research funding
Monday, 8 December 2014
Why evaluating scientists by grant income is stupid
[Cartoon © CartoonStock.com]
As Fergus Millar noted in a letter to the Times last year, “in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding”.
This topsy-turvy logic has become evident in some universities, with blatant demands for staff in science subjects to match a specified quota of grant income or face redundancy. David Colquhoun’s blog is a gold-mine of information about those universities that have adopted such policies. He notes that if you are a senior figure based in the Institute of Psychiatry in London, or the medical school at Imperial College London, you are expected to bring in an average of at least £200K of grant income per annum. Warwick Medical School has a rather less ambitious threshold of £90K per annum for principal investigators and £150K per annum for co-investigators¹.
So what’s wrong with that? It might be argued that in times of financial stringency, universities may need to cut staff to meet their costs, and this criterion is at least objective. The problem is that it is stupid. It damages the wellbeing of staff, the reputation of the university, and the advancement of science.
Effect on staff
The argument about wellbeing of staff is a no-brainer, and one might have expected that those in medical schools would be particularly sensitive to the impact of job insecurity on the mental and physical health of those they employ. Sadly, those who run these institutions seem blithely unconcerned about this and instead impress upon researchers that their skills are valued only if they translate into money. This kind of stress does not only impact on those who are destined to be handed their P45 but also on those around them. Even if you’re not worried about your own job, it is hard to be cheerfully productive when surrounded by colleagues in states of high distress.

I’ve argued previously that universities should be evaluated on staff satisfaction as well as student satisfaction: this is not just about the ethics of proper treatment of one’s fellow human beings, it is also common sense that if you want highly skilled people to do a good job, you need to make them feel valued and provide them with a secure working environment.
Effect on the University
The focus on research income seems driven by two considerations: a desire to bring in money, and to achieve status by being seen to bring in money. But how logical is this? Many people seem to perceive a large grant as some kind of ‘prize’, a perception reinforced by the tendency of the Times Higher Education and others to refer to ‘grant-winners’. Yet funders do not give large grants as gestures of approval: the money is not some kind of windfall. With the rare exception of infrastructure grants, the money is given to cover the cost of doing research. Even now that we have Full Economic Costing (FEC) attached to research council grants, this covers no more than 80% of the cost to universities of hosting the research. Undoubtedly, the money accrued through FEC gives institutions leeway to develop infrastructure and other beneficial resources, but it is not a freebie, and big grants cost money to implement.
So we come to the effect of research funding on a university’s reputation. I assume this is a major driver behind the policies of places like Warwick, given that it is one component of the league tables that are so popular in today’s competitive culture. But, as some institutions learn to their cost, a high ranking in such tables may count for naught if a reputation for cavalier treatment of staff makes it difficult to recruit and retain the best people.
Effect on science
The last point concerns the corrosive effect on science if the incentive structure encourages people to apply for numerous large grants. It sidelines people who want to do careful, thoughtful research in favour of those who take on more than they can cope with. There is already a great deal of waste in science, with many researchers having a backlog of unpublished work which they don’t have time to write up because they are busy writing the next grant. Four years ago I argued that we should focus on what people do with research funding rather than how much they have. On this basis, someone who achieved a great deal with modest funding would be valued more highly than someone who failed to publish many of the results from a large grant. I cannot express it better than John Ioannidis, who in a recent paper put forward a number of suggestions for improving the reproducibility of research. This was his suggested modification to our system of research incentives:
“….obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities.”
¹If his web-entry is to be believed, then Warwick’s Dean of Medicine, Professor Peter Winstanley, falls a long way below this threshold, having brought in only £75K of grant income over a period of 7 years. He won’t be made redundant though, as those with administrative responsibilities are protected.
Tuesday, 15 October 2013
The Matthew effect and REF2014
For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath. Matthew 25:29
So you’ve slaved over your departmental submission for REF2014, and shortly will be handing it in. A nervous few months await before the results are announced. You’ve sweated blood over deciding whether staff publications or impact statements will be graded as 1*, 2*, 3* or 4*, but it’s not possible to predict how the committee will judge them, nor, more importantly, how these ratings will translate into funding. In the last round of evaluation, in 2008, a weighted formula was used, such that a submission earned 1 point for every 2* output, 3 points for every 3* output, and 7 points for every 4* output. Rumour has it that this year there may be no money for 2* outputs and even greater weighting for 4*. It will be more complicated than this, because funding allocations will also take into account ratings of ‘impact statements’ and the ‘environment’.
I’ve blogged previously about concerns I have with the inefficiency of the REF2014 as a method for allocating funds. Today I want to look at a different issue: the extent to which the REF increases disparities between universities over time. To examine this, I created a simulation which made a few simple assumptions. We start with a sample of 100 universities, each of which is submitting 50 staff in a Unit of Assessment. At the outset, we start with all universities equal in terms of the research quality of their staff: they are selected at random from a pool of possible staff whose research quality is normally distributed. Funding is then allocated according to the formula used in RAE2008. The key feature of the simulation is that over every assessment period there is turnover of staff (estimated at 10% in simulation shown here), and universities with higher funding levels are able to recruit replacement staff with higher scores on the research quality scale. These new staff are then the basis for computing funding allocations in the next cycle – and so on, through as many cycles as one wishes. This simulation shows that funding starts out fairly normally distributed, but as we progress through each cycle, it becomes increasingly skewed, with the top-performers moving steadily away from the rest (Figure A). In the graphs, funding is shown over time for universities grouped in deciles, i.e., bands of 10 universities after ranking by funding level.
Figure A. Simulation: mean income for universities in each of 10 deciles over 6 funding cycles
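For readers who want to experiment, here is a minimal sketch of the kind of simulation described above. The star-band thresholds and the recruitment rule are my own guesses at plausible settings, not the author's actual code; only the RAE2008-style 7/3/1 weighting, 100 universities, 50 staff each, 10% turnover, and six cycles come from the text.

```python
# Sketch of the Matthew-effect simulation: universities start equal, but
# better-funded departments recruit better replacement staff each cycle.
# Thresholds and recruitment rule are illustrative guesses.
import random

random.seed(0)
N_UNIS, N_STAFF, TURNOVER, CYCLES = 100, 50, 0.10, 6
POINTS = {4: 7, 3: 3, 2: 1, 1: 0}  # RAE2008-style weighting

def stars(q):
    """Map a quality score (standard normal scale) onto a 1*-4* band."""
    if q > 1.0:
        return 4
    if q > 0.0:
        return 3
    if q > -1.0:
        return 2
    return 1

def funding(uni):
    """Quality-weighted funding score for one department's staff."""
    return sum(POINTS[stars(q)] for q in uni)

# All universities start equal: staff drawn from the same normal pool.
unis = [[random.gauss(0, 1) for _ in range(N_STAFF)] for _ in range(N_UNIS)]
gap_start = max(map(funding, unis)) - min(map(funding, unis))

n_new = int(N_STAFF * TURNOVER)
for cycle in range(CYCLES):
    # Better-funded departments get first pick of a shared recruitment pool.
    order = sorted(range(N_UNIS), key=lambda i: -funding(unis[i]))
    pool = sorted((random.gauss(0, 1) for _ in range(N_UNIS * n_new)),
                  reverse=True)
    for rank, i in enumerate(order):
        recruits = pool[rank * n_new:(rank + 1) * n_new]
        stayers = random.sample(unis[i], N_STAFF - n_new)
        unis[i] = stayers + recruits

gap_end = max(map(funding, unis)) - min(map(funding, unis))
print(gap_start, gap_end)  # the funding gap widens over the cycles
```

Running this shows the gap between the best- and worst-funded departments growing cycle by cycle, which is the skew visible in Figure A.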
We could do things differently. Figure B shows how tweaking the funding model could avoid opening up such a wide gulf between the richest and poorest, and retain a solid core of middle-ranking universities.
Figure B. Simulation using linear weighting of * levels. Each line is the average for institutions in a given decile
Simulation where 4* outputs are favoured. Each line is the average for institutions in a given decile
However, given that finances are always limited, there will be a cost to the focus on an elite; the middle-ranking universities will get less funding, and be correspondingly less able to attract high-calibre researchers. And it could be argued that we don’t just need an elite: we need a reasonable number of institutions in which there is a strong research environment, where more senior researchers feel valued and their graduate students and postdocs are encouraged to aim high. Our best strategy for retaining international competitiveness might be by fostering those who are doing well but have potential to do even better. In any case, much research funding is awarded through competition for grants, and most of this goes to people in elite institutions, so these places will not be starved of income if we were to adopt a more balanced system of awarding central funds.
What worries me most is that I haven’t been able to find any discussion of this issue – namely, whether the goal of a funding formula should be to focus on elite institutions or distribute funds more widely. The nearest thing I’ve found so far is a paper analysing a parallel issue in grant awards (Fortin & Currie, 2013) – which comes to the conclusion that broader distribution of smaller grants is more effective than narrowly distributed large grants. Very soon, somebody somewhere is going to decide on the funding formula, and if rumours are to be believed, it will widen the gap between the haves and have-nots even further. I'm concerned that if we continue to concentrate funding only in those institutions with a high proportion of research superstars, we may be creating an imbalance in our system of funding that will be bad for UK research in the long run.
Reference
Fortin, J.-M., & Currie, D. J. (2013). Big Science vs. Little Science: How scientific impact scales with funding. PLoS ONE, 8(6). PMID: 23840323
Labels: funding, HEFCE, REF2014, research funding, universities
Thursday, 9 May 2013
The academic backlog
[Photo from http://www.pa-legion.com]
Here’s an interesting question to ask any scientist: if you were to receive no more research funding, and just focus on writing up the data you have, how long would it take? The answer tends to go up with seniority, but a typical answer is 3 to 5 years.
I don’t have any hard data on this – just my own experience and that of colleagues – and I suspect it varies from discipline to discipline. But my impression is that people generally agree that the academic backlog is a real phenomenon, while disagreeing on whether it matters.
One view is that completed but unpublished research is not important, because there’s a kind of “survival of the fittest” of results: you focus on the most interesting and novel findings, and forget about the rest. It’s true that we’ve all done failed studies with inconclusive results, and it would be foolish to try to turn such sow’s ears into silk purses. But I suspect there’s a large swathe of research that doesn’t fall into that category, yet still never gets written up. Is that right, given the time and money that have been expended in gathering data? Indeed, in clinical fields, it’s not only researchers putting effort into the research – there are also human participants, who typically volunteer for studies on the assumption that the research will be published.
I’m not talking about research that fails to get published because it’s rejected by journal editors, but rather about studies that don’t get to the point of being written up for publication. Interest in this topic has been stimulated by Ben Goldacre’s book Bad Pharma, which has highlighted the numerous clinical trials that go unreported – often because they have negative results. In that case the concern is that findings are suppressed because they conflict with the financial interests of those doing the research, and the AllTrials campaign is doing a sterling job of tackling that issue. But beyond the field of clinical trials, there’s still a backlog, even for those of us working in areas where financial interests are not an issue.

It’s worth pausing to consider why this is so. I think it’s all to do with the incentive structure of academia. If you want to make your way in the scientific world, there are two important things you have to do: get grant funding and publish papers. This creates an optimisation problem, because both of these activities take time, and time is in short supply for the average academic. It’s impossible to say how long it takes to write a paper, because it will depend on the complexity of the data, and will vary from one subject area to the next, but it’s not something that should be rushed. A good scientist checks everything thoroughly, thinks hard about alternative interpretations of results, and relates findings to the existing research literature. But if you take too much time, you’re at risk of being seen as unproductive, especially if you aren’t bringing in grant income. So you have to apply for grants, and having done so, you have then to do the research that you said you’d do. You may also be under pressure to apply for grants to keep your research group going, or to fund your own salary.
When I started in research, a junior person would be happy to have one grant, but that was before the REF. Nowadays heads of department will encourage their staff to apply for numerous grants, and it’s commonplace for senior investigators to have several active grants, with estimates of around 1-2 hours per week spent on each one. Of course, time isn’t neatly divided up, and it’s more likely that the investigator will get the project up and running and then delegate it to junior staff, putting in additional hours at the end of the project when it’s time to analyse and write up the data. The bulk of the day-to-day work will be done by postdocs or graduate students, and it can be a good training opportunity for them. All the same, the amount of time specified by senior investigators is often absurdly unrealistic. Yet this approach is encouraged: I doubt anyone ever questions a senior investigator’s time commitment when evaluating a grant, few funding bodies check whether you’ve done what you said you’d do, and even if they do, I’ve never heard of a funder demanding that a previous project be written up before they’ll consider a new funding application.
I don’t think the research community is particularly happy about this: many people have a sense of guilt at the backlog, but they feel they have no option. So the current system creates stress as well as inefficiency and waste. I’m not sure what the solution is, but I think this is something that research funders should start thinking about. We need to change the incentives to allow people time to think. I don’t believe anyone goes into science because they want to become rich and famous: we go into it because we are excited by ideas and want to discover new things. But just as bankers seem to get into a spiral of greed whereby they want higher and higher bonuses, it’s easy to get swept up in the need to prove yourself by getting more and more grants, and to lose sight of the whole purpose of the exercise – which should be to do good, thoughtful science. We won’t get the right people staying in the field if we value people solely in terms of research income, rather than in terms of whether they use that income efficiently and effectively.