Monday, 9 June 2014

I was appalled a couple of weeks ago to see this advert by PETA - People for the Ethical Treatment of Animals.
Readers of this blog will be aware that spurious claims of links with autism are a subject that gets me extremely cross - parents have quite enough to contend with, without the host of irresponsible people who come up with a new autism cause every day.
Needless to say, there is no scientific evidence for a link between milk consumption and autism, and so I wondered why on earth PETA should be promoting such an idea.
I didn't know much about them until this advert cropped up. I first came across PETA when I was in Australia and they had a campaign protesting about "sheep ships" - live export of animals to the Middle East. I was impressed at the way they highlighted the treatment of the animals, who were taken on a long sea voyage in vile conditions just so that they could be slaughtered in a particular way on arrival.
Now that I've read more background, I realise that the autism advert is, alas, a classic case of the "wishful thinking" fallacy, which tends to crop up in debates that involve complex ethical issues. Human rights and animal rights often come into conflict, and many of us find it difficult to steer a course between the competing demands. For instance, should I eat meat? I like the taste, and it's a good source of protein and iron, but does that justify farming and killing animals? Should we experiment on animals? It may lead to important breakthroughs for conditions such as Parkinson's disease and Alzheimer's disease, but large numbers of animals will be subjected to unpleasant procedures as a result. Many people, when confronted with such choices, will give priority to the needs of humans, while at the same time trying to ensure that the treatment of animals is as humane as possible.
PETA campaigners, however, take an absolute stance, arguing that "animals are not ours to eat, wear, experiment on, use for entertainment or abuse in any way." Their agenda, then, is not to improve conditions for animals used by humans, but to prevent such use altogether.
This poses a problem. Most people aren't going to be persuaded to become vegan on ethical grounds. It seems that PETA therefore decided we need to be presented with other motivations, namely the idea that milk is bad for you. It would be convenient for PETA if this were true, because it would mean that sensible people would stop drinking milk, regardless of their attitude to animals. In this regard, it's wishful thinking. The claim was challenged last week by Sense About Science, whose conversation with a representative from PETA is described here.
Rather than accepting that the evidence for an autism link was not supported by science, Ben Williamson of PETA responded with further outlandish claims, maintaining that consumption of milk contributes to “asthma, constipation, recurrent ear infections, iron deficiency, anaemia and even some cancers”.
The same wishful-thinking style of argument is used by those who want to ban all animal experimentation and who consequently argue that all such work is pointless, has never achieved anything, and is done only to promote the careers of those doing the experiments. A moment's thought reveals the fallacy of this viewpoint: it would have to mean that everyone doing animal experiments is either so stupid that they can't see the pointlessness of their work, or such a sadist that they enjoy being unpleasant to animals. Neither proposition is credible. But if it were true, the argument would be easy to win.
The wishful thinkers try to bypass ethically difficult decisions by arguing that there are good practical reasons for adopting their preferred solution. We should become vegan to avoid autism, they say. We should stop animal experimentation because it achieves nothing, they say. Would that life were so simple.
Not only is it logically indefensible to take this line, it is actually counterproductive. PETA's response to Sense About Science confirms that this is an organisation that cannot be trusted to get things right. They will say whatever is convenient to promote their views and will distort the evidence if it helps. Their latest campaign has destroyed any credibility they might have had with the scientifically literate public.
Thursday, 9 January 2014
Off with the old and on with the new: the pressures against cumulative research
Yesterday I escaped a very soggy Oxford to make it down to London for a symposium on "Increasing value, reducing waste" in research. The meeting marked the publication of a special issue of the Lancet containing five papers and two commentaries, which can be downloaded here.
I was excited by the symposium because, although the focus was on medicine, it raised a number of issues with much broader relevance for science, several of which I have covered on this blog: pre-registration of research, the criteria used by high-impact journals, ethics regulation, academic backlogs, and incentives for researchers. It was impressive to see that major players in the field of medicine now recognize that there is a massive problem of waste in research. Better still, they are taking seriously the need to devise ways in which this could be fixed.
I hope to blog about more of the issues that came up at the meeting, but for today I'll confine myself to one topic that I hadn't thought much about before, but which I now see as important: the need for research to build on previous research, and the current pressures working against this.
Iain Chalmers presented one of the most disturbing slides of the day, a forest plot of effect sizes found in medical trials for a treatment to prevent bleeding during surgery.

[Figure: cumulative forest plot, based on Figure 3 of Chalmers et al., 2014]
Time runs along the x-axis, and the horizontal line corresponds to a result where the active and control treatments do not differ. Points below the line whose confidence intervals (the vertical 'fins') do not cross it show a beneficial effect of treatment. The graph shows that the effectiveness of the treatment was clearly established by around 2002, yet a further 20 studies involving several hundred patients were reported in the literature after that date. Chalmers made the point that it is simply unethical to do a clinical trial if previous research has already established an effect. The problem is that researchers often don't check the literature to see what has already been done, and so there is wasteful repetition of studies. In the field of medicine this is particularly serious, because patients may be denied the most effective treatment if they enrol in a research project.
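The logic of such a cumulative picture is easy to demonstrate in code. Below is a minimal sketch (my own illustration, with invented numbers, not the data behind Chalmers's slide): it simulates a sequence of trials of a genuinely effective treatment and pools them with a standard inverse-variance (fixed-effect) meta-analysis as each new result arrives. Once the pooled confidence interval excludes zero, further trials are answering a question that has already been settled.

```python
# A minimal sketch of cumulative meta-analysis, with invented data.
# Negative effects indicate benefit (e.g. less bleeding than control).
import numpy as np

rng = np.random.default_rng(1)
true_effect = -0.4                            # hypothetical true benefit
se = rng.uniform(0.15, 0.5, size=25)          # 25 trials of varying precision
effect = true_effect + rng.standard_normal(25) * se

weights = 1 / se**2                           # inverse-variance weights
for k in range(1, 26):
    w = weights[:k]
    pooled = np.sum(w * effect[:k]) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    note = "  <- effect established" if hi < 0 else ""
    print(f"after trial {k:2d}: {pooled:+.2f} [{lo:+.2f}, {hi:+.2f}]{note}")
```

Typically the pooled interval excludes zero within the first handful of trials; every line printed after that point corresponds to the kind of redundant study Chalmers was complaining about.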
Outside medicine, I'm not sure this is so much of an issue. In fact, as I've argued elsewhere, in psychology and neuroscience I think there's more of a problem with lack of replication. But there definitely is much neglect of prior research. I lose count of the number of papers I review where the introduction presents a biased view of the literature that supports the authors' conclusions. For instance, if you are interested in the relation between auditory deficit and children's language disorders, it is possible to write an introduction presenting this association as an established fact, or to write one arguing that it has been comprehensively debunked. I have seen both.
Is this just lazy, biased or ignorant authors? In part, I suspect it is. But I think there is a deeper problem, which has to do with the insatiable demand for novelty shown by many journals, especially the high-impact ones. These journals are under heavy pressure on page space and often allow only 500 words or fewer for an introduction. Unless authors can refer to a systematic review of the topic they are working on, they are obliged to give only the briefest account of prior literature. It seems we no longer value the idea that research should build on what has gone before: rather, everyone wants studies so exciting that they stand alone. Indeed, describing a study as 'incremental' research is typically its death knell in a funding committee.
We need good syntheses of past research, yet these are not valued because they are not deemed novel. One point made by Iain Chalmers was that funders have in the past been reluctant to give grants for systematic reviews. Reviews also aren't rated highly in academia: for instance, I'm proud of a review on mismatch negativity that I published in Psychological Bulletin in 2007. It not only condensed and critiqued existing research, but also uncovered patterns in the data that had not previously been noted. However, for the REF (Research Excellence Framework), and for my publications list on a grant renewal, reviews don't count.
We need a rethink of our attitude to reviews. Medicine has led the way and specified rigorous criteria for systematic reviews, so that authors can't just cherrypick specific studies of interest. But it has also shown us that such reviews are an invaluable part of the research process. They help ensure that we do not waste resources by addressing questions that have already been answered, and they encourage us to think of research as a cumulative, developing process, rather than a series of disconnected, dramatic events.
Reference
Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., Howells, D. W., Ioannidis, J. P. A., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet. doi:10.1016/S0140-6736(13)62229-1
Labels: ethics, funding, neophilia, replication, research, reviews, systematic reviews, waste
Monday, 17 June 2013
Research fraud: More scrutiny by administrators is not the answer
I read this piece in the Independent this morning and an icy chill gripped me. Fraudulent researchers have been damaging Britain's scientific reputation and we need to do something. But what? Sadly, it sounds like the plan is to do what is usually done when a moral panic occurs: increase the amount of regulation.
So here is my, very quick, response – I really have lots of other things I should be doing, but this seemed urgent, so apologies for typos etc.
According to the account in the Independent, Universities will not be eligible for research funding unless they sign up to a Concordat for Research Integrity which entails, among other things, that they "will have to demonstrate annually that each team member’s graphs and spreadsheets are precisely correct."
We already have massive regulation around the ethics of research on human participants that works on the assumption that nobody can be trusted, so we all have to do mountains of paperwork to prove we aren't doing anything deceptive or harmful.
So, you will ask, am I in favour of fraud and sloppiness in research? Of course not. Indeed, I devote a fair part of my blog to criticisms of what I see as dodgy science: typically, not outright fraud, but rather over-hyped or methodologically weak work, which is, to my mind, a far greater problem. I agree we need to think about how to fix science, and that many of our current practices lead to non-replicable findings. I just don't think more scrutiny by administrators is the solution. To start scrutinising datasets is just silly: this is not where the problem lies.
So what would I do? The answers fall into three main categories: incentives, publication practices, and research methods.
Incentives is the big one. I've been arguing for years that our current reward system distorts and damages science. I won't rehearse the arguments again: you can read them here. The current Research Excellence Framework (REF) is, to my mind, an unnecessary exercise that further incentivizes researchers against doing slow and careful work. My first recommendation is therefore that we ditch the REF and use simpler metrics to allocate research funding to universities, freeing up a great deal of time and money and improving the security of research staff. Currently, we have a situation where research stardom, assessed by REF criteria, is all-important. Instead of valuing papers in top journals, we should be valuing research replicability.
Publication practices are problematic, mainly because the top journals prioritize exciting results over methodological rigour. There is therefore a strong temptation to do post hoc analyses of data until an exciting result emerges. Pre-registration of research projects has been recommended as a way of dealing with this - see this letter to the Guardian, to which I am a signatory. It might be even more effective if research funders adopted the practice of requiring researchers to specify the details of their methods and analyses in advance on a publicly available database. And once the research was done, the publication should contain a link to a site where data are openly available for scrutiny - with appropriate safeguards about conditions for re-use.
As regards research methods, we need better training of scientists to become more aware of the limitations of the methods that they use. Too often statistical training is a dry and inaccessible discipline. All scientists should be taught how to generate random datasets: nothing is quite as good at instilling a proper understanding of p-values as seeing the apparent patterns in data that will inevitably arise if you look hard enough at some random numbers. In addition, not enough researchers receive training in best practices for ensuring quality of data entry, or in exploratory data analysis to check the numbers are coherent and meet assumptions of the analytic approach.
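To make that concrete, here is the sort of exercise I have in mind, as a minimal sketch in Python (my own illustration, not part of any official curriculum): generate pure noise, run a significance test on every variable, and watch 'significant' results appear at roughly the advertised 5% rate.

```python
# A minimal sketch: spurious 'significant' results from pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_variables = 30, 100

# 100 variables of pure Gaussian noise for 30 'subjects'
data = rng.standard_normal((n_subjects, n_variables))
group = np.repeat([0, 1], n_subjects // 2)    # an arbitrary, meaningless split

# t-test every noise variable against the meaningless grouping
p_values = [
    stats.ttest_ind(data[group == 0, i], data[group == 1, i]).pvalue
    for i in range(n_variables)
]
n_sig = sum(p < 0.05 for p in p_values)
print(f"{n_sig} of {n_variables} noise variables 'significant' at p < .05")
# Expect around 5 hits: look at enough random numbers and 'patterns' appear.
```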
In my original post on the expansion of regulators, I suggested that before a new regulation is introduced, there should be a cold-blooded cost-benefit analysis that considers, among other things, the cost of the regulation both in terms of the salaries of the people who implement it and the time and other costs to those affected by it. My concern is that among the 'other costs' is something rather nebulous that could easily get missed. Quite simply, doing good research takes up researchers' time and mental space. Most researchers are geeks who like nothing better than staring at data and thinking about complicated problems. If you require them to spend time satisfying bureaucratic requirements, it saps the spirit and reduces creativity.
I think we can learn much from the way ethics regulations have panned out. When a new system was first introduced in response to the Alder Hey scandal, I'm sure many thought it was a good idea. It has taken several years for the full impact to be appreciated. The problems are documented in a report by the Academy of Medical Sciences, which noted: "Urgent changes are required to the regulation and governance of health research in the UK because unnecessary delays, bureaucracy and complexity are stifling medical advances, without additional benefits to patient safety".
If the account in the Independent is to be believed, the Concordat for Research Integrity could lead to a similar outcome. I'm glad I will retire before it is fully implemented.
Labels: Concordat for Research Integrity, ethics, fraud, regulation, research
Saturday, 30 June 2012
Schoolgirls' health put at risk by Catholic view on vaccination
Today's Time Newsfeed carries a remarkable story: parents of children attending Catholic schools in Calgary were sent a special letter to accompany details of a vaccination programme against human papillomavirus (HPV), which protects against cervical cancer. In it, local bishops wrote: "Although school-based immunization delivery systems generally result in high numbers of students completing immunization, a school-based approach to vaccination sends a message that early sexual intercourse is allowed.”
I find this amazing for several reasons:
- There's a complete failure to understand what affects teenagers' behaviour. Do the bishops seriously think that teenaged girls who are thinking of having sex say to themselves "Oh, wait a minute. I might get HPV. Let's not do it." Potential consequences of sex include a host of sexually transmitted diseases, as well as pregnancy. If these don't put girls off, then why should a risk of HPV?
- Abstinence until marriage is no protection against HPV. You can get it if you are a virgin who marries someone with HPV. You can get it if you are raped (something which has been known to occur in Catholic schools).
- The recommendation seems theologically dubious. I'm an atheist, but my understanding of Catholicism is that whether or not something is a sin has largely to do with motivation rather than action. So if you are tempted to have sex but desist because it would upset God, that's good. If you are tempted to have sex but desist only out of fear of disease, that's still a sin. The church should be teaching girls to love God so much that they won't do things that offend him, not to conform to standards of sexual behaviour out of fear. No doubt religious readers will put me right if I've misunderstood this distinction.
- These girls are attending a Catholic school where I assume morality is drummed into them day and night. The bishops assume that their grasp of that morality is so weak that an HPV vaccination will be sufficient to overturn everything they have been told about sexual ethics. That doesn't say much for the religious teaching in the schools, or for the intelligence of the pupils.
- One thing Jesus really understood is that humans aren't perfect and frequently fall short of the moral standards they try to adhere to. There's a huge emphasis on forgiveness of sin in his teachings. The bishops are in effect saying that God won't forgive you if you stray from the straight and narrow: he'll condemn you to life with an unpleasant disease and increase your risk of dying from cancer. That's not the Christian God I was taught about.
Labels: Catholicism, Christianity, ethics, HPV, morality, vaccination
Friday, 25 November 2011
The weird world of US ethics regulation
There has been a lot of interest over the past week in the Burzynski Clinic, a US organisation that offers unorthodox treatment to those with cancer. To get up to speed on the backstory, see this blogpost by Josephine Jones.
As someone who spends more of my time than I'd like grappling with research ethics committees, I was surprised by one aspect of this story. According to this blogpost, the clinic is not allowed to offer medical treatment, but is allowed to recruit patients to take part in clinical trials. But this is expensive for participants. The Observer piece that started all the uproar this week described how a family needed to raise £200,000 so that their very sick little girl could undergo Burzynski's treatment.
I had assumed that this trial hadn't undergone ethical scrutiny, because I could not see how any committee could agree that it was ethical to charge someone enormous sums of money to take part in a research project in which there was no guarantee of benefit. I suspect that many people would pay up if they felt they'd exhausted all other options. But this doesn't mean it's right.
I was surprised, then, to discover that the Burzynski trial had undergone review by an Institutional Review Board (IRB - the US term for an ethics committee). A letter describing the FDA's review of the relevant IRB is available on the web. It concludes that "the IRB did not adhere to the applicable statutory requirements and FDA regulations governing the protection of human subjects." There's a detailed exposition of the failings of the Burzynski Institute IRB, but no mention of fees charged to patients. So I followed a few more links and came to a US government site that described regulatory guidelines for ethics committees, which had a specific section on Charging for Investigational Products. It seems the practice of passing on research costs to research participants is allowed in the US system.
There has been considerable debate in academic circles about the opposite situation, where participants are paid to take part in a study. I know of cases where such payments have been prohibited by an ethics committee on the grounds that they provide 'inducement', which is generally regarded as a Bad Thing, though there are convincing counterarguments.
But I am having difficulty in tracking down any literature at all on the ethics of requiring participants to pay a fee to take part in research. Presumably this is a much rarer circumstance than cases where participants are paid, because in general people need persuading to take part in research. The only people who are likely to pay large sums to be a research participant are those who are in a vulnerable state, feeling they have nothing to lose. But these are the very people who need protection by ethics committees, because it's all too easy for unscrupulous operators to exploit their desperation. Anyone who doesn't have approval to charge for a medical treatment could just redescribe their activities as a clinical trial and bypass regulatory controls. Surely this cannot be right.