Friday, January 10, 2014

Matt Ridley from the UK: The real risks of cherry-picking scientific data

The Tamiflu tale is that some years ago the pharmaceutical company Roche produced evidence that persuaded the World Health Organisation that Tamiflu was effective against flu, and governments such as ours began stockpiling the drug in readiness for a pandemic. But then a Japanese scientist pointed out that most of the clinical trials on the drug had not been published. It appears that the unpublished ones generally showed less impressive results than the published ones.
Roche has now ensured that all 77 trials are in the public domain, so a true assessment of whether Tamiflu works will be made by the Cochrane Collaboration, a non-profit research group. The person who did most to draw the world’s attention to this problem was Ben Goldacre, a doctor and writer, whose book Bad Pharma accused the industry of often omitting publication of clinical trials with negative results. Others took up the issue, notably the charity Sense About Science, the editor of the British Medical Journal, Fiona Godlee, and the Conservative MP Sarah Wollaston. The industry’s reaction, says Goldacre, began with “outright denials and reassurance, before a slow erosion to more serious engagement”.
The pressure these people exerted led to the hard-hitting report last week from the Public Accounts Committee (PAC), which found that discussions “have been hampered because important information about clinical trials is routinely and legally withheld from doctors and researchers by manufacturers”.
The problem seems to be widespread. A paper in the BMJ in 2012 reported that only one fifth of clinical trials financed by the US National Institutes of Health released summaries of their results within the required one year of completion and one third were still unpublished after 51 months.
The industry protests that it would never hide evidence that a drug is dangerous or completely useless, and this is probably so: that would risk commercial suicide. Goldacre’s riposte is that it is also vital to know if one drug is better than another, say, saving eight lives per hundred patients rather than six. He puts it this way: “If there are eight people tied to a railway track, with a very slow shunter crushing them one by one, and I only untie the first six before stopping and awarding myself a point, you would rightly think that I had harmed two people. Medicine is no different.”
Imbued as we are with an instinctive tendency to read meaning into nature, we find it counter-intuitive that many experiments get significant results by chance and that the way to check if this has happened is to repeat the experiment and publish the result. When the drug company Amgen tried to replicate 53 key studies of cancer, they got the same result in just six cases. All too often scientists publish chance results, or “false positives”, like gamblers or fund managers who tell you about winners they backed.
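The point about chance results can be made concrete with a small simulation. The sketch below is illustrative only: the sample sizes, significance threshold and number of runs are assumptions, not figures from the column. It runs many two-group experiments in which the true effect is zero, keeps the handful that come out “significant” anyway, and shows how reporting only those winners inflates the apparent effect.

```python
# Illustrative simulation (parameters are assumptions, not from the article):
# run many experiments with no true effect, test each at roughly p < 0.05,
# and compare all results with the "published" significant ones.
import random
import statistics

random.seed(1)

def run_experiment(n=30):
    """One two-group experiment with no true effect; returns the observed mean difference."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

def looks_significant(diff, n=30, sd=1.0):
    # Crude two-sided z-test at roughly the p < 0.05 level.
    standard_error = sd * (2 / n) ** 0.5
    return abs(diff) / standard_error > 1.96

diffs = [run_experiment() for _ in range(1000)]
published = [d for d in diffs if looks_significant(d)]  # the "winners" that get written up

print(f"'significant' purely by chance: {len(published)} of {len(diffs)}")
print(f"mean absolute effect, all runs:       {statistics.mean(abs(d) for d in diffs):.3f}")
if published:
    print(f"mean absolute effect, published only: {statistics.mean(abs(d) for d in published):.3f}")
```

Roughly one run in twenty clears the significance bar even though nothing real is happening, and the published subset shows a much larger average effect than the full set; repeating the experiments, as the Amgen team did, is what exposes the difference.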
Outside medicine, we popular science authors are probably guilty of too often finding startling results in the scientific literature and drawing lessons from them without waiting for them to be replicated. Or as Christopher Chabris, of Union College in Schenectady, New York, harshly put it about the pop-psychology author Malcolm Gladwell: cherry-picking studies to back his just-so stories. Dr Chabris points out that a key 2007 experiment cited by Gladwell in his latest book, which found that people did better on a problem if it was written in hard-to-read script, was later repeated in a much larger sample of students with negative results.
To illustrate how far this problem reaches, a few years ago there was a scientific scandal with remarkable similarities to the Tamiflu affair in its non-publication of negative data. A relentless, independent scientific auditor in Canada named Stephen McIntyre grew suspicious of a graph being promoted by governments to portray today’s global temperatures as warming far faster than any in the past 1,400 years — the famous “hockey stick” graph. When he dug into the data behind the graph, to the fury of its authors, especially Michael Mann, he found not only problems with the data and the analysis of it but a whole directory of results labelled “CENSORED”.
This proved to contain five calculations of what the graph would have looked like without any tree-ring samples from bristlecone pine trees. None of the five graphs showed a hockey stick upturn in the late 20th century: “This shows about as vividly as one could imagine that the hockey stick is made out of bristlecone pine,” wrote Mr McIntyre drily. (The bristlecone pine was well known to have grown larger tree rings in recent years for non-climate reasons: goats tearing the bark, which regrew rapidly, and extra carbon dioxide making trees grow faster.)
Mr McIntyre later unearthed the same problem when the hockey stick graph was relaunched to overcome his critique, with Siberian larch trees instead of bristlecones. This time the lead author, Keith Briffa, of the University of East Anglia, had used only a small sample of 12 larch trees for recent years, ignoring a much larger data set of the same age from the same region. If the analysis was repeated with all the larch trees there was no hockey-stick shape to the graph. Explanations for the omission were unconvincing.
Given that these were the most prominent and recognisable graphs used to show evidence of unprecedented climate change in recent decades, and to justify unusual energy policies that hit poor people especially hard, this case of cherry-picked publication was just as potentially shocking and costly as Tamiflugate. Omission of inconvenient data is a sin in government science as well as in the private sector.

Matt Ridley is a member of the British House of Lords and an acclaimed author; he blogs at www.rationaloptimist.com.