Correlation, Cause & Effect, and Crappy Science Reporting

The internets were ablaze this morning with a potentially very important result from a large clinical trial… ibuprofen might cause heart attack!  Here’s a snapshot of the Google News page for the story…

[Image: Google News headlines for the story]

There are a multitude of things wrong with these headlines:

(1) Very few of the news outlets even bother to report which journal the study is published in.  If you’re a science reporter worth your salt, FFS at least tell the public where to go to find out more!

(2) For those outlets that did list the source, the UK’s Daily Mail got it wrong, claiming it was published in BMJ Open, rather than the actual venue, the BMJ. Some outlets got the journal correct but didn’t actually provide a hyperlink to the paper. Time got the journal correct and provided a link, but it was a dead link! (They linked to the wrong issue of the journal.)

(3) The end point measured in the study was heart failure. I would hope it’s not necessary to explain that heart attack and heart failure are not the same thing! A heart attack is an acute (short-term) event, involving blockage of coronary arteries that starves part of the heart muscle of oxygen and damages it. Heart failure is a progressive long-term condition involving structural changes to the heart muscle, resulting in impaired heart function. When reporters use phrases like “trigger heart problems”, implying short-term effects, it’s clear they mean heart attack, and don’t understand the distinction from heart failure.

(4) Aside from the crappy proof-reading and journalistic incompetence listed above, there’s the problem of what the study actually measured and reported. As usual, there seems to be some conflation of cause-and-effect with correlation here…

Here’s what was actually done… The authors of the study (here’s an actual working link to it) asked a bunch of people whether they’d taken NSAIDs (non-steroidal anti-inflammatory drugs), then stratified them into 2 groups – those who had done so within the past couple of weeks, and those who had done so more than 6 months ago. They then looked at how many of those people were admitted to hospital with heart failure. What they found is that about 20% more people in the recent NSAID group ended up in hospital for heart failure.

THIS DOES NOT MEAN NSAIDS CAUSE HEART FAILURE !!!

What it means is that there’s a very slight positive correlation between being admitted to hospital for heart failure and having taken an NSAID in the past couple of weeks.
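To put a rough number on what “20% more” means, here’s a minimal sketch of the kind of comparison being reported – the counts below are invented purely for illustration, and are not taken from the paper:

```python
# Hypothetical counts, for illustration only -- NOT the numbers from the actual study.
recent_nsaid = {"hf_admissions": 120, "people": 10_000}   # NSAID within the past couple of weeks
remote_nsaid = {"hf_admissions": 100, "people": 10_000}   # NSAID more than 6 months ago

risk_recent = recent_nsaid["hf_admissions"] / recent_nsaid["people"]
risk_remote = remote_nsaid["hf_admissions"] / remote_nsaid["people"]

relative_risk = risk_recent / risk_remote
print(f"Relative risk: {relative_risk:.2f}")   # 1.20, i.e. "about 20% more"
```

A relative risk of 1.2 is exactly that – an association between two measurements. Nothing in the arithmetic tells you which way the causal arrow points.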

Firstly, there are a whole bunch of problems with one of the things being measured… namely, being admitted to hospital for heart failure may not actually mean you have heart failure.  It’s a pretty wide diagnostic spectrum to begin with (several different scales are available from the various interested bodies – NYHA, BHF, AHA, etc.), and of course there’s the possibility that the patients may not have actually ended up being diagnosed with true structural HF at all.

Secondly, it’s important to emphasize that the 2 things (HF hospitalization & NSAID use) correlate, but it is not possible to say which one of the two was causative! In particular, there’s no way to exclude that the people who took NSAIDs did so BECAUSE they were suffering from heart failure!!!

What does heart failure do to a person?  Well, generally, it results in shortness of breath and the inability to get around, get up stairs, walk long distances, etc.  As a result, such persons would be expected to be more sedentary.  Well guess what happens if you don’t exercise or move around… aches and pains!  And what do most people do when they have aches or pains?  Pop an NSAID!

There is a very real possibility that people may have been taking more NSAIDs because they had heart failure. This could be why HF and NSAID use are correlated. Of course, sick people popping more pain meds doesn’t make for such attractive, attention-grabbing headlines as “drugs do bad things”.

There’s also the problem that many NSAIDs are contra-indicated for use in patients taking many popular high blood pressure medications (ACE inhibitors, ARBs, beta blockers, etc). High blood pressure is one of the leading (if not THE leading) risk factors for heart failure. So, it might be expected that at least some of the patients in the study who went into hospital for heart failure had high blood pressure and were taking anti-hypertensive medications.  Given the already widely-known ability of NSAIDs to do bad things in patients on such medications, could we simply be looking at a re-statement of a well-known drug interaction result here?  By choosing HF as an end point, did the authors accidentally select for patients who are on drugs that NSAIDs interact badly with?

Either way, it’s clear we have a long way to go to educate science journalists to avoid this continued sensationalism in their reporting.  Maybe NSAIDs do cause heart failure, but this study doesn’t actually prove that point.

BioRxiv FTW!

Today we posted a pre-print on BioRxiv for the first time. Here’s the paper, entitled “Acidic pH is a Metabolic Switch for 2-Hydroxyglutarate Generation”.  It reports pretty much what the title says!

A little background… 2-HG has been known about for quite some time as an “oncometabolite”, with the D enantiomer (D-2-HG) being made by mutant forms of isocitrate dehydrogenase found in various cancers.

However, that all changed last year with 2 back-to-back papers in Cell Metabolism from the labs of Craig Thompson & Joe Loscalzo, reporting that L-2-HG is made in response to hypoxia, by the enzymes lactate dehydrogenase and malate dehydrogenase (side note – we also found 2-HG elevated in the ischemic heart as early as 2013, but were slow in publishing).

We’ve been messing around with 2-HG regulation for a while, and just sort of stumbled on the fact that the enzymes that make and degrade it are all sensitive to pH. This makes sense when you consider that acidic pH is a common feature of hypoxia, and it’s also found in the tumor micro-environment, so our results might also be relevant for cancer.

The other upshot – this work might be relevant for a set of devastating metabolic diseases, the 2-hydroxyglutaric-acidurias. These often co-present with lactic acidosis, so it’s possible that lactic acid crises may be precipitating events.  This suggests that careful management of pH in these patients (e.g. using dichloroacetate to drive PDH activity) might be therapeutically beneficial.

The work is currently churning its way through the regular journal peer review system, but you’ll see if you read the paper that the experiments are actually pretty simple. As such, we thought the results might be particularly “scoop-worthy”, and so that drove the decision to post a pre-print.

This was our first time posting on BioRxiv (kinda like our most recent paper was the first time we posted a complete underlying data set on FigShare). Both processes were completely intuitive and painless. I’m beginning to like this open science game!

 

Editorial and institutional blindness is facilitating scientific fraud


Exhibit A

Today (finally!) a J. Neurosci. paper I’ve been battling to get retracted for over 3 years was removed from the literature. Cause for celebration, right?  Not exactly. The following is a quote from the former Editor-in-Chief of the journal back in 2013…

The SfN requested institutional investigations […] institutions produced detailed reports that included the original images used to construct the figures. The images clearly documented that the figure elements that appeared to be replicates in the article in J. Neurosci. were in fact from different gels. Because there was no evidence that any results had been misrepresented, the SfN has dismissed the case. We consider the matter closed.

What followed (outlined in part here) was a long and protracted battle between YHN and the journal, eventually resulting in the SfN ethics committee (which oversees ethics at the journal) re-engaging the case. This was followed by another couple of years of them ignoring my emails, during which time J. Neurosci. got a new EiC, who claimed not to know about the case. The paper has been cited 77 times during the 3½ years it’s been allowed to pollute the literature.

The retraction notice reads:

It was brought to our attention that the Donmez et al., 2012 paper has numerous examples of unindicated splicing of gel lanes and of duplications and inversions of gel images. The prevalence of these occurrences is unacceptable and compels us to retract the paper. We offer our most sincere apologies to readers.

Apology not accepted.

How about releasing the contents of the MIT report given to the journal in 2012 and relied on so heavily by Maunsell?  It’s important to establish whether Maunsell made his decision based on the best information available to him (which would suggest the MIT investigation itself was flawed), OR whether the MIT report contained information suggesting there might be something wrong, and it was Maunsell who made the wrong call.

There’s a chain of communication (Authors > MIT > J. Neurosci. > Me), and the public has a right to know who along that chain was economical with the truth, leading to the dismissal of this case in 2013.  If it was the authors (as I suspect those involved will claim), at the very least this suggests MIT needs to step up their investigative game!

If you’re an SfN member, please consider asking the leadership why it took more than 3 years for them to realize they were wrong.  Why was the extensive evidence presented to them prima facie in 2013 not enough to make the call? There’s really no excuse.

 

Exhibit B

This case brings to light another example of a paper from the same authors in Cell.  As outlined in detail here, in that case the journal responded as follows:

Thank you for your e-mails. In addition to having been informed of the results of the institutional investigation, we have also examined the implicated figure panels editorially. Despite some apparent superficial similarities, upon extensive examination we were unable to find any compelling evidence for manipulation or duplication in those panels and therefore are not taking any further action at this time.
Best wishes,
Sri Devi Narasimhan, PhD, Scientific Editor, Cell

The problem was, the editor who handled the case was a scientific “grand-child” of the lead author on the paper (trained with a former postdoc of theirs). Several angry emails regarding this conflict-of-interest went unanswered by Cell EiC Emilie Marcus, so I had to get the Committee on Publication Ethics (COPE) involved. Eventually the paper was retracted after 2 years (having accrued over 200 citations). No action against the journal was taken by COPE (which, as I’ve said elsewhere, appears to be nothing more than a narcissistic trade association for the publishing industry).

Again, the Cell case highlights that Editors were simply content to take things at face value, rather than invest the effort to dig deeper. Only after a LOT of badgering (I would estimate these 2 papers account for 100+ emails on my part) did they come to the realization there might actually be a problem.

 

How does this blindness damage science?

There’s more fallout than just the citations accrued by these papers before they were retracted (and please note, the Cell paper has been cited another 50 times since it was retracted!).

The lead author, Gizem Donmez, was the recipient of a prestigious Ellison Foundation** New Scholars in Aging Award for 2012. There can be little doubt that the award and the high-impact publications in Cell and J. Neurosci. helped her to secure a faculty position at Tufts. That position is now gone, but what of the others who applied for it and were beaten out by Donmez? What a sad waste of young scientific talent, to be cheated out of a job by someone who didn’t play by the rules. Indeed, one of my former post-doctoral fellows applied for an Ellison Award the same year – maybe his career prospects would be different now if Donmez hadn’t cheated?

Then there’s the unknowable cost of scientific effort to try and replicate or build on these retracted studies. The millions of dollars in grant funds wasted on antibodies and other reagents purchased after reading these papers.

There’s also the effort expended by myself and several other bloggers and social media activists to get these papers retracted. This stuff doesn’t pay the bills!  Throw in all the wasted hours of the search committee at Tufts who hired Donmez based on false pretenses. What about the peer reviewers who were hoodwinked during review of these papers – how do they feel now?

All of this was preventable!

Editors: Sit up and pay attention when confronted with evidence. The current model (partly driven by fear of being sued) allows journals to place too much reliance on institutional investigations.  As I told Cell several months ago, the multi-billion dollar publishing industry needs to start spending some of their precious cash on forensic investigators. Bring the analysis in-house. Hire some out-of-work members of the post-docalypse. Have the courage to question reports provided by institutions.

Universities: Look beyond the indirect co$ts from the grant dollars held by the author in question. Respond to emails and allegations of misconduct. Make your RIO and other ethics folks more accessible. Hire more forensic investigators. Have the courage to question “original data” provided by authors.

All of this takes effort and money (which both universities and publishers have a lot of), but the current system we have is just not working.  It should not take 3 years and ridiculous amounts of prodding by a 3rd party to correct the scientific record!

——————

** The Ellison Foundation completely refused to answer any of my emails or voice-mails about Donmez, not even to acknowledge receipt. However, some time between June and September 2015 they quietly removed Donmez from their website.

 

 

 

Open Letter to PLoS Biology – [UPDATE – Plagiarism too!]

Dear editors,

I’m writing to demand that you immediately retract this paper by Mario Saad and colleagues (PLoS Biol. 2011 9: e1001212. doi: 10.1371/journal.pbio.1001212.) due to overwhelming evidence of data manipulation that has been known to you for almost 3 years.

The paper was originally criticized in comments on the journal website in July 2013. In February 2014 I posted these and more problems to PubPeer and PubMedCommons, and forwarded allegations directly to the journal by email. The case was featured in an article in The Scientist in August 2014, and has been widely discussed in comment threads relating to other papers by Mario Saad on the Retraction Watch website. In March 2015, while reporting on a different Saad paper in PLoS One, Retraction Watch asked someone at PLoS Biology to comment. They responded…

The paper you’ve asked about is one that the PLOS Biology team is currently looking into. We can’t share any details and, as is standard, we are not willing to discuss this externally. As you are likely aware, journals have to follow a clear process when investigating issues on papers and at all the PLOS journals we follow the COPE guidelines. Out of respect for both the scientific process, and for all involved, these things take time.

That was over a year ago. Today, 4 more papers by Saad were retracted, thereby disavowing the findings of an internal University investigation, and rebutting a lawsuit between Saad and the American Diabetes Association.

Your continued inaction on this paper is beginning to border on gross negligence. According to ISI Web of Science the paper has been cited 96 times, with more than two-thirds of these citations coming after the initial concerns were raised in June 2013. This is bad for science, and very bad indeed for your journal’s reputation.

Sincerely,

Paul S. Brookes, PhD.

P.S. I should note this is not the first time I’ve had problems getting your journal to take action on a problematic paper! That one only took 8 months to fix.

P.P.S. I recently had the good fortune to teach a class on scientific communication, which included a session on how to critique a paper (journal club style). I used this as an example of a spectacularly bad paper riddled with data problems. In publishing this letter, I’m now making my PowerPoint slides freely available for teachers in science ethics and other classes to use as an example. I’m not sure this is what you envisaged for this paper, but I think we can agree this was a preventable outcome.

[UPDATE 3/28/2016]

It has just been brought to my attention that, in addition to all of the above figure problems, large sections of the text in this paper are plagiarized from another publication!  The paper in question is this one, and the two images below detail the extent of the copied passages, based on a quick look.  There may be more to be revealed via the use of software…

[Images: annotated side-by-side comparisons showing the extent of the copied passages]
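For anyone who wants to run that kind of check themselves, here’s a minimal sketch using Python’s standard-library difflib – the two passages below are placeholders, not text from either paper, and dedicated services (e.g. iThenticate) do this sort of thing at scale across the whole literature:

```python
from difflib import SequenceMatcher

# Placeholder passages -- NOT the actual text from either paper.
passage_a = ("Treatment of the cells with the inhibitor resulted in a marked "
             "decrease in phosphorylation of the target protein after 24 hours.")
passage_b = ("Treatment of cells with this inhibitor resulted in a marked "
             "decrease in phosphorylation of the target protein within 24 hours.")

matcher = SequenceMatcher(None, passage_a, passage_b)
print(f"Overall similarity: {matcher.ratio():.2f}")

# List the longest stretches of verbatim-identical text (>30 characters).
for match in matcher.get_matching_blocks():
    if match.size > 30:
        print(repr(passage_a[match.a:match.a + match.size]))
```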

The journal has been informed, although I do not have high hopes, based on zero response whatsoever to tweeting the above letter to both the @PLoS and @PLoSBiology twitter accounts last week.

 

The self-fulfilling prophecy of a downward trend in retraction delays

As reported on Retraction Watch today, folks interested in the field of publications and retractions are “geeking out” at a new study from authors at Thomson Reuters, who mined the ISI Web of Science database to learn some new things.

Among their key conclusions, based on Figure 1c of the paper, is a downward trend in the time delay between publication and retraction…

[Image: the original Figure 1c, average retraction delay by publication year]

The problem is, such a conclusion is a self-fulfilling prophecy. The study was based on a 2014 version of the database, so, for example, there cannot possibly be any papers published in 2012 with more than a 2-year delay in retraction, because it hasn’t been long enough yet!  Here’s another way to look at the data, with the red line indicating the maximum possible retraction delay for a paper published in that year…

[Image: the same figure annotated with a red line showing the maximum possible retraction delay for each publication year]

Are we to believe that in the coming years, papers from 2005 onward that take 10 years to retract will simply not arise?  No, of course those papers are out there, and over time they will raise the average retraction delay of their year’s cohort.
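To see how strong this censoring effect is, here’s a quick simulation sketch – the delay distribution and all the numbers are invented for illustration, not taken from the Thomson Reuters data. Even when the true retraction delay never changes, the average delay visible in a 2014 snapshot of the database inevitably shrinks for recent publication years:

```python
import random

random.seed(0)
SNAPSHOT_YEAR = 2014   # the version of the database used in the study
TRUE_MEAN_DELAY = 7.5  # assumed constant average delay (years); invented for illustration

for pub_year in range(1996, 2014, 2):
    # Draw retraction delays from the SAME distribution for every publication year...
    delays = [random.expovariate(1 / TRUE_MEAN_DELAY) for _ in range(1000)]
    # ...but the snapshot can only contain retractions that have already happened.
    visible = [d for d in delays if pub_year + d <= SNAPSHOT_YEAR]
    avg = sum(visible) / len(visible)
    print(f"{pub_year}: apparent mean delay = {avg:.1f} y ({len(visible)} of 1000 visible)")
```

The printed “trend” falls steadily toward zero even though nothing about retraction behavior changed – it’s purely an artifact of the snapshot date.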

TL/DR – it’s too early to conclude a downward trend.

——————–

But what could be done differently?  Well, they could have taken the average delay for the whole data set (it looks to be about 7–8 years) and used that as a cut-off, ignoring papers published between 2006 and 2014 – i.e., those that have not yet had a full observation window.  In my estimation, doing so would have nullified the conclusion of a downward trend.
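Continuing the simulation sketch above (same caveat – the numbers are invented), that restriction is just a filter on publication year:

```python
# Keep only publication years that have had a full observation window of at
# least ~8 years (roughly the whole-dataset average delay) before the snapshot.
CUTOFF_YEARS = 8
trusted_years = [y for y in range(1996, 2014, 2) if SNAPSHOT_YEAR - y >= CUTOFF_YEARS]
print(trusted_years)   # 1996-2006 only; 2008-2012 are excluded from the trend
```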

——————–

The other major issue that impinges on this outcome is that the authors are based at Thomson Reuters, a large publishing conglomerate.  The paper contains no conflict-of-interest statement, but I do find it rather “convenient” that a group of people paid by the publishing industry found a favorable trend in the speed at which the industry deals with problems as they arise.  In my own experience (N=1 anecdote), the delay for publishers to deal with problem papers is either static or has risen recently due to increased workload of this kind.