Science Fraud?

You may have arrived here via a redirect, or maybe just clicked through from the PSBLAB website main page. What’s this all about? This post (May 2023) attempts to summarize the events of a decade ago and to discuss developments in the field of scientific integrity since then.

From 2012 to 2013 I ran a blog (“Science-Fraud”) in the style of many similar sites at the time, reporting on problematic data in the bio-science literature. The site was run under the pseudonym “fraudster,” and frequently used foul language and snarky humor in its discussion of problematic papers. Over the course of its short existence, the site covered around 300 papers and dozens of labs, with many now-notorious purveyors of scientific pap receiving their first exposure there (Bharat Aggarwal, Gizem Donmez, Rui Curi, Mario Saad, Jun Ren, Michael Karin, Rakesh Kumar, Sam W. Lee).

Unfortunately, lax security led to my doxxing, swiftly followed by threats of lawsuits from scientists who had been written about. Given my position at the time as a fairly junior pre-tenured faculty member, I followed the advice of my institution and shut down the site. I then hired a lawyer with my own money to rebut six defamation claims. Notably, every one of the individuals who tried to sue me has since been fired for misconduct, retired, or died, and every one of them has had multiple papers retracted. Many made extensive use of astro-turfing and reputation-management websites during their downfalls. It turns out that when you ask “please explain exactly what was written that is untrue,” the litigious have a habit of going quiet. Getting a lawyer to write a nastygram to make unsavory internet content disappear is cheap, but following through and risking the discovery process is not.

The immediate events as they unfolded in 2013 were covered here. In the aftermath, I have continued to work in this area, and regularly blog about science integrity here on PSBLAB as well as posting on PubPeer and Twitter.

A byproduct of this whole process was a paper published in 2014, which used data from the various papers collected while running the science-fraud blog. Specifically, in addition to the papers I reported on, a large number of papers were submitted during the same time frame and put through the same verification and analysis process, but never blogged about (due to the sudden closing of the site). Importantly, this control set of papers was also reported to journal editors and institutions at the same frequency. Subsequent analysis showed that the simple act of talking about problematic papers on the internet was associated with a 6- to 7-fold higher level of action (correction, retraction, etc.). In the most recent reanalysis of this data set (spring 2021), even 8 years later, the papers I blogged about had been corrected or retracted at a 4-fold higher rate than those I had not. In simple terms: sunlight is the best form of bleach. Having a bad paper featured on a public-facing website is a good way of improving the chances that something will be done about it.

A lot has changed since 2012. Retraction Watch was still kind of a new thing, and PubPeer had only just launched (psst… please install their browser plug-in if you haven’t already!). Several blogs of a similar flavor have long since shut down (most tragically Abnormal Science, which was awesome), while others are still around with a different focus (e.g., k2p). Many legacy sites, such as those run by blogger Juiichi Jigen, are still around or can be accessed via the Wayback archive (as can many of the original Science-Fraud pages). NCBI-PubMed’s public commenting system, PubMed Commons, only lasted a couple of years, and the Committee on Publication Ethics (COPE) continues to be mostly useless.

But perhaps the best thing to happen since then is the realization that talking about crappy science on the internet is not something we have to do in secret anymore. The web is now awash with brave folks writing 24/7 about the tsunami of bullshit engulfing science. Individuals such as Elisabeth Bik, Leonid Schneider, Nick Brown, James Heathers, and Nick Wise are doing amazing work, Retraction Watch is still around, and Helene Hill is still going strong with a new book. There are also exciting new AI tools such as ImageTwin which make detecting problems easier (though uptake of these tools by journal publishers is predictably slow).

There’s still a lot of work to do. In my day job as a scientist, I would estimate I reject fully 50% of the manuscripts I receive for review because they contain manipulated data or other signs of misconduct. The manner in which many journals and institutions handle misconduct is still a complete mess, and I’m not confident that any of the anti-misconduct systems we have in place are remotely capable of dealing with the predicted onslaught of AI-generated bullshit.

Over the past decade I’ve investigated and documented hundreds of papers and thousands of images, which barely scratches the surface of the problem. As ever, I’m willing to take a look at anything suspicious that anyone sends me on an ad hoc basis, and of course I would be willing to discuss more formal consulting roles if appropriate. It’s been encouraging to see the whole anti-bullshit field come out of the scientific shadows and become a vibrant part of the scientific community. I just wish it hadn’t taken so long!