A Conflict-Of-Interest Tale from long ago

This story spans several years between 2010 and 2014. It sat in my drafts folder for over a decade, so some links may be dead, and the current status of the various people, companies, and science may no longer be accurate.

Here’s the short version…

  • My lab got a drug from a company via a material transfer agreement (MTA)
  • We found some really cool results
  • We tried to publish results
  • The company tried to sue us
  • The company went bankrupt
  • We published the paper anyway
  • Rival paper from former company employees came out, doesn’t cite us

Background on Conflict of Interest in Basic Science
For life scientists in academia, one of the most common examples of potential conflict-of-interest (COI) is receiving resources from a drug company to do a research project. Sometimes this takes the form of money directly intended to fund the research. Other times the reward can be a position on the scientific advisory board of the company, or being granted access to rare chemicals/reagents that are not commercially available to everyone. In the US, such support is usually required to be disclosed both to the employing University and to federal or other funding agencies.

Usually when the resource being shared is a new drug, companies like to control what happens to the material, such as whether/how the academics can discuss the results with others, including publication. Such legal matters are usually dealt with via Confidentiality/Non-Disclosure Agreements (CNDAs) and Material Transfer Agreements (MTAs). These documents are drafted jointly by the company’s lawyers and the University’s Technology Transfer Office. Herein, I discuss an example of such an agreement gone awry…

The research that led us to request an MTA for a new drug
My lab’ has a long-standing interest in mitochondrial K+ channels, and in our work we’ve often used a drug called NS1619, which is reported to specifically activate mitochondrial KCa channels. The nomenclature is a bit odd here… a family of channels is encoded by the Slo genes, of which there are 4 isotypes in mammals (Slo1, Slo2.1, Slo2.2, Slo3). The channel proteins are also called “BK” channels, since they are large conductance (big) K+ channels. They’re also often referred to as KCa channels, even though the Slo2 variants are not Ca2+ activated (they’re Na+ activated, so KNa channels). There are also some hybrid terms such as BKCa or mBK (mitochondrial BK). Anyway, the key point is that these channels are thought to be important for protecting the heart against ischemia-reperfusion (IR) injury, and the debate mainly centers on which isotype is present in mitochondria and responsible for these effects. Others think it’s Slo1, but our results suggest otherwise.

NS1619 was originally made by a Danish company called Neurosearch (archived link from 2019 shortly before the site went dead), but is readily available from chemical suppliers such as Sigma. However, over the decade or so since the drug was made available, it emerged that NS1619 might have non-specific side effects in mitochondria. So, interest in the field shifted toward a related newer compound, NS11021, reported to be more potent and specific for the mito’ channel. NS11021 is not commercially available, so in late 2011 we wrote to Neurosearch to obtain some. Following a few rounds of back-and-forth between lawyers, an MTA was executed and shortly afterward we were sent an aliquot of NS11021. The scientist within the company who liaised with us, Morten Grunnet, was collaborative and very open to discussing experimental details and results. I consider him a colleague and friend to this day, and he was a co-author on our paper that came out of this work.

Although publishing the entire MTA here would be inappropriate, below I want to highlight specifically the part describing the work to be performed in my lab using the drug. Note the mention of potential use of knockout mice for various types of K+ channel…

Appendix A: Study Protocol. The overall direction of the research project, which the study will be part of, is the identification of K+ channels in mitochondria, which are involved in cardioprotection by ischemic or anesthetic preconditioning. The aim of the study would be to investigate the role of the BK channel in anesthetic preconditioning using FVB mice. In Langendorff perfused rat hearts, the volatile anesthetic isoflurane provides protection against ischemia-reperfusion injury in a manner that is sensitive to the BK channel inhibitor paxilline.  University of Rochester would like to determine if this manner of isoflurane mediated protection can be mimicked by the administration of the specific BK channel activator NS11021. University of Rochester would also like to look at the effect of NS11021 on BK channel activity in isolated mitochondria, using novel thallium-flux assay we recently published (PMID:20185796, PDF attached). None of the K+ channels in the mitochondrial membrane have been identified at the molecular genetic level. Knockout mice for various types of KCa and KATP channels may be available to us in future, and knowing which channel to look for could be greatly facilitated by the use of highly specific pharmacologic reagents.

So, we did some experiments and the data looked great. Without getting into details (all of which can be found in our published paper on this topic), the key finding was that NS11021 protected hearts against IR injury, and this effect was lost in Slo1-/- mice. Furthermore, the effect was likely mediated not by Slo1 channels inside mitochondria of cardiomyocytes, but instead by channels in intrinsic cardiac neurons.

Things get nasty
You might think that if a company was touting a drug as acting on a specific target, and someone came along with data showing the effect of the drug disappears in a knockout mouse for that target, they’d be happy?  Not Neurosearch; they lawyered up and tried to stop us from publishing.  They essentially said that if we attempted to publish, it would be a breach of the MTA, and they would sue for breach of contract.

Backing up for a second, during the time we were collecting our data and preparing our manuscript, we were in regular discussion with Dr. Grunnet, who was essentially our contact within the company. However, in the summer of 2012 he left Neurosearch to work for a different Danish biotech, Lundbeck. We continued communicating while the manuscript went through several iterations, then in June 2012 we heard via Dr. Grunnet that Neurosearch were “rather reluctant to let us publish since they claim that the part including KO mice is not covered by the MTA”.  Also “they have decided to perform a small study on 11021 in WT and BK KO mice with I/R in Langendorff. The primary aim is to confirm BK selectivity of 11021 and they are aiming at a small publication”. The suggested solution was that we hold off until their study was published. In other words, the company scientists didn’t want to get scooped!

Given our study was complete and ready to publish, and theirs had barely even begun, we tried explaining that it might be better to simply add the interested parties onto our paper as co-authors. In my view this was an outreach effort that went above and beyond necessary collegiality (remember – they were trying to sue us). However, we were then contacted by Søren-Peter Olesen, who oversaw Neurosearch’s collaboration with the University of Copenhagen and also held an academic appointment there.

In the remainder of fall 2012, while our paper was being put through the wringer by journal reviewers, a conversation ensued between myself, Dr. Olesen, and the Tech’ Transfer lawyers here in Rochester. This culminated in an email from Dr. Olesen in November, stating “Neurosearch will then close the matter and conclude that you do not want any legal permission to publish on NS11021 in relation to transgenic animals”.

We had the University lawyers look over the MTA again, and they reached two important conclusions… First, the language regarding knockout mice was sufficiently broad that in their view we hadn’t breached the terms. Second (and more important), the company had no right to bar us from publishing, because the MTA itself made no mention of a right of veto on publications. The MTA simply requested that we send a copy of any manuscript to Neurosearch 30 days before journal submission, for them to review and comment. We did this, and in fact our company point-person had been kept in the loop from the earliest stages, so Neurosearch knew about these knockout mouse studies for almost a year before the paper was submitted.

Neurosearch goes kaput
Eventually, after 4 journals and 8 months of review/reject/resubmit, we got our paper published in PeerJ in February 2013. But then something strange happened… Neurosearch went belly up. Well, technically the terms used were restructuring, transfer of assets, and prosecution for share price manipulation. So, the company that threatened to block us from publishing no longer exists.  Another company trading under the same name and with new management did emerge from the ashes of the old Neurosearch, but disappeared in 2019.

What about the company-backed study?
Olesen and colleagues finally published their version of the story in PLoS One, in collaboration with a group from the University of Tübingen. The new paper does not cite our work, and there are a number of problems with it…

  • The paper purports to show patch-clamp data of BK channel activity in mitoplasts (isolated mitochondrial inner membranes). The problem is, they used a “Port-a-Patch” system from NanION. This system works by using a vacuum to pull down a spherical object (ideally a mitoplast) onto a pre-formed patch pipet, with no microscope or any other confirmation that what you’re actually patching IS a mitoplast. The method depends on the purity of the “soup” you put into the chamber. We tested one of these systems in my lab in 2012 and determined it was useless for mitoplasts. Any contamination with other membrane fragments gave false readings, and notably the PLoS paper contains no information on the purity of preparations used for patching. The image of mitoplasts in Figure S1 shows a lot of membrane fragments apart from mitoplasts.
  • The IR injury data concludes that hearts from Slo1 KO mice cannot be protected by ischemic preconditioning. This result is completely opposite to our published findings. No attempt is made to explain this (remember, they don’t cite us), but here are some suggestions… we did everything in male mice, on a single genetic background (FVB), whereas they used both male and female mice on a mixed background (Sv129/C57BL/6). We measured cardiac function with a pressure balloon, but they only measured heart rate, so did not have any functional data for the heart perfusions. They had a 4-minute delay on ice between heart extraction and perfusion. The mouse heart is exquisitely temperature sensitive, and our delay is typically <30 s, with no ice. Any longer and contractility is compromised.  In effect, they may have been looking at hearts with drastically compromised function, before any IR injury.
  • We use a constant flow system so the heart is always sufficiently oxygenated and function is not affected by coronary vascular tone. The Lukowski study used constant pressure in which coronary flow (O2 delivery) is affected by vascular tone. They showed that IPC improved post-IR coronary flow, and this was absent in BK knockouts. It cannot be ruled out that the knockout mice had compromised post-IR coronary flow. This links the effects of BK in IPC to coronary vasculature, NOT mitochondrial channels inside cardiomyocytes.
  • The dose of NS11021 used in the patch-clamp studies is very high (10 μM). Previously we showed 50 nM could activate a paxilline-sensitive K+ channel in mitochondria. The PLoS paper claims that their patch data complement the findings of another paper touting Slo1 as a mito’ BK channel, but that paper didn’t show any patch data.

Overall not a good situation. Sufficient time has passed since these events that it’s probably OK to talk about it now. Olesen is retired/emeritus. The company that NS sold some of its IP/assets to is still around, but has yet to bring a single product to market based on the Neurosearch drugs.  The lawyers involved have all moved on in their careers. My lab no longer does much work on mitochondrial ion channels (but we do have some unpublished data).  People are still publishing on NS11021, and I doubt that any of the more recent folks using this molecule are aware of the above troubles.

Another OAA “Clinical Trial”

Continuing the saga of Alan Cash and Terra Biological, who are trying to get a dietary supplement containing oxaloacetate (OAA) into clinical trials for all sorts of conditions from long COVID to PMS, there appears to have been a “breakthrough”!

The company finally got around to publishing the results of a clinical trial on the effects of OAA on self-reported symptoms of chronic fatigue syndrome / myalgic encephalomyelitis (CFS/ME).  Now, this is a condition that the writer George Monbiot has called “The greatest Medical Scandal of the 21st Century”, so straight away that should start raising flags. Why pick a poorly-defined condition, the very existence of which is hotly debated?

Prima facie, the results appear promising and statistically significant, and it’s also commendable that the group chose to provide the complete (not actually, as it turns out) data set.  However, digging into the data, there are several problems…

The number of patients is “wobbly”

The trial started out with 40 patients in the control arm and 42 in the treatment arm. However, there was apparently a greater attrition rate in the controls (12 leaving) than the OAAs (5 leaving).  This would leave 28 controls and 37 OAAs as “completers” of the trial, as nicely explained in the flow diagram in Figure 3 of the paper and in the manuscript text…

…which makes it a bit weird when we go to the original data set (re-hosted here) and see there are actually 29 controls!  Where did the extra control patient come from?

Biased reporting of patients who got worse

Things get real squirrelly when we look at Figure 5 of the paper. The y-axis here is number of patients, the x-axis is the change in fatigue score pre/post trial. The orange points are controls and the blue are OAAs.

It’s a fairly simple matter to look at the dots, count the number of patients at each score point, and then add them up. Doing so, we again arrive at 29 for the controls. But, importantly, there are only 35 patients in the OAA set. We’re missing two of them, and by looking at the original data we find there were indeed two patients on OAA whose scores got worse (-4). They were simply eliminated from this graph, making it appear as if no patients on OAA got worse.
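For anyone who wants to repeat the counting exercise, here’s a minimal sketch in Python. The file name and column names (“arm”, “score_change”) are placeholders I’ve made up, not the actual headers in the shared spreadsheet:

```python
# Minimal sketch of the dot-counting exercise: tally patients at each score
# change per arm, and compare the totals with Figure 5. The file name and
# column names ("arm", "score_change") are hypothetical placeholders.
import csv
from collections import Counter, defaultdict

counts = defaultdict(Counter)
with open("completers.csv", newline="") as f:  # hypothetical export of the data set
    for row in csv.DictReader(f):
        counts[row["arm"]][int(row["score_change"])] += 1

for arm, tally in counts.items():
    # n per arm should match the figure; this is where the two missing
    # OAA patients (and the extra control) show up.
    print(arm, "n =", sum(tally.values()), dict(sorted(tally.items())))
```

Summing the tallies per arm should reproduce the 29 controls, and reveal that only 35 OAA patients made it onto the graph.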

There are other discrepancies between the data and this graph, as shown in the table below. Anything highlighted blue doesn’t match up. There are 3 instances where a ‘1’ was assigned to the control group in the figure, suggesting a patient got worse, when in fact there was no patient getting worse at that score level.  Furthermore, there are 2 patients who got better in the control group but were not counted on the graph…

Plotting Figure 5 as shown in the paper (left image below) alongside the real data (on the right) shows the way in which the paper makes it appear the controls got worse. In the published version there are more orange (control) points above the x-axis on the left side of the graph (worse scores).  Notice the missing 2 OAA patients who also got worse (at -4 in the right hand graph).

Just for fun, I also switched the order of the data series, so now the controls (orange) appear on top of the OAAs (blue) in the “real” graph on the right. This highlights the 3 controls who had a big improvement of 11 points… the same as the OAA group.  It’s amazing what little differences like this can make to the perception of a result.

Trial non-completers?

Now remember, Figure 5 is only the folks who completed the trial. As already mentioned, many did not. By comparing the data from the completers vs. the whole set, we can gain some more insight into the non-completers group, as shown in this table below.

What this shows is that the 11 controls who quit early had an average improvement score of 2, and none of them got worse during the trial. However, of the 5 in the OAA arm who quit early, 3 actually got worse.

Combining the analysis of the completers and non-completers: overall, in the control group 7 patients got worse during the trial, but none of them quit early. However, in the OAA group 8 patients got worse during the trial, and 3 of them quit early. Readers can judge for themselves whether some sort of “encouragement” to quit the trial early may have been applied to patients in one arm who were feeling worse, but not to those in the other arm.
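If you want to check those counts yourself, a quick cross-tabulation of “got worse” against completion status does the job. As above, the file name and column names (“arm”, “completed”, “score_change”) are assumptions about an exported copy of the data set, not the authors’ actual headers:

```python
# Hedged sketch: cross-tabulate "got worse" against trial completion, per arm.
# File name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("oaa_trial_all_patients.csv")
df["got_worse"] = df["score_change"] < 0

# Rows: arm x completed/quit-early; columns: got worse or not.
print(pd.crosstab([df["arm"], df["completed"]], df["got_worse"]))
```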

Appropriate Statistical Tests

Lastly, I’m not a statistician, but my understanding is that when testing whether 2 groups are different from each other, without knowing in advance which direction any difference might go, you should use a 2-tailed t-test.  For some reason, here the authors chose to use 1-tailed tests for everything, i.e., they only hypothesized that the results would go in one direction (presumably OAA being better than control), and ignored the opposite possibility. Needless to say, if you re-do the tests applying the proper criteria, some of the differences get a lot smaller or vanish altogether.  For example, in Figure 4 the p-value of 0.057 is described in the text as “trending toward significance”.

Performing this test with 2 tails yields a far less impressive p-value of 0.114, quite literally nothing to write home about.
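To see how that works in practice, here’s a quick illustration with made-up numbers (only the p-values above come from the paper): for a symmetric statistic like t, the two-tailed p-value is simply double the one-tailed value whenever the observed difference lies in the hypothesized direction, which is exactly how 0.057 becomes 0.114.

```python
# Illustration with synthetic data (NOT the trial's numbers): the two-tailed
# p-value is twice the one-tailed value when the observed difference points
# in the hypothesized direction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(0.0, 3.0, 29)   # made-up fatigue score changes
oaa = rng.normal(1.5, 3.0, 35)

p_one = stats.ttest_ind(oaa, control, alternative="greater").pvalue  # 1-tailed
p_two = stats.ttest_ind(oaa, control).pvalue                         # 2-tailed (default)
print(p_one, p_two)  # p_two comes out at double p_one here
```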

Wrapping this up

Let’s not get into the numerous typos in the paper, which really speak more to the shitty editorial standards at Frontiers than anything else. Further illustrating such problems – one person who reviewed the paper (Alison Bested) has published with at least two of the authors (Yellman and Bateman) as recently as 2021, so it was reviewed by familiar folks. The discussion of the paper repeats many of the mistakes regarding the simple biochemistry of OAA that I’ve written about in the past. Bottom line: yet again, the company shilling a $600-a-year supplement has managed to get something with a veneer of scientific legitimacy published in a not-very-good (predatory?) journal. A not-very-deep dive shows problems with data reporting and with the basic arithmetic of keeping track of patient numbers. Don’t human patients with hard-to-diagnose-and-measure chronic diseases deserve better than this?

The Guy Foundation & Quantum Biology – No, Mitochondria Do Not Communicate With Light!


I’ve been meaning to write about this for a while, but what with running the lab, teaching, trying to stay on top of the poly-crisis of the NIH funding situation, starting an MBA, and doing my best to avoid GenAI wherever possible, it got put down the priority list. That’s unfortunate because this is a fun story…

If you work in mitochondria circles, you may have heard of the Guy Foundation?  It’s a charitable trust in the UK – a philanthropic effort from Geoffrey Guy, who made his money as co-founder of GW Pharmaceuticals, the first company to get cannabis-derived medications approved for human use. GW was sold to a US company for $7.2bn in 2021. Dr. Guy seems like a decent… guy… (sorry), and let me be clear this post is not intended as a personal attack on him, the company he founded, or the mission or operations of the Guy Foundation (GF).  Rather, I want to highlight the absolutely bonkers, batshit-crazy science the foundation is spending its money on.

Flashy Science

Let’s get right into it, with the idea that mitochondria can communicate using light, explained in a YT video from Dr. Guy himself. Building on the widely accepted idea that mitochondria run on electricity (something, something, membrane potential, millivolts – go read Peter Mitchell’s “little grey book”), the GF seems to think mitochondria produce photons as well.  In fact, much of the “science” funded by the GF is on the topic of biophotonics – the release and sensing of photons by biological systems.

Aside: There are myriad examples of biological systems that produce light – firefly luciferase being most familiar. These have been extensively characterized at the molecular level and reconstituted outside the original organisms – all useful and well-documented science.  That’s not what this is about.

The paper that warrants the batshit label is this one. Published in that bastion of scientific excellence Frontiers in Physiology, it comes from the lab of Jimmy Bell at the University of Westminster in London. Bell and first author Rhys Mould are part of the Research Center for Optimal Health at Westminster. The work was funded by the GF, as proudly claimed on their website and noted in the paper.

So what’s the problem?  Well, let’s start with a description of the experimental set-up from figure 1 of the paper: 3 cuvets with stir-bars at the bottom, each containing a suspension of isolated mitochondria. The left cuvet is separated from the others by a piece of cardboard (the thick black line), while the one on the right is next to the central one (i.e., “unshielded”).  Into each cuvet is placed a needle-type O2 electrode, to measure O2 consumption by the mitochondria.  Let’s number these cuvets 1, 2, 3 from left to right.

Now here’s the kicker… they add the mitochondrial inhibitor antimycin to the central cuvet (#2), and then claim the mitochondria in cuvet #3 respond by changing their O2 consumption (respiration), but the mito’s in cuvet #1 do not. Not only that, the proposed mechanism of communication involves mitochondria in cuvet #2 emitting photons and those in cuvet #3 sensing them.

Let’s ignore for just a second a couple of other minor problems with the methods, such as the use of 244 µM antimycin (enough to tranq’ a horse) and the complete lack of any added metabolic substrate to support mitochondrial respiration.  Let’s instead focus on section 3.4 of the discussion…

“3.4 Comparing “light” and “dark” experiments. Finally, we compared the change in OCR between unshielded mitochondria in light conditions versus dark conditions, shown in Figure 6. In MCF7, the rate of OCR change was significantly higher in light conditions compared to dark conditions.”

OCR here stands for “oxygen consumption rate”.  In other words, this magical communication between cuvets via photons was actually more pronounced when the system was flooded with gazillions of photons from the room lights, versus in the dark – exactly opposite to how low-intensity light phenomena are supposed to work. Batshit is not nearly a strong enough descriptor.

Yeah yeah nit-picky details, what’s the real problem?  Dear reader, take a look again at Figure 1 above.  Do you see any type of lid on the apparatus, or anything separating the liquid part (blue) from the air above it?

The problem is that the way to measure mitochondrial respiration / OCR / O2 consumption is in a CLOSED system, such as a Clark-type oxygen electrode (available from numerous vendors) or one of the alternative platforms. Every one of these instrument makers will tell you that the liquid in which [O2] is measured (e.g. a suspension of mitochondria or cells at the bottom of a well) MUST be sealed off from the atmosphere… even the tiniest air bubble will mess up the readings.

This is simple chemistry – air is 21% oxygen, a vastly bigger reservoir than the small amount that dissolves in water (about 200 micromolar at 37°C). If there’s any contact between the liquid and the air, any O2 consumed by mitochondria in solution will be instantly replaced by O2 from the air, so the concentration of O2 in the liquid will stay the same. It’s the CHANGE in O2 concentration that’s used to determine the RATE of O2 consumption (that’s the “R” in OCR).  No change in [O2]?  No rate.

Simply put… it is damn near impossible to measure OCR in an open system such as this (well, technically it’s possible using precise mixing at the gas-liquid interface, a complex series of engineering formulas, lots of math, and a full understanding of the problem just described, but that’s not what happened here).

Show Me The Data

The “data” in the paper are presented in a manner completely opaque and unfamiliar to anyone who does these types of experiments.  The change in oxygen consumption rate (remember – there is no rate) in cuvets #1 and #3 following antimycin addition to cuvet #2 is shown in Figure 2…

Your eyes do not deceive you… there was a 0.004% change in OCR in the cuvet that was exposed to a big old dose of pew-pew mito lazer beemz (#3, pink bar) but only a 0.002% change in the mito’s that were shielded by a bit of cardboard (cuvet #1, black). Tip – remember the direction these bars are pointing, for later on.

Far be it from me (a non-statistician) to wonder at the levels of intricacy and convolution required to detect a 0.004% change in anything, but this does not seem to be (A) at all measurable, or (B) of any importance. For context, this is like someone who earns $100k a year getting a $4 pay cut.  It should therefore come as no surprise that the statistics section of the paper will also appear odd to anyone familiar with such experiments…

“Differences in the rate of mitochondrial oxygen consumption
were analysed using a mixed linear effects model written in R”

Complete overkill. This is not how it’s done. You calculate the rate (the slope of the O2/time trace) and, after checking for normality, use simple statistical tests to see if the values are different, with appropriate multiple-testing corrections if necessary. If you want to get fancy you could even use the real-statistics plug-in for Excel, but you absolutely do not need a mixed linear effects model to see the difference between 2 diagonal lines on a trace. Here’s an example from one of my very old (fuzzy) papers.  This is what real oxygen consumption traces look like.  You can see that one line is different from the other.  Compare this to what you’re about to see further down the page.
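To make that concrete, here’s a minimal sketch of the standard analysis, run on synthetic traces since I’m not reproducing anyone’s data here: fit a straight line to each [O2]-vs-time trace, take the (negative) slope as the rate, and compare the groups of slopes with an ordinary t-test.

```python
# Minimal sketch of the standard OCR analysis, on synthetic (illustrative) traces.
import numpy as np
from scipy import stats

def ocr(time_s, o2_uM):
    """Oxygen consumption rate = negative slope of the [O2]-vs-time trace."""
    return -stats.linregress(time_s, o2_uM).slope  # µM O2 consumed per second

# Illustrative traces only: 8 replicates per condition, one consuming O2 at
# ~0.5 µM/s, the other essentially flat (like the FigShare data).
rng = np.random.default_rng(1)
t = np.arange(0, 300, 10.0)
respiring = [220 - 0.5 * t + rng.normal(0, 2, t.size) for _ in range(8)]
flat      = [220 - 0.0 * t + rng.normal(0, 2, t.size) for _ in range(8)]

rates_a = [ocr(t, o2) for o2 in respiring]
rates_b = [ocr(t, o2) for o2 in flat]

# After checking normality (e.g. stats.shapiro), a plain t-test is plenty.
print(stats.ttest_ind(rates_a, rates_b))
```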

Show Me The Data – Part 2

As a polite science critic, my first course of action upon reading this paper (well, OK, not my first choice) was to make a nice post over on PubPeer, bringing up the above points and more.

To the authors’ credit, they did respond to clarify some of the experimental details, but in some cases this only confirmed my concerns (e.g. there was a lid made of parafilm, but still a head-space, and indeed the buffer contained no fuel for the mitochondria to respire on). Most importantly, the authors were gracious enough to share the original data set on FigShare.  An absolutely stellar move. Kudos. Love it. I’ve been using FigShare since 2016 and everyone should do this!  The problem is that when you actually plot the data from the Excel sheet, as predicted from the impossibilities discussed above, there is no rate.  The concentration of oxygen is flat over time, because it’s at equilibrium with the atmosphere.

The y-axis here is dissolved oxygen (in micromolar, you’ll note it’s a bit higher than the usual 200 uM, likely because these experiments were done at room temp’ and O2 solubility is an inverse function of temperature). The x-axis is time in seconds, and the antimycin A was added at 120s. The black lines are 8 samples from the unshielded cuvet, and the red lines are the shielded cuvet.

Put simply, THERE IS NO DOWNWARD SLOPE, and THERE IS NO OXYGEN CONSUMPTION RATE TO SPEAK OF. Some of the lines even slope upwards, so the [O2] in solution is actually increasing over time.  If we calculate the rates before and after the antimycin addition, the numbers (mean +/- standard error) come out as follows:

The chart shows the average rate (N=8 per group) for the unshielded cuvets (left 2 bars) vs. the shielded cuvets (right 2 bars), pre vs. post antimycin addition to the central cuvet. There is no difference between bars 1 and 2 – if anything the slope of the line (which is already close to zero!) increases instead of decreasing as claimed in the paper (compare this to Figure 2, above). The error bars (standard error of the mean) also show that whatever difference there may be is statistically non-significant – no fancy linear mixed regression what-have-you.  Looking at the unshielded vs. shielded groups at baseline (bar #1 vs. #3) you could even say they show a bigger difference than pre/post antimycin.  In other words, regardless of any mito’ communication voodoo, having a bit of cardboard next to you causes your O2 consumption to drop from very close to zero, to very actually zero, before any mito’ poisons are added to your neighbor.
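For anyone who wants to reproduce that chart from the shared data, here’s the sort of minimal re-analysis involved. I’m hedging on the spreadsheet layout: the sketch assumes a CSV export with a “time_s” column plus one [O2] column per replicate, and antimycin addition at 120 s; the column and file names are mine, not the authors’.

```python
# Hedged sketch of the pre/post-antimycin rate comparison. Assumes a CSV export
# with columns "time_s", "unshielded_1".."unshielded_8", "shielded_1".."shielded_8";
# these names (and the file name) are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("mould_figshare_export.csv")
t_add = 120  # antimycin added to the central cuvet at 120 s

def slopes(cols, mask):
    """Slope of [O2] vs time for each replicate, over the selected time window."""
    return pd.Series([stats.linregress(df.loc[mask, "time_s"], df.loc[mask, c]).slope
                      for c in cols])

for group in ("unshielded", "shielded"):
    cols = [f"{group}_{i}" for i in range(1, 9)]
    pre = slopes(cols, df["time_s"] < t_add)
    post = slopes(cols, df["time_s"] >= t_add)
    print(f"{group}: pre {pre.mean():.4f} ± {pre.sem():.4f}, "
          f"post {post.mean():.4f} ± {post.sem():.4f}  (µM/s)")
```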

Furthermore, the control data (cuvet #2) are not included, so we don’t even know if the massive dose of antimycin did anything.  It probably inhibited the mitochondria, but there would be no change in [O2] because, as explained above, any O2 consumed by the mito’s would be replaced by that from the atmosphere, so the trace would remain flat.  The difference between a flat line and a flat line, is nothing.

Anyway, I’d encourage anyone who enjoys playing around in Excel or R to take a look at these data and produce a single shred of evidence that the main conclusion is valid.

Let me be clear – not only is the key claim of this paper utterly bonkers, it’s based on a total lack of evidence, with a complete misunderstanding of how to measure mitochondrial function.

Surely this should be retracted?

Of course it should!  But since when did doing the right thing matter to the publisher Frontiers? (just ask Leonid Schneider). I wrote to the person who edited the paper, YoungChan Kim at the University of Surrey, UK, but he didn’t bother to respond. Neither did John Imig, the chief editor for this field section of Frontiers in Physiology. A good friend who is involved with the journal did respond to my inquiry, but they were not involved with handling the paper and so could not do anything about it.  There was a bit of chatter in the PubPeer thread, which died down as soon as I posted a “no rate” graph similar to the one above.

Importantly, David Fernig raised the point that this is not an impossible experiment to do properly. You just need a very sensitive photomultiplier tube to detect the photons, and tight control of the other conditions (sealed chambers, a dark room, temperature regulation, filters to control the wavelengths of light passing between samples, etc.).  Maybe this IS something worth investigating?  Maybe the GF should give the money instead to someone who understands this?

How will this “knowledge” be used?

The paper is still out there, and people are still using it to make ridiculous claims.  The GF is lauding the work in their newsletter as well as in the aforementioned video, and the lead author also talks about it in video format.  The foundation appears very proud of the work it funded, which is unfortunate because it’s not very good at all.

Of course, if you believe mitochondria can influence each other via light, it’s only a small step to trying to hack that biology by shining bright lights on various body parts, which brings us neatly to crazy anti-aging therapies, such as the claims that red light can energise your mito’s. I’ve made my thoughts on the shit-show that is anti-aging biotech abundantly clear before. While there are certainly biological effects that can be attributed to certain wavelengths of light, they are usually based on rigorous experimentation (such as the paper just cited, from a well-known group in the field at MCW). The Frontiers paper is not an example of quality science.


There’s a lot more to be written on the various out-there projects and papers claiming all sorts of weird quantum effects on mitochondria. It’s not just light… there are all kinds of papers about electromagnetic fields and mitochondria.  Claims that mitochondria operate at high temperatures inside cells. Mitochondria allegedly respond to music and other sounds.

But for now, we just have to wonder: how does a charitable foundation hand over non-trivial amounts of money to fund “pew-pew mito space photon lazer beemz”, and then, when the project execution and results are complete garbage, just accept everything at face value?  No admission that anything silly happened.  No acknowledgement that the work is absolutely, fundamentally flawed, and was not performed in a way that could give meaningful outcomes. It’s an alternate reality!

Spring 2025 Update

It’s been a while! Several updates to the lab over the past few months…

  • Rahiim Lake passed his PhD qualifier exam last fall (congrats!)
  • We hosted the 11th annual TRiMAD (Translational Research in Mitochondria Aging & Disease) conference here in October, attended by over 160 people.
  • Paul spoke at a number of conferences and outside institutions, including a conference at UPenn on research integrity, resulting in this paper.
  • The University finally adopted and published the Open Access Publishing Guidelines that we worked to bring through the faculty senate. It only took 7 years!
  • Paul went back to school in the Executive MBA program at UofR’s Simon School of Business (anticipated graduation May 2026). It resulted in finally (reluctantly) getting a LinkedIn page.
  • Post-doc Sabarna Chowdhury got married in April 2025 (congrats!)
  • Lots of other papers got published, including some exciting work on taurine metabolism with the Bajaj lab in the (newly NCI designated) Cancer Center, which is accepted for publication in Nature.

Buchi Rotavap Parts

A rotary evaporator (RotaVap) is a core piece of equipment for any lab that does chemical synthesis, and Buchi make some of the best ones. Unfortunately, ours is about 25 years old and they no longer make parts for it.

In the heart of the apparatus there’s a worm-drive… the motor is connected to a helical spindle which drives a ring gear. This is what spins the round-bottomed flask.

Over the past few months we’d been noticing that at low spin speeds, the rotation would skip.  It would intermittently speed up and slow down through the rotation cycle.  So, we opened up the gearbox to see what was inside (note – you may need to use a hammer to get the parts to separate out).

The problem is, Buchi decided to make the gear out of plastic, and years of exposure to grease and chemicals causes the plastic to crack, as shown here…

The result is the motor spins, but the cracked plastic ring gear just grinds around and doesn’t spin the metal part that’s eventually attached to the flask.  The worm gear is basically slipping once per rotation. We tried buying a used model off eBay, and upon dissection it had exactly the same problem, so it appears this is a common fault on this particular model (RE-111).

The solution is not as simple as “glue it back together”. There’s too much grease everywhere, so it needs completely stripping down to remove all that so the glue will stick. And then, removing the ring gear can only be done by snapping it in two. The problem then is there’s not much surface area to glue back together so it will probably break again soon.

Enter the 3D printer!

With some quick work in Sketchup, I was able to make a model of the gear with a break in the middle and plenty of surface area for gluing the 2 halves together, as well as gluing to the metal spindle inside the machine.

Here’s the new gear, printed in Elegoo ABS-like resin on a Mars 3 printer. The STL file is here to download if your Buchi Rotavap is in need of a fix!