Saturday, December 11, 2010

Pensées - entry 00004

So... here. As promised in Pensées from Moi.

Original publishing date: March 30, 2010.

Original Title: Publication Bias in animal experiments?

A Nature News item caught my attention this morning. It is a report by Janelle Weaver, titled: Animal studies paint misleading picture, a title which has rather unfortunate connotations, and which, in all probability, will become a rallying point for the committed anti-animal experimentation folks. The report is based on a paper in PLoS Biology, published today, by Sena et al., titled: Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy. I draw your attention to the glaring discrepancy right there - this meta-analytical study focuses on acute ischemic stroke, a small subset of the entire spectrum of research that utilizes animals; yet Ms. Weaver saw fit to use a title for her report that tars animal experimentation with an egregiously broad brush.

The study authors raise some valid and serious concerns, demonstrating the existence of publication bias in the reporting of studies with animal models of ischemic stroke. Using the CAMARADES (Collaborative Approach to Meta-analysis and Review of Animal Data in Experimental Studies) database and meticulous statistical analysis, the authors identified 16 systematic reviews of interventions studied in animal models of acute ischemic stroke, covering 525 independent publications. A trim-and-fill analysis suggested that publication bias - the preponderant propensity to publish only studies with positive outcomes, while neglecting those with negative outcomes - might account for around one-third of the efficacy reported in systematic reviews: reported efficacy fell from 31.3% to 23.8% after adjustment for publication bias, and that 7.5-percentage-point drop, relative to the adjusted figure, amounts to an overstatement of (31.3 - 23.8)/23.8, or roughly 32%. The authors estimate that 14% of studies conducted (not 16% as mentioned in Ms. Weaver's report) are never published; they rightly state, "Nonpublication of data raises ethical concerns, first because the animals used have not contributed to the sum of human knowledge, and second because participants in clinical trials may be put at unnecessary risk if efficacy in animals has been overstated", and speculate that this publication bias may not be restricted to the area they studied, and may in fact be more pervasive in other biological studies with animals.
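(For the statistically curious: the trim-and-fill method of Duval and Tweedie works by estimating how many small studies appear to be "missing" from one side of a funnel plot, trimming the most extreme studies on the other side, re-estimating the pooled effect, and then "filling" in mirror-image counterparts of the trimmed studies. The sketch below is my own illustrative Python implementation of that idea - the toy effect sizes and variances are invented for demonstration and are not drawn from the Sena et al. data or code.)

```python
# Illustrative sketch of Duval & Tweedie's trim-and-fill procedure, assuming
# the suppressed studies are the small/negative ones (missing from the left
# of the funnel). Toy numbers only - NOT data from Sena et al.
import numpy as np

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = 1.0 / variances
    return np.sum(weights * effects) / np.sum(weights)

def trim_and_fill(effects, variances, max_iter=50):
    """Estimate the number of 'missing' studies (k0) with the L0 estimator,
    then return k0 and the bias-adjusted pooled effect."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    order = np.argsort(y)                  # sort by effect size, ascending
    y, v = y[order], v[order]
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        # Trim the k0 largest effects and re-estimate the centre without them
        mu = pooled_effect(y[: n - k0], v[: n - k0])
        d = y - mu                         # centred effects, full data set
        ranks = np.argsort(np.argsort(np.abs(d))) + 1
        t_pos = ranks[d > 0].sum()         # rank sum of positive deviations
        l0 = (4.0 * t_pos - n * (n + 1)) / (2.0 * n - 1)  # L0 estimator
        k_new = max(0, int(round(l0)))
        if k_new == k0:                    # converged
            break
        k0 = k_new
    # Fill: impute mirror images of the k0 trimmed studies about the centre
    if k0 > 0:
        y = np.concatenate([y, 2 * mu - y[n - k0:]])
        v = np.concatenate([v, v[n - k0:]])
    return k0, pooled_effect(y, v)

# A deliberately asymmetric toy funnel: the precise studies show small
# effects, the imprecise ones show large effects (hypothetical values)
effects   = [0.35, 0.30, 0.28, 0.25, 0.40, 0.45, 0.50, 0.10, 0.05]
variances = [0.01, 0.01, 0.02, 0.02, 0.03, 0.04, 0.05, 0.005, 0.005]

naive = pooled_effect(np.array(effects), np.array(variances))
k0, adjusted = trim_and_fill(effects, variances)
print(f"Naive pooled effect:     {naive:.3f}")
print(f"Imputed missing studies: {k0}")
print(f"Adjusted pooled effect:  {adjusted:.3f}")
```

The gap between the naive and the adjusted pooled estimates is, in the paper's terms, the portion of the apparent efficacy that publication bias may account for.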

The sensational titling of Ms. Weaver's report fails to take several facts into account, starting with the fact that speculation is not evidence. By and large, similar studies have not yet been conducted for the vast swathe of research for which animal experiments are indispensable, including preclinical efficacy studies in infectious and metabolic disease models. In a similar vein, there have not been studies reporting comparable publication bias in biological research that does not utilize animal models. Why, then, single out animal experiments to make a point in a report that will be widely read - even by non-technical individuals - and possibly misconstrued?

Publication bias is a serious concern in scientific research in general (not just in animal experimentation) and in the translation of research outcomes to patient care. What I would have loved to see in Ms. Weaver's report is some focus on the reasons for this bias, but she does not dwell on them sufficiently. UCLA researcher S. Tom Carmichael, quoted in the report, raises a great point: "If a result is negative, the investigator doesn't want to go through the work of writing it up and publishing it, because they know it won't get into a good journal and it won't really enhance their career." This mindset affects not only the reporting of studies, but grant writing and funding applications as well, in all types of scientific research.

Secondly, the authors themselves point out, in the discussion, the shortcomings of their method of using aggregate checklist scores rather than assessing the impact of individual study-quality items; additionally, they point out that the scores have not been formally validated. Thirdly, they also indicate that, given the importance of this concern, even the unvalidated scores have been taken into consideration by an international consensus statement on Good Laboratory Practice in the modelling of ischemic stroke (Reference 39 in the paper).

Where is the reflection of these facts in Ms. Weaver's report? I call it sloppy reporting.

Another important distinction to be made is the determination of what constitutes negative data. Is it merely the lack of the desired outcome, or of any outcome? Is it the presence of an outcome in a control group where none is expected? Every bit of such data needs to be documented and reported beyond the notebooks of researchers, for wider circulation, because it adds to the baseline knowledge.

[Personal Anecdote Alert!] When I was working with murine models of systemic fungal infection via the intraperitoneal route, I found that even my negative control group, which received plain sterile phosphate buffered saline (PBS), showed a cytokine response in splenocytes. After checking my reagents ad nauseam for endotoxin contamination and racking my brain, I finally found confirmation in a paper in an obscure journal, which reported that plain PBS irritates the peritoneal surfaces, generating an inflammatory response: an early migration of neutrophils into the peritoneal cavity, substantial migration of monocytes after 24 hours, and sustained protein leakage from 48 hours onwards. Though that paper treated this as the baseline effect in the control group, the effect was undoubtedly present. I found that it could be alleviated by using plain isotonic saline instead of PBS, and switched to using saline. But had I not found that paper, it would have been difficult for me to explain my observation - because one doesn't normally find papers on the effects of PBS, a near-universally used buffer.

What is encouraging is that - as Ms. Weaver reported - a few journals, such as Nature Precedings, the Nature group's Journal of Cerebral Blood Flow and Metabolism (JCBFM), and a dedicated journal, the Journal of Negative Results in BioMedicine, are publishing negative results from rigorous studies. Echoing Ulrich Dirnagl, editor-in-chief of JCBFM, who is quoted in Ms. Weaver's report, I, too, hope that the PLoS Biology study is the first of many such much-needed studies, ones that can potentially convince professional societies and funding agencies to give appropriate weight to negative data and to support the publication of such data for the benefit of researchers everywhere.


Original Comments


The Journal of Negative Results in BioMedicine, you say? But if you only get the negative result once, there's still hope...

(For some reason, I'm unable to format italics, links or line-spacing here???)


Thanks for mentioning the Journal of Irreproducible Results (JIR), Lee. I wish I could publish in that! Now, all we need to do is convince the funding agencies to fund the studies published in JIR... But seriously speaking, in current practice, negative data are too frequently chucked out or buried deep without further consideration. I think there should be some modicum of onus on the producer of negative data to provide a rational explanation for its appearance. After all, all knowledge is worth having. For italics and links, please use the corresponding HTML tags when posting, such as <em> and <a href>.
