A number of controversies related to retractions in academic journals have occurred in recent years. Two in particular come to mind.

I recall the case of Michael LaCour, whose work claimed that people who interacted with a gay person were less likely to oppose gay marriage. When other researchers attempted to replicate the study, they didn't find the result, and when they asked to see the data, LaCour was unable to produce the original survey responses. The paper on which he was a co-author was retracted, as it should have been.

Similar, though more damaging, is the case of Andrew Wakefield, whose cherry-picked study falsely asserted a link between vaccines and autism (a claim which still has no evidence to support it). Wakefield also failed to disclose his interest in a competing vaccine method, a significant conflict of interest. His paper was also retracted, as it should have been.

Sites like Retraction Watch monitor and report on retractions as they occur. I've noticed a tendency for people to see retractions as a sort of faux pas. In many cases, like the two cases of fraud mentioned above, it absolutely is one, but are all retractions like this?

I think not. There are cases where errors in data recording or methodology are identified only after publication has occurred. That's unfortunate, but not fraudulent. While we can and should look for opportunities to improve the peer review process and reduce the possibility of errors, the only mechanism a journal can use to guarantee 100% error-free publication is to never accept any publications.

Science includes the process of discovery. Discovery, in my experience, is the conclusion of a process of iterative failure. The lack of interest in publishing studies about negative results (e.g. finding that a proposed drug doesn't actually help patients) is generally referred to as the file drawer effect. Some have criticized journals for not publishing more negative studies, and I believe some journals specializing in negative results have emerged. I support those efforts, but they don't fully address the file drawer effect.

Researchers themselves may also contribute to the file drawer effect. After all, why invest time polishing and finalizing a disappointing result when that time could be invested in a new investigation? There's been some discussion, the details of which I'm not deeply familiar with, proposing that clinical trials be required to pre-register their studies. In this way, a record of any file-drawer study exists. I like this idea. It could help keep unassociated researchers with similar interests from repeatedly treading the same dead end. Some might express concern that their ideas could be "stolen" by someone mining the proposed studies and beating a competitor to publication and/or patent. I believe this problem could easily be solved with some sort of time-delay protocol for entering studies into the system of record. Perhaps other interesting anonymization efforts could be introduced, such as a means of searching for a specific idea in the database without being able to browse the database (a sketch of one such design follows). Despite efforts like these, I believe some amount of file drawer effect will always exist.
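To make that concrete, here is a minimal sketch of such a registry in Python. Everything in it is hypothetical: the class name, the embargo length, and the hash-based lookup are my own illustration of the idea, not a description of any real system. Keying entries by a hash of the study description allows an exact-match query ("has this specific idea been registered?") without offering any way to enumerate the entries, and the embargo window implements the time delay.

```python
import hashlib
import time

# Hypothetical embargo before an entry becomes visible to queries;
# the duration is arbitrary and purely illustrative.
EMBARGO_SECONDS = 90 * 24 * 60 * 60  # 90 days

class StudyRegistry:
    """A toy pre-registration store. Entries are keyed by a hash of
    the study description, so a researcher can ask whether a specific
    idea has been registered but has no way to browse or enumerate
    the database."""

    def __init__(self):
        self._entries = {}  # SHA-256 digest -> registration timestamp

    @staticmethod
    def _digest(description: str) -> str:
        return hashlib.sha256(description.strip().lower().encode()).hexdigest()

    def register(self, description: str) -> str:
        """Record a study idea; returns the digest as a receipt."""
        digest = self._digest(description)
        self._entries.setdefault(digest, time.time())
        return digest

    def is_registered(self, description: str) -> bool:
        """Exact-match search that only answers after the embargo has
        elapsed, implementing the time-delay protocol."""
        registered_at = self._entries.get(self._digest(description))
        if registered_at is None:
            return False
        return time.time() - registered_at >= EMBARGO_SECONDS

registry = StudyRegistry()
registry.register("Does drug X improve outcome Y in population Z?")
# Immediately after registration a competitor's query returns False;
# after 90 days the same exact-match query would return True.
print(registry.is_registered("Does drug X improve outcome Y in population Z?"))
```

A real system would need to handle paraphrased descriptions, which exact hashing cannot, but the basic shape of search-without-browse plus delayed visibility fits in very little code.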

When a novel result is achieved, publication will almost surely follow. Readers are likely familiar with the phrase "publish or perish" that academics use to describe a major source of stress in their profession. Researchers have an incentive, particularly for novel results, to generate as many publications as possible.

This doesn't mean that those well trained in the scientific method are willing to compromise on their work. But every human being is prone to making mistakes. The peer review process exists to keep faulty work out of publication. Treated as a test for detecting faulty work, the process invariably produces false positives (papers which are worth publishing but get blocked by reviewers) and false negatives (papers which are not fit for publication but get approved).

We should also acknowledge that, statistically, if a large enough number of studies are conducted, some results that appear significant but are actually due to chance are certain to emerge. Peer review will not catch these types of errors, but replication should.
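A quick simulation makes the point. This is illustrative Python, not real data: thousands of simulated studies of an effect that truly doesn't exist, each tested at the conventional p < 0.05 threshold.

```python
import math
import random

random.seed(1)

def null_study(n=100):
    """One simulated 'study' in which the true effect is zero: compare
    two samples drawn from the same normal distribution using a
    two-sample z-test and return the two-sided p-value."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # With unit variance in both groups, the difference of means has
    # standard deviation sqrt(2/n) under the null hypothesis.
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 10_000
false_positives = sum(null_study() < 0.05 for _ in range(trials))
print(f"{false_positives / trials:.1%} of null studies look significant")
# Prints roughly 5%: run enough studies and some chance results are
# guaranteed to clear the significance bar, with no fraud involved.
```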

While the process of generating peer-reviewed scientific publications should be continuously reviewed for flaws, fraud, and misaligned incentives, I believe that no matter what mechanism we adopt, we're going to live in a world where the occasional incorrect paper is published. I should say that the peer review process is, in my opinion, the best process we have for validating scientific research. Reporting that dwells on the occasional scandal, while troubling, is imbalanced: it fails to acknowledge the vast majority of occasions on which the process works superbly.

In addition to weeding out fraud, retraction should also serve as a cleaning process for the inevitable incorrect paper that slips through. Journals should strive to be pristine sources of known truth. Correcting errors is a fundamental part of the scientific process.

However, I'd like to see two things change in this process.

First, when reporting on retractions, I believe science communicators should present more statistical data on the rates of and rationales for retraction. Retraction due to fraud is a very different story from retraction due to a deeply obfuscated bug in some computer code. The public should have a better understanding of these distinctions.

Second, I would like to see more people studying and discussing the policies and processes of journals. In discussions, I tend to get pretty annoyed when people present the claim "studies have shown" without being able to provide such a reference. But we're entering a much worse world. I used to feel that saying "there are no peer reviewed studies to support your claim" was enough to dismiss a claim. I think that used to be true, but we're starting to see the emergence of fake journals as well. Different journals have always held different levels of prestige, but it seemed there was always a minimum bar. That's not true any more.

Only through statistical and game-theoretic studies of the mechanisms of publication can we be vigilant about maintaining a well-groomed, annotated body of scientific literature. The mechanism by which journals conduct peer review, their processes for monitoring the need for retraction, their execution of retractions, and the rates of all such activities should be available in a common, open schema for outside study (a sketch of what one record might look like follows).
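As far as I know, no such schema exists today, so the following is purely a sketch of what one record might look like; every field name and category value here is my own invention, not any standard's.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RetractionRecord:
    """One hypothetical record in an open, cross-journal schema.
    All field names and category values are illustrative only."""
    journal: str
    doi: str
    date_published: str       # ISO 8601, e.g. "2010-03-01"
    date_retracted: str       # ISO 8601
    reason: str               # e.g. "fraud", "data_error", "code_bug"
    initiated_by: str         # e.g. "authors", "editors", "third_party"
    peer_review_rounds: int   # rounds of review before acceptance
    notes: Optional[str] = None

# A hypothetical example entry:
record = RetractionRecord(
    journal="Example Journal of Medicine",
    doi="10.0000/example.doi",
    date_published="2010-03-01",
    date_retracted="2012-07-15",
    reason="data_error",
    initiated_by="authors",
    peer_review_rounds=2,
)
```

With records like this published across journals, the per-reason retraction rates I asked for above would become a simple query rather than a bespoke research project.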