Exploring Scientific Retractions
Travis Rich
We explore the process of scientific retraction, its causes and consequences, and present a series of analyses to better understand the efficacy of this socially stigmatized mechanism.

Introduction

Scientists are not infallible. Unsurprisingly, the work they produce and publish is sometimes found to be wrong. The central mechanism for correcting these published errors is for the publisher to issue a Retraction Notice. Unfortunately, this mechanism is bundled with such strong social and political context that its use and value deviate from the core goal of enabling science to be a self-correcting process. Alberts et al. expand upon this notion and even suggest that reliance on the word “retraction” causes behavior that runs against the best interests of the scientific process [1]. In this paper we explore several dimensions of the academic publishing process that give rise to retractions and provide an analysis of the efficacy of current retraction mechanisms. We find the mechanism of retraction to be surprisingly ineffective.
The challenges surrounding academic retraction are further exacerbated by a shift in the culture of scientific publishing towards using publications as a tool for career advancement or self-promotion. Because negative results rarely receive the same attention (or even the opportunity to be published) as eye-catching headlines, there is an incentive towards the outrageous and surprising [2]. While this is good for journalism, it is damaging to the scientific process. Unfortunately, as the following analysis shows, retractions may not be a sufficient tool to correct such behavior.
Such work also speaks more broadly to general behaviors around the dissemination of changing documents. For example, staying up to date with legislation (e.g. how does a realtor stay up to date as realty laws change?) or hardware documentation (e.g. how do you know when your circuit board component has been discontinued?) can be extremely costly or difficult. One domain that has conquered many of these challenges is software development: updated software packages are often well documented, and many services exist to provide notifications when an update is made or a package is out of date.

Types of Retractions

We identify four underlying reasons why a published piece of work may be invalidated and retracted:
  1. Honest mistakes in documentation (e.g. typos, calculation errors, etc.)
  2. Findings invalidated by newer science (e.g. the discovery of a new piece of evidence invalidating a conclusion)
  3. Fraudulent reporting (e.g. creating false data, fabricating results)
  4. Incompetence (e.g. not being sufficiently skilled in a technique or domain to identify a mistake)
Of these, two (items 1 and 2) are contributory to the scientific process, while the other two (items 3 and 4) are detrimental to it and call for corrective repercussions. Two questions naturally arise: 1) how does one identify and discriminate between the different types of retraction, and 2) what proportion of retractions is due to each type?
A comprehensive analysis of retracted work, performed by Grieneisen and Zhang, found that 47% of retractions were due to publishing misconduct, 20% due to research misconduct, and 42% due to questionable results interpretation (multiple causes can be assigned to a single retraction, hence the total exceeding 100%) [3]. Their analysis categorizes causes differently and almost entirely ignores the use of retraction as a tool for correction and progress (items 1 and 2 in the list above).
The tendency to view retractions as a purely negative process has led some to encourage the adoption of multiple terms (such as ‘withdrawal by author’ or ‘withdrawal due to causes’) to distinguish between the underlying cause of a retraction [1].
Unfortunately, little effort is made to differentiate between these causes, and the effect is a generalized and very strong professional stigma against scientific retractions. While such stigma is appropriate for retractions due to fraud or incompetence, it creates an environment where even small risks are avoided for fear of a potential retraction. Similar risk aversion has been seen to have negative effects in other domains, such as the culture of entrepreneurship in many European countries.
In Europe, a serious social stigma is attached to bankruptcy. In the USA bankruptcy laws allow entrepreneurs who fail to start again relatively quickly and failure is considered to be part of the learning process. In Europe those who go bankrupt tend to be considered as “losers”. They face great difficulty to finance a new venture.
Communication by the European Commission, 1998 [4]
If you start a company in London or Paris and go bust, you have just ruined your future; do it in Silicon Valley and you have simply completed your entrepreneurial training.
The Economist, 1998 [4]

Current Work with Retractions

The current realization of the scientific process has been increasingly critiqued as a large number of papers, notably in the life and health sciences, have been found to be irreproducible. One form of critique has been analyses of retracted papers to understand trends in their countries of origin, fields of science, and publishing journals [3] [5] [6]. However, little work has been done to understand whether retractions are even an effective tool for enabling science to be self-correcting (the focus of this paper).
One startling aspect of the retraction process is that there is no centralized mechanism for how a journal retracts a publication, how scientists are notified, or where retracted work can be checked. There is no central database to crosscheck a new work’s references with those that are known to be retracted.
Several tools have been launched by communities to try to aggregate retraction knowledge. Most notable is the Retraction Watch blog, which posts retracted papers as they are announced [7]. The blog has found much success, as it is one of the only centralized mechanisms for being notified of retracted work.

Methods for Understanding the Retraction Process

The goal of this paper is to provide an analysis that gives insight into the effectiveness of published retractions. For this case, we define ‘effectiveness’ as meaningfully removing retracted science from the discussion and community. We use citation counts as a proxy for discussion with the reasoning that a publication with many citations is still active in the community and heavily leveraged, whereas a publication with no citations is not creating a lasting impression on any scientific work.
Thus, exploring the rates of citation for publications before and after their retraction will be the lens we use to explore the retraction process.

Collection of Publication Data

As noted earlier, there is no centrally managed database of retracted work. There exist several search engines and tools for finding academic publications, but they employ no standard mechanism for flagging retracted work. Most commonly, retracted work that is marked will have ‘Retracted:’ prepended to its title - but not always. This makes assembling a comprehensive dataset of retracted publications rather difficult.
For this experiment, we scrape Google Scholar, which is fed by Thomson Reuters’ Web of Science and Elsevier’s Scopus tools. Unfortunately, Google Scholar does not provide an API to its data, so we must scrape HTML pages to extract the data we need. Furthermore, Google Scholar does not permit scraping of its content and has built aggressive mechanisms to detect and prevent such actions. Empirically, I find that within 20-40 seconds of scraping (even when throttling to a reasonable read rate), a given IP address is blocked and served 503 errors. To work around this, I employ a series of IP relays in the form of Virtual Private Networks (VPNs): upon Google’s blacklisting of an IP address, I connect to a new VPN server and resume scraping. A similar mechanism could be automated using dynamic proxying within the Tor network or similar tools.
Unfortunately, this significantly reduces the size of the dataset we can analyze. Our collected dataset includes the top 100 retracted publications as listed by Google Scholar, and all of the ‘children’ of those retracted publications. We define children to be any publication which cites the retracted paper. We rely on Google’s ranking of academic publications to select this ‘top 100’. Google Scholar claims: “Google Scholar aims to rank documents the way researchers do, weighing the full text of each document, where it was published, who it was written by, as well as how often and how recently it has been cited in other scholarly literature” [8].
For each publication (both the retracted papers and their children), we collect the year of publication, the title, the number of citations (according to Google Scholar), and the BibTeX reference citation.
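To make the collection process concrete, the following Python sketch shows the shape of such a scraping loop. It is illustrative only: the CSS selectors ('gs_r', 'gs_rt', 'gs_fl') and the 'intitle:' query are assumptions about Google Scholar's markup and query syntax, both of which can change without notice, and scraping runs against Google Scholar's terms of service.

```python
import re
import time
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://scholar.google.com/scholar"

def scrape_page(query, start=0):
    """Fetch one page of Google Scholar results and extract basic fields."""
    resp = requests.get(SEARCH_URL, params={"q": query, "start": start})
    if resp.status_code == 503:
        # Scholar has blacklisted this IP: pause so the operator can switch
        # to a new VPN endpoint, then retry the same page.
        input("Blocked (503). Connect to a new VPN server and press Enter...")
        return scrape_page(query, start)
    soup = BeautifulSoup(resp.text, "html.parser")
    records = []
    for result in soup.select(".gs_r"):
        title = result.select_one(".gs_rt")
        if title is None:
            continue  # skip non-result blocks (ads, related searches)
        citations = 0
        for link in result.select(".gs_fl a"):
            match = re.match(r"Cited by (\d+)", link.get_text())
            if match:
                citations = int(match.group(1))
        records.append({"title": title.get_text(), "citations": citations})
    time.sleep(5)  # throttle to a polite request rate
    return records

retracted = scrape_page('intitle:"Retracted:"')
```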

Analysis of Retraction Data

We begin the analysis by exploring some simple statistics relating to the retracted papers. Specifically, we are interested in understanding the citation rates of a paper before and after the point of retraction. Due to the poorly documented process of many retractions, it is not possible to get an exact retraction date, so we rely on the granularity of a year.
Given that we also collect the number of citations of children publications, we can calculate the number of ‘grandchildren’ of a retracted paper. Here, we use the term grandchild to refer to a publication that cites a paper which cites a retracted paper. Grandchildren give us insight into whether the papers that cite retracted work are themselves influential. For example, a retracted paper having 5 children and 10,000 grandchildren would mean that the retracted paper was cited by highly influential papers.
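A minimal sketch of this computation, assuming each retracted paper's record stores its citing papers ('children') together with each child's own citation count; the field names are hypothetical:

```python
def grandchild_count(paper):
    """Total citations received by the papers that cite a retracted paper."""
    return sum(child["citations"] for child in paper["children"])

# Mirrors the example above: five children that are themselves heavily cited.
example = {"children": [{"citations": 2000} for _ in range(5)]}
print(grandchild_count(example))  # 10000
```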

Retraction Statistics

For all papers, we analyze the papers that cite them, comparing each child paper’s year of publication with the parent’s year of retraction. We find that the mean number of citations before and after retraction is nearly identical. We also find that the median number of citations after a retraction is larger than before. These numbers indicate that retraction is not serving its purpose of removing content from the scientific discussion.
| | Median | Mean | Standard Deviation |
| --- | --- | --- | --- |
| Citations before Retraction | 34.5 | 73.86 | 116.27 |
| Citations during Retraction Year | 13.0 | 16.23 | 17.60 |
| Citations after Retraction | 50.0 | 72.63 | 67.03 |
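A sketch of how the statistics above can be computed, assuming each record carries the parent's retraction year and the publication years of its children (again, hypothetical field names):

```python
from statistics import mean, median, pstdev

def split_by_retraction(paper):
    """Count child citations falling before, during, and after the retraction year."""
    r = paper["retraction_year"]
    years = paper["child_years"]
    return (sum(y < r for y in years),
            sum(y == r for y in years),
            sum(y > r for y in years))

def summarize(papers, phase):
    """phase: 0 = before, 1 = during, 2 = after retraction."""
    counts = [split_by_retraction(p)[phase] for p in papers]
    return median(counts), mean(counts), pstdev(counts)
```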
We can also explore the statistics of second-order (‘grandchildren’) citations.
| | Median | Mean | Standard Deviation |
| --- | --- | --- | --- |
| Grandchildren before Retraction | 1054.0 | 3866.44 | 116.27 |
| Grandchildren during Retraction Year | 231.5 | 380.80 | 1326.27 |
| Grandchildren after Retraction | 8040.30 | 565.34 | 2009.17 |
These grandchildren statistics are in some ways even more startling, as they show the real impact of a retracted paper. Of the papers in our dataset, on average, a retracted paper will be cited 72 times after retraction, and those citations will themselves be cited 565 times. This trend of course compounds as you explore third- and fourth-order citations, making clear the continued impact of retracted work.

Cumulative Behavior

To understand the general trends of pre- and post-retraction citation rates, we can visually aggregate our results. The following graph shows all the papers we have collected and their citation counts. The x-axis shows the number of years since retraction (either positive or negative), while the y-axis shows the total citation count. The colors differentiate different publications, but carry no additional meaning. The graph is centered around year 0 (0 years since retraction) and a grey vertical line in the middle separates the pre- and post-retraction citations.
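A sketch of how such a graph can be produced, using the same hypothetical record layout as in the earlier sketches:

```python
from collections import Counter
import matplotlib.pyplot as plt

def plot_citation_trends(papers):
    for paper in papers:
        # Citations per year, re-centered so that year 0 is the retraction year.
        offsets = Counter(y - paper["retraction_year"] for y in paper["child_years"])
        xs = sorted(offsets)
        plt.plot(xs, [offsets[x] for x in xs])  # one color per publication
    plt.axvline(0, color="grey")  # separates pre- and post-retraction citations
    plt.xlabel("Years since retraction")
    plt.ylabel("Citation count")
    plt.show()
```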
We find a few notes of interest in looking at this graph:
  1. A retraction does generate an overall negative trend in the citation rate; after retraction, the cumulative trend is towards 0.
  2. The graph is roughly symmetric, implying that there are as many citations before a retraction as there are after.
  3. The tail of citations after retraction is incredibly long. There continue to be citations 10-15+ years after retraction. Such a time period is often longer than the lifespan of a non-retracted publication.
  4. Some publications don’t seem fazed by retraction. That is, their citation rate increases after retraction.
To highlight item 4, we present the following image, which isolates three examples of publications whose citation rates increase after retraction.
In analyzing these graphs, it is also important to remember that we are still subject to causality: the past has already happened, but the future has not. All of the citations that would be made before a retraction have been measured, but many more citations may yet come afterwards. This is especially true for the publications in our dataset that were retracted in 2015: some had years to accumulate citations before their 2015 retraction, whose aftereffects we cannot possibly measure yet, and they therefore skew our graph.
To explore these trends in more depth, we explore the individual trends of six select publications. We choose three publications in which the retraction seems to have had the desired effect (a sharp and immediate drop in citation rate) and three in which the retraction seems to fail (their citation rate is unchanged or even grows after retraction).
In these graphs, we also include the grandchildren - or second-order - citations as blue bubbles. The line represents the number of citations over time, while the blue bubbles at each year interval show the number of citations received by the original citations. This can be used as a measure of whether influential papers are citing the retracted work.
The following graphs show three instances of successful retraction. In these, we see that the citation rate after retraction (indicated by the grey vertical line) sharply falls. However, it is important to note the non-zero tail of each of these “successes”.
The following graphs show three instances of a failed retraction. Note that the citation rate after retraction (indicated by the grey vertical line) does not sharply decrease, but rather is unchanged or even increasing.

Discussion and Conclusion

One of the main takeaways from this analysis is that there is a serious problem in the efficacy of retraction. Not only is it mired in potentially-destructive social stigma, it also doesn’t seem to serve the function of ending a paper’s lifetime.
To understand the many factors that contribute to this, we have to be honest about the way in which many papers are written. The long tail of citations after retraction could be due to the fact that:
  1. People could be knowingly citing retracted work.
  2. Many people download PDFs of papers well in advance of citing them for publication.
  3. People may simply copy references from other papers that were published before the retraction.
  4. People may add a reference for the sake of having more references, without ever reading the associated paper.
  5. The publication could have been physically printed, and the printed copy may be the only version consulted before citation.
Fixing or minimizing these behaviors seems incredibly complex and difficult, and so perhaps the burden of responsibility should fall at the time of publication: either peer reviewers or an automated system should check that no reference used in a publication has been retracted. This is, of course, currently very difficult due to the lack of centralized retraction data.
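As a sketch of what such an automated check could look like: the code below assumes a hypothetical newline-delimited file of retracted DOIs. No such centralized file exists today, which is exactly the gap discussed above.

```python
def load_retracted_dois(path="retracted_dois.txt"):
    """Load a (hypothetical) newline-delimited list of retracted DOIs."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_retracted(reference_dois, retracted):
    """Return the subset of a manuscript's references known to be retracted."""
    return [doi for doi in reference_dois if doi.lower() in retracted]

retracted = load_retracted_dois()
flagged = flag_retracted(["10.1000/example.doi"], retracted)
if flagged:
    print("Warning: this manuscript cites retracted work:", flagged)
```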
In this analysis, it is also important to note, as stated before, the impact of causality. All of the pre-retraction citations that could be made have been made, so we can count them systematically; however, citations to these retracted works may still be made in the future, and the data are skewed by that missing information.
Furthermore, while most scraping was automated, finding the retraction details was not. In order to collect the year of retraction, I wrote a script that would open a publication’s URL, wait for my input, and then open the next URL. For each paper, I manually searched its associated website for the retraction date. Frustratingly, there is no standard way of announcing retractions: some journals simply give a year and a sentence, others post documents, and others have wonderfully clear red boxes with a note and date.
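The original script is not reproduced here, but a minimal sketch of the same workflow (with hypothetical file names) might look like the following:

```python
import webbrowser

def review_retraction_dates(urls, out_path="retraction_years.csv"):
    """Open each publication's page, then record the hand-found retraction year."""
    with open(out_path, "a") as out:
        for url in urls:
            webbrowser.open(url)
            year = input(f"Retraction year for {url} (blank to skip): ").strip()
            if year:
                out.write(f"{url},{year}\n")
```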
A standardized method for announcing a retraction, categories for why the work was retracted, and a central location to make such announcements would be an enormous step towards a more effective retraction system.
It is evident that much technically low-hanging fruit could be picked to improve the scientific retraction process. Unfortunately, these opportunities are mired in cultural and institutional bureaucracy. One important step towards overcoming this is acknowledging that retractions simply do not serve their intended purpose. At that point, we can begin to turn retractions into a learning opportunity that becomes an integral tool of the scientific process.

References

[1]"Self-correction in science at work". Science. Vol. 348. American Association for the Advancement of Science, (2015): Num. 6242. 1420--1422.
[3]Grieneisen, Michael L and Zhang, Minghua. "A comprehensive survey of retracted articles from the scholarly literature". PLoS One. Vol. 7. Public Library of Science, (2012): Num. 10. e44118.
[4]Landier, Augustin. "Entrepreneurship and the Stigma of Failure". Available at SSRN 850446. (2005):
[5]Budd, John M and Sievert, MaryEllen and Schultz, Tom R. "Phenomena of retraction: reasons for retraction and citations to the publications". JAMA. Vol. 280. American Medical Association, (1998): Num. 3. 296--297.
[6]Fang, Ferric C and Steen, R Grant and Casadevall, Arturo. "Misconduct accounts for the majority of retracted scientific publications". Proceedings of the National Academy of Sciences. Vol. 109. National Acad Sciences, (2012): Num. 42. 17028--17033.
[7]"Retraction Watch". [http://retractionwatch.com/]