
Crafting a Replicability Factor

DeSci Foundation
December 1, 2023

Academia currently runs on metrics. Or, more precisely, it runs on variations of just one metric - citations. The h-index is based on citations. The impact factor is based on citations. Article influence scores are based on citations. University and department rankings depend on citations. And well… citations are based on citations.

There are multiple problems with this. One of them is that citations primarily measure attention within academia, not the impact of research on society more broadly. Altmetrics addresses this issue by expanding the range of areas in which research impact can be measured. This is a critical goal - but there remains a second problem.

Citations, social media posts, articles, use in industry… All of these numbers are just that - numbers. Without the critical context of what is actually being said, they can indicate only the volume of conversation around a piece of research. An additional citation does not make one research article any more reliable than another.

Despite this, we typically use citations as a signal of quality, to the general detriment of the scientific record. Well-cited authors and research are viewed more often, funded more generously, and questioned less. We've seen the repercussions of this, for example, in the scandal around fabricated data underlying the amyloid beta hypothesis in Alzheimer's research, or in the supposed relationship between trust and oxytocin, which could not be replicated. In both cases, much attention has been misdirected and resources have been wasted for years or decades.

To respond to this problem, Josh Nicholson conceptualized a new metric, the ‘R-Factor’ or reproducibility factor.

The ‘R-Factor’ quantifies the reliability of a scientific claim by assessing how well it is supported by replication attempts. For Nicholson, “It is a simple ratio where if you had 10 papers test the same claim, and eight of them supported it, you'd have an R factor of 0.8.” This has two important consequences (a short calculation sketch follows the list below):

  1. It distinguishes mentions from replication attempts. A paper could have thousands of mentions but zero supporting or contradicting replication attempts. Making this gap visible means it can then be addressed.
  2. It measures how reliable the reported results are and how much controversy exists around them.
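
To make the ratio concrete, here is a minimal sketch in Python of how an R-Factor could be computed from replication counts. The function name and the handling of the zero-attempt case are illustrative assumptions, not an existing implementation.

```python
def r_factor(supporting: int, total_attempts: int):
    """Fraction of replication attempts that supported the claim.

    Returns None when there are no replication attempts: a claim with
    zero attempts is 'untested' rather than 'unreliable'.
    """
    if total_attempts == 0:
        return None
    return supporting / total_attempts

# Nicholson's example: 10 papers test the same claim, 8 support it.
print(r_factor(supporting=8, total_attempts=10))  # 0.8

# A heavily cited but never-replicated claim has no R-Factor at all -
# exactly the gap between mentions and replications the metric exposes.
print(r_factor(supporting=0, total_attempts=0))   # None
```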

Coupled with Altmetrics, this would allow a broader and better way to measure impact: we would not only measure how much attention an article is getting, but we could also quantify the valence and reliability of that attention.

We have yet to see an actual ‘R-Factor’, but Scite.ai is building toward that goal, using large language models and sentiment analysis to give readers more context on citations quickly.
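
As a rough illustration of the underlying idea - not Scite.ai's actual pipeline - the sketch below uses a toy keyword heuristic in place of a language model to sort citation statements into supporting, contrasting, or merely mentioning a claim. The cue phrases and labels are assumptions made for this example; the resulting counts could then feed an R-Factor-style ratio.

```python
# Toy heuristic standing in for the language-model classification a
# tool like Scite.ai performs; the cue phrases are illustrative only.
SUPPORT_CUES = ("replicated", "confirmed", "consistent with")
CONTRAST_CUES = ("failed to replicate", "contradicts", "inconsistent with")

def classify_citation(statement: str) -> str:
    s = statement.lower()
    if any(cue in s for cue in CONTRAST_CUES):
        return "contrasting"
    if any(cue in s for cue in SUPPORT_CUES):
        return "supporting"
    return "mentioning"

statements = [
    "Our results failed to replicate the original effect.",
    "We confirmed the finding in an independent sample.",
    "See Smith et al. for related methods.",
]
labels = [classify_citation(s) for s in statements]
print(labels)  # ['contrasting', 'supporting', 'mentioning']

# Only supporting/contrasting statements count as replication evidence;
# plain mentions are attention, not reliability.
attempts = [label for label in labels if label != "mentioning"]
print(attempts.count("supporting") / len(attempts) if attempts else None)  # 0.5
```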

In summary, metrics are essential for summarizing and communicating the relevance and quality of science. Unfortunately, our current citation-focused metrics are not doing a good job at either of these objectives. However, new approaches such as Altmetrics and Scite have the potential to improve upon the status quo.