To improve the publishing ecosystem, the incentives and rewards for researchers need to change, which means changing how we evaluate and reward their work. Instead of rewarding flashy but wrong work, we should reward careful, verified work. After all, research is about uncovering truths, not telling nice stories. This can be done by introducing new article-level metrics and measurements for good research practices. Such metrics need to be not only careful and accurate but also usable by administrators and other scientists to evaluate work. Multiple proposals for rewarding careful work have been put forward, from research badges for good practices to the pre-registration of experiments with guaranteed publication. However, the fact that they have not been widely adopted by researchers, or, worse, have had to be incentivized by paying researchers, suggests that while they might benefit research, they do not do enough to benefit the researcher.
More sophisticated citation-based metrics designed specifically to evaluate the veracity of work have also been proposed
\cite{Nicholson2014,Grabitz_2017}. Such metrics could, in theory, provide not only a new incentive and reward for careful work but also a systematic way to highlight the major faults of traditional systems. Consider the following scenario: in today's world, a paper with 100 or more citations is generally considered a success. But what if, of those hundred citations, four studies found it to be wrong based on their own independent evidence and the ninety-six others simply mentioned it? Would that paper still be considered successful? The ability to examine citing papers and automatically identify whether they are supportive would allow the community to track robust work and would incentivize researchers to do all they could to help others reproduce their findings. That is, they would be incentivized to share data openly, to publish fully detailed protocols so that others could reproduce their work, to publish carefully, and to publish openly. This is already becoming a reality with the R-factor, a tool and metric that uses machine learning and sentiment analysis to identify whether work has been independently supported or refuted. The implications of such a tool have yet to be realized or even tested given the early stage of the R-factor, but community feedback suggests a positive outcome (disclosure: I am the co-inventor of the R-factor).
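To make the scenario above concrete, here is a minimal sketch, assuming citation contexts have already been classified as supporting, refuting, or merely mentioning. It illustrates the idea only; it is not the actual R-factor implementation, and the function and label names are invented for the example.

```python
# Illustrative only: compute a simple veracity-oriented score from citation
# labels ("supporting", "refuting", "mentioning") alongside the raw count.
from collections import Counter

def veracity_score(citation_labels):
    """Fraction of evidential citations (supporting or refuting) that are
    supportive. Returns None if no citation brings independent evidence."""
    counts = Counter(citation_labels)
    evidential = counts["supporting"] + counts["refuting"]
    if evidential == 0:
        return None
    return counts["supporting"] / evidential

# The scenario above: 100 citations, four refuting, ninety-six merely mentioning.
labels = ["refuting"] * 4 + ["mentioning"] * 96
print(len(labels))             # 100 citations: a "success" by traditional counts
print(veracity_score(labels))  # 0.0: every citation that tested the claim refuted it
```

Under such a measure, the hypothetical hundred-citation paper above scores as poorly as it deserves, whereas raw citation counts alone would still flag it as a success.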
A new day
With the right metrics, incentives, and tools, researchers could finally use a system of their own, one that would advance research at an accelerated rate. Imagine the following: a paper is drafted, as described above, on a tool like Authorea. As the document is finished, it is automatically cross-referenced against databases of papers and researchers to surface related work, making it possible to find collaborators during drafting and reviewers once the paper is complete. Tools for matching articles with reviewers, like JANE, already exist; they simply need to be improved.
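As a rough illustration of this kind of content-based matching (a sketch in the spirit of such tools, not JANE's actual algorithm; the reviewer names and abstracts below are invented), candidate reviewers could be ranked by the textual similarity between a manuscript and their recent papers:

```python
# Hypothetical sketch: rank candidate reviewers by TF-IDF cosine similarity
# between a manuscript abstract and the abstracts of their recent papers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_reviewers(manuscript_abstract, reviewer_abstracts):
    """reviewer_abstracts: dict mapping reviewer name -> concatenated abstracts."""
    names = list(reviewer_abstracts)
    corpus = [manuscript_abstract] + [reviewer_abstracts[name] for name in names]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)

candidates = {
    "Reviewer A": "reproducibility of preclinical cancer biology studies",
    "Reviewer B": "deep learning methods for protein structure prediction",
}
print(rank_reviewers("a replication study in cancer biology", candidates))
```

A production system would, of course, match against a full bibliographic database and add constraints such as conflicts of interest and reviewer workload; the point is that the core matching step is well within reach of existing methods.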
Coordinating review, an arduous process of back-and-forth invitations, scheduling, and reminders, could be handled by an AI system. Indeed, Amy and Andrew, the AI-based assistants developed by x.ai, are already used by many people to schedule and reschedule meetings. Why shouldn't such assistants arrange and coordinate peer review? A system that matches, invites, and rewards reviewers could then be developed, dramatically cutting costs and time. Want your paper to be reviewed? Leave a review first, as sketched below. Want to tell whether a paper is worthwhile? Read it and evaluate it in terms of article-level metrics and community sentiment (reviews and endorsements). This might seem like science fiction, but these tools already exist. It is simply a matter of putting them together in a way that researchers can use. It also raises the question: do we need journals?
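Purely as a thought experiment, the "leave a review first" rule could be as simple as a credit ledger; the class below is a hypothetical sketch, not a description of any existing system.

```python
# Toy sketch of a review-credit ledger: completing a review earns a credit,
# and requesting a review of one's own paper spends one.
class ReviewLedger:
    def __init__(self):
        self.credits = {}  # researcher -> number of review credits

    def record_review(self, reviewer):
        """A completed peer review grants the reviewer one credit."""
        self.credits[reviewer] = self.credits.get(reviewer, 0) + 1

    def request_review(self, author):
        """Requesting review of one's own paper spends one credit."""
        if self.credits.get(author, 0) < 1:
            raise ValueError(f"{author} must review a paper before requesting a review")
        self.credits[author] -= 1

ledger = ReviewLedger()
ledger.record_review("alice")   # alice reviews someone else's paper first
ledger.request_review("alice")  # now alice's own paper can enter review
```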
I invite you to help us make this a reality and to share your thoughts on this vision by commenting on this document.