What does judging open science on merit entail?

+3 votes
53 views
asked Aug 4, 2015 in Open Science by Pat W. (40 points)

In this question, Gavin Simpson links to the SF Declaration. While the document is clear in its desire to de-emphasize journal impact factors, it also recommends exploring new article-level metrics.

What are some examples of article-level metrics or methods that would serve a quality-control function similar to that of the journal impact factor?



This post has been migrated from the Open Science private beta at StackExchange (A51.SE)
commented Aug 18, 2015 by Alexander Konovalov (135 points)
If you think that this thread should be migrated to Academia or another SE site because the OpenScience beta is closing, please edit the list of questions shortlisted for the migration [here](http://meta.openscience.stackexchange.com/questions/73/).

commented Aug 18, 2015 by Gavin Simpson (720 points)
@PatW. Right-o. Thanks for the clarification. I'll try to convert my comment here into an answer

commented Aug 18, 2015 by Gavin Simpson (720 points)
If you are reducing this to metrics then I believe you are doing it wrong. As David Colquhoun (UCL) is quick to mention (in reference to academic appointments), *the best places read papers, ignore journals*. That pithy remark sums up my feelings here too. Even altmetrics, for which Colquhoun has little time, fail at the first step of evaluating the merits of the work: instead they quantify those who shout loudly and/or have a good social media presence.

commented Aug 18, 2015 by Pat W. (40 points)
@GavinSimpson Not restricted to actual metrics; edited for clarity.

commented Aug 18, 2015 by Daniel Standage (420 points)
This is an important question. It's fun to entertain ideas about how we could implement the next generation of tools for aggregating and consuming scientific literature: the "Amazon" or "StackExchange" of science, complete with rating and review systems. But as much as JIF is flawed and the current academic system can be gamed, any new system has the potential to be gamed as well, in ways that we may not initially anticipate. In my opinion, a real revolution in open science requires an in-depth dialogue on this and related questions.


2 Answers

+4 votes
answered Aug 4, 2015 by Gavin Simpson (720 points)

The most appropriate method is the one that almost surely will not be widely accepted or used. If you want to understand the quality and impact of a piece or body of work then you need to read that work and evaluate it within the context of the field within which it was published.

The main argument against this is the workload involved, but having impartial experts judge contributions to a field is likely the least gameable of the available options.

The problem with automated metrics such as the one @Jure Triglav mentions is that links or citations in and of themselves do not constitute agreement with a work, nor ascribe merit to it. You only need to look at the top result of a Google search on the term "what happened to the dinosaurs", which is this piece of tripe. At one point, Google even made special mention of it, quoting from that piece of tripe in a card in the search results: see this comment piece for how it used to look.

Further problems relate to the vagaries of citations:

  • scientists are often lazy when it comes to citing past work;
  • the number of citations is often limited by journals;
  • scientists often forget literature older than a few years;
  • citations are often not allowed to data or software.

Links need to be made to publications, and that is done through citations, with all the difficulties they bring.

Whilst altmetrics can provide some support for contributions beyond the traditional scientific literature, such as for software, slide decks etc., at best they are supplementary to a proper evaluation of the unique contribution that a researcher has made to a field. At the moment that requires some considerable human intervention.



commented Aug 18, 2015 by Jure Triglav (110 points)
I agree that having expert reviews is the least gameable option, but it needs to be said that it's still not ungameable (ugh), as the definition of "expert" itself relies on gameable signals. That aside, ideally these reviews would be publicly searchable, and collected and displayed in an accessible fashion, to prevent duplication of work (PubPeer?).

commented Aug 18, 2015 by Gavin Simpson (720 points)
@JureTriglav Agreed; all kinds of bias, subconscious or otherwise, can and do creep into these evaluations, so mechanisms need to be in place to disincentivize such behaviour. Open review is one step in this direction.

+2 votes
answered Aug 4, 2015 by Jure Triglav (110 points)

Any system can be gamed, but some are harder to manipulate, or require collusion among a large and therefore brittle group.

A good article-level metric would be a paper's score under a PageRank-style algorithm run over the citation graph. A paper cited not only by a large number of papers, but by a large number of well-cited papers, is almost certainly an important paper. While it is possible that it is important in a negative sense, that outcome is far less likely, and becomes less likely still as the score grows.
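As a rough illustration of the idea (the paper IDs and link structure below are entirely hypothetical, and this is a minimal power-iteration sketch rather than any production ranking system):

```python
# Toy citation graph: paper -> set of papers it cites (hypothetical IDs).
citations = {
    "A": {"B", "C"},
    "B": {"C"},
    "C": set(),
    "D": {"B", "C"},
}

def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over a citation graph.

    Score flows from each citing paper to the papers it cites;
    papers that cite nothing spread their score evenly (dangling nodes).
    """
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in papers}
        for p, cited in graph.items():
            if cited:
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

scores = pagerank(citations)
# "C" ranks highest: it is cited by well-cited papers, not merely cited often.
```

Raw citation counts would rank B and C closer together here; the recursive weighting is what separates "cited by important work" from "cited a lot".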

It would be very interesting to compare PageRank-style scores with classic metrics such as citation counts, and also with newer metrics such as views, downloads, tweets, likes, etc.

In other words, the world already fairly successfully relies on PageRank for the vast majority of sorting by importance or merit in information lookups, and science is merely a specific variant of this.



