How can a non-trivial scientific standard be maintained in an open peer review system?

+2 votes
300 views
asked Sep 15, 2015 in Open Science by Dilaton (80 points)

The "closed" (pay) journal peer-review method has some drawbacks

  • It is very slow and inefficient: several months can pass between the submission of a paper and its publication.
  • There is a kind of "sampling effect": only the editor plus 2-3 referees judge the paper and decide its fate. This can (and did) lead to mistakes: good papers can get wrongly rejected (Higgs's paper, string theory in the 1970s), while nonsense gets accepted and in certain cases is even globally hyped in the popular media.

Making use of the technological possibilities we have today and shifting to some kind of online public open peer-review system seems able to do away with these negative sampling effects, as everybody who has the needed expertise can take part in the discussion and help judge the merits of a research paper.

However, this only works when everybody is serious and honest enough not to overstep their expertise and refrains from reviewing papers on topics they are not knowledgeable about. Another issue is that opening the scientific peer-review process completely, so that literally everybody can take part, bears the danger that cranks and crackpots will try to advertise their nonsense too.

So how can one keep up a non-trivial scientific standard in a public open peer-review environment, without losing too many advantages compared to the standard "closed" journal peer-review method?

 

2 Answers

+2 votes
answered Sep 18, 2015 by Stefan (20 points)
Depending on the discipline, the shift to open review may be difficult. So much depends on the good reputation of the publishing outlet: some authors do not get promoted if no strict peer review is involved. At Language Science Press we therefore decided to have a two-step reviewing process. The first phase is traditional peer review (limited in time; if the deadline is not met, the book counts as rejected). A second, optional phase is the open review phase.

The first book that went through open reviewing is my Grammatical Theory textbook: http://langsci-press.org/catalog/book/25

The experience was quite positive: I got important remarks that improved the book, and no trolls showed up. Some remarks are in the annotated PDF on the webpage above, and some I received in private emails.

As for the trolls: we plan to combine open review with gamification, so that users and authors can upvote and downvote comments. This would identify science trolls pretty quickly.
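To make the idea concrete, here is a minimal sketch of how comment votes could surface trolls. Everything here is hypothetical (the function name, the threshold, the data shape are my own assumptions, not an actual Language Science Press design): a reviewer whose comments are consistently downvoted gets flagged for moderator attention.

```python
# Sketch: flag authors whose comments are consistently downvoted.
# All names, thresholds, and data are illustrative assumptions.
from collections import defaultdict

def flag_trolls(comments, min_comments=3, score_threshold=-0.5):
    """comments: list of (author, upvotes, downvotes) tuples.
    Returns authors whose average net score per comment is below
    score_threshold, over at least min_comments comments."""
    totals = defaultdict(lambda: [0, 0])  # author -> [net score, comment count]
    for author, up, down in comments:
        totals[author][0] += up - down
        totals[author][1] += 1
    return sorted(
        author for author, (net, n) in totals.items()
        if n >= min_comments and net / n < score_threshold
    )

votes = [
    ("alice", 5, 0), ("alice", 3, 1), ("alice", 2, 0),
    ("troll", 0, 4), ("troll", 1, 6), ("troll", 0, 3),
]
print(flag_trolls(votes))  # -> ['troll']
```

The `min_comments` floor matters: it keeps a single unpopular comment from branding a newcomer a troll, which fits the goal of identifying persistent bad actors rather than punishing disagreement.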

I guess the general question is how to motivate people to do the open reviewing. Is this something for your CV? We want to add motivation through gamification: people who reach a high rank may one day put this into their CVs as a measurement of their service to the community.

 

Some of these ideas can be found in this journal paper and in our grant application:

http://hpsg.fu-berlin.de/~stefan/Pub/oa-jlm.html

http://hpsg.fu-berlin.de/~stefan/Pub/lsp-dfg.html
+1 vote
answered Sep 18, 2015 by Gerhard Paseman
There are some things I would recommend.

 

One is to have a custom Open Science spam filter. You know, one of those self-learning things that recognizes spam, except trained to reject papers. The idea is to ensure a certain consistency of writing and readability of input. Many members here could contribute examples they would like to see rejected, to start the training. How many hints to give the submitter about why a paper was rejected is up to the designers; I would send a form letter saying that it does not meet some of the automatic processing standards, and that a cursory review found the first problem on page 3, with at least X other pages being problematic (or some such thing that tells the submitter a lot of work needs to be done).
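As a rough illustration of what "train it to reject papers" could mean, here is a toy naive-Bayes text classifier trained on community-supplied accept/reject examples. This is a sketch under my own assumptions, not a proposal-grade design; the class name, labels, and training sentences are all made up, and a real deployment would need far more data and features than word counts.

```python
# Toy naive-Bayes filter: learns to label submissions "accept" or
# "reject" from example texts. Purely illustrative.
import math
from collections import Counter

class SubmissionFilter:
    def __init__(self):
        self.word_counts = {"accept": Counter(), "reject": Counter()}
        self.doc_counts = {"accept": 0, "reject": 0}

    def train(self, text, label):
        self.word_counts[label].update(text.lower().split())
        self.doc_counts[label] += 1

    def classify(self, text):
        vocab = set(self.word_counts["accept"]) | set(self.word_counts["reject"])
        scores = {}
        for label in ("accept", "reject"):
            total = sum(self.word_counts[label].values())
            # log prior + Laplace-smoothed log likelihood of each word
            score = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
            for w in text.lower().split():
                score += math.log(
                    (self.word_counts[label][w] + 1) / (total + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

f = SubmissionFilter()
f.train("we prove the theorem with a rigorous derivation", "accept")
f.train("einstein was wrong and my perpetual motion machine works", "reject")
print(f.classify("a rigorous derivation of the theorem"))  # -> accept
```

The point of the sketch is only that rejected examples contributed by members become training data, exactly as the paragraph above suggests.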

 

Second is to have people briefly scan the papers that pass the filter. If the claims are outrageous, or there is some other clear indicator that the paper is not acceptable, see whether the filter can be trained to reject papers containing the offending section.

 

Third is to have a process for review. Once a paper passes the first two stages, it sits in an inbox for everyone to critique. If no one picks it up and comments on it within a certain period, declare a backlog or reviewer shortage, or find another route that would still allow it to be reviewed.

 

Fourth is for papers that have made it through at least two reviewers (ideally at some remove from the authors and their institutions). Those are put into the next stage of the pipeline for either thorough or massive reviewing, which can pick the paper apart. If a paper makes it to the fourth stage, it should be worthy of scrutiny by all. If it makes it past the fourth stage, it should be readable by "enough" people, ideally nonexperts as well as experts.
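The four stages above can be sketched as a simple state machine. The stage names and the two-reviewer gate are my reading of the answer, not an existing system; treat this as a thumbnail of the flow, not an implementation.

```python
# Sketch of the staged review pipeline described above.
# Stage names are hypothetical labels for the four steps.
STAGES = ("auto_filter", "quick_scan", "inbox_review", "open_scrutiny")

def next_stage(paper):
    """Advance a paper one stage, or reject it.
    paper: dict with keys 'stage', 'passed' (bool), 'reviewers' (list)."""
    if not paper["passed"]:
        return "rejected"
    # Hold a paper in the inbox until two independent reviewers sign off.
    if paper["stage"] == "inbox_review" and len(paper["reviewers"]) < 2:
        return "inbox_review"
    i = STAGES.index(paper["stage"])
    return STAGES[i + 1] if i + 1 < len(STAGES) else "accepted"

print(next_stage({"stage": "inbox_review", "passed": True,
                  "reviewers": ["r1", "r2"]}))  # -> open_scrutiny
```

Encoding the pipeline this way makes the gating explicit: a paper can stall (inbox with too few reviewers), fall out (rejected), or advance, which matches the backlog concern raised in the third step.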

 

In all of this, the emphasis should be on good exposition and clear writing. Any portions that do not exhibit this should be clearly marked by a known reviewer, so that others do not waste time trying to interpret the document. Hopefully the automatic filter can be trained to recognize good exposition, and the How-To-Submit documentation should give good examples of clear exposition as well as examples that are unclear (and do not pass the first filter). The goal should be output that could be nominated for good science writing. Even if a paper is speculative and not supported by data, it can be marked as such, and critiques of it would, in an ideal world, include how its ideas could be tested.

 

Gerhard "Ask Me About System Design" Paseman, 2015.09.17
commented Sep 18, 2015 by Stefan (20 points)
I would not use a spam filter. It would exclude non-native speakers and beginners, some of whom would give up in frustration. Science is about collaboration between humans, not about fights between humans and spam filters. By the way: I am glad that openscience.ub.uni-bielfeld uses a captcha that is not a pain in the neck. The early captcha stuff was one of the spam-filtering techniques that excluded people.
commented Sep 18, 2015 by Gerhard Paseman
If you don't exclude, you end up with a pile of stuff that almost no one wants to read. The point of the question was about setting up such an exclusion, called "a non-trivial scientific standard". You don't have to have an automated spam filter, but at some point the submission rate will exceed what the reviewers can handle. There are ways to encourage non-native speakers and beginners, but not by accepting their first submission. Gerhard "By Auto Critiquing It Instead" Paseman, 2015.09.18
