Following my previous post about the relevance of publishing reviews and
reviewer names, as discussed at PLOS One, here is another contribution of mine to the discussion:
If all reviews were made publicly available, that would allow for
some interesting data mining, which might uncover trends in the
reviewing process and, in turn, help us improve it. How
often are reviewers completely inconsistent? (see http://dynamicecology.wordpress.com/2012/11/25/how-random-are-referee-decisions/)
Are reviewers more stringent or less detailed toward authors from
specific backgrounds (gender, country, etc.)? What is the overall
balance in reviews between correcting style and correcting substance?
What proportion of reviewer recommendations are accompanied by a
specific reference? Etc.
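
To make the first of these questions a bit more concrete: if reviewer recommendations were published in a structured form, their consistency could be quantified with a standard agreement statistic such as Cohen's kappa. The little Python sketch below is only an illustration; the recommendation categories and the two reviewers' verdicts are entirely made up, and real reviews would first have to be parsed into such categories.

    from collections import Counter

    def cohen_kappa(ratings_a, ratings_b):
        """Chance-corrected agreement between two raters on the same items."""
        assert len(ratings_a) == len(ratings_b)
        n = len(ratings_a)
        # Observed agreement: fraction of items on which the two raters agree.
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        # Expected agreement under independence, from each rater's marginal frequencies.
        freq_a = Counter(ratings_a)
        freq_b = Counter(ratings_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in set(ratings_a) | set(ratings_b)) / n**2
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical recommendations from two referees on the same ten manuscripts.
    reviewer_1 = ["accept", "minor", "reject", "major", "minor",
                  "reject", "accept", "major", "minor", "reject"]
    reviewer_2 = ["minor",  "minor", "reject", "reject", "accept",
                  "major",  "accept", "major", "minor", "reject"]

    print(f"Cohen's kappa: {cohen_kappa(reviewer_1, reviewer_2):.2f}")

A kappa near 1 would indicate strong agreement between referees, while a value near 0 would suggest their recommendations agree no more often than chance, which is precisely the kind of pattern the post linked above discusses.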