Online - Exposing Scientific Peer Review (Oct08)


Description: This is an in-depth analysis of the evidence behind the scientific peer review process, which is often taken for granted but is subject to many biases and issues. First published online only, October 2008.

Transcript of Online - Exposing Scientific Peer Review (Oct08)

Page 1: Online - Exposing Scientific Peer Review (Oct08)

Peer Review in Science: What Is It, Does It Work, How Should It Evolve?

Alex J Mitchell, University of Leicester [email protected]

October 2008

Page 2: Online - Exposing Scientific Peer Review (Oct08)

Background: About Medical Publishing
Gutenberg publishing, increasing output

Page 3: Online - Exposing Scientific Peer Review (Oct08)

Slide Credit: Robert Gorter – Regional Manager Elsevier, 2008

Page 4: Online - Exposing Scientific Peer Review (Oct08)

http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&id=3263000957901

Increase in Scientific Output

Page 5: Online - Exposing Scientific Peer Review (Oct08)

Publish Publish Publish!

Tony and Sam were pleased with their output for 2007

Page 6: Online - Exposing Scientific Peer Review (Oct08)

Author’s View of What’s Important

[Bar chart (0-100 scale) of features authors rate as important: peer review itself; referees' comments published; referees identified; public commentary on eprints; post-publication public commentary; ability to submit comments.]

http://www.alpsp.org/pub5.ppt

Page 7: Online - Exposing Scientific Peer Review (Oct08)

Introducing Scientific Peer Review: History, Definition, Types, Manuscript Processing

Page 8: Online - Exposing Scientific Peer Review (Oct08)
Page 9: Online - Exposing Scientific Peer Review (Oct08)

History of Peer Review

• "Peer review" of any type goes back to the 17th century and beyond; an early example is "The Inquisition of the Holy Roman and Catholic Church", under which scholars' works were examined for any hints of "heresy".

• The first recorded academic peer review process was at The Royal Society in 1665, under the founding editor of Philosophical Transactions of the Royal Society, Henry Oldenburg, soon followed by "Medical Essays and Observations" published by the Royal Society of Edinburgh in 1731.

• Ray Spier (2002), "The history of the peer-review process", Trends in Biotechnology 20(8), pp. 357-358.

Page 10: Online - Exposing Scientific Peer Review (Oct08)

Definitions

• Peer Review Process (definition from Wikipedia)
– Peer review (also known as refereeing) is the process of subjecting an author's scholarly work, research or ideas to the scrutiny of others who are experts in the same field.
– Peer review requires a willing and able community of experts who give impartial feedback, with no personal credit and no financial or other reward.

• Peer Review Journal
– A peer-reviewed journal is one that has submitted most of its published articles for review by experts who are not part of the editorial staff. The numbers and kinds of manuscripts sent for review, the number of reviewers, the reviewing procedures and the use made of the reviewers' opinions may vary.
– (International Committee of Medical Journal Editors. Uniform Requirements for Manuscripts Submitted to Biomedical Journals. 2001. http://www.icmje.org/)

Page 11: Online - Exposing Scientific Peer Review (Oct08)

Types of Peer Review

Internal vs External
– An internal review is conducted only by editorial staff.
– An external review is conducted by experts in the field.

Blind vs Open
– In Blind Peer Review, submitted manuscripts are sent outside of the journal's publishing or sponsoring organization for review by external reviewers whose identities are hidden.
– In Open Peer Review, reviewers disclose their identity. Often authors are encouraged to suggest possible reviewers, who may or may not be impartial.

Page 12: Online - Exposing Scientific Peer Review (Oct08)

Examples of Problems with Peer Review

• Famous papers that were published and did NOT get peer reviewed:
– Watson & Crick's 1953 paper on the structure of DNA in Nature
– Abdus Salam's paper "Weak and electromagnetic interactions" (1968), which led to a Nobel Prize
– Alan Sokal's "Transgressing the Boundaries..." (1996), which turned out to be a hoax, now known as the Sokal Affair
– Albert Einstein's revolutionary "Annus Mirabilis" papers in the 1905 volume of Annalen der Physik, which were reviewed only by the editor

• Famous papers that were published and passed peer review but later proved to be fraudulent:
– Jan Hendrik Schön (Bell Labs) submitted, and passed peer review with, 15 papers published in Science and Nature (1998-2001) that were found to be fraudulent
– Igor and Grichka Bogdanov published papers in theoretical physics (1999 & 2002) believed by many to be jargon-rich nonsense

• Famous papers that were rejected but later turned out to be seminal works:
– Krebs & Johnson's 1937 paper on the role of citric acid in metabolism was rejected by Nature as being of "insufficient importance" and was eventually published in the Dutch journal Enzymologia. This discovery, now known as the Krebs Cycle, was recognized with a Nobel Prize in 1953.
– Black & Scholes's 1973 paper on "the pricing of options and corporate liabilities" was rejected many times and was eventually published in the Journal of Political Economy through the intercession of Merton Miller. This work led to a Nobel Prize.

Credit: Peggy Dominy & Jay Bhatt

Page 13: Online - Exposing Scientific Peer Review (Oct08)

Does it Work? The Issue in a Nutshell

• "Stand at the top of the stairs with a pile of papers and throw them down the stairs. Those that reach the bottom are published."
=> It is not clear to what extent peer review improves the submitted product.

• "Sort the papers into two piles: those to be published and those to be rejected. Then swap them over!"
=> Peer review is often haphazard and unmonitored.

Adapted from Trish Groves (BMJ), "What Do We Know About Peer Review?"

Page 14: Online - Exposing Scientific Peer Review (Oct08)

Typical Journal Statistics (e.g. BMJ)

• 6000-7000 research papers received
• About 5-7% accepted
• 1000 rejected by one editor within 48 hours
• A further 3000 rejected by a second editor
• Within one week of submission, 3000 read by a senior editor; a further 1500 rejected
• 1500 sent to two reviewers; then 500 more rejected
• 500 reach the weekly manuscript meeting (with the Editor, a clinician and a statistician)
• 350 research articles accepted, usually after revision
• See over for the detailed figure on the next page

Page 15: Online - Exposing Scientific Peer Review (Oct08)

BMJ Manuscript Processing

[Flow diagram of BMJ manuscript processing:]
• 6000 received per year
• 5000 scanned by 1 editor; 1000 immediately rejected (poorly written, obvious flaws, too obscure, hand written, silly mistakes, wrong journal)
• 2500 sent for review; 2500 rejected by the editor
• 1000 rejected by reviewer/editor (methodological concerns, not interesting)
• 1500 sent to hanging committees (3 batches of 500 discussed, each yielding 400 rejected and 100 accepted)
• 300 accepted per year (6 per week + 2 short reports)

Acceptance Rate = 1 in 20
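As a quick sanity check on the figure (my arithmetic, not part of the original slide), the stated rate follows directly from the totals:

$\frac{300 \text{ accepted}}{6000 \text{ received}} = 0.05 = 5\% \approx 1 \text{ in } 20$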

Page 16: Online - Exposing Scientific Peer Review (Oct08)

Functions/Responsibilities of Peer Review

1. Filtering out incorrect, inadequate & fraudulent work

2. Improving the accuracy and clarity of work that warrants acceptance

3. Helping journals deal with high volumes

4. Helping journals deal with multiple publication

Page 17: Online - Exposing Scientific Peer Review (Oct08)

These Are NOT Functions of Peer Review

• Deciding whether the paper should be accepted
– This is the role of the editor

• Improving the spelling and grammar
– This is the role of the copy-editor

• Improving on the study design
– This is the role of the author

• Deciding upon the author order
– This is the role of the authors

• Disseminating the reviewed paper
– This is not allowed unless the paper is officially in print

Page 18: Online - Exposing Scientific Peer Review (Oct08)

Practicalities of Peer Review: Filters, Publication Cycle, Responsibilities, Tips

Page 19: Online - Exposing Scientific Peer Review (Oct08)

Overview of Peer Review

[Flow diagram: a manuscript passes through three filters. The Author Filter covers article appraisal and where to submit. The Editorial Filter precedes the Peer Review Filter, which produces a qualitative grade (comments to the author), a quantitative grade (a score of 1-10) and a graded outcome: accept, revise or reject. Accepted work becomes the published article; rejected work is sent elsewhere.]

Credit: Bradley Hemminger

Page 20: Online - Exposing Scientific Peer Review (Oct08)

Conventional Publication Cycle – 12mo

Step 1: Paper completed
Step 2: Present to colleagues (informal PR)
Step 3: Present at conference
Step 4: Submit to journal
Step 5: Editor agrees to send for review
Step 6: Peer review 1
Step 7: Revision 1
Step 8: Peer review 2
Step 9: Revision 2
Step 10: Proof
Step 11: Restricted publication
Step 12: Readers' letters (public PR)

(Timeline markers from the original figure: T0, T+1mo, T+3mo, T+6mo, T+7mo, T+10mo, T+12mo, T+14mo, T+15mo, T+18mo, T+24mo.)

Page 21: Online - Exposing Scientific Peer Review (Oct08)
Page 22: Online - Exposing Scientific Peer Review (Oct08)

Questions before starting to Review

• Expertise:
– Do I have expertise in the content or methods, or a valuable perspective on the issue?

• Potential conflicts:
– Do I have conflicts of interest that preclude fair and balanced judgments?
– Do I stand to gain, either financially or personally, from reviewing this particular manuscript?
– Will I be able to hold the main information that I gain from reviewing this manuscript confidential until publication?

• Ability to meet the deadline:
– Do I have the time to devote to this review and complete it by the date the editors requested?

Peer Review: Integral to Science and Indispensable to Annals. Annals of Internal Medicine 2003;139(12):1038.

Page 23: Online - Exposing Scientific Peer Review (Oct08)

Questions whilst completing the Review

• Does the review address the relevance of the topic to readers?
• Does the review address the manuscript's importance and novelty and say what it adds to existing knowledge?
• Does the review address the validity of the research, pointing out major strengths and weaknesses of the methods?
• Does the review address the clarity of presentation?
• Does the review address important missing and/or inaccurate information?
• Does the review address the generalizability of findings?
• Does the review address the interpretation of results and stated conclusions?
• Does the review address whether the authors noted and discussed important limitations?
• Does the review cite specifics to support criticisms?
• Does the review offer suggestions for improvement?
• Does the review keep nitpicking to a minimum?
• Is the review's tone balanced?
• Did I declare potential conflicts of interest?

Peer Review: Integral to Science and Indispensable to Annals. Annals of Internal Medicine 2003;139(12):1038.

Page 24: Online - Exposing Scientific Peer Review (Oct08)

Editor's Dilemma

• What happens when referees disagree?
– (a) the Editor must decide
– (b) the option to use additional reviews as a tie-breaker
– (c) invite authors to reply to a referee's criticisms
– (d) an editor may convey communications back and forth between authors and a referee

– Usually, however, the editor will reject the paper if there is a single negative review, unless there is a special reason not to do so. This process is not open, and such decisions are often unexplained to authors.

Page 25: Online - Exposing Scientific Peer Review (Oct08)

Tips for Reviewers: 1

• Be courteous and constructive
• Your role is advising, not deciding
• Try to suggest improvements no matter what the outcome
• Maintain confidentiality
• Don't review work for those you know well
• Complete reviews promptly, typically within 4 weeks
• Spend at least 1 hour on the review
• Search for related (especially recent) work
• Write as you would like to be written to

Page 26: Online - Exposing Scientific Peer Review (Oct08)

Tips for Reviewers 2 - Key Questions

• Is the research question appropriate?
• Was the question answered?
• Were the methods appropriate?
• What must be improved?
• What could be improved?
• What were the strengths?
• Was all relevant literature considered?
• What will readers think?
• => Would I object if my review were published?

Page 27: Online - Exposing Scientific Peer Review (Oct08)

Problems with Peer Review: Explicit Rules, Reliability, Bias, Blinding, Plagiarism Detection, Delays

Page 28: Online - Exposing Scientific Peer Review (Oct08)

Rules of the Game are Not Explicit

Peters and Ceci (1982) resubmitted 12 altered articles to psychology journals that had already published them, changing only the title, abstract, introduction, authors' names and institutions:
• 3 articles were recognised as resubmissions
• 1 was accepted
• 8 were rejected on methodological grounds!

Peters, D.C., & Ceci, S.J. (1982). Peer review practices of psychological journals: The fate of published articles, submitted again. The Behavioral and Brain Sciences, 5, 187-255.

Page 29: Online - Exposing Scientific Peer Review (Oct08)

What's Wrong with Peer Review?

• For authors, peer review:
– is usually unreliable (idiosyncratic)
– is often unfair (many biases)
– has no gold standard (unstandardized)
– often provides qualitative comments only

• For journals, peer review:
– stifles innovation => encourages groupthink
– rewards the prominent
– causes unnecessary delay in publication
– is very expensive
– does not detect fraud, duplicate publication etc.

Juan Miguel Campanario, "Rejecting Nobel class articles and resisting Nobel class discoveries", cited in Nature, 16 October 2003, Vol 425, Issue 6959, p. 645.
Sophie Petit-Zeman, "Trial by peers comes up short", The Guardian, Thursday January 16, 2003.

Page 30: Online - Exposing Scientific Peer Review (Oct08)

Inter-Rater Reliability Issues - 1

• Locke reported inter-observer values ranging from 0.11 to 0.49 for agreement between a number of reviewers making recommendations on a consecutive series of manuscripts submitted to the BMJ.
• Ingelfinger reported rates of agreement only "moderately better than chance" (kappa = 0.26), with agreement greater for rejection than acceptance.
• Strayhorn and colleagues reported a value of 0.12 (poor agreement) for 268 manuscripts submitted to the JAACAP (Strayhorn et al., 1993).
• Low levels of agreement were reported by Scharschmidt et al. for papers submitted to the Journal of Clinical Investigation (Scharschmidt et al., 1994).

Two reviewers are not enough
• Fletcher and Fletcher (1999): at least six reviewers, all favouring rejection or acceptance, are needed to yield a statistically significant conclusion (p<0.05); see the worked arithmetic after the references below.

Ingelfinger FJ. Peer review in biomedical publication. American Journal of Medicine 1974;56:686-692.
Locke S. A difficult balance: editorial peer review in medicine. London: Nuffield Provincial Hospitals Trust; 1985.
Strayhorn J Jr, McDermott JF Jr, Tanguay P. An intervention to improve the reliability of manuscript reviews for the Journal of the American Academy of Child and Adolescent Psychiatry. Am J Psychiatry 1993;150:947-52.
Scharschmidt BF, DeAmicis A, Bacchetti P, Held MJ. Chance, concurrence and clustering: analysis of reviewers' recommendations on 1000 submissions to the Journal of Clinical Investigation. J Clin Invest 1994;93:1877-80.
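For readers unfamiliar with the statistics quoted above, two short sketches may help. Cohen's kappa corrects observed agreement for agreement expected by chance; the worked numbers here are illustrative assumptions of mine, not figures from the studies cited:

$\kappa = \frac{p_o - p_e}{1 - p_e}, \quad \text{e.g. } p_o = 0.70,\; p_e = 0.60 \;\Rightarrow\; \kappa = \frac{0.10}{0.40} = 0.25$

which is close to Ingelfinger's reported 0.26 and still counts only as "fair" agreement on the conventional Landis-Koch scale. The Fletcher and Fletcher point can be reconstructed in the same spirit, assuming each reviewer independently recommends accept or reject with probability 1/2, so the chance that all n reviewers agree is:

$2 \times \left(\tfrac{1}{2}\right)^n, \quad n = 6 \Rightarrow 2 \times \tfrac{1}{64} = 0.031 < 0.05, \quad n = 5 \Rightarrow 0.0625 > 0.05$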

Page 31: Online - Exposing Scientific Peer Review (Oct08)

Inter-Rater Reliability Issues - 2

• Linkov F et al. Quality Control of Epidemiological Lectures Online: Scientific Evaluation of Peer Review. Croat Med J. 2007;48:249-55.

Page 32: Online - Exposing Scientific Peer Review (Oct08)

Is agreement between reviewers any greater than would be expected by chance alone? (Brain, 2000)

• We studied two journals in which manuscripts were routinely assessed by two reviewers, and two conferences in which abstracts were routinely scored by multiple reviewers.
• Agreement between the reviewers as to whether manuscripts should be accepted, revised or rejected was not significantly greater than that expected by chance.
• Editors were very much more likely to publish papers when both reviewers recommended acceptance than when they disagreed or recommended rejection.
• There was little or no agreement between the reviewers as to priority.

Rothwell PM & Martyn CN. Reproducibility of peer review in clinical neuroscience: is agreement between reviewers any greater than would be expected by chance alone? Brain 2000;123(9):1964-1969.

Page 33: Online - Exposing Scientific Peer Review (Oct08)

Reliability of Editors' Subjective Quality Ratings of Peer Reviews of Manuscripts

• Objective.— Whether editors' quality ratings of peer reviewers are reliable and how they compare with other performance measures.
• Design.— A 3.5-year prospective observational study.
• Participants.— All editors and peer reviewers who reviewed at least 3 manuscripts.
• Main Outcome Measures.— Reviewer quality ratings, individual reviewer rate of recommendation for acceptance, congruence between reviewer recommendation and editorial decision (decision congruence), and accuracy in reporting flaws in a masked test manuscript.
• Interventions.— Editors rated the quality of each review on a subjective 1 to 5 scale.
• Results.— A total of 4161 reviews of 973 manuscripts by 395 reviewers were studied. The within-reviewer intraclass correlation was 0.44 (P<.001), indicating that 20% of the variance seen in the review ratings was attributable to the reviewer. Intraclass correlations for editor and manuscript were only 0.24 and 0.12, respectively. Reviewer average quality ratings correlated poorly with the rate of recommendation for acceptance (R=-0.34) and congruence with editorial decision (R=0.26). Highly rated reviewers reported twice as many flaws as poorly rated reviewers.
• Conclusions.— Subjective editor ratings of individual reviewers were moderately reliable and correlated with reviewer ability to report manuscript flaws.

Callaham ML et al Reliability of Editors' Subjective Quality Ratings of Peer Reviews of Manuscripts JAMA. 1998;280:229-231.

Page 34: Online - Exposing Scientific Peer Review (Oct08)

Bias

Author-related:
• Prestige (author/institution)
• Gender
• Where they live and work

Paper-related:
• Positive results
• English language

Page 35: Online - Exposing Scientific Peer Review (Oct08)

Bias from Hidden Reviewers

[Diagram: the fate of a manuscript depends on who reviews it. A friend of the author leans toward acceptance (+), unknown reviewers toward revision, and an enemy toward rejection (-).]

See Maddox J. Conflicts of interest declared [news]. Nature 1992; 360: 205; Locke S. Fraud in medicine [editorial]. Br Med J 1988; 296: 376–7.

Page 36: Online - Exposing Scientific Peer Review (Oct08)

Open & Blind Review & Submission

• Does revealing reviewers' identity influence outcome?
– Open vs blind peer review

• Does revealing authors' identity influence outcome?
– Open vs blind submission

• Key studies

Page 37: Online - Exposing Scientific Peer Review (Oct08)

"Effect of Blinding and Unmasking on the Quality of Peer Review: A Randomized Trial" (JAMA)

• Design and Setting.— Randomized trial of 527 consecutive manuscripts submitted to the BMJ, which were randomized and each sent to 2 peer reviewers.

• Interventions.— Manuscripts were randomized as to whether the reviewers were unmasked, masked, or uninformed that a study was taking place. Two reviewers for each manuscript were randomized to receive either a blinded or an unblinded version.

• Results.— Of the 527 manuscripts entered into the study, 467 (89%) were successfully randomized and followed up. The mean total quality score was 2.87. There was little or no difference in review quality between the masked and unmasked groups (scores of 2.82 and 2.96, respectively) and between the blinded and unblinded groups (scores of 2.87 and 2.90, respectively). There was no apparent Hawthorne effect. There was also no significant difference between groups in the recommendations regarding publication or time taken to review.

• Conclusions.— Blinding and unmasking made no editorially significant difference to review quality, reviewers' recommendations, or time taken to review.

Van Rooyen et al Effect of Blinding and Unmasking on the Quality of Peer Review A Randomized Trial JAMA. 1998;280:234-237.

Open vs Blind Submission

Page 38: Online - Exposing Scientific Peer Review (Oct08)

"Effect on the quality of peer review of blinding peer reviewers and asking them to sign their reports"

• Objective.— To evaluate the effect on the quality of peer review of blinding reviewers to the authors' identities and requiring reviewers to sign their reports.
• Design.— Randomized controlled trial.
• Setting.— A general medical journal.
• Participants.— A total of 420 reviewers from the journal's database.
• Intervention.— We modified a paper accepted for publication, introducing 8 areas of weakness. Reviewers were randomly allocated to 5 groups. Groups 1 and 2 received manuscripts from which the authors' names and affiliations had been removed, while groups 3 and 4 were aware of the authors' identities. Groups 1 and 3 were asked to sign their reports, while groups 2 and 4 were asked to return their reports unsigned. The fifth group was sent the paper in the usual manner of the journal, with authors' identities revealed and a request to comment anonymously. Group 5 differed from group 4 only in that its members were unaware that they were taking part in a study.
• Main Outcome Measure.— The number of weaknesses in the paper that were commented on by the reviewers.
• Results.— Reports were received from 221 reviewers (53%). The mean number of weaknesses commented on was 2 (1.7, 2.1, 1.8, and 1.9 for groups 1, 2, 3, and 4 and 5 combined, respectively). There were no statistically significant differences between groups in their performance. Reviewers who were blinded to authors' identities were less likely to recommend rejection than those who were aware of the authors' identities (odds ratio, 0.5; 95% confidence interval, 0.3-1.0).
• Conclusions.— Neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports had any effect on the rate of detection of errors. Such measures are unlikely to improve the quality of peer review reports.

Godlee et al JAMA. 1998;280:237-240.

Open vs Blind Submission

Page 39: Online - Exposing Scientific Peer Review (Oct08)

"The effects of blinding on the quality of peer review. A randomized trial"

• Peer reviewers are sometimes blinded to authors' and institutions' names, but the effects of blinding on review quality are not known.
• We therefore conducted a randomized trial of blinded peer review. Each of 127 consecutive manuscripts of original research submitted to the Journal of General Internal Medicine was sent to two external reviewers, one of whom was randomly selected to receive a manuscript with the authors' and institutions' names removed.
• Reviewers were asked, but not required, to sign their reviews.
• Blinding was successful for 73% of reviewers.
• Quality of reviews was higher for the blinded manuscripts (3.5 vs 3.1 on a 5-point scale). Forty-three percent of reviewers signed their reviews, and blinding did not affect the proportion who signed. There was no association between signing and quality. Our study shows that, in our setting, blinding improves the quality of reviews and that research on the effects of peer review is possible.

McNutt et al JAMA Vol. 263 No. 10, March 9, 1990

Open vs Blind Submission

Page 40: Online - Exposing Scientific Peer Review (Oct08)

"Does Masking Author Identity Improve Peer Review Quality? A Randomized Controlled Trial"

• Objectives.— To determine whether masking reviewers to author identity is generally associated with higher quality of review at biomedical journals, and to determine the success of routine masking techniques.

• Interventions.— Two peers reviewed each manuscript. In one study arm, both peer reviewers received the manuscript according to usual masking practice. In the other arm, one reviewer was randomized to receive a manuscript with author identity masked, and the other reviewer received an unmasked manuscript.

• Main Outcome Measure.— Review quality on a 5-point Likert scale as judged by manuscript author and editor. A difference of 0.5 or greater was considered important.

• Results.— A total of 118 manuscripts were randomized, 26 to usual practice and 92 to intervention. In the intervention arm, editor quality assessment was complete for 77 (84%) of 92 manuscripts. Author quality assessment was complete on 40 (54%) of 74 manuscripts. Authors and editors perceived no significant difference in quality between masked (mean difference, 0.1; 95% confidence interval [CI], -0.2 to 0.4) and unmasked (mean difference, -0.1; 95% CI, -0.5 to 0.4) reviews. We also found no difference in the degree to which the review influenced the editorial decision (mean difference, -0.1; 95% CI,-0.3 to 0.3). Masking was often unsuccessful (overall, 68% successfully masked; 95% CI, 58%-77%), although 1 journal had significantly better masking success than others (90% successfully masked; 95% CI, 73%-98%). Manuscripts by generally known authors were less likely to be successfully masked (odds ratio, 0.3; 95% CI, 0.1-0.8). When analysis was restricted to manuscripts that were successfully masked, review quality as assessed by editors and authors still did not differ.

• Conclusions.— Masking reviewers to author identity as commonly practiced does not improve quality of reviews. Since manuscripts of well-known authors are more difficult to mask, and those manuscripts may be more likely to benefit from masking, the inability to mask reviewers to the identity of well-known authors may have contributed to the lack of effect.

Justice et al. JAMA. 1998;280:240-242.

Open vs Blind Submission

Page 41: Online - Exposing Scientific Peer Review (Oct08)

“Differences in Review Quality and Recommendations for Publication Between Peer Reviewers Suggested by Authors/Editors"

• Design, Setting, and Participants.— Observational study of original research papers sent for external review at 10 biomedical journals. Editors were instructed to make decisions about their choice of reviewers in their usual manner. Journal administrators then requested additional reviews from the author's list of suggestions according to a strict protocol.

• Main Outcome Measure.— Review quality using the Review Quality Instrument and the proportion of reviewers recommending acceptance (including minor revision), revision, or rejection.

• Results.— There were 788 reviews for 329 manuscripts. Review quality (mean difference in Review Quality Instrument score, -0.05; P = .27) did not differ significantly between author- and editor-suggested reviewers. The author-suggested reviewers were more likely to recommend acceptance (odds ratio, 1.64; 95% confidence interval, 1.02-2.66) or revision (odds ratio, 2.66; 95% confidence interval, 1.43-4.97). This difference was larger for acceptance in the open reviews of the BMJ than among the blinded reviews of the other journals (P = .02). Where author- and editor-suggested reviewers differed in their recommendations, the final editorial decision to accept or reject a study was evenly balanced (50.9% of decisions consistent with the preferences of the author-suggested reviewers).

• Conclusions.— Author- and editor-suggested reviewers did not differ in the quality of their reviews, but author-suggested reviewers tended to make more favorable recommendations for publication.

Schroter JAMA 2006;295:314-317; see also Scharschmidt et al J Clin Invest 1994; 93: 1877–80.

Choice vs No-Choice Reviewer


Page 42: Online - Exposing Scientific Peer Review (Oct08)

COPE Studies

• Deliberately inserted 8 errors (method, analysis and interpretation) into an accepted paper
• Sent it to 400 reviewers; 221 responded
• The mean number of weaknesses found was 2
• Only 10% identified 4 or more
• 16% didn't detect any

Godlee F et al. JAMA 1998;280:237.
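Put another way (my arithmetic from the slide's figures, not the paper's): the average reviewer detected

$\frac{2}{8} = 25\%$

of the deliberately inserted errors.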

Page 43: Online - Exposing Scientific Peer Review (Oct08)

"Open peer review: a randomised controlled trial" (Br Journal Psychiatry 2000)

• 408 manuscripts assigned to reviewers who agreed to participate were randomised to signed or unsigned groups.
• 245 reviewers (76%) agreed to sign their name.
• Signed reviews were of 5% higher quality, more courteous, and took longer to complete than unsigned reviews.
• They were more likely to recommend publication.

Walsh et al (2000) Br Journal Psychiatry 176(1)47-51

Page 44: Online - Exposing Scientific Peer Review (Oct08)

van Rooyen et al. BMJ 1999;318:23-27

• 125 eligible papers were sent to two reviewers, who were randomised to have their identity revealed to the authors or to remain anonymous.
• Identified reviewers were 12 percentage points more likely than anonymous reviewers (35% v 23%) to decline to review the paper.
• There was no significant difference in review quality.
• There was no significant difference in the recommendation regarding publication or the time taken to review the paper.

Page 45: Online - Exposing Scientific Peer Review (Oct08)

Problems: Plagiarism & Duplicates

• A poll of 3,247 scientists funded by the U.S. National Institutes of Health found 0.3% admitted faking data, 1.4% admitted plagiarism, and 4.7% admitted to autoplagiarism (republishing) [Weiss].

• Note: reviewers generally lack access to full raw data!

Weiss, Rick. Many scientists admit to misconduct: degrees of deception vary in poll. Washington Post. June 9, 2005:A03.

Page 46: Online - Exposing Scientific Peer Review (Oct08)

Problems: Delays!!

Page 47: Online - Exposing Scientific Peer Review (Oct08)

“There was some delay in the peer review due to a backlog in the in-tray”

Page 48: Online - Exposing Scientific Peer Review (Oct08)

Credit: [email protected]

Page 49: Online - Exposing Scientific Peer Review (Oct08)

Speed of Publication

Credit: Fiona Godlee

Page 50: Online - Exposing Scientific Peer Review (Oct08)

Evidence behind Peer Review: What does the evidence suggest... who judges the judges?

Page 51: Online - Exposing Scientific Peer Review (Oct08)

"Effects of Editorial Peer Review: A Systematic Review"

• 9 studies considered the effects of concealing reviewer/author identity.
– 4 studies suggested that concealing reviewer or author identity affected review quality (mostly positively); however, methodological limitations make their findings ambiguous.
• One study suggested that a statistical checklist can improve report quality, but another failed to find an effect of publishing another checklist.
• 2 studies of how journals communicate with reviewers did not demonstrate any effect on review quality.
• 1 study failed to show reviewer bias.
• 1 nonrandomized study compared the quality of articles published in peer-reviewed vs other journals.
• Two studies showed that editorial processes make articles more readable and improve the quality of reporting.

Jefferson T et al. JAMA. 2002;287:2784-2786.

Page 52: Online - Exposing Scientific Peer Review (Oct08)
Page 53: Online - Exposing Scientific Peer Review (Oct08)
Page 54: Online - Exposing Scientific Peer Review (Oct08)
Page 55: Online - Exposing Scientific Peer Review (Oct08)

Future of Peer Review: Open publication, training, what makes a good reviewer?

Page 56: Online - Exposing Scientific Peer Review (Oct08)

Open Publication Cycle – 12 months

Step 1: Paper completed
Step 2: Present to colleagues (informal PR)
Step 3: Present at conference
Step 4: Submit to online journal
Step 5: Editor publishes "discussion paper"
Step 6: Public comments
Step 7: Peer review 1
Step 8: Revision
Step 9: Online proof
Step 10: Open Access publication
Step 11: Readers' letters (post-publication)

(Timeline markers from the original figure: T0, T+1mo, T+3mo, T+5mo, T+6mo, T+7mo, T+9mo, T+11mo, T+12mo.)

Page 57: Online - Exposing Scientific Peer Review (Oct08)

New Models: Case 1

• All submitted articles within scope are immediately posted on the Web for a 90-day discussion period

• At the end of the "review" period, authors are given the option to revise; the revised article is sent out for "pass-fail" review

• If it passes, the article is published

Page 58: Online - Exposing Scientific Peer Review (Oct08)

New Models: Case 2

• Authors select reviewers from among BD editorial board members
• Reviews published alongside the author's responses as part of the article
• Three reviews required

Page 59: Online - Exposing Scientific Peer Review (Oct08)

New Models: Case 3

• Pre-publication review focuses on technical rather than subjective issues

• All published papers made available for community-based open peer review including online annotation, discussion, and rating

• Managing Editor, Chris Surridge

Page 60: Online - Exposing Scientific Peer Review (Oct08)

Open Access Publishing: Discussion Forum (Publication Stage 1) + Journal (Publication Stage 2)

Page 61: Online - Exposing Scientific Peer Review (Oct08)

Publish without a Publisher?

Page 62: Online - Exposing Scientific Peer Review (Oct08)

Improving Peer Review by Training?

Page 63: Online - Exposing Scientific Peer Review (Oct08)

Effects of training on quality of peer review: randomised controlled trial

• Reviewers in the self-taught group scored higher in review quality after training than did the control group, but the difference was not of editorial significance and was not maintained in the long term.
– Both intervention groups identified significantly more major errors after training than did the control group (3.14 and 2.96 v 2.13; P < 0.001), and this remained significant after the reviewers' performance at baseline assessment was taken into account.
– The evidence for benefit of training was no longer apparent on further testing six months after the interventions. Training increased the likelihood of recommending rejection (92% and 84% v 76%; P = 0.002).

• Short training packages have only a slight impact on the quality of peer review.

• http://resources.bmj.com/bmj/reviewers/training-materials

Schroter et al BMJ 2004;328:673

Page 64: Online - Exposing Scientific Peer Review (Oct08)

What errors do peer reviewers detect, and does training improve their ability to detect them?

• Design.— 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted.
• Main outcome measures.— The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training.
• Results.— The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of reviewers rejecting the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper.
• Conclusions.— Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

J R Soc Med 2008;101:507-514
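To put those means in perspective (my arithmetic on the reported figures, not a calculation from the paper):

$\frac{2.58}{9} \approx 29\% \text{ of major errors detected at baseline, rising only to } \frac{3.0}{9} \approx 33\%$

by the third paper, i.e. roughly two-thirds of the deliberately inserted major errors went unreported even after training.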

Page 65: Online - Exposing Scientific Peer Review (Oct08)

What Makes a Good Reviewer?

When a reviewer:
• was less than 40 years old,
• was from a top academic institution,
• was well known to the editor choosing the reviewer, and
• was blinded to the identity of the manuscript's authors,
the probability of a good review was 87% vs 7%.

Evans AT, McNutt RA, Fletcher SW, Fletcher RH. The characteristics of peer reviewers who produce good-quality reviews. J Gen Intern Med. 1993 Aug;8(8):422-8

Page 66: Online - Exposing Scientific Peer Review (Oct08)

Why Credit for Reviewing?

• See: Leanne Tite, Sara Schroter. Why do peer reviewers decline to review? A survey. Journal of Epidemiology and Community Health 2007;61:9-12; doi:10.1136/jech

Page 67: Online - Exposing Scientific Peer Review (Oct08)
Page 68: Online - Exposing Scientific Peer Review (Oct08)

Richard Horton (editor The Lancet):

• "The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability — not the validity — of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong."

eMJA: Horton, Genetically modified food: consternation, confusion, and crack-up

Page 69: Online - Exposing Scientific Peer Review (Oct08)

Overview of Changes

             Paper in print   Author (draft) identity   Peer identity
Past         Declared         Declared                  Hidden
Current      Declared         Partial anonymity         Partially declared
Future       Declared         Anonymous                 Declared

Page 70: Online - Exposing Scientific Peer Review (Oct08)

10 Suggestions for Better Peer Review

1. Make peer review part of training (with supervision)
2. Disclose the identities of the reviewers
3. Hide the identities of the authors
4. Peer review with a minimum of 3 reviewers
5. Have a quantitative and a qualitative element
6. Allow anyone to comment whilst the article is "online only"
7. Score every peer review and avoid use of low scorers
8. Require reviewers to disclose absence of bias & require authors to submit full datasets
9. Show the authors the full review on request
10. Publish the peer reviews for every paper (accepted or rejected) online

Page 71: Online - Exposing Scientific Peer Review (Oct08)

Further Reading

Page 72: Online - Exposing Scientific Peer Review (Oct08)

References

• Smith R. Peer review: a flawed process at the heart of science and journals. JRSM 2006;99:178-182.
• Linkov F et al. Scientific journals are 'faith based': is there science behind peer review? JRSM 2006;99:596-598.
• Schroter S et al. What errors do peer reviewers detect, and does training improve their ability to detect them? JRSM 2008;101:507-514.
• Rennie D. Editorial peer review: its development and rationale. In: Godlee F, Jefferson T, editors. Peer review in health sciences. Second edition. London: BMJ Books, 2003:1-13.
• Overbeke J, Wager E. The state of evidence: what we know and what we don't know about journal peer review. In: Godlee F, Jefferson T, editors. Peer review in health sciences. Second edition. London: BMJ Books, 2003:45-61.
• Fletcher RH, Fletcher SW. The effectiveness of editorial peer review. In: Godlee F, Jefferson T, editors. Peer review in health sciences. Second edition. London: BMJ Books, 2003:62-75.
• Martyn C. Peer review: some questions from Socrates. In: Godlee F, Jefferson T, editors. Peer review in health sciences. Second edition. London: BMJ Books, 2003:322-8.
• Smith R. The future of peer review. In: Godlee F, Jefferson T, editors. Peer review in health sciences. Second edition. London: BMJ Books, 2003:329-46.

Page 73: Online - Exposing Scientific Peer Review (Oct08)

Appendix• Checklist for critical appraisal• http://ap.psychiatryonline.org/cgi/content/full/28/2/81/A1

Page 74: Online - Exposing Scientific Peer Review (Oct08)

Credit: Panayiota Polydoratou and Martin Moyle

Page 75: Online - Exposing Scientific Peer Review (Oct08)

Sources

1. Kronick DA. Peer-review in 18th-century scientific journalism. JAMA. 1990;263:1321-1322.
2. Overbeke J. The state of the evidence: what we know and what we don't know about journal peer review. In: Godlee F, Jefferson T, eds. Peer Review in Health Sciences. London, England: BMJ Books; 1999:32-45.
3. Alderson P, Davidoff F, Jefferson TO, Wager E. Editorial peer review for improving the quality of reports of biomedical studies [Protocol for a Cochrane Methodology Review]. Oxford, England: Cochrane Library, Update Software; 2001; issue 3.
4. Rennie D. Editorial peer review in biomedical publication. JAMA. 1990;263:1317.
5. Rennie D, Flanagin A. The second International Congress on Peer Review in Biomedical Publication. JAMA. 1994;272:91.
6. Rennie D, Flanagin A. Congress on Biomedical Peer Review. JAMA. 1998;280:213.
7. Wager E, Middleton P. Effects of technical editing in biomedical journals: a systematic review. JAMA. 2002;287:2821-2824.
8. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. JAMA. 1990;263:1371-1376.
9. Fisher M, Friedman SB, Strauss B. The effects of blinding on acceptance of research papers by peer review. JAMA. 1994;272:143-146.
10. Jadad AR, Moore A, Carroll D, et al. Assessing the quality of reports of randomized clinical trials. Control Clin Trials. 1996;17:1-12.
11. van Rooyen S, Godlee F, Evans S, et al. Effect of blinding and unmasking on the quality of peer review. JAMA. 1998;280:234-237.
12. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding peer reviewers and asking them to sign their reports. JAMA. 1998;280:237-240.
13. Justice AC, Cho MK, Winker MA, et al. Does masking author identity improve peer review quality? JAMA. 1998;280:240-242.
14. van Rooyen S, Godlee F, Evans S, et al. Effect of open peer review on quality of reviews and on reviewers' recommendations. BMJ. 1999;318:23-27.
15. Das Sinha S, Sahni P, Nundy S. Does exchanging comments of Indian and non-Indian reviewers improve the quality of manuscript reviews? Natl Med J India. 1999;12:210-213.
16. Walsh E, Rooney M, Appleby L, Wilkinson G. Open peer review. Br J Psychiatry. 2000;176:47-51.
17. Jefferson T, Smith R, Yee Y, et al. Evaluating the BMJ guidelines for economic submissions. JAMA. 1998;280:275-277.
18. Gardner MJ, Bond J. An exploratory study of statistical assessment of papers published in the British Medical Journal. JAMA. 1990;263:1355-1357.
19. Bingham CM, Higgins G, Coleman R, Van der Weyden MB. The Medical Journal of Australia Internet peer-review study. Lancet. 1998;352:441-445.
20. Neuhauser D, Koran CJ. Calling Medical Care reviewers first. Med Care. 1989;27:664-666.
21. Callaham ML, Wears RL, Waeckerle JF. Effect of attendance at a training session on peer reviewer quality and performance. Ann Emerg Med. 1998;32:318-322.
22. Strayhorn J, McDermott JF Jr, Tanguay P. An intervention to improve the reliability of manuscript reviews for the Journal of the American Academy of Child and Adolescent Psychiatry. Am J Psychiatry. 1993;150:947-952.
23. Ernst E, Resch KL. Reviewer bias against the unconventional? Complement Ther Med. 1999;7:19-23.
24. Elvik R. Are road safety evaluation studies published in peer reviewed journals more valid than similar studies not published in peer reviewed journals? Accid Anal Prev. 1998;30:101-118.
25. Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med. 1994;121:11-21.
26. Pierie JP, Walvoort HC, Overbeke AJ. Readers' evaluation of effect of peer review and editing on quality of articles in the Nederlands Tijdschrift voor Geneeskunde. Lancet. 1996;348:1480-1483.
27. Jefferson T, Wager E, Davidoff F. Measuring the quality of editorial peer review. JAMA. 2002;287:2786-2790.

Page 76: Online - Exposing Scientific Peer Review (Oct08)

“That’s it? That’s Peer Review?”