Reproducibility in Research
Dr Alexandra Bannach-Brown
Institute for Evidence-Based Healthcare, Bond University
@ABannachBrown
Background
• Systematic review of biomedical literature describing animal models
• Translating systematic review findings
• Research Quality
• Open Science & Open Access
Research
• Aim: to improve scientific theories for increased understanding of natural phenomena
• Analysis and interpretation of observations
• Experimental or spontaneous
• Leads to knowledge claim
• Usually involves statistical analysis
Research Cycle
• Generate and specify hypothesis
• Design study
• Conduct and collect data
• Analyse data and test hypothesis
• Interpret results
• Publish/conduct next experiment
“Manifesto for Reproducible Science”, Munafo et al., 2017
Bench to Bedside
Nonclinical Development
• Formulation
• Laboratory development
• Quality control & assurance
Preclinical Development
• Animal studies
• Bioavailability
• Pharmacokinetic & pharmacodynamic studies
Clinical Development
• Phase I
• Phase II
• Phase III
• Phase IV
Reproducibility and Replication
“Reproducibility”- re-analysis of existing data using the same analytical procedures.
“Replication” - collection of new data, following the same methods.
Peng (2011). "Reproducible Research in Computational Science", Science, 334:1226-1227
Reproducibility Spectrum
[Figure: spectrum from "Not Reproducible" to "Gold Standard" – publication only → publication + fully reported methods → + methods & data → + linked methods, data & analysis code → fully replicable]
Research Quality
• How credible are the findings presented?
• Is the design appropriate?
• Rigorous
• Designed to mitigate risk of bias
Replication
Ioannidis et al., 2014“Reproducibility in Science”, Begley & Ioannidis, Circulation Research. 2015;116:116-126
Replication
“Reproducibility in Science”, Begley & Ioannidis, Circulation Research, 2015;116:116-126
“Estimating the reproducibility of psychological science”, Open Science Collaboration, Science, 2015;349(6251)
The average neuroscience study is powered between 8% and 31% (Button et al., 2013)
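Power this low can be checked before a study is run. A minimal sketch using the normal approximation for a two-sided, two-group comparison of a standardised mean difference (the function name and the example sample sizes are illustrative, not from the talk):

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-group comparison of a
    standardised mean difference d (normal approximation to the t-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    ncp = abs(d) * (n_per_group / 2) ** 0.5  # non-centrality parameter
    return z.cdf(ncp - z_alpha) + z.cdf(-ncp - z_alpha)

# A medium effect (d = 0.5) with 10 animals per group is badly
# underpowered (~20%); roughly 64 per group reach ~80% power.
low = power_two_sample(0.5, 10)
adequate = power_two_sample(0.5, 64)
```

The normal approximation slightly overstates power at very small n, but it is close enough to show how far below 80% a typical study sits.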
Threats to reproducible science
“Manifesto for Reproducible Science”, Munafo et al., 2017
Research cycle: Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment
Research Plan
• Your research proposal is based on what is already known
• How do you know that this knowledge is reliable?
• Were the experiments on which this knowledge is based at risk of bias?
• Is the summary of knowledge skewed by publication bias?
• Infarct Volume
• 11 publications, 29 experiments, 408 animals
• Improved outcome by 44% (35-53%)
Macleod et al, 2008
Bias in in vivo stroke studies
[Figure: efficacy by use of randomisation, blinded assessment of outcome, and blinded conduct of the experiment]
SAINT II Phase 3 Clinical Trial
Bias in other in vivo domains
Multiple sclerosis, Parkinson's disease, Alzheimer's disease, stroke
Reporting of measures to reduce risk of bias in laboratory studies
• Reporting of measures to reduce bias in 254 reports of in vivo, ex vivo or in vitro studies involving non-human animals, identified from a random sample of 2,000 publications from PubMed
Reporting of measures to reduce the risk of bias in publications from ‘leading’ UK institutions
Reporting of measures to reduce bias in 1,173 in vivo studies involving non-human animals from leading UK institutions, published in 2009 and 2010
Reporting risk of bias by journal impact factor
[Figure: prevalence of reporting (y-axis) by journal impact factor (x-axis)]
Is the summary of knowledge skewed by publication bias?
Model             Outcome      Observed  Corrected
Disease models    improvement  40%       30%       (less improvement – benefit overstated)
Toxicology model  harm         0.32      0.56      (more harm – harm understated)
Different patterns of publication bias in different fields

Field                    n expts  Est. unpublished  Reported efficacy  Corrected efficacy
Stroke – infarct volume  1359     214               31.3%              23.8%
EAE – neurobehaviour     1892     505               33.1%              15.0%
EAE – inflammation       818      14                38.2%              37.5%
EAE – demyelination      290      74                45.1%              30.5%
EAE – axon loss          170      46                54.8%              41.7%
AD – water maze          80       15                0.688 sd           0.498 sd
AD – plaque burden       632      154               0.999 sd           0.610 sd
Publication bias
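One common way to probe for publication bias is Egger's regression for funnel-plot asymmetry. A minimal pure-Python sketch (the function name and test data are illustrative; this is not necessarily the correction method used for the estimates above):

```python
def egger_regression(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress each
    study's standardised effect (effect / SE) on its precision (1 / SE).
    An intercept far from zero flags small-study effects, one signature
    of publication bias."""
    x = [1.0 / se for se in ses]                  # precision
    y = [e / se for e, se in zip(effects, ses)]   # standardised effect
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx                    # pooled effect estimate
    intercept = mean_y - slope * mean_x  # asymmetry term
    return intercept, slope

# A perfectly symmetric set of studies (same true effect at every
# precision) gives an intercept of ~0 and recovers the true effect:
intercept, slope = egger_regression(
    [0.5] * 5, [0.1, 0.2, 0.3, 0.4, 0.5])
```

In practice a formal test would also compute a standard error and p-value for the intercept; this sketch only shows the core regression.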
Research cycle: Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment
Is the current study at risk of bias?
Expectancy effects
• 12 graduate psychology students
• 5-day experiment: rats in a T-maze with the dark arm alternating at random, and the dark arm always reinforced
• 2 groups – “Maze bright” and “Maze dull”

Group          Day 1   Day 2   Day 3   Day 4   Day 5
“Maze bright”  1.33    1.60    2.60    2.83    3.26
“Maze dull”    0.72    1.10    2.23    1.83    1.83
Δ              +0.60   +0.50   +0.37   +1.00   +1.43

Rosenthal & Fode (1963), Behavioral Science, 8:183-189
It’s not just in the measurement
[Figure: improvement in behavioural outcome (standardised effect size, 0.0–1.2) with vs without blinded assessment of behavioural outcome]
Research cycle: Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment
Lack of an a priori plan (protocol) leaves you chasing shadows
Perils of testing non-prespecified hypotheses
• International Study of Infarct Survival-2 (ISIS-2)
– Aspirin improves outcome in myocardial infarction, BUT
– Non-significant worsening of outcome for patients born under Gemini or Libra
• What if it was patients with migraine?
Baigent et al., 1998. ISIS-2. BMJ.
Peto, R., 2011. Current misconception 3. Br J Cancer.
Research cycle: Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment
The final paper doesn’t provide useful information
Poor Reporting & Quality
• Clinical Trials (Cochrane):
• High/unclear risk of bias
• TIDieR – poor reporting of interventions (Hoffmann et al., 2014)
• Animal Studies:
• Poor reporting of measures to reduce the risk of bias
• Poor reporting of general methods
Poor Reporting in 20,920 RCTs – Dechartres et al., 2017, BMJ, 357:j2490
Robustness
Crabbe et al., 1999, Science
Researchers are different …
[Figure: number vs quality of publications, F.F.P. (fabrication, falsification, plagiarism), HARKing (hypothesising after results are known)]
Open Science
Preregistration
Risks of bias
How we do research integrity
Research Improvement Strategy
Biomedical Research Investment
• $300bn globally, €50bn in Europe
• Glasziou & Chalmers estimated that 85% of research is wasted
• Even if waste is only 50%, improvements which reduced that by 1% would free $3bn globally every year
• Investing ~1% of research expenditure in improvement activity would go a long way
Research Cycle: Generate and specify hypothesis → Design study → Conduct and collect data → Analyse data and test hypothesis → Interpret results → Publish/conduct next experiment
“Manifesto for Reproducible Science”, Munafo et al., 2017
Take home messages
Before the Experiment:
1. Thoroughly check the quality of the prior knowledge base
2. Ensure an a priori plan or protocol, including a priori hypotheses
During:
3. Implement measures to reduce the risk of bias in experiments; keep thorough records
After:
4. Transparent reporting: report the study in full, make the data available (e.g. on Zenodo), preprint at bioRxiv
Executing Best Practice: Before
1. Thoroughly check the quality of your knowledge base
– Systematic review of the field + critical appraisal
Executing Best Practice
2. Make an a priori plan or protocol – register the protocol
• Exploratory or confirmatory experiment?
• Study population, intervention, primary outcome, sample size calculation, hypothesis, statistical analysis plan
• Time-stamped, read-only, with a persistent unique digital identifier
• Registered before beginning data collection
• Can remain private until the work is published
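The sample size calculation in a protocol can be sketched with the standard normal-approximation formula n = 2 × ((z₁₋α/₂ + z_power) / d)² per group (the function name is illustrative; real protocols may use exact t-based methods instead):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05):
    """Animals per group needed to detect a standardised effect size d
    with a two-sided test at the given power and alpha
    (normal approximation):
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / abs(d)) ** 2)

# For a medium effect (d = 0.5) at 80% power:
n = n_per_group(0.5)  # → 63 per group
```

Writing this number into the registered protocol, before data collection, is what makes the calculation a priori.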
Executing Best Practice: During
1. Implement measures to reduce the risk of bias in your experiments: randomisation, blinding, handling of drop-outs, etc.
2. Ensure traceability of materials, antibodies, code, etc.
3. Record deviations from the study protocol
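Randomisation and blinding can be combined in one step: allocate subjects in shuffled balanced blocks and hand the experimenter only coded labels. A minimal sketch (function name, group codes and seed are all illustrative):

```python
import random

def blinded_block_allocation(n_subjects, groups=("A", "B"),
                             block_size=4, seed=2024):
    """Blocked randomisation: shuffle balanced blocks so group sizes
    stay equal over time, returning only coded labels so the
    experimenter stays blinded to which code is the treatment."""
    if block_size % len(groups):
        raise ValueError("block_size must be a multiple of the group count")
    rng = random.Random(seed)  # seed held by an independent party
    allocation = []
    while len(allocation) < n_subjects:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]

# 16 animals: every block of 4 contains two of each coded group.
alloc = blinded_block_allocation(16)
```

The code-to-treatment key is kept by someone not involved in outcome assessment and unblinded only after analysis.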
Executing Best Practice: After
4. Transparent reporting:
• Ensure the study is reported in full – methods, reagents, intervention
• Make the data and analysis code available – "as open as possible, as closed as necessary"
• Link all with DOIs
Reporting Guidelines: Before & After
https://www.equator-network.org/
Preprints
● A version of your final manuscript before peer review
● Preprint archives:
○ medRxiv
○ bioRxiv
○ MetaArXiv
○ PsyArXiv
○ arXiv
● Free & has a DOI = citable
● Compatible with all (but one) major publishing groups
○ Link to the published article from the preprint
Dr Alexandra Bannach-Brown
Institute for Evidence-Based Healthcare
Bond University
[email protected]
@ABannachBrown