Innovative Study Designs for Implementation Research


Transcript of Innovative Study Designs for Implementation Research

Page 1: Innovative Study Designs for Implementation Research

Innovative Study Designs for Implementation Research

Geoffrey M. Curran, PhD

Director, Center for Implementation Research

Professor, Departments of Pharmacy Practice and Psychiatry

University of Arkansas for Medical Sciences

Research Health Scientist, Central Arkansas Veterans Healthcare System

Page 2: Innovative Study Designs for Implementation Research

Goals for the session

• Present innovative dissemination and implementation (D&I) research designs
  – People do disagree on what is innovative, so these comments and recommendations are my own (based on some experience)

• Let's not forget being well-grounded/cited, though
  – So, balance innovation with being informed by what others have tried successfully

• Focus today on designs directed towards developing and/or testing implementation strategies
  – Designs often used in R01 implementation "trials," or R21/34-type pilots
  – Stepped Wedge, SMART, Hybrid Effectiveness-Implementation, Developmental Iterative

• A few other "innovative" elements to consider within these designs

Page 3: Innovative Study Designs for Implementation Research

Who am I?

• Sociologist by training (1996)

• Most of the last 20 years in a Department of Psychiatry

– Last 3+ years also in a Department of Pharmacy Practice

• Began doing D&I research in the VA in 1998

– Quality Enhancement Research Initiative (QUERI)

– Implement “EBPs” while studying how best to implement

• VA, NIDA, NIMH, NIDDK, NIMHD implementation research grants

– Testing implementation strategies in support of adoption of EBPs

• Focus as well on methods and design in implementation research

Page 4: Innovative Study Designs for Implementation Research

Frame for discussion today

• Stages of implementation (Wensing and Grol, 2013)

– We’ll focus on designs directed at steps 4-6

• EPIS framework commonly used/cited (Aarons et al., 2011)

– Exploration
– Preparation: developmental iterative designs
– Implementation: Stepped Wedge (SW), SMART, Hybrid
– Sustainment

Page 5: Innovative Study Designs for Implementation Research

Pilot/preparatory work towards strategies

• Implementation barriers and facilitators (B/F) analysis is very, very, very useful
  – Learn about the implementation context
  – Match implementation strategies to B/Fs
  – Adapt the EBP if needed (which it probably is, right Wynne?)
  – KEY POINT: This step, in-and-of-itself, IS NOT INNOVATIVE

• So, you can do this during a clinical effectiveness trial (Hybrid 1)
  – I'll come back to this

• Or, if you are starting at this step in a new study (R21/34), you probably have to take it further down the line towards, and possibly including, pilot testing
  – Here are a few options…

Page 6: Innovative Study Designs for Implementation Research

Next step after B/F: strategy development

• Frameworks/models exist upon which to base your development process for the implementation strategies (Powell et al., 2015; Curran et al., 2008)

– E.g., Concept mapping, intervention mapping, group model building, EBQI

– Use one of these frameworks/models; still “innovative”…

• Recommend: Multi-stakeholder involvement, including "end-users" of strategies from the places where you will pilot test
  – Involve all types of folks whose behavior needs to change
    • "Patient-facing" strategies? Need patients involved
  – A panel that meets on a regular basis; consensus "voting"
    • Builds trust

• Recommend: Deliberate on and build from context-specific B/Fs

Page 7: Innovative Study Designs for Implementation Research

Next step after development: pilot test

• Recommend: Iterative design
  – Build in assessment and revision steps
  – Use the multi-stakeholder panel for continuing deliberations and revisions

• If the next step is an R01, then the iterated implementation strategy can avoid some early fumbles in the "big study"

• Mixed methods are usually necessary for these designs
  – Quant usually for uptake measures
  – Qual usually for feasibility, acceptability

Page 8: Innovative Study Designs for Implementation Research

So, on to Implementation “trial” designs

• Comparison of competing strategies
  – One implementation strategy vs. another; "implementation as usual"?
  – Compare "doses" of the same strategy? Different "bundles" of strategies?
  – Often see "basic" vs. "enhanced"

• Cluster RCT designs common
  – Randomizing providers/places to strategy A vs. B, 1 vs. 2
  – Not going to cover today…

• "Innovative" and useful designs at the moment:
  – "Stepped Wedge"
  – SMART designs (sequential multiple assignment randomized trial)
  – Hybrid Effectiveness-Implementation

Page 9: Innovative Study Designs for Implementation Research

“Stepped Wedge” design

• Use of crossover; begin with one strategy (or comparison condition) and then move to another

• Timing of the start of an implementation strategy is randomly assigned: the "Stepped Wedge" (also called "dynamic wait list" or "roll-out design")

Time:      1  2  3  4
Cohort A:  1  2  2  2
Cohort B:  1  1  2  2
Cohort C:  1  1  1  2

(Cell entries show the condition in place: 1 = comparison condition, 2 = implementation strategy)

Page 10: Innovative Study Designs for Implementation Research

“Stepped Wedge” design

• Use of crossover; begin with one strategy (or comparison condition) and then move to another

• Timing of the start of an implementation strategy is randomly assigned: the "Stepped Wedge" (also called "dynamic wait list" or "roll-out design")

Time:      1  2  3  4
Cohort A:  1  2  2  2
Cohort B:  1  1  2  2
Cohort C:  1  1  1  2

(Cell entries show the condition in place: 1 = comparison condition, 2 = implementation strategy)

Within-site comparison: each cohort before vs. after its crossover (across a row)

Between-site comparison: cohorts in different conditions at the same time period (down a column)
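To make the schedule concrete, here is a minimal sketch (mine, not the speaker's) of generating a randomized stepped-wedge schedule like the table above; the cohort names, number of periods, and one-cohort-per-step structure are illustrative assumptions.

```python
# Minimal sketch: generate a stepped-wedge schedule with randomized crossover
# timing. Condition 1 = comparison condition, condition 2 = implementation
# strategy; cohorts and period count are illustrative.
import random

def stepped_wedge_schedule(cohorts, n_periods, seed=None):
    """Randomly order cohorts, then assign one crossover step per cohort."""
    rng = random.Random(seed)
    order = cohorts[:]
    rng.shuffle(order)  # the randomization: which cohort crosses over first
    schedule = {}
    for step, cohort in enumerate(order, start=2):  # first crossover at period 2
        schedule[cohort] = [1 if period < step else 2
                            for period in range(1, n_periods + 1)]
    return schedule

for cohort, conditions in sorted(stepped_wedge_schedule(["A", "B", "C"], 4, seed=0).items()):
    print(f"Cohort {cohort}: {conditions}")
# One possible draw reproduces the table above:
# Cohort A: [1, 2, 2, 2]; Cohort B: [1, 1, 2, 2]; Cohort C: [1, 1, 1, 2]
```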

Page 11: Innovative Study Designs for Implementation Research

What’s good about Stepped Wedge?

• Partners in the real world like it
  – "Wait, I might be randomized to get nothing? Or the strategy you think is going to work worse?"
  – Everyone gets the "good strategy" eventually

• Track changes both across time and across condition (see the model sketch below)

• Statistical power is better with a low N (few sites) than in comparable parallel designs

• If delivering the implementation strategies is hard to do, you can focus on groups of sites at a time
  – Works well for regional studies
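One way to see how the design separates time from condition is the mixed-model formulation commonly used for stepped-wedge analyses (in the style of Hussey & Hughes, 2007, and discussed in the Hemming et al. reading listed later); the notation below is a sketch, not the speaker's:

```latex
% Sketch of a common stepped-wedge mixed model (Hussey & Hughes style).
% Y_{ijk}: outcome for individual k in cluster (site) i at time period j.
\[
  Y_{ijk} = \mu + \alpha_i + \beta_j + \theta X_{ij} + e_{ijk}
\]
% \alpha_i : random effect for cluster i
% \beta_j  : fixed effect for period j (captures secular trends)
% X_{ij}   : indicator = 1 once cluster i has crossed over by period j
% \theta   : implementation-strategy effect, informed by both within-site
%            (before vs. after crossover) and between-site (same-period) contrasts
```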

Page 12: Innovative Study Designs for Implementation Research

“SMART” design

• Sequential Multiple Assignment Randomized Trial
• Everybody starts with the comparison condition or "first strategy"
• After a designated period of time, measure initial adoption
• Adoption "failures" are then randomized
  – Half continue strategy 1
  – Half get strategy 2
• After another observation period, measure adoption again
  – Remaining failures switch to strategy 2
  – Do a step-down/sustainability phase? Remove strategies?
    • Meaning that strategies go away or change to a new "sustainability strategy", and we see who keeps going and who doesn't
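A minimal sketch of the stage-2 assignment logic just described; the site names, adoption values, threshold, and the exact 50/50 split among "failures" are illustrative assumptions, not values from a specific study.

```python
# Minimal sketch of stage-2 SMART assignment: measure adoption after the
# first period, then randomize the "failures" so half continue strategy 1
# and half switch to strategy 2.
import random

def smart_stage2(adoption_by_site, threshold=0.5, seed=None):
    """Responders stay on strategy 1; failures are randomized 50/50."""
    rng = random.Random(seed)
    failures = [site for site, rate in adoption_by_site.items() if rate < threshold]
    rng.shuffle(failures)
    switchers = set(failures[: len(failures) // 2])  # half of failures switch
    return {site: ("strategy 2" if site in switchers else "strategy 1")
            for site in adoption_by_site}

stage1_adoption = {"site1": 0.80, "site2": 0.20, "site3": 0.10, "site4": 0.65}
print(smart_stage2(stage1_adoption, seed=0))
# e.g., {'site1': 'strategy 1', 'site2': 'strategy 2', 'site3': 'strategy 1', ...}
```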

Page 13: Innovative Study Designs for Implementation Research

Example: Kilbourne et al.

Page 14: Innovative Study Designs for Implementation Research

What’s good about SMART designs?

• Intuitive appeal
  – Only give "strategy 2" to the sites that seem to need it
  – Doesn't make assumptions about who might need strategy 2

• Can learn about why different types of sites need different strategies
  – Aided by assessment of context

• Can learn these things more quickly
  – The same study informs about what different contexts might need
  – Goes ahead and tries to address "failing" sites now, rather than waiting until the next study to deal with them

Page 15: Innovative Study Designs for Implementation Research

Hybrid Effectiveness-Implementation Designs

[Figure: the research pipeline spanning Efficacy Research, Effectiveness Research, and Implementation Research, leading to improved processes and outcomes; "hybrid designs" go in between effectiveness and implementation research]

Page 16: Innovative Study Designs for Implementation Research

Why Hybrid Designs?

• The speed of moving research findings into routine adoption can be improved by considering hybrid designs that combine elements of effectiveness and implementation research
  – Or, combine research questions in both areas

• Don't wait for "perfect" effectiveness data before moving to implementation research

• We can "backfill" effectiveness data while we test implementation strategies

• How do clinical outcomes relate to levels of adoption and fidelity?
  – How will we know this without data from "both sides"?

Page 17: Innovative Study Designs for Implementation Research

Types of Hybrids

[Figure: a spectrum from Clinical Effectiveness Research to Implementation Research, with Hybrid Type 1, Hybrid Type 2, and Hybrid Type 3 arrayed between them]

• Hybrid Type 1: test clinical intervention, observe/gather information on implementation

• Hybrid Type 2: test clinical intervention, test/study implementation strategy

• Hybrid Type 3: test implementation strategies, observe/gather information on clinical outcomes

Page 18: Innovative Study Designs for Implementation Research

Research aims by hybrid types

Hybrid Type I
  – Primary Aim: Determine effectiveness of an intervention
  – Secondary Aim: Better understand context for implementation

Hybrid Type II
  – Primary Aim: Determine effectiveness of an intervention
  – Co-Primary Aim (or "secondary"…): Determine feasibility and/or (potential) impact of an implementation strategy

Hybrid Type III
  – Primary Aim: Determine impact of an implementation strategy
  – Secondary Aim: Assess clinical outcomes associated with implementation

Page 19: Innovative Study Designs for Implementation Research

Hybrid Type 1 Designs

Definition:

• Test clinical intervention and explore implementation-related factors (80%/20%…)

Description:

• Conventional effectiveness study "plus":
  – Describe the implementation experience (worked/didn't; barriers/facilitators)
  – How might the intervention need to be adapted going forward?
  – What is needed to support people/places to do the intervention in the real world?

Indications (circa 2012):

• Clinical effectiveness evidence remains limited, so an intensive focus on implementation might be premature… BUT

• Effectiveness study conditions offer an ideal opportunity to explore implementation issues, or "implementability", and to plan implementation strategies for the next stage

Page 20: Innovative Study Designs for Implementation Research

Remember…

• All effectiveness trials use “implementation strategies” to support the delivery of the intervention; we just don’t call them that…

• They are normally resource-intensive

– Paying clinics, paying interventionists, paying for care, frequent fidelity checks and intervening when it goes south…

• We "know" that the strategies used in effectiveness trials are not feasible for supporting widespread adoption

• BUT, we can learn from the use of these strategies during the trial! We don’t need to wait until “all the effectiveness data are in” to begin to look at implementability

Page 21: Innovative Study Designs for Implementation Research

Hybrid Type 2 Designs

Definition:

• Test clinical/prevention intervention and test/study implementation strategy (50/50? 60/40? 72/28?)

Description:

• Dual-focus study:
  – Clinical/prevention effectiveness trial within either:
    • An implementation trial (so, a comparative effectiveness factorial-type design)
    • A pilot (non-randomized) study of an implementation strategy

Indications (circa 2012):

• Clinical/prevention effectiveness data available, though perhaps not for the context/population of interest for this trial

• Data on barriers and facilitators to implementation available

• System/policy demands encouraging roll-out?
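For the dual-randomized factorial variant (an effectiveness trial nested within an implementation trial), here is a minimal sketch of the two levels of randomization; the clinic names, counts, and strategy labels are illustrative assumptions.

```python
# Minimal sketch of a dual-randomized Hybrid Type 2 layout: sites randomized
# to an implementation strategy, patients randomized to the clinical
# intervention within each site.
import random

def dual_randomize(sites, patients_per_site, seed=None):
    """Two levels of randomization: site-level (strategy) and patient-level (arm)."""
    rng = random.Random(seed)
    order = sites[:]
    rng.shuffle(order)          # site-level randomization, balanced halves
    half = len(order) // 2
    design = {}
    for i, site in enumerate(order):
        strategy = "implementation strategy A" if i < half else "implementation strategy B"
        arms = ["intervention", "usual care"] * (patients_per_site // 2)
        rng.shuffle(arms)       # patient-level randomization within the site
        design[site] = {"strategy": strategy, "patient_arms": arms}
    return design

for site, plan in dual_randomize(["clinic1", "clinic2", "clinic3", "clinic4"], 6, seed=0).items():
    print(site, plan["strategy"], plan["patient_arms"])
```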

Page 22: Innovative Study Designs for Implementation Research

Design Characteristics

• The original definition of a Type 2 described variants: dual-focused, dual-randomized factorial designs & randomized effectiveness trials nested in a pilot of an implementation strategy
  – The majority of currently published Type 2s are the latter
  – Dual-randomized designs have used non-complex interventions/strategies

• When looking at the aims or hypotheses of existing studies, most have a primary focus on intervention outcomes

Page 23: Innovative Study Designs for Implementation Research

Design Characteristics

• Important to have an explicitly described implementation strategy that is thought to be plausible in the real world

• Measure adoption, fidelity…

• Important to be clear about intervention components versus implementation strategy components
  – This isn't always easy to decide or describe
  – E.g., delivery format…
    • Is delivering the intervention over the telephone an intervention component or an implementation strategy?

Page 24: Innovative Study Designs for Implementation Research

Hybrid Type 3 Designs

Definition:

• Test implementation strategy, observe/gather information on the clinical intervention and outcomes

Description:

• Largely focused on a trial of implementation strategies
• Randomization usually at the level of provider, clinic, or system
• Clinical outcomes are "secondary"

Indications (circa 2012):

• We sometimes proceed with implementation studies without completing a "full portfolio" of effectiveness studies (e.g., mandates)
• Interested in exploring how clinical effectiveness might vary by level/quality of implementation?
• More feasible and attractive when clinical outcomes data are more widely available

Page 25: Innovative Study Designs for Implementation Research

Design Characteristics

• Important to use an outcomes framework (an illustrative mapping follows after this list)

– RE-AIM

– Proctor et al., 2011

• Clinical outcomes data collection

– Measures available in existing data?

– Primary data collection?

• Sub-sample?
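As one illustrative way to operationalize an outcomes framework for a Type 3 trial, here is a sketch mapping RE-AIM dimensions to candidate measures and data sources; the specific measures and sources are my assumptions for illustration, not prescribed by RE-AIM or the talk.

```python
# Illustrative sketch only: mapping RE-AIM dimensions to candidate measures
# and data sources for a Hybrid Type 3 trial. Measures/sources are assumed.
outcomes_plan = {
    "Reach":          {"measure": "proportion of eligible patients receiving the EBP",
                       "source": "existing administrative/EHR data"},
    "Effectiveness":  {"measure": "clinical outcome (secondary aim)",
                       "source": "existing data, or primary collection in a sub-sample"},
    "Adoption":       {"measure": "proportion of providers/clinics delivering the EBP",
                       "source": "existing data or strategy tracking logs"},
    "Implementation": {"measure": "fidelity to the EBP protocol",
                       "source": "primary data collection (e.g., chart review)"},
    "Maintenance":    {"measure": "sustained delivery after strategies are withdrawn",
                       "source": "follow-up period in existing data"},
}
for dimension, plan in outcomes_plan.items():
    print(f"{dimension}: {plan['measure']} [{plan['source']}]")
```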

Page 26: Innovative Study Designs for Implementation Research

New thinking on hybrid designs

• New thinking on the "lack of fixed-ness" of interventions is contributing to changing views on the when and why of hybrid-type designs

• Hybrid Type 1: less of a "special case" and more routine?
  – If effectiveness research is the "last step" before trying to get people to do the thing… why not more of a focus on implementation questions?

• We expect dual-randomized Type 2 trials to remain rare
  – Clarity around intervention/strategy components essential

• Hybrid Type 3: less of a "special case" also?
  – When wouldn't we want clinical/prevention outcomes data?
  – Shouldn't we PROVE how much fidelity is important and under what circumstances?
  – Balance of evidence(s), resources, time, expertise

Page 27: Innovative Study Designs for Implementation Research

Innovative add-ons…

• Explore mediation (a sketch follows below)
  – Don't just show that the implementation strategy had an adoption outcome
  – Can you show a causal pathway through beliefs, attitudes, precursor behaviors?
  – Test theory on the mechanism of action of strategies

• Explore different strategy bundles for different implementation phases
  – How might strategies need to differ from initial implementation to a sustainability phase?
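A minimal sketch of a product-of-coefficients approach to the mediation idea above, using simulated data; the column names, effect sizes, and plain OLS models are illustrative assumptions (a real analysis would handle clustering, a binary adoption outcome, and bootstrap confidence intervals).

```python
# Minimal sketch: does the implementation strategy work through provider
# attitudes? Product-of-coefficients mediation on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
strategy = rng.integers(0, 2, n)                     # 0 = comparison, 1 = strategy
attitude = 0.5 * strategy + rng.normal(size=n)       # mediator: provider attitudes
adoption = 0.4 * attitude + 0.1 * strategy + rng.normal(size=n)  # continuous adoption measure
df = pd.DataFrame({"strategy": strategy, "attitude": attitude, "adoption": adoption})

a = smf.ols("attitude ~ strategy", df).fit().params["strategy"]            # strategy -> mediator
b = smf.ols("adoption ~ attitude + strategy", df).fit().params["attitude"] # mediator -> outcome
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```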

Page 28: Innovative Study Designs for Implementation Research

For more info: hot off the presses…

Page 29: Innovative Study Designs for Implementation Research

Other readings…

• Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. 2012. “Effectiveness-implementation hybrid designs…” Med Care 50(3):217-226.

• Glasgow R, Magid D, Beck A, Ritzwoller D, Estabrooks PA. 2005. "Practical clinical trials for translating research into practice…" Med Care 43(6):551-557.

• Brown CH, Ten Have TR, Jo B, et al. 2009. “Adaptive designs for randomized trials in public health.” Annu Rev Public Health 30:1-25.

• Landsverk J, Brown C, Rolls Reutz J, Palinkas L, Horwitz S. 2011. "Design elements in implementation research…" Adm Policy Ment Health Ment Health Serv Res 38(1):54-63.

• Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. 2015. "The stepped wedge cluster randomized trial…" BMJ 350:h391.

• Mdege et al. 2011. "Systematic review of stepped wedge cluster randomized trials shows that design is particularly used to evaluate interventions during routine implementation." J Clin Epidemiol 64(9):936-948.

Page 30: Innovative Study Designs for Implementation Research

Remember… (from Brown et al., 2017)

• “We close by reinforcing the message that researchers and evaluators represent only one sector of people that make design decisions in implementation. More than efficacy and effectiveness studies, dissemination and implementation studies involve significant changes in organizations and communities; as such, community leaders, organizational leaders, and policy makers have far more at stake than do the evaluators. The most attractive scientific design on paper will not happen without the endorsement and agreement of the communities and organizations where these strategies are implemented.”