Implementation science and learning health systems: Connecting the dots


Transcript of Implementation science and learning health systems: Connecting the dots

Implementation science and learning health systems: Connecting the dots

Anne Sales, PhD, RN
Department of Learning Health Sciences, University of Michigan
VA Ann Arbor Healthcare System

Similarities and differences: Knowledge to Action cycle, Learning Health cycle

http://ktclearinghouse.ca/knowledgebase/knowledgetoaction

Knowledge to Action (KTA) cycle: comes out of the health services research, research utilization, and evidence-based medicine traditions.

Learning Health cycle: comes out of the informatics and data science traditions.

Areas  of  similarity

• Starting point
  • Not specified in the diagrams
  • "Begin with a problem"
• Area of concern
  • Articulated by stakeholders or a loosely defined community
  • Sometimes highlighted by existing data, sometimes not
• Move from articulation of the problem through a cycle of action
  • Cycles are iterative
  • Systematic process

What constitutes knowledge?

• KTA: knowledge funnel
  • Knowledge inquiry: search existing literature
  • Knowledge synthesis: systematic review; synthesis of existing literature
  • Knowledge tools or products: products of synthesis; toolkits distilled from syntheses; other tools

• Learning cycle
  • Data collection begins once the learning goal is articulated
  • Data may come from many sources; in health, electronic health records are a primary source
  • Data assembly: cleaning; initial storage
  • Data analysis: interpretation
  • Knowledge generation
  • Knowledge storage

Critical differences?

KTA:
• Knowledge is generated through search and synthesis of existing literature
• Privileges peer-reviewed, published empirical studies
• Underlying assumption of a hierarchy of evidence
• Study design often used as an assessment of quality
• Synthesis across multiple studies, including meta-analysis, is highly valued
• Recommended actions (clinical interventions) based on synthesis and summary of evidence

Learning cycle:
• Knowledge is generated through aggregation and analysis of data
• Emphasis on population-level data; use of existing data
• Underlying assumption of the validity and meaning of "big" data
• Assumes that data of sufficient size are representative of populations
• Constitutes a "better" approximation to the population than samples in "small" data studies

What  do  you  do  with  knowledge  (evidence)?

• KTA: initiate the Action Cycle
  • Identify knowledge-to-action gaps
  • Adapt knowledge to local context
  • Assess barriers to knowledge use
  • Select, tailor, and implement interventions
  • Monitor knowledge use
  • Evaluate outcomes
  • Sustain knowledge use

• Learning cycles
  • Formulate advice messages
  • Share advice
  • Record decisions and outcomes

Both models are at a high level of abstraction

• Emphases are in different places
• KTA is more focused on the action component
  • More detail in some respects: adapt knowledge to local context; assess barriers to knowledge use; select, tailor, and implement interventions; monitor knowledge use; evaluate outcomes; sustain knowledge use
  • But implementation is only one part of one component of the action cycle
• The learning cycle focuses on what you do after you have knowledge stored
  • Emphasis on advice messages: formulation; sharing; documenting decisions and outcomes

“How to” questions

• Select, tailor, and implement interventions? (KTA)
  • These refer to implementation interventions, not the clinical interventions that constitute the "knowledge" being moved into action (practice)
• Formulate advice messages? (Learning cycle)
• Share advice messages?

Neither model provides much guidance on these questions, although there are some implicit assumptions and explicit approaches.

Filling in some gaps: approaches to designing interventions

• Primarily individual-level issues
  • Theoretical Domains Framework (TDF): 14 domains
  • Knowledge; Skills; Social/professional role and identity; Beliefs about capabilities; Optimism; Beliefs about consequences; Reinforcement; Intentions; Goals; Memory, attention and decision processes; Environmental context and resources; Social influences; Emotion; Behavioral regulation

• Primarily organizational issues
  • Consolidated Framework for Implementation Research (CFIR): 5 domains
  • Intervention characteristics; Outer setting; Inner setting; Characteristics of individuals; Process

Can be applied to design components in either model.

Cane et al. Implementation Science 2012, 7:37. doi:10.1186/1748-5908-7-37

http://cfirguide.org/constructs.html
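To make the idea of linking assessed barriers to action/solution sets concrete, here is a minimal sketch in Python. The domain-to-strategy pairings are simplified illustrations for this example, not the validated TDF or CFIR mappings from the behavior change literature.

```python
# Illustrative sketch only: linking assessed TDF-style barrier domains to
# candidate implementation strategies. The pairings below are simplified
# examples, not validated mappings from the behavior change literature.
TDF_STRATEGY_MENU = {
    "Knowledge": ["educational materials", "guideline dissemination"],
    "Skills": ["training sessions", "demonstration and rehearsal"],
    "Environmental context and resources": ["environmental restructuring"],
    "Social influences": ["local opinion leaders", "peer comparison"],
    "Beliefs about capabilities": ["graded tasks", "coaching"],
}

def suggest_strategies(assessed_barriers):
    """Return candidate strategies for each barrier found in an assessment."""
    return {b: TDF_STRATEGY_MENU.get(b, ["no mapped strategy"])
            for b in assessed_barriers}

# Example: a barrier assessment finds knowledge and social-influence problems.
print(suggest_strategies(["Knowledge", "Social influences"]))
```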

Monitoring and evaluation steps

• KTA: monitor knowledge use; evaluate outcomes; sustain knowledge use

• Learning cycle: documenting decisions and outcomes

Other frameworks can be useful as well, such as RE-AIM: Reach, Effectiveness, Adoption, Implementation, Maintenance.

Summary

• Several areas of similarity
• Distinct areas of difference
  • Most particularly in the source of knowledge/evidence
  • Possible area for integration and concordance: GRADE (http://www.gradeworkinggroup.org/)
  • Can "big" data evidence be included?
• Both are silent on key issues of implementation intervention design
  • Use of TDF and CFIR confers some key benefits: linkage between barriers assessed and action/solution sets
• Both describe the need for monitoring and evaluation but provide little guidance
  • Use of other frameworks such as RE-AIM offers useful insights

“Common Loop Assay”: A systematic approach to surveying infrastructure supporting rapid learning about health

Allen Flynn, PharmD
Department of Learning Health Sciences, Medical School, University of Michigan

8th Annual Conference on the Science of Dissemination and Implementation, Washington DC, December 14-15, 2015

Some content in these slides provided by Dr. Charles P. Friedman, University of Michigan, 2015

INFRASTRUCTURE

Outline
• Macro and micro views of a Learning Health System (LHS)
• An LHS requires infrastructure
• A new method for eliciting information about existing infrastructure supporting rapid learning about health: the Common Loop Assay

[Slide graphic: "INFRASTRUCTURE" is the dot being connected here]

LHS Macro: An Ultra-Large Scale System
[Diagram: stakeholder groups (patient groups, insurers, pharma, universities, government/public health, healthcare delivery networks, research institutes, tech industry) joined through core functions: governance, engagement, data aggregation, analysis, dissemination. System attributes: all-inclusive, decentralized, reciprocal, trusted.]

LHS Micro: Learning Cycle (Feedback Loop), run by a Learning Community (of Practice)
[Contrasted on the slide with "Not this": a Research Community (of Discovery) doing D & I]

Examples of Learning Communities (of Practice) operating Feedback Loops
• A "clinical design team" within an organization
• A "collaborative quality improvement group" brings together practitioner-experts throughout a state
• A team of researchers becomes embedded in a clinical program in order to help improve the program

The LHS ‘Infrastructure Proposition’: what’s needed is a socio-technical platform.

Elements making up a community learning platform:
• EHRs
• Registry
• Stats software
• Expertise & leadership
• Digital library
• Message tailoring mechanisms
• Data catalog

Real-world example: a urology learning community

“Common Loop Assay” for studying infrastructure

[Diagram: a socio-technical platform spanning People, Technology, Policy, and Process]
How does this platform support these 7 steps for a learning community?

More about the Common Loop Assay (CLA)
• Questions about all 7 steps
• Hour-long focus groups
• Aims to identify common success factors & challenges related to infrastructure
• Findings inform design and development of new or more advanced LHS infrastructure components

Common Loop Assay Questions/Structure

For each step on the learning cycle (see the sketch after this list):
1. "Who" (by role) is involved?
2. What actually gets done?
3. Which tools/methods are used?
4. What motivates participation?
5. What challenges pertain?
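The assay's structure is essentially a 7 x 5 question grid. A minimal sketch follows; the step names are placeholders, since the slides refer to the learning cycle's steps only by count.

```python
# Sketch of the Common Loop Assay interview grid: five questions asked for
# each of the learning cycle's seven steps. Step names are placeholders; the
# slides reference the steps only by count.
QUESTIONS = (
    '"Who" (by role) is involved?',
    "What actually gets done?",
    "Which tools/methods are used?",
    "What motivates participation?",
    "What challenges pertain?",
)

def assay_guide(steps=None):
    """Yield (step, question) pairs for a focus-group interview guide."""
    steps = steps or [f"Step {i}" for i in range(1, 8)]
    for step in steps:
        for question in QUESTIONS:
            yield step, question

for step, question in assay_guide():
    print(f"{step}: {question}")
```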

What types of results are we generating?

Success factors: web-based registries; collaborative interpretation of results; engaged charismatic leadership; data capture using EHRs.

Challenges: manual data abstraction from EHRs; advancing work on “promising practices”; developing leaders; clinical decision support via EHRs.

Overall: ongoing learning using feedback loops is rate-limited due to a variety of factors.

Summary About Infrastructure
• Belief: a new socio-technical infrastructure is needed
• The Common Loop Assay helps identify its requirements
• Learning communities require infrastructure for (sketched below):
  1) EHR data abstraction
  2) Knowledge representation in computable forms
  3) Integration of tailored advice messages in EHRs
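As the smallest possible illustration of requirements 2) and 3), here is a computable advice rule paired with a message. The format is invented for this sketch, not an actual LHS knowledge representation standard; the advice text is the recommendation quoted later in the Landis-Lewis talk.

```python
# Invented example of a computable advice rule: a machine-checkable condition
# paired with an advice message an EHR integration could display. The format
# is illustrative, not an actual LHS standard.
rule = {
    "id": "weight-recording-reminder",
    "condition": lambda encounter: encounter.get("weight_kg") is None,
    "advice": "Record weight in kg to the nearest 100g at every visit.",
}

encounter = {"patient_id": 123, "weight_kg": None}  # abstracted from the EHR
if rule["condition"](encounter):
    print(rule["advice"])  # tailored advice message surfaced in the EHR
```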

 

LHS.medicine.umich.edu

Innovating audit and feedback using message tailoring models for learning health systems

Zach Landis-Lewis, PhD, MLIS

Asst. Prof. of Learning Health Sciences

A learning cycle for improving health

Message tailoring


Image source: Austrian JS, Adelman JS, Reissman SH, Cohen HW, Billett HH. The impact of the heparin-induced thrombocytopenia (HIT) computerized alert on provider behaviors and patient outcomes. J Am Med Inform Assoc. 2011 Nov-Dec;18(6):783-8.


Image source: Gerber JS, Prasad PA, Fiks AG, Localio AR, Grundmeier RW, Bell LM, Wasserman RC, Keren R, Zaoutis TE. Effect of an outpatient antimicrobial stewardship intervention on broad-spectrum antibiotic prescribing by primary care pediatricians: a randomized trial. JAMA. 2013 Jun 12;309(22):2345-52.


Image source: Austrian JS, Adelman JS, Reissman SH, Cohen HW, Billett HH. The impact of the heparin-induced thrombocytopenia (HIT) computerized alert on provider behaviors and patient outcomes. J Am Med Inform Assoc. 2011 Nov-Dec;18(6):783-8.

TMI (Too much information)?

➔ Alert fatigue and information overload are commonplace in clinical settings
➔ Non-actionable information is routinely sent to healthcare professionals
➔ We lack knowledge about how to optimize the delivery of messages to improve care

https://flic.kr/p/5q67Vu


Menus of theoretical constructs
● Capability, Opportunity, Motivation, and Behavior (COM-B) (Michie et al, 2011)
● Theoretical Domains Framework (TDF) (Michie et al, 2005)

 

Many performance feedback message types

[Slide images: several message formats presenting the same performance score, 48%]

Message elements can be specified

[Three example messages, each showing the current score (48%): one with performance history (trend), one with peer comparison, one with a goal]

Message elements relate to information needs

[The same example messages, annotated with the information needs they address: "How am I doing?", "What has changed?", "How do I compare?"]


Relate message elements to action

[Diagram: message → mechanism of action? → barrier to improvement]

Identifying mechanisms of action

[Diagram adds EMR data: can data support inference about mechanisms?]

[Diagram adds: a menu-based choice tool for clinical supervisors]

Objectives
A. To establish proof-of-concept for a message tailoring system
B. To understand the potential impact of message tailoring on clinical performance

Setting: Why Malawi?
●  Optimal care is well-defined for antiretroviral therapy
●  Care is provided by non-physician clinicians who use a nationally standardized guideline
●  High staff turnover increases the potential impact of continuous interventions

Setting: Malawi’s National ART EMR
●  Used in 66 sites for provision of antiretroviral therapy
●  Developed by Baobab Health Trust
●  Ministry of Health partnership, supported by CDC

Setting: HIV/AIDS care feedback loop
[Diagram, built up across several slides: a clinic team cares for a patient population and records data in the EMR; the data flow to the Ministry of Health Department of HIV and AIDS as quarterly reports; the Ministry returns quarterly feedback, and also provides training, local supervision, and guidelines to the clinic team. The final build adds individualized feedback.]

Methods: Study design
1.  Measure clinical performance
2.  Model of feedback mechanisms
3.  Create a rule-based message tailoring process
4.  Generate tailored, prioritized messages
5.  Analyze performance data and messages
[Presented one step at a time across slides, with steps grouped as Inputs (1-2), System component development (3-4), and Outputs (5).]

Methods: Measure clinical performance
1.  Record pediatric patient height
2.  Record patient weight
3.  Provide cotrimoxazole preventative therapy (CPT)
4.  Classify progression of AIDS at treatment initiation (WHO staging)

Conditions AND Actions
Each measure pairs a condition with an action, each counted by a SQL query:
●  Condition: patient has an encounter (SQL query)
●  Action: weight was recorded (SQL query)
Recommendation: “Record weight in kg to the nearest 100g at every visit.”
Example: weight recorded at 17 of 26 eligible encounters: 17/26 = 65%

Methods: Measure clinical performance
●  Analyzed two years of de-identified EMR data from 11 ART clinics
●  Approved by Pitt IRB and Malawi NHSRC
●  Ruby, MySQL, R statistical software
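A minimal sketch of the condition/action measurement pattern, using an in-memory SQLite table. The table name, columns, and queries are invented for this illustration; the study itself used Ruby, MySQL, and R, and its actual schema and SQL are not shown in the slides.

```python
# Hypothetical condition/action performance measure, illustrated with sqlite3.
# Table name and columns are invented for this sketch; the study's actual
# schema and SQL queries are not given in the slides.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE encounters (id INTEGER, weight_kg REAL)")
conn.executemany("INSERT INTO encounters VALUES (?, ?)",
                 [(i, 62.5 if i % 3 else None) for i in range(26)])

# Condition: patient has an encounter.
condition = conn.execute("SELECT COUNT(*) FROM encounters").fetchone()[0]
# Action: weight was recorded at that encounter.
action = conn.execute(
    "SELECT COUNT(*) FROM encounters WHERE weight_kg IS NOT NULL"
).fetchone()[0]

print(f"{action}/{condition} = {action / condition:.0%}")  # 17/26 = 65%
```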


Methods: Model of feedback mechanisms
[Table, built up across several slides; rows as extracted, with column alignment partly ambiguous in the source:]
Sample of TDF constructs (grouped under Capability, Opportunity, Motivation): domain knowledge; procedural knowledge; memory/attention; material resources; social pressure; self-efficacy
Barrier to improving care: awareness of guideline; lack of knowledge of EMR use; awareness of performance; lack of resources; peer pressure and social norms; beliefs about capability
Hypothetical mechanism of action: feedback changes awareness/knowledge; none; feedback influences perception of ability
Potential impact: high; low; conditional on message
Features of performance history (data): consistently low individual performance; no prior feedback provided; low team performance; low individual performance
Tailoring approach: current score and guideline; current score, training recommended; peer comparison; withhold individualized feedback; self-comparison
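One way to read the table above is as a lookup from observed performance-history features to a tailoring approach. A minimal sketch follows; because the column alignment in the original slide is ambiguous, the feature-to-approach pairings below are a best-effort reading, not an authoritative rendering of the study's model.

```python
# Best-effort rendering of the feedback-mechanism model as a lookup table.
# Column alignment in the source slide is ambiguous, so treat each pairing
# as an assumed reading rather than the study's exact model.
MODEL = {
    "consistently low individual performance": "current score and guideline",
    "no prior feedback provided": "current score, training recommended",
    "low team performance": "peer comparison",
    "low individual performance": "self-comparison",
}

def tailoring_approach(feature):
    """Map an observed performance-history feature to a tailoring approach."""
    # "Withhold individualized feedback" is the remaining approach in the
    # table; here it serves as the default when no feature matches.
    return MODEL.get(feature, "withhold individualized feedback")

print(tailoring_approach("low team performance"))  # -> peer comparison
```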


Methods: Message tailoring process
[Table, built up across several slides:]
Step 1, Identify performance features: method = feature classification; data source = individual clinician performance data; data type = true/false; 11 rules.
Step 2, Infer barrier presence: method = scoring rule; data source = performance features; data type = score; 5 rules.
Step 3, Assess message relevance: method = scoring rule; data source = performance features; data type = score; 6 rules.
Step 4, Prioritize messages: method = scoring rule; data source = barrier presence and message relevance scores; data type = score; 13 rules.

Example rule (step 1): if the recipient has consistently performed below 50%, indicate that "consistently low performance" is present.
Example rule (step 2): if the recipient has a 10% or larger performance gap relative to peers, then increase the score for capability- and motivation-associated barriers.
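A minimal end-to-end sketch of the four-step pipeline in Python. The two example rules above are implemented as given; every other threshold, weight, and message name is an assumption for illustration, not the study's actual rule set (which comprised 35 rules in total per the table).

```python
# Sketch of the four-step rule-based tailoring pipeline. The two rules marked
# "from the slides" are as given; all other thresholds, weights, and message
# names are illustrative assumptions, not the study's actual rules.
from dataclasses import dataclass

@dataclass
class Clinician:
    scores: list       # monthly performance scores, 0.0-1.0
    peer_scores: list  # peers' most recent scores

def identify_features(c):
    """Step 1: classify true/false performance features."""
    top_two = sorted(c.peer_scores, reverse=True)[:2]
    peer_avg = sum(top_two) / len(top_two)
    return {
        # From the slides: consistently performed below 50%.
        "consistently_low": all(s < 0.50 for s in c.scores),
        # From the slides: 10% or larger gap relative to peers.
        "peer_gap": (peer_avg - c.scores[-1]) >= 0.10,
    }

def infer_barriers(features):
    """Step 2: score barrier presence from performance features."""
    barriers = {"capability": 0, "motivation": 0}
    if features["peer_gap"]:          # from the slides: gap raises both
        barriers["capability"] += 1
        barriers["motivation"] += 1
    if features["consistently_low"]:  # assumed additional scoring rule
        barriers["capability"] += 1
    return barriers

def assess_relevance(features, barriers):
    """Step 3: score candidate message formats (illustrative weights)."""
    return {
        "current score and guideline": barriers["capability"],
        "peer comparison": barriers["motivation"] + int(features["peer_gap"]),
        "self-comparison": int(features["consistently_low"]),
    }

def prioritize(relevance):
    """Step 4: rank message formats by relevance score, highest first."""
    return sorted(relevance, key=relevance.get, reverse=True)

clinician = Clinician(scores=[0.42, 0.45, 0.48], peer_scores=[0.70, 0.65, 0.50])
features = identify_features(clinician)
print(prioritize(assess_relevance(features, infer_barriers(features))))
```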


Methods: Analysis of tailoring results
●  Performance gaps: a difference of >10% between an individual and the average of the two top-performing peers (e.g., if the top two peers average 75% and an individual scores 60%, the 15-point difference counts as a gap)
●  Variability of top-priority message formats

[Example feedback formats for a clinician scoring 48%: current score; self-comparison; peer comparison; withhold feedback ("[No feedback]"); prioritized combination; no prioritization]

We measured individual performance retrospectively over 24 months at 11 clinic sites.

Results


Results: Average clinic performance

Results: Height recording performance


Results: CPT prescribing performance


Results: Performance gaps
Average monthly total of >=10% gaps in performance between an individual and their top-performing peers

Results: Prioritized message variability


Discussion
●  Message tailoring opportunities appear to be routine at a national level
●  Our findings suggest that message optimization could reduce information overload
●  Data quality problems may account for low performance, but these may be reduced using feedback
●  The model of feedback influence was generalized for tasks in this context
●  Increased task specification may increase tailoring opportunities

Conclusion
●  This work demonstrates a proof-of-concept performance feedback message tailoring system
●  We aim to refine this approach, and to implement and evaluate feedback message tailoring systems in Malawi
●  We view message tailoring systems as an essential component of learning health systems

Acknowledgements
●  Rebecca Crowley Jacobson, Gerry Douglas, Oliver Gadabu, Mwatha Bwanali, Harry Hochheiser, Matt Kam, and Susan Zickmund
●  Colleagues at Baobab Health Trust
●  Fogarty International Center #1R03TW009217-01A1
●  National Library of Medicine #5T15LM007059-22
●  University of Pittsburgh Center for Global Health, and the Department of Biomedical Informatics (DBMI)