
Chapter 4: How Do I Know If I Am Successful?

Evaluation and measurement

"A key gap and opportunity in the implementation science field is the development and identification of practical, validated measures that assess key implementation processes and outcomes. Harmonized use of such standard measures across content areas and studies would greatly help advance the science of D&I."
— Russell Glasgow, PhD
Professor, Family Medicine
Associate Director, Colorado Health Outcomes Program
University of Colorado School of Medicine

Learning Objectives:

  • To compare and contrast study evaluation approaches in D&I
  • To identify key metrics in D&I

Key Terms For D&I Evaluation Design

Internal validity. n.

Internal validity is concerned with the ability to draw causal inferences; it reflects the extent to which a study minimizes confounding and systematic error – for example, in patient selection and measurement.

External validity. n.

External validity is concerned with the generalizability or real-world applicability of findings from a study; it determines whether the results and inferences from the study can be applied to the target population and settings.

Pre-Post studies. n.

A pre-post study compares outcomes measured after an intervention with those measured before it, and then seeks to attribute the change to the intervention. Such studies are intuitive to conduct. The problem is that, without reference to a comparison group, they cannot answer whether the changes would have occurred anyway (see the sketch at the end of these key terms).

Observational comparative effectiveness studies. n.

Observational studies seek to draw inferences about the possible effect of an intervention as adopted in real-world practice. Observational studies can be quicker to conduct, more practical, and more cost efficient than clinical trials. External validity is usually strong but studies suffer from confounding due to non-random allocation. In addition, observational studies can rely on data collected for non-research purposes – for example, billing data (administrative claims) or clinical data (electronic health records) – which can also limit internal validity.

Explanatory clinical trial. n.

An explanatory clinical trial is a specialized randomized experiment in a specialized population under optimal conditions – often referred to as a randomized controlled trial (RCT). In general, explanatory trials seek to maximize internal validity, often at the expense of external validity.

Pragmatic clinical trials. n.

Pragmatic (or practical) clinical trials are randomized trials that are concerned with producing answers to questions faced by decision makers. Pragmatic trials seek to increase the external validity of the findings while maintaining strong internal validity. Tunis and colleagues defined them as studies that (1) select clinically relevant alternative interventions to compare, (2) include a diverse population of study participants, (3) recruit participants from heterogeneous practice settings, and (4) collect data on a broad range of health outcomes.

Source: Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290(12):1624-1632.
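
To make the pre-post limitation concrete, here is a minimal sketch in Python contrasting a naive pre-post estimate with a difference-in-differences estimate that uses a comparison group. All numbers are invented for illustration and are not drawn from any study cited in this chapter:

    # Hypothetical illustration: why pre-post change alone can mislead.
    # All numbers are invented for demonstration.

    # Mean outcome (e.g., % of patients screened) before and after.
    intervention_pre, intervention_post = 40.0, 60.0  # sites receiving the intervention
    comparison_pre, comparison_post = 42.0, 54.0      # similar sites, no intervention

    # A pre-post design attributes the whole change to the intervention.
    pre_post_estimate = intervention_post - intervention_pre
    print(f"Pre-post estimate: +{pre_post_estimate:.1f} points")  # +20.0

    # The comparison group reveals a secular trend that would have occurred
    # anyway; difference-in-differences subtracts it out.
    secular_trend = comparison_post - comparison_pre
    did_estimate = pre_post_estimate - secular_trend
    print(f"Secular trend:     +{secular_trend:.1f} points")      # +12.0
    print(f"DiD estimate:      +{did_estimate:.1f} points")       # +8.0

Here the pre-post design would overstate the intervention effect by the 12-point secular trend that only the comparison group makes visible.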

Two Guidelines for Evaluations Using Observational Data


OBSERVATIONAL COMPARATIVE EFFECTIVENESS RESEARCH

AHRQ: Developing a Protocol for Observational Comparative Effectiveness Research: A User’s Guide. This guide provides key information for designing comparative effectiveness research protocols to identify both minimal standards and best practices.
Visit The Website

PATIENT-CENTERED OUTCOMES RESEARCH METHODOLOGY

Developing and improving the science and methods of patient-centered outcomes research (PCOR) is one of PCORI’s primary efforts: “better methods will produce more valid, trustworthy, and useful information that will lead to better healthcare decisions, and ultimately to improved patient outcomes.” PCORI has issued a draft methodology report that is currently being modified based on stakeholder feedback.
Visit The Website


PRECIS Evaluation Criteria

The Pragmatic-Explanatory Continuum Indicator Summary (PRECIS) was developed by Thorpe and colleagues to measure where a given study falls on the continuum between pragmatic and explanatory designs. It uses ten domains plotted on a “spoke-and-wheel” diagram.
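
As an illustration of the wheel, the sketch below plots one hypothetical trial’s domain scores with matplotlib. The domain labels are paraphrased from Thorpe et al., and the hub-to-rim scale and all scores are invented for demonstration:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical PRECIS-style wheel: ten domains, each scored here from
    # 0 (hub: very explanatory) to 4 (rim: very pragmatic). Labels are
    # paraphrased and the scores are invented for illustration.
    domains = [
        "Participant eligibility", "Intervention flexibility",
        "Practitioner expertise (exp.)", "Comparison intervention",
        "Practitioner expertise (comp.)", "Follow-up intensity",
        "Primary outcome", "Participant compliance",
        "Practitioner adherence", "Primary analysis",
    ]
    scores = [3, 4, 3, 2, 3, 1, 4, 2, 2, 3]

    # Place one spoke per domain and close the polygon.
    angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
    angles += angles[:1]
    values = scores + scores[:1]

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    ax.plot(angles, values, linewidth=2)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(domains, fontsize=7)
    ax.set_ylim(0, 4)
    ax.set_title("PRECIS-style wheel (hypothetical trial)")
    plt.tight_layout()
    plt.show()

A trial whose polygon hugs the rim is pragmatic on most domains; one collapsed toward the hub is explanatory.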


Source: Journal of Clinical Epidemiology, Vol 62, Thorpe, KE et al., A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers, Pages 464-475, Copyright (2009), with permission from Elsevier.
Listen to More Here
Click Here for Key References

Examples: D&I Study Designs

Case Example: Mixed Methods Multiple Case Study

Context:

Improving quality in children’s mental health and social service settings.

 

Objective:

This study is designed to inform efforts to develop more effective implementation strategies by fully describing the implementation experiences of a sample of community-based organizations that provide mental health services to youth in one Midwestern city.

Evaluation Design:

A mixed methods multiple case study of seven children’s social service organizations in one Midwestern city in the United States (the control group of a larger randomized controlled trial). Qualitative data include semi-structured interviews with organizational leaders and a review of documents (e.g., implementation and quality improvement plans, program manuals) to understand implementation decision-making and the specific implementation strategies used to implement new programs and practices. Focus groups with clinicians explore their perceptions of a range of implementation strategies.

This qualitative work informs the development of a Web-based survey that will assess the perceived effectiveness, relative importance, acceptability, feasibility, and appropriateness of implementation strategies from the perspective of both clinicians and organizational leaders.

The Organizational Social Context measure will be used to assess organizational culture and climate.
Source:
Powell BJ, Proctor EK, Glisson CA, Kohl PL, Raghavan R, Brownson RC, Stoner BP, Carpenter CR, Palinkas LA. A mixed methods multiple case study of implementation as usual in children’s social service organizations: study protocol. Implement Sci. 2013;8:92.

Case Example: Pragmatic Implementation Trial in Primary Care

Context:

Understanding patient-centered health behavior and psychosocial issues in primary care.

 

Objective:

Our goal is to design a scientifically rigorous and valid pragmatic trial to test whether primary care practices can systematically implement the collection of patient-reported information and provide patients needed advice, goal setting, and counseling in response.

Evaluation Design:

A cluster-randomized delayed intervention trial within the My Own Health Report (MOHR) study. Nine pairs of diverse primary care practices are randomized to early intervention or to delayed intervention four months later. The intervention consists of fielding the MOHR assessment and subsequently providing needed counseling and support for patients presenting for wellness or chronic care.

Stakeholder groups are engaged throughout the study design to account for local resources and characteristics.

Study outcomes include the intervention reach (percent of patients offered and completing the MOHR assessment), effectiveness (patients reporting being asked about topics, setting change goals, and receiving assistance in early versus delayed intervention practices), contextual factors influencing outcomes, and intervention costs.
Source:
Krist AH, Glenn BA, Glasgow RE, Balasubramanian BA, Chambers DA, Fernandez ME, Heurtin-Roberts S, Kessler R, Ory MG, Phillips SM, Ritzwoller DP, Roby DH, Rodriguez HP, Sabo RT, Sheinfeld Gorin SN, Stange KC; MOHR Study Group. Designing a valid randomized pragmatic primary care implementation trial: the my own health report (MOHR) project. Implement Sci. 2013;8:73.

Case Example: Implementation in a VA Setting

Context:

Heart failure is the primary reason for discharge from the VA medical service, and the readmission rate is high.

The Hospital to Home (H2H) Excellence in Transitions Initiative is a new national campaign to reduce preventable readmissions for patients recently hospitalized with a cardiovascular condition (www.H2Hquality.org).

 

Objectives:

  1. To determine if VA facility enrollment in H2H results in improved care for VA patients with heart failure.
  2. To determine barriers and facilitators to a) enrolling facilities in H2H, and for those facilities enrolled, b) adopting the H2H interventions.
  3. To evaluate the use of the VA Heart Failure (HF) Network to aid in implementing the H2H initiative in a randomized trial.

Implementation Design:

A total of 122 VA facilities with <100 discharges were randomized into intervention and control groups. From Month 1 through Month 6, implementation of the VA H2H initiative was facilitated at all intervention facilities.

All intervention facilities were asked to participate by (1) enrolling their facility at the H2H website as a commitment to the initiative and (2) initiating projects based on the VA H2H initiative. In Month 6, surveys were sent to both the intervention and control facilities to assess participation in the VA H2H initiative. From Month 7 to Month 12, the VA H2H initiative is being facilitated at the remaining 61 control facilities.

Case Example: Implementation Design in an HMO

Context:

Implementing and disseminating an evidence-based model of well-child care (WCC) that includes developmental and preventive services recommended by the American Academy of Pediatrics. 

Implementation Design:

Twenty-first Century WCC is a parent-centered, team-based, primary care model that combines online pre-visit assessments—completed by parents and caregivers—with vaccinations and anticipatory guidance.
Nurses, nurse practitioners, developmental specialists, and pediatricians all play roles in the WCC model.
Patient and clinician interaction, health records, and resources are all facilitated through a Web-based diagnostic, management, tracking, and resource information tool.
Unlike innovations that are embedded only in technical systems, validated models of team-based health care have multiple components that must be made compatible with complex sociotechnical systems. Interpersonal communication, work, coordination, and judgment are key processes that affect implementation quality.  Implementation can involve tailoring to a particular site and customizing either the model or the organizational context to accommodate it.
Source: Beck A, Bergman DA, Rahm AK, Dearing JW, Glasgow RE. Using implementation and dissemination concepts to spread 21st-century well-child care at a health maintenance organization. Perm J. 2009 Summer;13(3):10-18.

Considerations for Practical D&I Measures

Required

  • Important to stakeholders
  • Burden is low to moderate
  • Sensitive to change
  • Actionable

Additional

  • Broadly applicable, has norms to interpret
  • Low probability of harm
  • Addresses public health goal(s)
  • Related to theory or model
  • “Maps” to “gold standard” metric or measure

Source: Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013; 45(2):237-43.

Outcomes for Implementation Science


“Improvements in consumer well-being provide the most important criteria for evaluating both treatment and implementation strategies—for treatment research, improvements are examined at the individual client level whereas improvements at the population-level (within the providing system) are examined in implementation research. However . . . implementation research requires outcomes that are conceptually and empirically distinct from those of service and clinical effectiveness.”
Source: Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65-76.
Implementation Outcomes

  • Acceptability
  • Adoption
  • Appropriateness
  • Costs
  • Feasibility
  • Penetration
  • Sustainability

Service Outcomes

  • Efficiency
  • Safety
  • Effectiveness
  • Equity
  • Patient-Centeredness
  • Timeliness

Client Outcomes

  • Satisfaction
  • Function
  • Symptomatology

Resources for D&I Measures

Seattle Implementation Research Collaborative Instrument Review Project: A Systematic Review of Dissemination and Implementation Science Instruments

The overarching aim of the SIRC Instrument Review Project (IRP) is to conduct a systematic review of D&I instruments, drawing not only from published work but also from existing D&I research networks to obtain instruments in earlier phases of development.

Three primary outcomes for this project series include:

(1) a comprehensive library of D&I instruments measuring the implementation outcomes identified by Proctor and colleagues (2010) and organized by the Consolidated Framework for Implementation Research (CFIR; Damschroder et al., 2009) to make available to SIRC members;
(2) a rating system reflecting the degree of empirical validation of instruments, adapted from the evidence-based assessment (EBA) work of Hunsley and Mash (2008) and Terwee et al (2012);
(3) a consensus battery of instruments decided upon by D&I expert task force members using the EBA criteria to guide future D&I research efforts.
To date, 450 instruments have been identified. Rating of these measures using the above-described criteria is ongoing.
Click Here to Learn More

Grid-Enabled Measures D&I Project

Grid-Enabled Measures (GEM) is a collaborative, web-based activity, hosted on a National Cancer Institute portal, that uses a wiki platform to focus discussion and engage the research community. Its goal is to enhance the quality and harmonization of measures for implementation science in health-related research and practice.

The initiative has provided information about 130 different implementation science measures across 74 constructs, their associated characteristics, and a rating of these measures for quality and practicality.

This resource and ongoing activity has the potential to advance the quality and harmonization of implementation science measures and constructs.
Click Here to Learn More

NIH/VA Working Meeting on Reporting and Measures

A diverse group of experts from the United States and Canada gathered in October 2013 to discuss issues around reporting and measurement in D&I. In contrast to the large annual meetings held over the past five years, the 2013 meeting was a smaller, invitation-only gathering of Federal and non-Federal experts, focused on the development of a series of strategic review and position papers articulating needed capacity development for the dissemination and implementation research field.

The recently released NIH Funding Opportunity Announcements on Dissemination and Implementation Research in Health (PARs 13-054; 13-055; 13-056) articulate research priorities for the next set of high impact research studies.

As a companion to this trans-NIH call, the NIH D&I Research working group proposed a working meeting to advance three key areas of focus, including 1) D&I Measure Development and Standardized Reporting; 2) D&I Research Methods and Study Design; and 3) D&I Research Training. Each of these foci was addressed in separate complementary meetings.

Various products including position and agenda setting papers, and a website supporting D&I model selection and measurement are being developed and will be available in early to mid-2014.

Cost Measures for D&I

Information about the cost of an intervention can greatly influence whether it is considered, adopted, implemented, and sustained. Cost measures can refer to:

Cost effectiveness ratio

The incremental cost of obtaining an incremental unit of a health effect (a worked sketch follows these definitions).

Implementation cost

The proportion of the intervention resources and costs that would be required to implement the intervention in a different setting or population, calculated as a function of the intervention costs and associated sensitivity analyses.

Intervention cost

The resource costs of conducting and participating in an intervention. Does not include research or development costs.

Recruitment cost

The costs of recruiting subjects to an intervention or pragmatic trial.
Source: Ritzwoller DP, Sukhanova A, Gaglio B, Glasgow RE. Costing behavioral interventions: a practical guide to enhance translation. Ann Behav Med. 2009;37(2):218-227.
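
The cost-effectiveness ratio above is simple arithmetic: incremental cost divided by incremental effect. A minimal Python sketch, with an invented program and invented numbers:

    # Incremental cost-effectiveness ratio (ICER): the incremental cost of
    # obtaining an incremental unit of a health effect.

    def icer(cost_new, cost_usual, effect_new, effect_usual):
        """Return (C_new - C_usual) / (E_new - E_usual). The effect unit is
        whatever the evaluation measures (e.g., QALYs, cases detected)."""
        delta_effect = effect_new - effect_usual
        if delta_effect == 0:
            raise ValueError("No incremental effect; the ICER is undefined.")
        return (cost_new - cost_usual) / delta_effect

    # Hypothetical: the program costs $250 more per patient than usual care
    # and yields 0.02 additional QALYs per patient.
    print(f"${icer(1250, 1000, 0.52, 0.50):,.0f} per QALY gained")  # $12,500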

VA QUERI Economic Analysis Guidelines

A QUERI economic analysis measures costs and often outcomes, and places this information in context. A QUERI economic analysis may be a study of the relationship between quality and production efficiency, the determination of the cost of an intervention, an evaluation of the impact of an intervention on total health care costs, or a cost-effectiveness analysis.
A guide was developed for health services researchers:
Visit The VA QUERI Economic Analysis Guidelines

Practice Applying the RE-AIM Framework for Evaluating Outcomes

The RE-AIM framework is designed to enhance the quality, speed, and public health impact of efforts to translate research into practice. It evaluates such efforts across five dimensions (a small scoring sketch follows the list):

Reach

your intended target population

Efficacy

or effectiveness

Adoption

by target staff, settings, or institutions

Implementation

consistency, costs, and adaptation made during delivery

Maintenance

of intervention effects in individuals and settings over time
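
Several RE-AIM dimensions reduce to simple proportions over counts available from project records. A minimal bookkeeping sketch; the field names and counts are hypothetical:

    # Hypothetical RE-AIM bookkeeping: three dimensions as simple proportions.
    project = {
        "eligible_patients": 4000,      # denominator for reach
        "participating_patients": 1400,
        "targeted_clinics": 24,         # denominator for adoption
        "adopting_clinics": 15,
        "planned_sessions": 600,        # denominator for implementation
        "delivered_sessions": 520,
    }

    reach = project["participating_patients"] / project["eligible_patients"]
    adoption = project["adopting_clinics"] / project["targeted_clinics"]
    implementation = project["delivered_sessions"] / project["planned_sessions"]

    print(f"Reach:          {reach:.0%} of eligible patients")    # 35%
    print(f"Adoption:       {adoption:.0%} of targeted clinics")  # 62%
    print(f"Implementation: {implementation:.0%} of planned sessions")  # 87%

Effectiveness and maintenance depend on outcome and follow-up data rather than simple counts, so they require more than this kind of tally.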

RE-AIM Tools

A checklist has been developed and can be accessed along with other evaluation planning tools.
Download The Checklist & Get Started

Resources

A Method for Achieving Covariate Balance in Cluster Randomized Trials

Miriam Dickinson, PhD
Slides | Video

Provides a tutorial on the use of the balance criterion approach to achieve covariate balance in cluster-randomized trials (a simplified sketch follows).

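The core idea can be sketched in a few lines: score every candidate allocation of clusters to arms by a covariate-imbalance criterion, keep the best-balanced subset, and randomize among those so assignment remains random. This is a simplified illustration with one invented cluster-level covariate and a bare-bones imbalance score, not the specific method presented in the talk:

    import itertools
    import random

    # Hypothetical cluster-level covariate (e.g., baseline screening rate, %)
    # for 8 clusters to be split 4 vs. 4. All values are invented.
    clusters = {"A": 32, "B": 45, "C": 51, "D": 38,
                "E": 60, "F": 41, "G": 55, "H": 47}

    def imbalance(arm1_ids):
        """Absolute difference in covariate means between the two arms."""
        arm1 = [clusters[c] for c in arm1_ids]
        arm2 = [v for c, v in clusters.items() if c not in arm1_ids]
        return abs(sum(arm1) / len(arm1) - sum(arm2) / len(arm2))

    # Score every 4-of-8 allocation, keep the best-balanced ~10%, and then
    # randomize among them so the assignment is still random.
    allocations = list(itertools.combinations(clusters, 4))
    allocations.sort(key=imbalance)
    candidates = allocations[: max(1, len(allocations) // 10)]

    random.seed(2024)  # reproducible demo only
    chosen = random.choice(candidates)
    print("Intervention arm:", sorted(chosen),
          f"(imbalance {imbalance(chosen):.2f})")

In practice the criterion typically combines several covariates (e.g., a weighted sum of squared differences in cluster means), and the candidate set is chosen so every cluster retains a reasonable probability of assignment to each arm.
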
Mixed Methods in Dissemination and Implementation Research

Karen Albright, PhD
Slides | Video

A discussion of the strengths of qualitative vs. quantitative data, common barriers to using mixed-methods approaches, three dimensions of methodological combination, and five mixed-methods designs with particular relevance for dissemination and implementation research.

Key Takeaways: Section 1


Internal Validity

The ability to draw causal inferences; the extent to which a study minimizes confounding and systematic error.

External Validity

The generalizability or real-world applicability of a study’s findings; determines whether the results and inferences from the study can be applied to the target population and settings.

Pre-Post Studies

Compares changes in outcomes following an intervention to pre-intervention levels and then seeks to attribute those changes to the intervention.

Observational Comparative Effectiveness

Studies which seek to draw inferences about the possible effect of an intervention as adopted in real-world practice.

Explanatory Clinical Trials

A specialized, randomized experiment in a specialized population under optimal conditions.

Pragmatic Clinical Trials

Randomized trials that are concerned with producing real world answers to questions faced by decision makers. Pragmatic trials seek to increase the external validity of the findings while maintaining strong internal validity.

Key Takeaways: Section 2


What are some important properties of practical D&I measures?

  • Patient-centered and relevant to stakeholders
  • Low burden for data collection
  • Sensitive to change
  • Actionable
  • Related to a D&I theory or model

Cost data can have a great impact on implementation. What sort of information should be included?

  • Cost effectiveness
  • Implementation 
  • Intervention
  • Recruitment
  • Sustainability

The RE-AIM framework is a useful evaluation tool. It measures:

  • Reach
  • Effectiveness
  • Adoption 
  • Implementation
  • Maintenance

Checklist To Get Started

  • What D&I evaluation designs are commonly used in your field? Where are the opportunities for strengthening their internal and external validity?

  • What D&I measures are commonly used in your field? Where are the opportunities for adding measures used by others?

  • Do you have the right analytic support on your team?

Read Chapter 5