Key is the balance between assuring implementation of core components of an intervention while allowing adaptation of non-essential components.
PRECIS-2 identifies two domains related to real-world use: flexibility (delivery) and flexibility (adherence).
How to achieve this balance is a challenge and an ongoing issue. Main approaches include:
data from prior implementation identifying the aspects most strongly associated with outcomes;
in the absence of such data, the theory of the intervention.
One approach that has been used is to standardize what needs to be done (for example, the 5 A's), but to allow flexibility in how, and to some extent who, does this.
There are many challenges in real-world application and pragmatic research on the implementation of evidence-based interventions and guidelines. This section focuses on the tension between rigorous fidelity to validated intervention components and what is feasible in real-world, and especially low-resource, settings. On one hand, the most common reason for failure to replicate positive intervention findings is failure to deliver the intervention as intended (Allen et al, 2012). On the other hand, as discussed earlier, many settings do not have the resources, training, or time to deliver interventions as evaluated in efficacy research.
How different is the flexibility in how the intervention is delivered from the flexibility likely in usual care?
How different is the flexibility in how participants must adhere to the intervention from the flexibility likely in usual care?
Often, satisfactorily resolving the tension between intervention fidelity and adaptation to specific settings and conditions is key to success in real-world use (Cohen et al, 2008). PRECIS-2 addresses this in two domains: flexibility (delivery) and flexibility (adherence). Flexibility (delivery) refers to how much the flexibility in delivering the intervention in the trial differs from that in usual care. For example, if there is a strict protocol with close monitoring and frequent feedback to staff, this would be considered an explanatory trial and rated a 1 or 2 on the 5-point PRECIS-2 scale.
Parallel issues are relevant at the participant or patient adherence level, called flexibility (adherence) in PRECIS-2. An intervention that closely monitors patients and has mechanisms to improve adherence, or that even removes patients from the study because of non-adherence, would be rated as very explanatory (1), whereas one that involves no more encouragement than would occur in usual care would be rated as very pragmatic (5).
One recommended approach for achieving balance between fidelity and adaptation has been to assure implementation of ‘core’ components of an intervention while allowing for customization of non-essential components (e.g., often the way a program is presented or what it is called).
Achieving the desired level of balance is admittedly a challenge and an ongoing issue throughout a trial. Ideally, one would have data on the components of an intervention most strongly associated with outcomes to provide guidance. In the absence of such data, the theory of the intervention can be used to identify core components. For example, in working with different healthcare systems to implement the 5 As model of self-management (Glasgow et al, 2003), we have emphasized the importance of delivering each of the 5 As (Ask, Advise, Agree, Assist, Arrange) as core components, but the specific forms used to counsel patients, and how these elements are integrated into patient flow, can be customized to the practice.
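The separation between standardized core components ("what") and adaptable delivery details ("how" and "who") can be sketched as a simple fidelity check. This is a hypothetical illustration only; the site names, adaptation labels, and function names are assumptions for the example, not part of any published protocol.

```python
from dataclasses import dataclass, field

# Core components are standardized across sites (the "what");
# delivery details (the "how" and "who") remain adaptable per site.
CORE_COMPONENTS = {"Ask", "Advise", "Agree", "Assist", "Arrange"}

@dataclass
class SiteProtocol:
    site: str
    delivered_components: set                        # which of the 5 As were delivered
    adaptations: dict = field(default_factory=dict)  # site-specific "how"/"who" choices

def fidelity_check(protocol: SiteProtocol) -> set:
    """Return any core components missing from a site's delivery."""
    return CORE_COMPONENTS - protocol.delivered_components

clinic = SiteProtocol(
    site="Clinic A",
    delivered_components={"Ask", "Advise", "Agree", "Assist"},
    adaptations={"Assist": "group class instead of 1:1 counseling"},
)
print(fidelity_check(clinic))  # the missing core component: {'Arrange'}
```

The point of the sketch is that the adaptations dictionary can vary freely across practices without triggering a fidelity flag, while omission of any of the 5 As does.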
Glasgow RE, Davis C, Funnell MM, Beck A. (2003) Implementing Practical Interventions to Support Chronic Illness Self-Management in Health Care Settings: Lessons Learned and Recommendations. Joint Commission Journal on Quality and Safety 29(11):563-574.
Jerry Krishnan, MD, PhD
Resolving the tension between intervention fidelity and adaptation is often the key to success.
Effective dissemination and implementation (D&I) of evidence-based interventions (EBIs) assume that program strategies and methods will be conducted with “fidelity.” Fidelity has been defined as the “extent to which the intervention was delivered as planned. It represents the quality and integrity of the intervention as conceived by the developers.” This chapter discusses the importance of fidelity for D&I research, proposes a framework for considering factors that influence fidelity, and describes strategies for producing high fidelity in interventions. It presents an example of a community-based intervention and various attempts to assess fidelity, highlighting several key factors, processes, and challenges. The chapter concludes by summarizing implications of fidelity and adaptation issues on intervention dissemination for practitioners, researchers, and policymakers. The goal is to add conceptual clarity regarding fidelity across the study design spectrum.
Background
Understanding the process by which research is translated into practice is limited. This study sought to examine how interventions change during implementation.
Methods
Data were collected from July 2005 to September 2007. A real-time and cross-case comparison was conducted, examining ten interventions designed to improve health promotion in primary care practices in practice-based research networks. An iterative group process was used to analyze qualitative data (survey data, interviews, site visits, and project diary entries made by grantees approximately every 2 weeks) and to identify intervention adaptations reported during implementation.
Results
All interventions required changes as they were integrated into practice. Modifications differed by project and by practice, and were often unanticipated. Three broad categories of changes were identified and include modifications undertaken to accommodate practices' and patients' circumstances as well as personnel costs. In addition, research teams played a crucial role in fostering intervention uptake through their use of personal influence and by providing motivation, retraining, and instrumental assistance to practices. These efforts by the research teams, although rarely considered an essential component of the intervention, were an active ingredient in successful implementation and translation.
Conclusions
Changes are common when interventions are implemented into practice settings. The translation of evidence into practice will be improved when research design and reporting standards are modified to help quality-improvement teams understand both these adaptations and the effort required to implement interventions in practice.
The ‘5As’ model of behavior change provides a sequence of evidence-based clinician and office practice behaviors (Assess, Advise, Agree, Assist, Arrange) that can be applied in primary care settings to address a broad range of behaviors and health conditions. Although the 5As approach is becoming more widely adopted as a strategy for health behavior change counseling, practical and standardized assessments of 5As delivery are not widely available. This article provides clinicians and researchers with alternatives for assessment of 5As implementation for both quality improvement, and for research and evaluation purposes, and presents several practical tools they may wish to use. Sample instruments for tracking delivery of the 5As and related tools that are in the public domain are provided to facilitate integration of self-management support into clinical care. We discuss the strengths and limitations of the various assessment approaches. Promising and practical measures to assess the 5As exist for both quality improvement and research purposes. Additional validation is needed on almost all current procedures, and both clinicians and researchers are encouraged to use these instruments and share the resulting data.
Background
Evidence-based interventions are frequently modified or adapted during the implementation process. Changes may be made to protocols to meet the needs of the target population or address differences between the context in which the intervention was originally designed and the one into which it is implemented [Addict Behav 2011, 36(6):630–635]. However, whether modification compromises or enhances the desired benefits of the intervention is not well understood. A challenge to understanding the impact of specific types of modifications is a lack of attention to characterizing the different types of changes that may occur. A system for classifying the types of modifications that are made when interventions and programs are implemented can facilitate efforts to understand the nature of modifications that are made in particular contexts as well as the impact of these modifications on outcomes of interest.
Methods
We developed a system for classifying modifications made to interventions and programs across a variety of fields and settings. We then coded 258 modifications identified in 32 published articles that described interventions implemented in routine care or community settings.
Results
We identified modifications made to the content of interventions, as well as to the context in which interventions are delivered. We identified 12 different types of content modifications, and our coding scheme also included ratings for the level at which these modifications were made (ranging from the individual patient level up to a hospital network or community). We identified five types of contextual modifications (changes to the format, setting, or patient population that do not in and of themselves alter the actual content of the intervention). We also developed codes to indicate who made the modifications and identified a smaller subset of modifications made to the ways that training or evaluations occur when evidence-based interventions are implemented. Rater agreement analyses indicated that the coding scheme can be used to reliably classify modifications described in research articles without overly burdensome training.
Conclusions
This coding system can complement research on fidelity and may advance research with the goal of understanding the impact of modifications made when evidence-based interventions are implemented. Such findings can further inform efforts to implement such interventions while preserving desired levels of program or intervention effectiveness.
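The dimensions of the coding system described above (content vs. contextual modification, the level at which the change was made, and who made it) lend themselves to a simple record structure. The sketch below is a hypothetical illustration; the specific enum values and field names are assumptions for the example and do not reproduce the published taxonomy of 12 content and 5 contextual modification types.

```python
from dataclasses import dataclass
from enum import Enum

class ModificationKind(Enum):
    CONTENT = "content"        # alters the content of the intervention itself
    CONTEXTUAL = "contextual"  # changes format, setting, or population, not content
    TRAINING = "training"      # changes to how training occurs
    EVALUATION = "evaluation"  # changes to how evaluation occurs

class Level(Enum):
    INDIVIDUAL_PATIENT = 1
    CLINIC = 2
    HOSPITAL_NETWORK = 3
    COMMUNITY = 4

@dataclass
class CodedModification:
    description: str
    kind: ModificationKind
    level: Level   # at what level the modification was made
    made_by: str   # who made the modification

mod = CodedModification(
    description="Shortened counseling sessions from 60 to 30 minutes",
    kind=ModificationKind.CONTENT,
    level=Level.CLINIC,
    made_by="frontline clinician",
)
print(mod.kind.value)  # content
```

Coding each observed modification as one such record would let raters tabulate which kinds of changes occur at which levels, and relate those counts to outcomes, in the spirit of the classification work summarized above.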