To describe and apply statistical considerations to pragmatic trial research design
&
To review the PRECIS-2 domains of follow-up, primary outcome and primary analysis
This design strategy involves blending design components of clinical effectiveness and implementation research and is described by 3 hybrid types: Type 1 (testing a clinical intervention while gathering information on implementation), Type 2 (dual testing of the clinical intervention and the implementation strategy), and Type 3 (testing the implementation strategy while gathering information on the clinical intervention's impact).
Traditional clinical and implementation research have shared few design features; they differ, for example, in unit of analysis, typical unit of randomization, outcome measures, and targets of the intervention being tested. Hybrid designs are new, and the field is still evolving on how best to blend these design components. However, the information they provide could speed the translation of research findings into routine practice.
Intention-to-Treat (ITT)
Contextual Factors
ITT analysis includes every randomized subject, analyzed according to the treatment assigned at randomization. It ignores noncompliance, protocol deviations, withdrawal, and anything that happens after randomization. In this regard, it reflects usual-care practice.
Special Consideration: Addressing patient drop-out (data missingness) is particularly critical when using ITT in longitudinal studies. Thus, a natural tension exists between ensuring protocol adherence and minimizing follow-up burden.
Characteristics of the setting can affect both the implementation and the effectiveness of interventions. Analyses should explore potential moderators (effect modifiers) present at baseline using multilevel modeling with time × treatment × moderator interactions.
Special Consideration: Ensure that data on possible contextual factors are collected.
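The moderator analysis described above can be sketched as a mixed-effects model. The following is a minimal sketch using statsmodels' MixedLM on simulated data; all variable names (outcome, time, treatment, clinic_size, site) and effect sizes are hypothetical placeholders, not taken from any real trial.

```python
# Sketch of a moderator (effect-modifier) analysis: a multilevel model
# with a time x treatment x moderator interaction and a random
# intercept per site. Data are simulated; names and effect sizes are
# hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site, n_times = 20, 10, 3

df = pd.DataFrame({
    "site": np.repeat(np.arange(n_sites), n_per_site * n_times),
    "time": np.tile(np.arange(n_times), n_sites * n_per_site),
})
df["treatment"] = df["site"] % 2                              # site-level assignment
df["clinic_size"] = (df["site"] >= n_sites // 2).astype(int)  # baseline contextual factor
site_effect = rng.normal(0, 0.5, n_sites)                     # between-site variation
df["outcome"] = (
    0.5 * df["time"] * df["treatment"]
    + 0.3 * df["time"] * df["treatment"] * df["clinic_size"]  # moderation effect
    + site_effect[df["site"]]
    + rng.normal(0, 1, len(df))
)

# The three-way interaction coefficient is the moderation test of interest.
model = smf.mixedlm("outcome ~ time * treatment * clinic_size",
                    df, groups=df["site"]).fit()
print(model.params["time:treatment:clinic_size"])
```

If the three-way term is meaningful, the treatment-effect trajectory over time differs across levels of the baseline contextual factor, which is exactly the question a moderator analysis asks.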
Are one form of pragmatic research trial design that does not involve randomization
Involve randomization of the treatment or the intervention.
Designed by the NIH Collaboratory to provide a complete suite of information on how to understand, design, conduct, analyze & disseminate pragmatic clinical trials (PCTs).
An important challenge is the need to develop infrastructure to support pragmatic clinical trials, which compare interventions in usual practice settings and subjects.
The NIH Clinical and Translational Science Awards Consortium reported on five recommendations related to strengthening the research infrastructure for pragmatic clinical trials.
FRAMEWORK
Among other benefits, this approach (along with stakeholder consideration) will help to accelerate the eventual uptake of the research findings into practice!
Stepped wedge randomised trial designs involve sequential roll-out of an intervention to participants (individuals or clusters) over a number of time periods. By the end of the study, all participants will have received the intervention, although the order in which participants receive the intervention is determined at random. The design is particularly relevant where it is predicted that the intervention will do more good than harm (making a parallel design, in which certain participants do not receive the intervention unethical) and/or where, for logistical, practical or financial reasons, it is impossible to deliver the intervention simultaneously to all participants. Stepped wedge designs offer a number of opportunities for data analysis, particularly for modelling the effect of time on the effectiveness of an intervention. This paper presents a review of 12 studies (or protocols) that use (or plan to use) a stepped wedge design. One aim of the review is to highlight the potential for the stepped wedge design, given its infrequent use to date.
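The rollout logic described above (all participants eventually receive the intervention, crossover order randomized) can be sketched as a design matrix. The cluster and step counts below are arbitrary illustration values, not taken from any of the reviewed studies.

```python
# Sketch: build a stepped wedge rollout schedule. Rows are clusters,
# columns are time periods; 1 means the cluster has crossed over to
# the intervention. Cluster/step counts are illustrative only.
import numpy as np

def stepped_wedge_schedule(n_clusters, n_steps, rng=None):
    """Return an (n_clusters x n_steps+1) 0/1 matrix. The crossover
    order is randomized; once a cluster crosses over it stays exposed,
    and every cluster is exposed by the final period."""
    if rng is None:
        rng = np.random.default_rng()
    order = rng.permutation(n_clusters)          # random rollout order
    groups = np.array_split(order, n_steps)      # one group crosses per step
    sched = np.zeros((n_clusters, n_steps + 1), dtype=int)
    for step, members in enumerate(groups, start=1):
        sched[members, step:] = 1                # exposed from this step on
    return sched

# Column 0 is the baseline period (no cluster exposed); by the last
# column every cluster has received the intervention.
print(stepped_wedge_schedule(6, 3, np.random.default_rng(42)))
```

Each column adds one randomly chosen group of clusters to the intervention condition, which is what makes the design usable when simultaneous delivery is logistically or financially impossible.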
An important challenge in comparative effectiveness research is the lack of infrastructure to support pragmatic clinical trials, which compare interventions in usual practice settings and subjects. These trials present challenges that differ from those of classical efficacy trials, which are conducted under ideal circumstances, in patients selected for their suitability, and with highly controlled protocols. In 2012, we launched a 1-year learning network to identify high-priority pragmatic clinical trials and to deploy research infrastructure through the NIH Clinical and Translational Science Awards Consortium that could be used to launch and sustain them. The network and infrastructure were initiated as a learning ground and shared resource for investigators and communities interested in developing pragmatic clinical trials. We followed a three-stage process of developing the network, prioritizing proposed trials, and implementing learning exercises that culminated in a 1-day network meeting at the end of the year. The year-long project resulted in five recommendations related to developing the network, enhancing community engagement, addressing regulatory challenges, advancing information technology, and developing research methods. The recommendations can be implemented within 24 months and are designed to lead toward a sustained national infrastructure for pragmatic trials.
This article summarizes the three types of hybrid effectiveness-implementation designs and associated evaluation methods. It includes a discussion of how hybrid designs have the potential to enhance knowledge development and application of clinical interventions and implementation strategies in “real world” settings. The authors propose implications of hybrid designs for quality improvement research.
Traditionally, researchers think of knowledge development and application as a unidirectional, stepwise progression in which different questions are addressed in isolation. First, a randomized clinical trial (RCT) is deployed to determine if an intervention implemented under controlled conditions has efficacy in specific populations. Next, “effectiveness research” methods determine if the effect remains when implemented in less controlled conditions with broader populations. Finally, “implementation research” methods, such as cluster randomized controlled trials, are deployed to understand the best methods to introduce the intervention into practice. While systematic, this unidirectional approach can take a great deal of time from the original efficacy study design to the final conclusions about implementation, and conditions may change so that the original clinical and policy questions become less relevant [1]. Additionally, the unidirectional approach does not help us understand interaction effects between the intervention and the implementation strategy.
Hybrid designs simultaneously evaluate the impact of interventions introduced in real world settings (e.g. “effectiveness”), and the implementation strategy. Such designs enhance the ability to identify important intervention-implementation interactions, which inform decisions about optimal deployment and generalized impact, and may accelerate the introduction of valuable innovations into practice. This has implications for quality improvement researchers, who are often guiding the deployment and evaluating the impact of interventions in healthcare settings.
It is now well known that standard statistical procedures become invalidated when applied to cluster randomized trials in which the unit of inference is the individual. A resulting consequence is that researchers conducting such trials are faced with a multitude of design choices, including selection of the primary unit of inference, the degree to which clusters should be matched or stratified by prognostic factors at baseline, and decisions related to cluster subsampling. Moreover, application of ethical principles developed for individually randomized trials may also require modification. We discuss several topics related to these issues, with emphasis on the choices that must be made in the planning stages of a trial and on some potential pitfalls to be avoided.
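The key consequence described above — standard procedures are invalidated when individuals are analyzed but clusters are randomized — is usually quantified with the standard design effect, DE = 1 + (m − 1) × ICC, where m is the cluster size and ICC is the intracluster correlation coefficient. A small sketch with hypothetical numbers:

```python
# Sketch: variance inflation from cluster randomization when the unit
# of inference is the individual. The cluster size and ICC values
# below are hypothetical illustration numbers.

def design_effect(cluster_size, icc):
    """Standard design effect: DE = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual, cluster_size, icc):
    """Total n needed under cluster randomization, given the n an
    individually randomized trial would need."""
    return n_individual * design_effect(cluster_size, icc)

# With clusters of 20 and ICC = 0.05, the design effect is about 1.95,
# so a trial needing 300 individually randomized subjects needs
# roughly 585 under cluster randomization.
print(design_effect(20, 0.05))
print(clustered_sample_size(300, 20, 0.05))
```

Even a modest ICC inflates the required sample size substantially, which is why the unit-of-inference decision must be made in the planning stage, as the abstract above emphasizes.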
The stepped wedge design, under which all trial participants receive the intervention but the order in which the intervention is received is randomised, is potentially useful to rigorously evaluate organisational interventions to improve quality and safety.
We use two examples of cluster-randomised stepped-wedge trials (DQIP and GP-POLY) to illustrate advantages and disadvantages of the design in evaluations of complex prescribing improvement interventions in primary care. DQIP is nearing completion and GP-POLY will start in 2013.
The intervention in both DQIP and GP-POLY involves outreach visits by researchers for education and informatics tool training, making sequential roll out a logistic necessity. The stepped wedge allows for this by design, but trial durations may be prolonged compared to parallel-arm trials and other designs, and arranging initiation visits to fit with randomisation schedules is challenging. Since all participants receive the intervention and there are multiple repeated measurements, practice sample size requirements in DQIP and GP-POLY were reduced compared to a parallel-arm design, but power calculations are more complex. Recruitment may be improved by offering the intervention to all participants, but creates potential problems for retention and avoiding contamination in practices with long lags between recruitment and intervention start. Because of the vulnerability of stepped wedge trials to time varying confounding, avoiding changes in intervention delivery to successive cohorts is important and needs careful planning.
The stepped wedge design is attractive for cluster randomised trials of quality improvement interventions, especially when staggering of intervention delivery is inevitable, but presents challenges for implementation that need careful planning.
Randomized controlled trials often suffer from two major complications, i.e., noncompliance and missing outcomes. One potential solution to this problem is a statistical concept called intention-to-treat (ITT) analysis. ITT analysis includes every subject who is randomized, analyzed according to the randomized treatment assignment. It ignores noncompliance, protocol deviations, withdrawal, and anything that happens after randomization. ITT analysis maintains the prognostic balance generated by the original random treatment allocation. In ITT analysis, the estimate of treatment effect is generally conservative. A better application of the ITT approach is possible if complete outcome data are available for all randomized subjects. The per-protocol population is defined as the subset of the ITT population who completed the study without any major protocol violations.
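The conservatism of ITT under noncompliance can be illustrated with a toy simulation. The data below are entirely synthetic and the compliance rate and effect size are arbitrary; the per-protocol comparison is also simplified (compliant treated subjects versus all controls).

```python
# Toy illustration: ITT analyzes subjects as randomized, so
# noncompliance dilutes the estimated effect toward the null, while a
# per-protocol analysis drops noncompliers. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
assigned = rng.integers(0, 2, n)             # 1 = new treatment, 0 = control
# Noncompliance: 20% of the treatment arm never takes the treatment.
took = assigned * (rng.random(n) > 0.2)
# True effect of +1.0 applies only to those who actually received it.
outcome = 1.0 * took + rng.normal(0, 2, n)

# ITT: compare groups as randomized, ignoring compliance.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
# Simplified per-protocol: compliant treated subjects vs. controls.
# (This discards the balance created by randomization; here compliance
# is random, so the comparison stays fair, which real trials cannot assume.)
pp = outcome[took == 1].mean() - outcome[assigned == 0].mean()

print(f"ITT estimate:          {itt:.2f}")   # roughly effect x compliance rate
print(f"Per-protocol estimate: {pp:.2f}")    # close to the full effect here
```

With 80% compliance, the ITT estimate lands near 0.8 rather than 1.0, showing in miniature why the abstract calls ITT generally conservative, and why in real trials per-protocol subsets can reintroduce the very confounding that randomization was meant to remove.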
The goal of comparative effectiveness research (CER) is to provide patients, their advocates and caregivers, health care professionals, federal officials, policy makers, and payers with evidence-based information to make informed health care decisions [1,2]. Previously, CER studies were designed by researchers and had relatively little input from patients. Patient engagement has rapidly gained acceptance as crucial to the successful translation of CER for all interested parties [3]. Experiences with patient engagement in research, including community-based participatory research [4], suggest that success hinges on patients being interested and emotionally involved in the research question and understanding their role in the CER process.