
Maintaining Implementation through Dynamic Adaptations (MIDAS): protocol for a cluster-randomized trial of implementation strategies to optimize and sustain use of evidence-based practices in Veteran Health Administration (VHA) patients



The adoption and sustainment of evidence-based practices (EBPs) is a challenge within many healthcare systems, especially in settings that have already strived but failed to achieve longer-term goals. The Veterans Affairs (VA) Maintaining Implementation through Dynamic Adaptations (MIDAS) Quality Enhancement Research Initiative (QUERI) program was funded as a series of trials to test multi-component implementation strategies to sustain optimal use of three EBPs: (1) a deprescribing approach intended to reduce potentially inappropriate polypharmacy; (2) appropriate dosing and drug selection of direct oral anticoagulants (DOACs); and (3) use of cognitive behavioral therapy as first-line treatment for insomnia before pharmacologic treatment. We describe the design and methods for a harmonized series of cluster-randomized controlled trials comparing two implementation strategies.


For each trial, we will recruit 8–12 clinics (24–36 total). All will have access to relevant clinical data to identify patients, at the clinic and provider level, who may benefit from the target EBP. For each trial, clinics will be randomized to one of two implementation strategies to improve use of the EBPs: (1) individual-level academic detailing (AD) or (2) AD plus the team-based Learn. Engage. Act. Process. (LEAP) quality improvement (QI) learning program. The primary outcome will be operationalized across the three trials as a patient-level dichotomous response (yes/no) indicating patients with potentially inappropriate medications (PIMs) among those who may benefit from the EBP. This outcome will be computed using month-by-month administrative data. The primary comparison between the two implementation strategies will be analyzed using generalized estimating equations (GEE), with the clinic-level monthly (13 to 36 months) percentage of PIMs as the dependent variable. The primary comparative endpoint will be at 18 months post-baseline. Each trial will also be analyzed independently.


MIDAS QUERI trials will focus on fostering sustained use of EBPs that previously had targeted but incomplete implementation. Our implementation approaches are designed to engage frontline clinicians in a dynamic optimization process that integrates the use of actionable clinical data with incremental changes, designed to be feasible within busy clinical settings.

Trial registration NCT05065502. Registered October 4, 2021—retrospectively registered.


Sustaining the use of evidence-based practices (EBPs) is a well-documented challenge for health systems [1, 2]. In the Department of Veterans Affairs (VA), Quality Enhancement Research Initiative (QUERI) programs have long worked to close the gap between evidence and practice, focusing on implementing sustained routine use of EBPs [3]. As part of a recently funded QUERI program, the Maintaining Implementation through Dynamic Adaptations (MIDAS) program aims to directly improve care for Veterans by engaging frontline clinicians to not only optimize care by closing documented quality gaps but also to sustain those improvements. Too often, initial implementation is “spotlighted” through high attention and leadership priority, but many clinics struggle to achieve and/or sustain positive impact after the spotlight turns to other initiatives [4]. Lack of sustainment may be particularly challenging for clinics that start off with low rates of EBP use compared to other clinics in the system [5]. The problem of implementation sustainability is due, in part, to the fact that many implementation efforts are short in duration with limited follow-up support [6, 7]. Strategies that change the habits of practitioners—either by changing the system within which they practice or by providing insights about how to make doing the EBP simpler and more meaningful—are likely to be more effective [8].

We will conduct a series of trials that each aims to implement an EBP with documented quality gaps. Clinics will be randomized to one of two implementation strategies designed to make sustained changes [4]; patient-level use of potentially inappropriate medications (PIMs) will be assessed within each clinic (cluster).

The two implementation strategies to be tested are (1) academic detailing (AD) and (2) AD plus the “Learn. Engage. Act. Process.” (LEAP) team-based quality improvement (QI) learning program (AD+LEAP). The first strategy, AD, is designed to provide individual providers with the knowledge and motivation to use the EBPs. The second implementation strategy, AD+LEAP, adds a team-based strategy that engages frontline providers and staff in incremental cycles of improvement in the use of the EBP with the support of a coach. The LEAP program provides coaching within a structured, paced curriculum over a 6-month period, during which teams complete one improvement project following a Plan-Do-Study-Act (PDSA) cycle of change. With both implementation strategies, participants will have access to a clinical dashboard or a similar resource, designed specifically for each EBP, that will provide actionable data to inform improvement efforts. Clinical dashboards support providers in identifying high-risk patients and informing evaluation and treatment planning, and they are commonly used to increase uptake of EBPs in many areas of medicine [9,10,11,12] and throughout VHA [13,14,15,16,17]. They are scalable and sustainable and can form the core—but not the entirety—of a data-driven implementation program [18,19,20]; however, they are often used inconsistently in clinical practice [21].

Evidence-based practices (EBPs)

The three EBPs will target (1) reducing inappropriate polypharmacy through proactive deprescribing in patients age 65 and older, (2) safe use of direct oral anticoagulants (DOACs), and (3) cognitive behavioral therapy as first-line treatment for insomnia.

Polypharmacy, often defined as the use of 5 or more medications, and hyper-polypharmacy (10 or more medications), are increasingly prevalent because of population aging and multi-morbidity [22,23,24]. Polypharmacy has been associated with increased risk of adverse drug events, drug-drug interactions, medication nonadherence, impaired functional status, cognitive impairment, and higher medical costs [25, 26]. Polypharmacy has, therefore, been a focus of quality improvement efforts, with a particular focus on older patients because of their greater susceptibility to medication harms. In recognition that polypharmacy may often be appropriate for patients with multiple comorbidities, the MIDAS EBP will focus on “inappropriate polypharmacy” and “potentially inappropriate medications” (PIMs) [27] in patients age 65 or older. PIMs are drugs that have an “unfavorable balance of benefits and harms compared with alternative treatment options;” guidance statements such as the American Geriatrics Society (AGS) Beers criteria [28] can aid in identifying patients using PIMs. The AGS Beers criteria are commonly used by clinicians, educators, and regulators and form the basis for Healthcare Effectiveness Data and Information Set (HEDIS) quality measures related to medication management in older adults [28].
In the VA, a team of clinicians has developed an innovative practice known as VIONE to address inappropriate polypharmacy by encouraging and facilitating clinician review of each of a patient’s medications to determine whether each should be continued [29]; VIONE stands for “Vital, Important, Optional, Not indicated, and Every medication has a specific indication for use.” Components of the program include provider and pharmacist education about polypharmacy and VIONE when the program is initially implemented; access to multiple clinical dashboards that identify patients at increased risk for polypharmacy, as well as patients using specific drugs included in the AGS Beers criteria (the PIMs dashboard); a process for referring patients to pharmacists, who can perform medication reviews, using note templates in the electronic health record (EHR); and functionality to track medications that are discontinued, along with reasons for discontinuation. VIONE relies on a collaborative approach between providers and pharmacists to successfully deprescribe PIMs. VIONE has been recognized as a “gold status” practice by VHA leaders [30], is supported by the VA’s Academic Detailing Service, and has been implemented in over 100 clinical settings system-wide [31]. However, as highlighted by a recent evidence synthesis prepared for VA, there is a “glaring gap” in comparative effectiveness trials to identify the best approach to facilitate discontinuation of unnecessary and/or inappropriate medications [32].

The second EBP is the safe use of DOACs. DOACs are highly effective medicines to prevent harm from venous thromboembolisms, but when used inappropriately they can also cause severe side effects. In response, VA Pharmacy Benefits Management Services’ medication safety arm, the VA Center for Medication Safety (VA MedSAFE), has invested in national implementation efforts to promote safer practices for DOACs through medication use evaluations, national calls with pharmacists, traditional AD, and dashboards. In 2021, they also released an Anticoagulation Management Directive, which provides recommendations on best practices for quality assurance [33]. However, despite these efforts, safety concerns remain, and unsafe prescribing continues [34,35,36,37,38]. One particularly distinctive implementation effort is the DOAC Dashboard [36,37,38]. This tool has a series of flags for PIMs for every patient in VA. Unlike many dashboards, it is intended to be used at the point-of-care, primarily by anticoagulation pharmacists. Uptake of the dashboard has been rapid, with virtually every site in VA using it at least once per week.

Third, Cognitive Behavioral Therapy for Insomnia (CBTI) is recommended as first-line treatment for insomnia according to VA/Department of Defense (DoD) practice guidelines [39]. However, sedative hypnotic medications are still the most common treatment for insomnia, despite the associated risks of accidents, falls, and cognitive impairment [40,41,42,43]. The VA’s national evidence-based psychotherapy program has trained over 1000 therapists to deliver CBTI, yet programmatic and provider-level barriers (e.g., perceived priority) persist and limit adherence to treatment guidelines.


Mixed methods analyses will be used to evaluate the following three aims:

  1. To compare the effectiveness of two implementation strategies (LEAP QI Learning Program + AD vs. AD alone) on potentially inappropriate medication use, using a pooled analysis of effects across the three trials at 18 months, 2 years, and 3 years post-baseline at the clinic level, based on monthly assessed data from 13 to 36 months;

  2. To compare the effectiveness of the two implementation strategies on secondary outcomes specific to each trial at 18 months, 2 years, and 3 years post-baseline, based on monthly assessed data from 13 to 36 months; and

  3. To explore the effects of implementation, provider behaviors and experiences, and context on sustained improvements in potentially inappropriate medication use.

For the purposes of pooled analysis across the three trials, an analogous dichotomous outcome will be identified for each trial, reflecting the proportion of patients with potentially inappropriate medication use. This will effectively triple the number of clinics (8–12 clinics per trial; 24–36 total clinics) included in the analysis of Aim 1. Additionally, each trial will be analyzed as a standalone study. All three trials will have distinct secondary outcomes.

Our aims are designed to deepen commitment [44] to sustain EBP use by including measures that matter to different key constituencies including employees (e.g., workgroup functioning, job satisfaction), health system leaders (increased use of EBPs), and patients (e.g., reduction in PIMs) [6, 7, 45]. The combination of implementation strategies with measures that matter is designed to empower teams and individuals to increase meaning and purpose of their work, focused on the health and well-being of the Veterans we serve. Evaluation results will provide guidance as to which implementation strategy is more likely to lead to sustained outcomes.

Human subjects protection

The MIDAS QUERI trials qualify as non-research conducted under the authority of Veterans Health Administration (VHA) operations, as they were designed and implemented for internal VHA purposes (to improve patient care) and not to produce information to expand the knowledge base of a scientific discipline.

In response to the designation of broad categories of activities as non-research in the Federal Policy for the Protection of Human Subjects (Common Rule) in Title 38 Code of Federal Regulations Part 16 (38 CFR 16.102(l)) published January 19, 2017, the VHA enacted new policies and guidelines for determining non-research activities within VHA. In accordance with these VHA policies and guidelines, this program has documentation as non-research by Pharmacy Benefits Management, Office of Mental Health and Suicide Prevention, and Veterans Integrated Service Network (VISN) 10, which are each authorized to deem projects as non-research activities for which formal IRB oversight is not required, as defined per VHA Handbook 1058.05 in the section “Officials Authorized to Provide Documentation of VHA Program Office Non-Research Operations Activities” and later updated in section 5a of the VHA Program Guide 1200.21.

Evaluation design

This program is designed as a concurrent nested mixed methods evaluation [46, 47] in the context of cluster-randomized trials that will evaluate the effectiveness of AD+LEAP over AD as implementation approaches to improve the use of EBPs across three trials (See Additional file 1 for the SPIRIT checklist). Each trial will launch in quarterly increments over a 9-month period, each enrolling 8–12 clinics, randomized to one of the two implementation arms. For each trial, AD and AD+LEAP intervention activities will take place over a period of up to 12 months. Because our focus is on sustainment of improvements in clinical measures for each EBP, administrative data on key outcomes will be obtained over 36 months with a focus on comparisons at 18-month and 2- and 3-year post-randomization follow-ups.

Partnered research

When research aims align with clinical priorities articulated by health system leaders, the likelihood of greater benefit can be dramatically amplified [48]. This program was developed in partnership with key offices within VHA, including Pharmacy Benefits Management, Office of Mental Health Services and Suicide Prevention (OMHSP), and executive leaders in two VISNs. We have worked closely with Pharmacy Benefits Management’s VA MedSAFE program and Academic Detailing Service. Our multi-faceted metrics are designed to deepen commitment to sustain the EBPs [44] and, in turn, to institutionalize them [45].

Implementation strategies

As described above, all participating clinics will have access to regularly updated data from dashboards or similar resources. The dashboards provide clear, simple descriptions of care practices (e.g., patients at elevated risk of polypharmacy), thereby allowing easy identification of care variances to help detailers, individual providers, and clinic-level leadership identify opportunities for improvement [49]. In VHA, advances in medical informatics in design and content have produced increasingly user-friendly, responsive, and actionable dashboards, which have helped to amplify the work of clinicians and academic detailing pharmacists in invaluable ways. Our team is currently conducting a scoping review that will provide deeper knowledge of factors affecting uptake and effectiveness of dashboards [50]. Well-designed dashboards promote data-driven care optimization for individual care and population health management. All arms of care and outcomes of the trials will align with the clinical dashboards or a similar resource (e.g., the VIONE trial will rely on a VIONE practice dashboard and our measures will replicate those reported through the dashboard). Two implementation approaches that each use a dashboard or similar resource are described in the next sections (see Additional file 2 for StaRI Reporting Checklist details).

Academic detailing

Our AD intervention is modeled on existing AD principles. AD is a direct educational outreach of face-to-face (and more recently, virtual [51, 52]) interactions between academic detailers and clinicians that incorporate principles of adult learning theories, theory of planned behavior, and social marketing to improve the use of EBPs [53]. Using an accurate, up-to-date synthesis of the best clinical evidence in an engaging format, academic detailers ignite clinician behavior change, ultimately improving patient health. Evidence syntheses reveal that AD alone can be effective [54,55,56,57]; however, AD combined with other approaches (e.g., audit and feedback) is most effective in changing prescribing practices [58].

We will create an AD program that can be used and adapted for each intervention. The program will include a generalized approach based on existing recommendations, including documentation and training that will be general to the program and specific to each trial. For each trial, we will work with content experts and operational partners to develop 4–6 key messages that will be tightly linked with the primary outcome of the trial and the data in the respective clinical dashboard or resource. We will create detailer- and provider-focused educational materials to guide the detailer’s conversations with providers. This will include sample conversational scripts and tools to integrate detailing messages with provider-specific care patterns from the data.

Our detailers will be hired specifically for this project. They will attend training with the National Resource Center for Academic Detailing (NaRCAD) and through the VA Office of Academic Detailing. They will receive trial-specific education from each trial’s principal investigator and relevant content experts. They will shadow current detailers and will role-play the detailing sessions to practice conveying the key messages both internally and with non-participating practitioners. The sessions’ framing is based on the Theory of Planned Behavior [59, 60] and motivational interviewing [61].

The specific content of each visit will be tailored to address the specific context (barriers) identified at each participating clinic and for that specific provider. The detailer will start with an initial virtual visit with providers and other key staff at participating clinics; the detailer will meet with providers at each clinic for 15–30 min each. The detailer will review the dashboard in preparation for each visit to identify gaps and opportunities for improvement. A second, virtual visit will be completed four to eight weeks later to follow up with each participating provider.

For all three EBPs, we will identify provider-level barriers, including lack of knowledge about the EBP or uncertainty about the value of the practice [62]. Our AD strategy is designed to address these gaps by supporting individual providers in both use of the EBP and the use of clinical data to guide the practice.

Our AD approach will also include identifying a local champion prior to the first clinic visit. A “train-the-trainer” approach will be used to help ensure activities continue over the long term. The level and nature of champion engagement will be collaboratively determined by need and availability. Ideally, a local champion will shadow our detailer during visits and will be provided with training resources and coaching. Our detailer will develop a plan with the local champion to continue to reach out to new providers as appropriate to more deeply embed and sustain the practice and to track EBP use with the relevant data resource.

The Learn. Engage. Act. Process. (LEAP) program

Through prior work, we have identified common barriers encountered when implementing EBPs. This work, guided by the Consolidated Framework for Implementation Research (CFIR), has repeatedly identified a lack of planning, inconsistent engagement of key stakeholders, and not taking time to reflect on and evaluate progress and impact in EBP implementation efforts [63,64,65]. LEAP, a blended implementation approach [66], is specifically designed to address these barriers by interweaving four discrete, evidence-based implementation strategies: (1) create a learning collaborative, (2) assess for readiness and identify barriers and facilitators, (3) audit and provide feedback, and (4) conduct cyclical small tests of change [67, 68].

The LEAP QI program engages frontline teams in sustained incremental improvements of EBPs over a 6-month period of hands-on learning, designed for busy clinicians as listed in Fig. 1. The Institute for Healthcare Improvement’s (IHI) Model for Improvement and PDSA cycles of change provide the core foundational approach [58], combined with team-based, hands-on learning and coaching support within a QI network to enhance learning and accountability.

Fig. 1 LEAP program components

The LEAP curriculum was adapted from a Massive Open Online Course (MOOC) developed by HarvardX in collaboration with IHI [69]. Materials from the MOOC were adapted for LEAP by (1) designing for teams rather than individuals, (2) streamlining materials to accommodate busy frontline clinicians, and (3) lengthening program duration to provide more time to complete an improvement project. The LEAP curriculum includes brief videos, short readings, and easy-to-understand templates and tools, using selected content developed by IHI and HarvardX. The curriculum is paced, with new guidance released on a weekly basis through an online platform (SharePoint Online). Assignments completed in LEAP (i.e., a project charter) can be drawn on for continued future improvement efforts. Continuing education credits (CEs) are available through VA’s Talent Management System (TMS).

Each clinic participating in LEAP forms a QI team. In our cluster randomized design (described below), teams will participate in cohorts of 4-6 to create a learning collaborative. LEAP coaches interact with teams in individual webinar sessions in the early weeks of LEAP and later via virtual collaboratives with all teams. LEAP teams choose aims, plan projects, and monitor data to bring about meaningful changes based on the specific needs surrounding the EBP at hand. The LEAP implementation strategy also includes a 6-month maintenance component, called LEAPOn, that provides monthly collaboratives for teams to encourage continued work on PDSA cycles.

So far, 49 teams have completed LEAP, comprising 276 frontline staff, clinicians, and Veterans. Based on first-year results, LEAP measurably increased confidence in using QI methods, and participants were satisfied or very satisfied (81-89%) with all LEAP components [70]. In addition, 96% agreed or strongly agreed that LEAP was relevant to the needs of their program. Post-LEAP, teams intended to continue to optimize care for their patients; however, participants struggled most with the lack of available time for QI amid competing clinical priorities.

Conceptual framework for evaluation

MIDAS QUERI focuses specifically on the sustained use of EBPs. The Dynamic Sustainability Framework (DSF) asserts that “[o]ngoing quality improvement of interventions is the ultimate aim…[because] evidence solely from clinical trials [is insufficient] and…quality improvement processes focused on intervention optimization are ultimately more relevant to achieve sustainment.” [71] Sustainment science literature [7, 45] and other implementation science frameworks [72, 73] all affirm the necessity of ongoing optimization. Thus, at the center of the DSF is the need to engage individuals and teams in continual adaptation and optimization through learning cycles like the PDSA cycles foundational in QI [72]. However, clinical teams face significant challenges in conducting PDSA cycles because of patient care demands; they must also navigate constant changes in infrastructure, policies and procedures, and staffing, all of which leave little time for implementing improvements. Nevertheless, if frontline teams do not invest time and effort into making improvements, change will not happen and/or will not be sustained, leading to widespread failures across the system. The LEAP and AD strategies are specifically designed to engage busy frontline employees in continuing incremental optimization of each EBP.

Sustainability research highlights the need to identify outcomes important to multiple stakeholders for change to be fully integrated as routine care [7, 45]. Figure 2 shows our conceptual framework. At its heart is a positive reinforcing feedback loop between three categories of outcomes, each designed to meet the needs of three key constituencies: (1) employees who deliver treatments; (2) health system leaders; and (3) patients. Our strategies are designed to move individuals and/or teams into a virtuous cycle where engaging in optimization brings visible improvements in work-life (e.g., burnout, satisfaction as measured by “Best Places to Work”) as employees are motivated [74, 75] by seeing measurable improvements in near-term service outcomes that matter to clinical leaders (increased use of EBPs) and patients who experience improved clinical outcomes. Sustained change relies on building ever-stronger coalitions of support that can occur when outcomes are visible and communicated widely. This increased visibility with supervisors and other clinical leaders will help to foster willingness to allow the space and time needed to engage in optimization [4, 7, 8, 45]. Increasing capacity for change, especially through teamwork, is strongly associated with lower burnout among clinicians [76]. We will combine qualitative findings with quantitative measures to help explain changes (or lack thereof) over time. Our AD strategy is based on the Theory of Planned Behavior, where attitudes, subjective norms, and perceived behavioral control shape behavioral intentions that lead to engaging in cycles of optimization of personal work processes. The LEAP strategy relies on teaming theory [77] and engagement in continuous QI [78] and provides team-based structured coaching as teams learn to plan and execute PDSA optimization cycles.

Fig. 2 MIDAS conceptual framework

The effectiveness of our implementation strategies will be moderated by contextual determinants (i.e., barriers and facilitators) influencing teams’ and individuals’ ability to engage in optimization. These contextual determinants will be assessed using a newly developed pragmatic Context Assessment Tool (pCAT; unpublished) that assesses nine constructs across three of the domains of the CFIR (Innovation Characteristics, Outer Setting, and Inner Setting). This prioritized list of constructs was chosen based on a series of context assessments during implementation evaluations in VHA [63,64,65]. The COVID-19 pandemic has heightened awareness of how unexpected external events may also influence the pathway to the positive reinforcing feedback that is designed to keep individuals and teams engaged in optimization.

Clinic selection and eligibility

We will work with our operational partners to identify candidate clinics that want to reduce their use of PIMs based on the topic for each respective trial. We will provide an orientation to the topic, an introduction to the dashboard, and an overview of the two implementation intervention arms. Prior to implementation, we will work with interested clinics to ensure they have met the preconditions necessary to begin sustained optimization of the EBP: (1) a team leader or champion; (2) an identified department with service leadership buy-in and control over the processes/practices impacted by the implementation; (3) readily accessible data to monitor process and impact of the implementation and use of the EBP, e.g., through an easy-to-access dashboard; and (4) installation of key components needed to support the EBP (e.g., installation of a specific note template in the EHR system). We will recruit four to six clinics per arm per trial; a clinical leader will provide assent to participate and enroll.

Clinic randomization

Within each trial, clinics will be randomized after assenting to participate (equivalent to enrollment). Clinics will be assigned to one of two arms by a statistician, stratified further by clinic type (medical center, community clinic, or Community Living Center) if needed to ensure partial balance between arms with respect to potential confounders associated with culture and complexities associated with clinic location [79, 80].
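The stratified assignment described above can be sketched as permuted assignment within clinic-type strata. This is an illustrative sketch only (clinic identifiers and the alternation scheme are assumptions, not the statistician's actual procedure):

```python
import random

def stratified_randomize(clinics, seed=0):
    """Assign clinics to 'AD' or 'AD+LEAP' within clinic-type strata.

    clinics: list of (clinic_id, clinic_type) tuples. Within each stratum,
    clinic order is shuffled and arms are alternated, keeping the two arms
    balanced with respect to clinic type. Illustrative sketch only.
    """
    rng = random.Random(seed)
    by_type = {}
    for cid, ctype in clinics:
        by_type.setdefault(ctype, []).append(cid)
    arms = {}
    for ctype, ids in by_type.items():
        rng.shuffle(ids)  # random order within the stratum
        for i, cid in enumerate(ids):
            arms[cid] = "AD" if i % 2 == 0 else "AD+LEAP"
    return arms

# Hypothetical example: two medical centers and two community clinics
arms = stratified_randomize([("c1", "medical center"), ("c2", "medical center"),
                             ("c3", "community clinic"), ("c4", "community clinic")])
```

Because assignment alternates within each stratum, each clinic type contributes equally to both arms, which is the "partial balance" the protocol targets.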

Outcomes and analyses

As part of a pooled analysis, we will compare the same two implementation strategies across all three EBPs and take a unified approach to implementation and evaluation across the trials. Table 1 shows MIDAS measures, data collection timeframe, and data sources. While a unified dichotomous outcome, i.e., PIMs, was identified for each trial to allow for the pooled analysis, each trial will also be analyzed individually (see Table 1).

Table 1 MIDAS measures showing data sources and timepoints by aim

Aim 1: primary outcomes and pooled analysis

Although each trial will be conducted as an independent study, our primary aim is to compare, across trials, the effectiveness of the two implementation strategy arms in reducing PIMs during the post-implementation period. To this end, we defined a unified primary outcome that allows us to combine results across the three trials. The unified primary outcome will be operationalized as a patient-level dichotomous response indicating PIM use (yes/no) among patients at risk of PIMs, i.e., among those who may benefit from the specific EBP each month. The monthly patient-level PIM use response will be summarized as a clinic-level month-by-month percentage of potentially inappropriate use using administrative data from baseline to 36 months, with months 13–36 as the post-implementation follow-up period. The monthly data for each trial will be cross-sectional, i.e., different patients may be included in each month.
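The aggregation from patient-level monthly PIM flags to the clinic-month percentage used as the dependent variable might be sketched as follows. Field names and the record layout are hypothetical, not the structure of the VA administrative data:

```python
from collections import defaultdict

def clinic_month_pim_pct(records):
    """Aggregate patient-level monthly PIM flags to clinic-month percentages.

    records: iterable of (clinic_id, month, pim_flag) tuples, one per
    at-risk patient per month; pim_flag is 1 if the patient had a
    potentially inappropriate medication that month, else 0.
    Returns {(clinic_id, month): percent_with_pim}.
    """
    num = defaultdict(int)  # at-risk patients with a PIM, per clinic-month
    den = defaultdict(int)  # all at-risk patients, per clinic-month
    for clinic, month, flag in records:
        den[(clinic, month)] += 1
        num[(clinic, month)] += 1 if flag else 0
    return {key: 100.0 * num[key] / den[key] for key in den}

# Hypothetical example: clinic A, month 13 -- 2 of 4 at-risk patients flagged
example = [("A", 13, 1), ("A", 13, 1), ("A", 13, 0), ("A", 13, 0)]
print(clinic_month_pim_pct(example))  # {('A', 13): 50.0}
```

Because each month's denominator is recomputed, the sketch reflects the cross-sectional design: a patient may appear in some months and not others.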

For inappropriate polypharmacy, the clinic-month outcome will be the proportion of patients who had medication possession (based on VA pharmacy fill data) of one or more medications from the AGS Beers criteria that are included on the VIONE PIMs dashboard [28] (numerator) among patients age 65 or older, not receiving palliative care, and followed by the clinic (denominator). For each drug included on the PIMs dashboard, there are associated business rules that define when medication use is flagged as potentially inappropriate; these same criteria will be applied in this trial. For example, the use of a first- or second-generation anti-psychotic drug is flagged as potentially inappropriate unless there is a diagnosis of schizophrenia or bipolar disorder. These criteria had previously been determined by VIONE’s Subject Matter Expert group, which provides VIONE with guidance on translating deprescribing criteria into the most practical and appropriate rules for use on the dashboard. Altogether, the following AGS Beers medications from the PIMs dashboard will be included in the analysis: anticholinergics, antipsychotics, aspirin, benzodiazepines, long-acting sulfonylureas, muscle relaxants, non-steroidal anti-inflammatory drugs (NSAIDs), proton pump inhibitors (PPIs), sliding scale insulin, and Z-drugs.
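The business-rule logic can be illustrated with the antipsychotic example above. The exempting diagnoses and drug-class names below are placeholders for illustration; the actual rules were defined by VIONE's Subject Matter Expert group:

```python
# Sketch of one PIMs-dashboard style business rule: a fill of a drug class
# is flagged as potentially inappropriate unless the patient carries one of
# the rule's exempting diagnoses. Diagnosis labels are placeholders.
EXEMPT_DIAGNOSES = {
    "antipsychotic": {"schizophrenia", "bipolar_disorder"},
}

def is_potentially_inappropriate(drug_class, diagnoses):
    """Return True if use of drug_class is flagged given the patient's
    diagnosis set (illustrative rule check only)."""
    exempt = EXEMPT_DIAGNOSES.get(drug_class, set())
    return not (exempt & diagnoses)

print(is_potentially_inappropriate("antipsychotic", {"insomnia"}))       # True
print(is_potentially_inappropriate("antipsychotic", {"schizophrenia"}))  # False
```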

For DOAC safety, the outcome will be the proportion of patients with potentially inappropriate prescribing out of those using DOACs, as measured by “flags” (e.g., potential mis-dosing based on renal function and other indicators) on the DOAC dashboard. The DOAC flagging system is based on Food and Drug Administration (FDA) indications and has been in clinical use since 2018. Components of the outcome include inappropriate dosing for the given indication and the use of DOACs in contraindicated settings (such as valve replacements).

For first-line treatment for insomnia, the outcome will be the proportion of patients with a new prescription for a sedative-hypnotic medication who have not received CBTI in the prior 12 months, out of all primary care patients who are actively followed by the clinic and are not in hospice or palliative care.

For all three trials, monthly medication use (yes/no) will be determined from possession of an active prescription, using exposure days computed from days of supply; use in a given month will be defined by exposure status on day 1 of that month. We will also conduct sensitivity analyses using the criterion of use at any time during the month, as well as PIMs restricted to medications used chronically (e.g., on more than 90 of the prior 180 days).
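A minimal sketch of the day-1-of-month exposure rule, assuming pharmacy fills are represented as (fill date, days of supply) pairs; the data layout is illustrative, not the VA pharmacy schema.

```python
from datetime import date, timedelta

def exposed_on(day: date, fills: list[tuple[date, int]]) -> bool:
    """A patient is considered exposed on `day` if any fill's supply window
    (fill date up to, but not including, fill date + supply days) covers it."""
    return any(fill <= day < fill + timedelta(days=supply) for fill, supply in fills)

# Month-by-month use: exposure status on day 1 of each month (primary rule).
fills = [(date(2023, 3, 20), 30)]            # 30-day supply filled March 20
exposed_on(date(2023, 4, 1), fills)          # within the supply window -> True
exposed_on(date(2023, 5, 1), fills)          # supply exhausted by May  -> False
```
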

For each trial, we will first compare demographic characteristics (age, sex, and race) of patients at risk of PIMs in the first month of implementation between the two arms. We will then obtain, for each trial by arm, crude monthly percentages of PIMs (with 95% confidence intervals), averaged across clinics randomized to each arm and weighted by clinic-month size. For each trial, we will plot the monthly clinic-level percentages over follow-up months 13–36 to graphically assess whether the difference between the two arms can be meaningfully summarized across the three trials with the unified outcome. If, for example, between-arm trends over the post-implementation months differ notably across the three trials, a unified comparison of AD+LEAP vs. AD across trials may not be meaningful, and we will instead conduct analyses separately for each trial.
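The weighted crude percentage can be sketched as follows: pooling events over an arm's clinics in a given month is equivalent to a clinic-month-size-weighted average of the clinic percentages. The normal-approximation interval is one reasonable choice of CI, not necessarily the one the analysis will use.

```python
import math

def crude_pct_ci(clinic_counts: list[tuple[int, int]]) -> tuple[float, float, float]:
    """Pooled monthly PIM percentage across one arm's clinics, weighted by
    clinic-month size, with a normal-approximation 95% CI.
    `clinic_counts` holds (PIM patients, at-risk patients) per clinic."""
    events = sum(e for e, _ in clinic_counts)
    total = sum(n for _, n in clinic_counts)
    p = events / total
    se = math.sqrt(p * (1 - p) / total)
    return 100 * p, 100 * (p - 1.96 * se), 100 * (p + 1.96 * se)

# Three clinics in one arm-month
pct, lo, hi = crude_pct_ci([(12, 100), (30, 300), (8, 50)])
```
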

For the comparison between arms, we will use generalized estimating equations (GEE) with the clinic-level monthly percent of PIMs among patients at risk during the post-implementation period (months 13 to 36) as the dependent variable. The model will include indicators for two of the trials, with one trial as the referent category, to account for differing underlying levels of inappropriate medication use across trials. The model will also include follow-up time in months and an indicator for the AD+LEAP arm, with AD as the referent category, and will adjust for serial correlation within clinic over time. We will also include a time-by-arm interaction to assess whether the magnitude of the difference between AD+LEAP and AD changes over time. If the interaction is significant, we will estimate the between-arm difference separately at 18, 24, and 36 months from the model with the interaction term. If the interaction is not significant, indicating that the between-arm difference does not vary across the three follow-up times of interest, we will drop the interaction term, and the parameter estimate for the AD+LEAP indicator will estimate the time-averaged difference in the percentage of patients with inappropriate medications during the post-implementation period between clinics randomized to AD+LEAP and clinics randomized to AD.
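Under an independence working correlation, GEE point estimates for this mean model coincide with ordinary least squares, so the model specification can be sketched with NumPy alone; a full analysis would use a dedicated GEE routine with a serial-correlation working structure and robust standard errors. The simulated data and the assumed -2 percentage-point arm effect are purely illustrative. Centering month at 18 makes the arm coefficient the between-arm difference at the 18-month primary endpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated clinic-month records: 3 trials x 8 clinics, months 13-36, with an
# assumed AD+LEAP effect of -2 percentage points (illustrative, not real data).
rows = []
for t in range(3):            # trial indicator
    for c in range(8):        # clinic within trial
        arm = c % 2           # 0 = AD, 1 = AD+LEAP
        for m in range(13, 37):
            pct = 20 + t - 2 * arm - 0.05 * m + rng.normal(0, 1)
            rows.append((t, arm, m - 18, pct))      # month centered at 18
trial, arm, month_c, pct = np.array(rows).T

# Design: intercept, two trial dummies (referent = trial 0), centered month,
# arm indicator, and the time-by-arm interaction.
X = np.column_stack([np.ones_like(pct), trial == 1, trial == 2,
                     month_c, arm, month_c * arm]).astype(float)
beta, *_ = np.linalg.lstsq(X, pct, rcond=None)
# beta[4]: estimated AD+LEAP vs. AD difference at 18 months post-baseline
```
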

If we find notable baseline demographic differences between arms within trials, we will use a generalized linear mixed model (GLMM) with a logit link to estimate the between-arm difference while adjusting for baseline age, sex, and race, using monthly person-level responses (yes/no) from the post-implementation period (months 13 to 36). In addition to time, the AD+LEAP indicator, and trial indicators, the patient-level GLMM will include patient age, sex, and race, and random intercepts for patients nested within clinic to account for correlation within clinics and serial correlation over time. The parameter estimate for the AD+LEAP indicator will estimate the time-averaged odds of inappropriate medication use during the post-implementation period for patients in clinics randomized to AD+LEAP relative to the odds for the same patients had their clinics been randomized to AD. Although the GEE and GLMM models yield different summary estimates with different interpretations, the GLMM allows adjustment for patient characteristics, and a consistent substantive conclusion across the two would strengthen the evidence for the effect of adding LEAP to AD. As in the GEE model, we will test whether the odds ratio for AD+LEAP vs. AD changes over time by including a time-by-arm interaction; if the interaction is significant, we will obtain adjusted odds ratios for AD+LEAP vs. AD at 18, 24, and 36 months.

For each trial, we will also compare AD and AD+LEAP to usual-care controls in a non-randomized secondary analysis, using the same primary outcome and the same generalized linear mixed model (GLMM) with a logit link. The primary control group will be all non-participating sites. In a further secondary analysis, each intervention site will be matched to two control sites on clinic size (within 50%), pre-intervention outcome rate (within 30 rankings of all sites), and region of the country. These analyses will adjust for the clinic-level variables clinic size, pre-intervention outcome rate, and region of the country, and for the patient-level variables age, sex, and race.
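A sketch of the matching step, with hypothetical clinic records; the real matching would run against the full set of non-participating sites.

```python
def match_controls(site, pool, rank_of, n=2):
    """Pick up to n control clinics matched on region, size (within 50% of the
    intervention site's size), and pre-intervention outcome ranking (within
    30 ranks). `site`/`pool` entries are dicts; `rank_of` maps clinic id -> rank."""
    eligible = [c for c in pool
                if c["region"] == site["region"]
                and abs(c["size"] - site["size"]) <= 0.5 * site["size"]
                and abs(rank_of[c["id"]] - rank_of[site["id"]]) <= 30]
    # Break ties by closeness of pre-intervention outcome ranking
    eligible.sort(key=lambda c: abs(rank_of[c["id"]] - rank_of[site["id"]]))
    return eligible[:n]

site = {"id": "a", "size": 100, "region": "MW"}
pool = [{"id": "b", "size": 120, "region": "MW"},
        {"id": "c", "size": 400, "region": "MW"},   # fails the size criterion
        {"id": "d", "size": 95, "region": "NE"}]    # fails the region criterion
rank_of = {"a": 10, "b": 25, "c": 12, "d": 11}
match_controls(site, pool, rank_of)   # only clinic "b" is eligible
```
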

Aim 2: secondary outcomes and analyses

Secondary outcomes for VIONE will be the prevalence of potentially inappropriate use of PPIs; the prevalence of potentially inappropriate use of aspirin; the prevalence of potentially inappropriate use of central nervous system (CNS)-active medications (muscle relaxants, antipsychotics, Z-drugs, and benzodiazepines) or anticholinergic drugs; the number of inappropriate medications at the patient level; monthly medication costs for all drugs, without regard to appropriateness; and the number of pharmacist medication reviews.

Secondary outcomes for the DOAC trial will be the sub-components of the "flags" on the dashboard: potential mis-dosing, potential medication interactions, and concern for nonadherence. This follows the organizational structure of both the presentation of the flags on the dashboard and the key messages provided to the AD and LEAP teams. Process outcomes will be how often providers use the dashboard and rates of new DOAC starts compared with warfarin starts. These outcomes will be kept in alignment with our other work using the dashboard [37].

In stand-alone analyses of the CBTI trial, the primary outcome will be the prevalence of any CBTI receipt among primary care patients who are actively followed by the clinic and not in hospice/palliative care. Secondary outcomes will be the mean number of CBTI sessions completed and referrals to CBTI. Receipt of any CBTI and the mean number of sessions will be measured by extracting CBTI note templates completed by CBTI therapists from the medical record. CBTI referrals will be measured from consult requests in the medical record or from monthly therapist reports.

Analyses of secondary outcomes, such as the percent of potentially inappropriate PPI use or the mean number of CBTI sessions in each clinic-month, will parallel the primary analysis, using the GEE model to account for correlation over time. We will also conduct separate analyses by trial for dependent variables unique to each trial. For example, for the polypharmacy trial, a secondary outcome of interest is the count of medications flagged as inappropriate based on the AGS Beers criteria [28]. We will compare monthly rates of Beers-list medication use between implementation strategies using GLMMs with a log link.

Aim 3: exploration of potential predictors of clinical outcomes

Our process evaluation will follow a multi-phase concurrent nested mixed methods design [81]. This design has three purposes: (1) help prepare all stakeholders and participants prior to the start of each trial; (2) monitor the progress of implementation; and (3) explain summative findings. Overall priority is placed on quantitative methods that guide the trials, while qualitative methods are embedded or “nested” within conduct of the trials.

Employee behavior and experience measures will be collected via five scales as listed in Table 1. Surveys will be administered via online link within invitation emails; administration will occur at baseline and 18 months post-baseline; satisfaction will be elicited at the end of each intervention (upon completion of the 6-month “core” LEAP program for LEAP team members and at the end of each AD visit for AD participants). Descriptive statistics will be generated and tests for differences across implementation strategy arms will be conducted using mixed models to account for within-clinic correlation.

Qualitative data will be collected prior to baseline and 18 months after baseline via semi-structured interviews, conducted virtually by telephone or videoconferencing software (e.g., MS Teams). A purposive sample of key people (clinic leaders, supervisors, providers, and staff) at each clinic will be invited to participate so we can better understand the context in which the implementation strategies are deployed. The interview guides and qualitative analyses will be guided by the CFIR to identify potential and actual barriers and facilitators [63,64,65]. Principles embedded within the DSF will guide exploration of the degree of engagement in QI and teamwork [71]. Prior to implementation, this information will help inform the work of the academic detailers and LEAP coaches; post-implementation, it will help explain quantitative findings within and across the trials. Interviews will be audio-recorded and transcribed verbatim. Pre-implementation interviews will focus on collecting practical information, using a rapid analysis approach [82, 83], to help tailor and adapt implementation for each participating clinic (see Additional file 3 for the master interview guide). Post-implementation, qualitative analyses will examine what kinds of improvements were made, barriers and facilitators to making improvements, reflections on and satisfaction with participation in AD/LEAP, and the relationships between determinants, participants, and key stakeholders and how these may lead to building coalitions of support [7, 8]. We will combine qualitative findings with quantitative measures from Aims 1 and 3 to help explain changes (or the lack thereof) over time.

Our process evaluation will rely on quantitative and qualitative data sources. Fidelity to each implementation strategy will be tracked by the interventionists (the detailers and LEAP coaches), who will complete a mixed-methods self-assessment tool after each interaction (a coaching session for LEAP, a detailing contact for AD). These assessments will guide coach-supervisor and peer reflections on improvements, problem-solving, and mitigating barriers and amplifying facilitators of improvement efforts. We will also track participation (by individuals scheduled for detailing and by LEAP team members) and assignments completed by LEAP teams. The academic detailers and LEAP coaches will enter notes for each interaction into a tracking system for each strategy. These data will be combined with the pre-implementation and 18-month semi-structured interview data for further insight into barriers, facilitators, and the problem-solving approaches used by LEAP coaches and detailers. Quantitative and qualitative data will be combined at the analysis or interpretation phase.

Economic evaluation

We will use a micro-costing method [84, 85] to determine the costs of delivering LEAP and AD. The LEAP coaches and academic detailers developed a list of the activities they will perform for each participating site. Depending on the specific activity, they determined the best way to record the time spent, e.g., logging start and stop times each time the activity takes place vs. setting an estimated average time for activities that take approximately the same amount of time at each occurrence (such as recurring meetings, responding to quick queries via e-mail, and meeting preparation). In the latter case, the coaches and detailers simply record the occurrence of the activity, which is then assigned the estimated time. The coaches and detailers will log times for each activity, categorized by participating site, in a time-tracking database. Using these data, we will calculate the average time required for each activity and multiply it by the number of times the activity takes place over the course of performing the implementation strategy at a site. These data yield an estimated total time required to perform the implementation strategy (LEAP or AD) at a site, which, combined with the hourly cost of the LEAP coach or detailer, gives the total cost of employing the strategy at that site.
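The costing arithmetic reduces to a simple sum. The activity names, times, and hourly rate below are illustrative, not values from the MIDAS time-tracking database.

```python
# Micro-costing sketch: total strategy cost at a site = sum over activities of
# (average hours per occurrence x number of occurrences) x hourly labor cost.
def site_cost(activity_log: dict[str, tuple[float, int]], hourly_rate: float) -> float:
    """activity_log maps activity -> (avg hours per occurrence, occurrences)."""
    total_hours = sum(hours * count for hours, count in activity_log.values())
    return total_hours * hourly_rate

log = {"coaching session": (1.0, 12),   # logged start/stop times, averaged
       "meeting prep": (0.5, 12),       # estimated average time per occurrence
       "email queries": (0.1, 40)}
site_cost(log, hourly_rate=60.0)        # (12 + 6 + 4) hours x $60/hour
```
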


The MIDAS program of QI trials is unique and ambitious in unifying conduct of three cluster randomized trials on three EBPs across diverse settings (medical centers and community-based clinics) and within different clinical specialties (clinical pharmacy, primary care, and mental health) within VHA. Implementation strategies were designed based on previous implementation studies (LEAP) and in partnership with VHA operations leaders (AD). Two-arm trials to compare AD alone or adding LEAP are combined with multi-phase concurrent nested mixed methods process evaluations to ensure valued and much-needed learning, regardless of trial outcomes [81].

Our two implementation strategies each target a different level: AD intervenes with individuals to build capability and motivation for optimizing personal work processes while LEAP intervenes with teams to build capability and motivation for optimizing broader clinical processes. Using both may provide a “winning” multi-level strategy where one builds on the strengths of the other for lasting change.

Our overarching goal is sustainability. Our approaches and measures draw on sustainability science that points to the need to engage clinicians and staff in ongoing optimization or QI. Engaging frontline teams and individuals in continued practice optimization can often feel like an uphill battle. The "gravity" we have encountered in our past work [70], reinforced in findings by others, is a lack of time [86,87,88,89]. On the other hand, when mission and values translate into aligned priorities up and down the system, it is motivating, even thrilling, to work within communities (teams and coalitions within organizations) and to be part of something larger [87, 88]: serving Veterans and making a difference. There is no silver bullet or magic solution for moving individuals or teams into this space, but our approach of making small changes that are feasible within demanding clinical settings has the potential to coalesce forces for large-scale positive impact [7]. By increasing the visibility of learnings and successes, one individual or one team can make a positive impact. We aim to help shift power to these agents for change by focusing on small, doable, incremental changes that add up to significant impact over time. Especially because forces outside the control of our strategies are at play, our multi-phase mixed methods evaluation approach is essential. The combination of qualitative data informing or explaining quantitative findings will help ensure we generate learnings and insights that benefit all stakeholders.

This QI program has limitations. Our primary comparisons are at 18 months and at 2 and 3 years post-baseline; over such a long window, significant secular impacts are increasingly common (e.g., pandemic, flood, fire), and the causal pathway between intervention and outcomes is not always clear. One certain disruption is VHA's migration to a new EHR system planned during the trial period, which may affect the availability and reliability of administrative data and may limit clinics' and individuals' ability to engage fully in optimization as they wrestle with learning a new way of working. However, we have built in multiple dimensions of measures (employee-focused, system-focused, and patient-focused), along with qualitative and process data, designed to help tailor support and to monitor and explain findings, for example, which settings achieved the highest impact with which combination of implementation strategies. We have also included a secondary analysis with matched controls to factor out systemwide disruptions or trends.

The landscape is fast-moving with respect to intervention options. For example, for CBTI, computer-based CBTI, group-based treatment, and a briefer version of the full CBTI model are all quickly building evidence and are appealing in their ability to ease pressures given the limited number of providers trained in CBTI. VHA leaders are also paying increasingly close attention to the need for CBTI and may implement new policies to motivate its use as a first-line treatment for insomnia, e.g., adding alerts that recommend CBTI whenever a provider tries to order a sleep medication. Each of these approaches may have its own champions at various levels within the organization, with their own preferences and partners for implementation. Thus, our team will remain open to the best approach for as long as possible before launching that trial. This reality underscores the importance of remaining agile and responsive as we strengthen partner relationships for successful trial conduct and in-depth evaluation. As embedded researchers engaged in systemwide QI, we recognize the need to align with system priorities [90] and to be attuned to findings that will best improve care for patients [91].

Availability of data and materials

Data will be available upon request after completion of the study.



Abbreviations

AD: Academic detailing
AGS: American Geriatrics Society
CBTI: Cognitive Behavioral Therapy for Insomnia
CE: Continuing education
CFIR: Consolidated Framework for Implementation Research
CNS: Central nervous system
DoD: Department of Defense
DOAC: Direct oral anticoagulant
DSF: Dynamic Sustainability Framework
EBP: Evidence-based practice
EHR: Electronic health record
FDA: Food and Drug Administration
GEE: Generalized estimating equations
GLMM: Generalized linear mixed model
HEDIS: Healthcare Effectiveness Data and Information Set
IHI: Institute for Healthcare Improvement
LEAP: Learn. Engage. Act. Process.
MIDAS: Maintaining Implementation through Dynamic Adaptations
MOOC: Massive Open Online Course
NaRCAD: National Resource Center for Academic Detailing
NSAID: Non-steroidal anti-inflammatory drug
OMHSP: Office of Mental Health Services and Suicide Prevention
pCAT: pragmatic Context Assessment Tool
PIM: Potentially inappropriate medication
PPI: Proton pump inhibitor
QUERI: Quality Enhancement Research Initiative
QI: Quality improvement
TMS: Talent Management System
VA: Veterans Affairs
VA MedSAFE: VA Center for Medication Safety
VHA: Veteran Health Administration
VIONE: Vital, Important, Optional, Not indicated, and Every medication has a specific indication for use
VISN: Veterans Integrated Services Network


  1. Meaney M, Pung C. McKinsey global results: Creating organizational transformations. McKinsey Q. 2008;7:1–7.

    Google Scholar 

  2. Rafferty AE, Jimmieson NL, Armenakis AA. Change readiness: A multilevel review. J Manag. 2013;39:110–35.

    Google Scholar 

  3. Kilbourne AM, Elwy AR, Sales AE, Atkins D. Accelerating Research Impact in a Learning Health Care System: VA’s Quality Enhancement Research Initiative in the Choice Act Era. Med Care. 2017;55(Suppl 7):S4–12.

    Article  PubMed  Google Scholar 

  4. Miake-Lye I, Mak S, Lam CA, Lambert-Kerzner AC, Delevan D, Olmos-Ochoa T, et al. Scaling Beyond Early Adopters: a Content Analysis of Literature and Key Informant Perspectives. J Gen Intern Med. 2021;36:383–95.

    Article  PubMed  Google Scholar 

  5. Rogers E. Diffusion of Innovations. 5th ed. New York: Free Press; 2003.

    Google Scholar 

  6. Lennox L, Linwood-Amor A, Maher L, Reed J. Making change last? Exploring the value of sustainability approaches in healthcare: a scoping review. Health Res Policy Sys. 2020;18:120.

    Article  CAS  Google Scholar 

  7. Lennox L, Maher L, Reed J. Navigating the sustainability landscape: a systematic review of sustainability approaches in healthcare. Implement Sci. 2018;13:27.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  8. Lennox L, Doyle C, Reed JE, Bell D. What makes a sustainability tool valuable, practical and useful in real-world healthcare practice? A mixed-methods study on the development of the Long Term Success Tool in Northwest London. BMJ Open. 2017;7:e014417.

    Article  PubMed  PubMed Central  Google Scholar 

  9. Rostata-Pesola H, Olender L, Leon N, Aytaman A, Hoffman R, Pesola GR. Hepatitis C virus screening at a Veterans Administration Hospital in New York City. J Am Assoc Nurse Pract. 2020.

  10. Rozenberg-Ben-Dror K, Taylor JM, Chia L, Ruiz D, Himsel AS, Jacob DA, et al. Hepatocellular carcinoma surveillance utilizing a population management dashboard in the veterans affairs healthcare system. Hepatology. 2018;68:313A–4A.

    Article  Google Scholar 

  11. Peterson JF, Kripalani S, Danciu I, Harrell D, Marvanova M, Mixon AS, et al. Electronic Surveillance and Pharmacist Intervention for Vulnerable Older Inpatients on High-Risk Medication Regimens. J Am Geriatr Soc. 2014;62:2148–52.

    Article  PubMed  PubMed Central  Google Scholar 

  12. Bertsimas D, Kallus N, Weinstein AM, Zhuo YD. Personalized Diabetes Management Using Electronic Medical Records. Diabetes Care. 2017;40:210–7.

    Article  PubMed  Google Scholar 

  13. Carmichael JM, Meier J, Robinson A, Taylor J, Higgins DT, Patel S. Leveraging electronic medical record data for population health management in the Veterans Health Administration: Successes and lessons learned. Am J Health Syst Pharm. 2017;74:1447–59.

    Article  PubMed  Google Scholar 

  14. Burningham Z, Jackson GL, Kelleher J, Stevens M, Morris I, Cohen J, et al. The Enhancing Quality of Prescribing Practices for Older Veterans Discharged From the Emergency Department (EQUIPPED) Potentially Inappropriate Medication Dashboard: A Suitable Alternative to the In-person Academic Detailing and Standardized Feedback Reports of Traditional EQUIPPED? Clin Ther. 2020;42:573–82.

    Article  PubMed  Google Scholar 

  15. Fischer MJ, Kourany WM, Sovern K, Forrester K, Griffin C, Lightner N, et al. Development, implementation and user experience of the Veterans Health Administration (VHA) dialysis dashboard. BMC Nephrol. 2020;21:136.

    Article  PubMed  PubMed Central  Google Scholar 

  16. Lau MK, Bounthavong M, Kay CL, Harvey MA, Christopher MLD. Clinical dashboard development and use for academic detailing in the U.S. Department of Veterans Affairs. J Am Pharm Assoc. 2019;59:S96-S103.e3.

    Article  Google Scholar 

  17. Oliva EM, Bowe T, Tavakoli S, Martins S, Lewis ET, Paik M, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14:34–49.

    Article  PubMed  Google Scholar 

  18. Twohig PA, Rivington JR, Gunzler D, Daprano J, Margolius D. Clinician dashboard views and improvement in preventative health outcome measures: a retrospective analysis. BMC Health Serv Res. 2019;19:475.

    Article  PubMed  PubMed Central  Google Scholar 

  19. Agency for Healthcare Research and Quality (AHRQ). Adapting to a Changing Evidence Environment: The EvidenceNOW Key Driver Diagram. Agency for Healthcare Research and Quality n.d. (Accessed 12 May 2021).

  20. Pan American Health Organization (PATH). Immunization Data: Evidence for Action. A Realist Review of What Works to Improve Data Use for Immunization, Evidence from Low- and MiddleIncome Countries. Seattle: PATH; Washington, DC: PAHO; 2019.

    Google Scholar 

  21. Dowding D, Randell R, Gardner P, Fitzpatrick G, Dykes P, Favela J, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform. 2015;84:87–100.

    Article  PubMed  Google Scholar 

  22. Khezrian M, McNeil CJ, Murray AD, Myint PK. An overview of prevalence, determinants and health outcomes of polypharmacy. Therapeut Adv Drug Saf. 2020;11:204209862093374.

    Article  Google Scholar 

  23. Peel N, Runganga M, Hubbard R. Multiple medication use in older patients in post-acute transitional care: a prospective cohort study. CIA. 2014;1453.

  24. Masnoon N, Shakib S, Kalisch-Ellett L, Caughey GE. What is polypharmacy? A systematic review of definitions. BMC Geriatr. 2017;17:230.

    Article  PubMed  PubMed Central  Google Scholar 

  25. Maher RL, Hanlon J, Hajjar ER. Clinical consequences of polypharmacy in elderly. Expert Opin Drug Saf. 2014;13:57–65.

    Article  PubMed  Google Scholar 

  26. Salazar JA, Poon I, Nair M. Clinical consequences of polypharmacy in elderly: expect the unexpected, think the unthinkable. Expert Opin Drug Saf. 2007;6:695–704.

    Article  PubMed  Google Scholar 

  27. Rankin A, Cadogan CA, Patterson SM, Kerse N, Cardwell CR, Bradley MC, et al. Interventions to improve the appropriate use of polypharmacy for older people. Cochrane Database Syst Rev. 2018.

  28. By the 2019 American Geriatrics Society Beers Criteria® Update Expert Panel. American Geriatrics Society 2019 Updated AGS Beers Criteria® for Potentially Inappropriate Medication Use in Older Adults: 2019 AGS BEERS CRITERIA® UPDATE EXPERT PANEL. J Am Geriatr Soc. 2019;67:674–94.

    Article  Google Scholar 

  29. Nelson MW, Downs TN, Puglisi GM, Simpkins BA, Collier AS. Use of a Deprescribing Tool in an Interdisciplinary Primary-Care Patient-Aligned Care Team. Sr Care Pharm. 2022;37:34–43.

    Article  PubMed  Google Scholar 

  30. VAntage Point Contributor. New best practices are being implemented across VA following “Shark Tank” competition. VAntage Point 2018. (Accessed 14 Apr 2021).

  31. Battar S, Watson Dickerson KR, Sedgwick C, Cmelik T. Understanding Principles of High Reliability Organizations Through the Eyes of VIONE: A Clinical Program to Improve Patient Safety by Deprescribing Potentially Inappropriate Medications and Reducing Polypharmacy. Fed Pract. 2019;36:564–8.

    PubMed  PubMed Central  Google Scholar 

  32. Bloomfield H, Linsky A, Bolduc J, Greer N, Naidl T, Vardeny O, et al. Deprescribing for Older Veterans: A Systematic Review. Washington, DC: Evidence Synthesis Program, Health Services Research and Development Service, Office of Research and Development, Department of Veterans Affairs; 2019.

    Google Scholar 

  33. Veterans Health Administration (VHA). VHA Directive 1108.16(1): Anticoagulation Therapy Management (2021).

  34. Dawwas GK, Dietrich E, Cuker A, Barnes GD, Leonard CE, Lewis JD. Effectiveness and Safety of Direct Oral Anticoagulants Versus Warfarin in Patients With Valvular Atrial Fibrillation: A Population-Based Cohort Study. Ann Intern Med. 2021:M20–6194.

  35. Kurlander JE, Gu X, Scheiman JM, Haymart B, Kline-Rogers E, Saini SD, et al. Missed opportunities to prevent upper GI hemorrhage: The experience of the Michigan Anticoagulation Quality Improvement Initiative. Vasc Med. 2019;24:153–5.

    Article  PubMed  Google Scholar 

  36. Rossier C, Spoutz P, Schaefer M, Allen A, Patterson ME. Working smarter, not harder: evaluating a population health approach to anticoagulation therapy management. J Thromb Thrombolysis. 2020.

  37. Barnes GD, Sippola E, Dorsch M, Errickson J, Lanham M, Allen A, et al. Applying population health approaches to improve safe anticoagulant use in the outpatient setting: the DOAC Dashboard multi-cohort implementation evaluation study protocol. Implement Sci. 2020;15:83.

    Article  PubMed  PubMed Central  Google Scholar 

  38. Valencia D, Spoutz P, Stoppi J, Kibert JL, Allen A, Parra D, et al. Impact of a Direct Oral Anticoagulant Population Management Tool on Anticoagulation Therapy Monitoring in Clinical Practice. Ann Pharmacother. 2019;53:806–11.

    Article  PubMed  Google Scholar 

  39. The Management of Chronic Insomnia Disorder and Obstructive Sleep Apnea Work Group. VA/DoD Clinical Practice Guidelines for the Management of Chronic Insomnia Disorder and Obstructive Aleep Apnea 2019.

    Google Scholar 

  40. Kim HM, Gerlach LB, Van T, Yosef M, Conroy DA, Zivin K. Predictors of Long-Term and High-Dose Use of Zolpidem in Veterans. J Clin Psychiatry. 2019;80.

  41. Tom SE, Wickwire EM, Park Y, Albrecht JS. Nonbenzodiazepine Sedative Hypnotics and Risk of Fall-Related Injury. Sleep. 2016;39:1009–14.

    Article  PubMed  PubMed Central  Google Scholar 

  42. Chen P-L, Lee W-J, Sun W-Z, Oyang Y-J, Fuh J-L. Risk of Dementia in Patients with Insomnia and Long-term Use of Hypnotics: A Population-based Retrospective Cohort Study. PLoS One. 2012;7:e49113.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  43. Hansen RN, Boudreau DM, Ebel BE, Grossman DC, Sullivan SD. Sedative Hypnotic Medication Use and the Risk of Motor Vehicle Crash. Am J Public Health. 2015;105:e64–9.

    Article  PubMed  PubMed Central  Google Scholar 

  44. Leatherman S, Berwick D, Iles D, Lewin LS, Davidoff F, Nolan T, et al. The business case for quality: case studies and an analysis. Health Aff (Millwood). 2003;22:17–30.

    Article  Google Scholar 

  45. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101:2059–67.

    Article  PubMed  PubMed Central  Google Scholar 

  46. Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21:S1–8.

    Article  PubMed  PubMed Central  Google Scholar 

  47. Curry LA, Krumholz HM, O’Cathain A, Clark VLP, Cherlin E, Bradley EH. Mixed Methods in Biomedical and Health Services Research. Circ Cardiovasc Qual Outcomes. 2013;6:119–23.

    Article  PubMed  PubMed Central  Google Scholar 

  48. Bush PL, Pluye P, Loignon C, Granikov V, Wright MT, Pelletier JF, et al. Organizational participatory research: a systematic mixed studies review exposing its extra benefits and the key factors associated with them. Implement Sci. 2017;12:119.

    Article  PubMed  PubMed Central  Google Scholar 

  49. Buttigieg SC, Pace A, Rathert C. Hospital performance dashboards: a literature review. J Health Organ Manag. 2017;31:385–406.

    Article  PubMed  Google Scholar 

  50. Helminski D, Kurlander JE, Renji AD, Sussman JB, Pfeiffer PN, Conte ML, et al. Dashboards in Health Care Settings: Protocol for a Scoping Review. JMIR Res Protoc. 2022;11:e34894.

    Article  PubMed  PubMed Central  Google Scholar 

  51. Smart MH, Mandava MR, Lee TA, Pickard AS. Feasibility and acceptability of virtual academic detailing on opioid prescribing. Int J Med Inform. 2021;147:104365.

    Article  PubMed  Google Scholar 

  52. Baldwin L-M, Fischer MA, Powell J, Holden E, Tuzzio L, Fagnan LJ, et al. Virtual Educational Outreach Intervention in Primary Care Based on the Principles of Academic Detailing. J Contin Educ Health Prof. 2018;38:269–75.

    Article  PubMed  PubMed Central  Google Scholar 

  53. Kennedy AG, Regier L, Fischer MA. Educating community clinicians using principles of academic detailing in an evolving landscape. Am J Health Syst Pharm. 2021;78:80–6.

  54. Chan WV, Pearson TA, Bennett GC, Cushman WC, Gaziano TA, Gorman PN, et al. ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol. 2017;69:1076–92.

  55. Kamarudin G, Penm J, Chaar B, Moles R. Educational interventions to improve prescribing competency: a systematic review. BMJ Open. 2013;3:e003291.

  56. Ostini R, Hegney D, Jackson C, Williamson M, Mackson JM, Gurman K, et al. Systematic review of interventions to improve prescribing. Ann Pharmacother. 2009;43:502–13.

  57. O’Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007:CD000409.

  58. Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. Wiley; 2009.

  59. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50:179–211.

  60. Liang L, Bernhardsson S, Vernooij RWM, Armstrong MJ, Bussières A, Brouwers MC, et al. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review. Implement Sci. 2017;12:26.

  61. Miller WR, Moyers TB. Motivational interviewing and the clinical science of Carl Rogers. J Consult Clin Psychol. 2017;85:757–66.

  62. Fischer F, Lange K, Klose K, Greiner W, Kraemer A. Barriers and Strategies in Guideline Implementation-A Scoping Review. Healthcare (Basel). 2016;4.

  63. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51.

  64. Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Transl Behav Med. 2017;7:233–41.

  65. Damschroder LJ, Reardon CM, AuYoung M, Moin T, Datta SK, Sparks JB, et al. Implementation findings from a hybrid III implementation-effectiveness trial of the Diabetes Prevention Program (DPP) in the Veterans Health Administration (VHA). Implement Sci. 2017;12:94.

  66. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69:123–57.

  67. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

  68. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:42.

  69. Practical Improvement Science in Health Care: A Roadmap for Getting Results. n.d. Accessed 27 Apr 2022.

  70. Damschroder LJ, Yankey NR, Robinson CH, Freitag MB, Burns JA, Raffa SD, et al. The LEAP Program: Quality Improvement Training to Address Team Readiness Gaps Identified by Implementation Science Findings. J Gen Intern Med. 2021;36:288-95.

  71. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.

  72. Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50:445–59.

  73. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41:171–81.

  74. Rotenstein LS, Johnson AK. Taking Back Control—Can Quality Improvement Enhance The Physician Experience? 2020.

  75. Edwards ST, Marino M, Solberg LI, Damschroder L, Stange KC, Kottke TE, et al. Cultural And Structural Features Of Zero-Burnout Primary Care Practices: Study examines features of primary care practices where physician burnout was reported to be zero. Health Aff. 2021;40:928–36.

  76. Goldberg DG, Soylu TG, Kitsantas P, Grady VM, Elward K, Nichols LM. Burnout among Primary Care Providers and Staff: Evaluating the Association with Practice Adaptive Reserve and Individual Behaviors. J Gen Intern Med. 2021.

  77. Edmondson AC. Teaming: how organizations learn, innovate, and compete in the knowledge economy. San Francisco: Jossey-Bass Pfeiffer; 2014.

  78. Perla RJ, Provost LP, Parry GJ. Seven propositions of the science of improvement: exploring foundations. Qual Manage Healthc. 2013;22:170–86.

  79. Hayes RJ, Moulton LH. Cluster randomised trials. Boca Raton: Chapman & Hall/CRC; 2009.

  80. Campbell MJ, Walters SJ. How to design, analyse and report cluster randomised trials in medicine and health related research. Chichester: Wiley; 2014.

  81. Creswell JW, Plano Clark VL, Gutmann ML, Hanson WE. An expanded typology for classifying mixed methods research into designs. In: The Mixed Methods Reader. Chichester: Sage Publishing; n.d.

  82. Averill JB. Matrix Analysis as a Complementary Analytic Strategy in Qualitative Inquiry. Qual Health Res. 2002;12:855–66.

  83. McMullen CK, Ash JS, Sittig DF, Bunce A, Guappone K, Dykstra R, et al. Rapid Assessment of Clinical Information Systems in the Healthcare Setting: An Efficient Method for Time-pressed Evaluation. Methods Inf Med. 2011;50:299–307.

  84. Gehlert E, Jacobs J, Barnett P. Costing Methods Used in VA Research, 1980-2012: Technical Report 32; 2016.

  85. Gold HT, McDermott C, Hoomans T, Wagner TH. Cost data in implementation science: categories and approaches to costing. Implement Sci. 2022;17:11.

  86. Azevedo KJ, Gray CP, Gale RC, Urech TH, Ramirez JC, Wong EP, et al. Facilitators and barriers to the Lean Enterprise Transformation program at the Veterans Health Administration. Health Care Manage Rev. 2020; Publish Ahead of Print.

  87. Bradley EH, Brewster AL, McNatt Z, Linnander EL, Cherlin E, Fosburgh H, et al. How guiding coalitions promote positive culture change in hospitals: a longitudinal mixed methods interventional study. BMJ Qual Saf. 2018;27:218–25.

  88. Lukas CV, Holmes SK, Cohen AB, Restuccia J, Cramer IE, Shwartz M, et al. Transformational change in health care systems: An organizational model. Health Care Manage Rev. 2007;32:309–20.

  89. Stephens TJ, Peden CJ, Pearse RM, Shaw SE, Abbott TEF, Jones EL, et al. Improving care at scale: process evaluation of a multi-component quality improvement intervention to reduce mortality after emergency abdominal surgery (EPOCH trial). Implement Sci. 2018;13:142.

  90. Damschroder LJ, Knighton AJ, Griese E, Greene SM, Lozano P, Kilbourne AM, et al. Recommendations for Strengthening the Role of Embedded Researchers to Accelerate Implementation in Health Systems: Findings from a State-of-the-Art (SOTA) Conference. Healthcare: J Deliv Sci Innov. 2021;8:100455.

  91. Frenk J. Balancing relevance and excellence: Organizational responses to link research with decision making. Soc Sci Med. 1992;35:1397–404.


Acknowledgements
We are grateful for the support from our partners at VA’s Pharmacy Benefits Management (PBM), especially for contributions by Dr. Fran Cunningham. We are also grateful for the statistical expertise provided by H. Myra Kim, PhD.


Funding
Funded by the Department of Veterans Affairs' QUERI program, Grant #20-025, Washington, DC, USA. The study funder had no role in study design and will have no role in the collection, management, analysis, or interpretation of data; the writing of the report; or the decision to submit the report for publication; nor will the funder have ultimate authority over any of these activities.

Author information

Authors and Affiliations



Contributions
LJD, JBS, PNP, JEK, MBF, CHR, JCL: Study design, concept, writing, critical revision. PS, MLDC, SB, KD, CS, AGWL, GDB, AML, CSU: Concept, critical revision. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Laura J. Damschroder.

Ethics declarations

Ethics approval and consent to participate

This project is a non-clinical research project, conducted primarily to produce information that expands the knowledge base of a scientific discipline or other scholarly field; it does not involve collecting patient data or performing analyses other than for the purposes of improving the quality of care. As such, this evaluation program, designed in support of VA quality improvement (QI), does not require IRB review and approval, per VA Central Office designation. Clinic points of contact will not be formally consented; their participation is completely voluntary with respect to level of engagement and responding to invitations to complete surveys and/or interviews. Patient-level data will be used in data analyses but will be anonymized as part of this QI-designated evaluation. All data collection, transfer, management, and analytics will follow protocols that ensure security and protection of confidentiality.

Consent for publication

Not applicable.

Competing interests

Dr. Geoffrey Barnes: consulting fees from Pfizer/Bristol-Myers Squibb, Janssen, Acelis Connected Health, and Boston Scientific. All other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Damschroder, L.J., Sussman, J.B., Pfeiffer, P.N. et al. Maintaining Implementation through Dynamic Adaptations (MIDAS): protocol for a cluster-randomized trial of implementation strategies to optimize and sustain use of evidence-based practices in Veteran Health Administration (VHA) patients. Implement Sci Commun 3, 53 (2022).
