
The Systems Analysis and Improvement Approach: specifying core components of an implementation strategy to optimize care cascades in public health

A Correction to this article was published on 27 March 2023


Abstract

Background

Healthcare systems in low-resource settings need simple, low-cost interventions to improve services and address gaps in care. Though routine data provide opportunities to guide these efforts, frontline providers are rarely engaged in analyzing them for facility-level decision making. The Systems Analysis and Improvement Approach (SAIA) is an evidence-based, multi-component implementation strategy that engages providers in use of facility-level data to promote systems-level thinking and quality improvement (QI) efforts within multi-step care cascades. SAIA was originally developed to address HIV care in resource-limited settings but has since been adapted to a variety of clinical care systems including cervical cancer screening, mental health treatment, and hypertension management, among others, and across a variety of settings in sub-Saharan Africa and the USA. We aimed to extend the growing body of SAIA research by defining the core elements of SAIA using established specification approaches and thus improve reproducibility, guide future adaptations, and lay the groundwork to define its mechanisms of action.

Methods

Specification of the SAIA strategy was undertaken over 12 months by an expert panel of SAIA researchers, implementing agents, and stakeholders using a three-round, modified nominal group technique approach to match core SAIA components to the Expert Recommendations for Implementing Change (ERIC) list of distinct implementation strategies. Core implementation strategies were then specified according to Proctor’s recommendations for specifying and reporting, followed by synthesis of data on related implementation outcomes linked to the SAIA strategy across projects.

Results

Based on this review and clarification of the operational definitions of the components of the SAIA, the four components of SAIA were mapped to 13 ERIC strategies. SAIA strategy meetings encompassed external facilitation, organization of provider implementation meetings, and provision of ongoing consultation. Cascade analysis mapped to three ERIC strategies: facilitating relay of clinical data to providers, use of audit and feedback of routine data with healthcare teams, and modeling and simulation of change. Process mapping matched to local needs assessment, local consensus discussions, and assessment of readiness and identification of barriers and facilitators. Finally, continuous quality improvement encompassed tailoring strategies, developing a formal implementation blueprint, cyclical tests of change, and purposefully re-examining the implementation process.

Conclusions

Specifying the components of SAIA provides improved conceptual clarity to enhance reproducibility for other researchers and practitioners interested in applying the SAIA across novel settings.


Background

The field of implementation science (IS) focuses on improving the delivery of evidence-based interventions (EBIs) to maximize their potential impact across heterogeneous settings. Implementation strategies, defined as methods or techniques employed to improve adoption, implementation, and sustainment of a clinical program or practice [1], are a major focus of the field because they guide how EBIs are effectively realized in practice settings. To build the evidence base on implementation strategies, including how well they work across varied contexts, researchers must explicitly define and report on their core (essential) elements.

Unclear terminology and inconsistent specification of implementation strategies have made replication of study findings in novel settings difficult [2,3,4,5]. Guidelines for naming, defining, and operationalizing implementation strategies have been proposed by Proctor et al. [2] to make explicit how others can use and adapt implementation strategies in novel settings, in order to further the science, disseminate more generalizable knowledge, and add conceptual clarity. These guidelines established seven dimensions of nomenclature: actor, action, action targets, temporality, dose, implementation outcomes addressed, and theoretical, empirical, or pragmatic justification.

The Expert Recommendations for Implementing Change (ERIC) are another effort to create “a common nomenclature for implementation strategy terms, definitions and categories that can be used to guide implementation research and practice” [5] across heterogeneous health service settings. The ERIC expert panel reached consensus on 73 implementation strategies [6], whose use helps improve conceptual clarity, relevance, and comprehensiveness when reporting on implementation strategies.

The Systems Analysis and Improvement Approach (SAIA) is an evidence-based, multi-component implementation strategy focused on optimizing service delivery cascades [7]. SAIA combines systems engineering tools into an iterative process to guide service delivery staff and managers to visualize treatment cascade drop-offs and prioritize areas for system improvements, identify modifiable organization/facility-level bottlenecks, and propose, implement, and assess the impact of modifications to improve system performance [8]. The core systems tools that SAIA harnesses are cascade analysis [9] (whereby routine data are used to assess how the client population passes through specific sequential steps, identify drop-off among clients, and prioritize steps for quality improvement efforts) [10], process mapping (where frontline service providers and managers collaboratively outline the steps that clients currently go through to achieve care in their specific organization/facility), and continuous quality improvement (CQI) [11,12,13,14], to guide service provider-led, data-driven quality improvement. This work is conducted through organization/facility-level learning meetings supported by external facilitators and held at set intervals, typically monthly, for a minimum of 6 months, to allow service providers to gain expertise in implementing SAIA to improve outcomes of their specific service. SAIA has been adopted across a range of geographic and clinical settings. The SAIA trial (PI: Sherr) tested SAIA through a 36-facility, cluster randomized trial of prevention of mother-to-child transmission of HIV services in three sub-Saharan African countries [8].
The intervention led to a 3.3-fold greater improvement in antiretroviral uptake for HIV-infected pregnant women (13.3% vs 4.1%; increasing to 77.7% in intervention and 65.9% in control facilities) and a roughly 17-fold greater improvement in early infant diagnosis in HIV-exposed infants (11.6% vs 0.7%; increasing to 46.1% in intervention and 32.0% in control facilities) [7].

While care cascades have gained increasing traction as a useful way to organize data to inform action, there are few implementation strategies that use and optimize care cascades and are tailored for low- and middle-income country (LMIC) and other low-resourced settings. Most strategies target a single step in a system, whereas SAIA focuses on the system as a whole. In addition, the use of CQI ensures the contextual relevance of the proposed solutions to identified bottlenecks. SAIA’s added value relative to CQI alone stems from the addition of tools that encourage systems thinking among frontline care providers, and of quantitative and qualitative prioritization techniques that draw on local data sources prior to CQI solution generation. Over the last decade, there has been a steady rise in funded research to adapt SAIA to novel clinical areas and geographic settings, and a growing demonstration of its broader effectiveness across a range of public health settings [15,16,17,18,19,20,21]. To extend this previously published research and ensure SAIA’s success, its adaptation and implementation should be guided by conceptually clear implementation strategies. In this short report, we comprehensively map the core components of the SAIA implementation strategy to the distinct strategies of the ERIC typology, specify each resultant ERIC implementation strategy according to Proctor’s guidelines for specifying and reporting implementation strategies, and describe implementation outcomes that link to the multi-component SAIA strategy. By empirically and theoretically justifying the inclusion of each component of SAIA, we hope to make clear that CQI must be data driven, and should occur by supporting care providers’ data use and their teams’ systems thinking and prioritization skills [9, 22].

Methods

Soliciting collective input to specify implementation strategies has been called for by leaders in the field of implementation science [2, 6, 23], in particular as the evidence base on strategies like SAIA is rapidly emerging. To capture structured feedback and support consensus building, the investigators convened a panel of 23 implementation scientists, researchers, implementing team members, and organizational stakeholders, all with direct experience implementing and/or evaluating SAIA. This panel included those experienced with SAIA’s adaptation and application across a range of clinical areas (including PMTCT [8, 18, 19], mental health [16], hypertension [17], family planning [15], pediatric HIV [20], cervical cancer, community-based naloxone distribution [24], juvenile justice health care services, and malaria) and countries (Mozambique, Kenya, USA, Democratic Republic of the Congo), whose direct implementation experience made them well-suited to synthesize best practices and priorities for further adaptation and spread.

Process

As pre-work, a smaller group of IS experts, engaged in the initial SAIA studies targeting the optimization of prevention of mother-to-child transmission of HIV (PMTCT) programs, convened to specify the components of the SAIA strategy (SAIA strategy meetings, cascade analysis, process mapping, CQI) and discuss the process by which a broader SAIA panel would be engaged. Subsequently, over 12 months, a modified nominal group technique (mNGT) approach [25] was employed to name, define, and operationalize SAIA core components using Proctor’s recommendations [2] and match them to ERIC implementation strategies. Three in-person meetings were held and multiple drafts reviewed to specify the actors, action, action targets, temporality, dose, implementation outcomes, and theoretical justification for each of the SAIA intervention components. Each component was presented independently, followed by interactive debate, to gain consensus on the most appropriate mapping to ERIC strategies. Broader conversation across clinical areas highlighted commonalities and differences, clarifying the essential SAIA components as well as broader linkages of this multi-component strategy to Proctor’s implementation outcomes [26]. Through consensus, the broader SAIA panel determined which Proctor implementation outcomes are effectively addressed through the use of the SAIA implementation strategy, a process that was informed by the published results of the various studies in peer-reviewed journals [7, 9, 15, 16, 19, 20, 27] and conferences [24, 28] as well as feedback from field-based research teams. For example, the more recent adaptation of SAIA to optimize community-based naloxone distribution in Oakland, California, provided a different setting from the remaining SAIA studies, which have been primarily health facility-based.
The mNGT sessions brought this issue to the group, and more inclusive language was adopted, replacing the terms patients and health care workers with clients and service providers. Implementation outcomes were considered for SAIA as a whole (not its individual components) because, to date, the multi-component strategy has been implemented holistically and the mechanisms of action and contributions of individual components have not been assessed. The SRQR reporting guideline checklist was deemed appropriate for this short report and is available as an additional file (Additional file 1).

Results

The components of the SAIA implementation strategy were named and operationally defined to guide further specification.

Component 1

A SAIA Strategy Meeting is defined as an assembly of frontline service providers convened by an external facilitator with expertise in SAIA. These meetings, typically 1–2 h long, usually occur monthly; the external facilitators provide ongoing support and/or feedback on SAIA implementation to the service delivery team, including by guiding teams to operationalize micro-interventions and assign tasks, and by providing feedback on the appropriateness of a proposed micro-intervention to the cascade step or bottleneck it is intended to address.

Component 2

Cascade Analysis is defined as use of a Cascade Analysis Tool (CAT, Additional file 2a) to analyze the implementing unit’s data, assess current performance of a multi-step care cascade, identify gaps, and quantify potential improvement to the system if a given step were optimized [9, 29, 30].

Component 3

Process Mapping is an exercise in which frontline service providers visualize, on paper, the service they provide from the perspective of the target client population and identify bottlenecks and inefficiencies. Using the resulting process map, service providers discuss modifiable system challenges and then pair the results with the CAT optimization to identify the step and/or target of future improvement efforts (Additional file 2b).

Component 4

CQI is defined as using the results of the CAT and process mapping to propose and prioritize potential micro-interventions (modifications to workflow or service organization that are within the frontline provider’s power to influence, also referred to as small tests of change) [31] targeting the specific cascade step and/or service bottleneck identified. The micro-intervention is operationalized in terms of its goal, scope, timeframe, specific tasks, and responsible party. Once micro-interventions are identified for testing, their impact is assessed through the plan-do-study-act cycle [32]. At each SAIA strategy meeting, the implementation fidelity and impact of the micro-intervention are assessed and the decision is made to adopt, adapt, or abandon it (Additional file 2c).

Further details on the operationalization of the SAIA components are described in an additional file (Additional file 3) and are also available at www.saia-strategy.com.

Each of these four SAIA components was mapped to distinct ERIC implementation strategies by the broader research team, followed by specification of their strategy-specific actor(s), action(s), action target(s), temporality, dose, and intended implementation outcome(s) [2]. All results are presented in Table 1.

Table 1 Specification of ERIC strategies contained within SAIA

SAIA strategy meetings

The action taken through SAIA strategy meetings is the creation of a space for discussing current processes, enabling frontline service providers to engage in data-driven problem solving with support from external facilitators. The targets of this action are the frontline service providers implementing SAIA, which may include those directly involved in the targeted service delivery or those tangentially impacted by the services (for example, laboratory or pharmacy services). The SAIA strategy meetings are the venue through which the three remaining components of the SAIA implementation strategy are shared and discussed. The timing and periodicity of meetings can be adjusted to match the timing of supervision visits, availability of routine data, or other driving considerations at the site level.

Cascade analysis

Cascade analysis is accomplished in SAIA through the CAT [9, 29]. Sequentially linked, summarized outcome data from the site over a preceding period (typically 1–3 months) are fed into the CAT, providing the team with a snapshot of current performance, including drop-offs across steps. The optimization function simulates the overall improvement to the system if a particular step were fully optimized (assuming other steps remain constant), thereby identifying the steps with the greatest potential for cascade gain. The action targets of the cascade work are the frontline providers, whose improved use of data to diagnose problems within the system bolsters their sense of ownership and accountability for overall performance. Cascade analysis is the initial step of SAIA and is typically revisited monthly to assess the impact of CQI’s cyclical tests of change; however, the frequency can be adjusted to match the frequency of data aggregation within the unit’s or system’s health management information system (HMIS).
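The optimization logic described above can be sketched in a few lines of code. The step names, counts, and function names below are illustrative assumptions for a hypothetical four-step cascade; they are not the actual CAT implementation.

```python
# Illustrative sketch of cascade analysis: compute transition (retention)
# rates between sequential steps, then simulate the cascade's final output
# if one step's transition were fully optimized, holding the rest constant.
# All step names and counts are hypothetical.

def drop_offs(counts):
    """Proportion retained at each transition between sequential steps."""
    return [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]

def optimize_step(counts, step):
    """Simulate final-step output if `step`'s transition were perfect
    (100% retention), assuming all other transitions remain constant."""
    rates = drop_offs(counts)
    rates[step] = 1.0
    total = counts[0]
    for r in rates:
        total *= r
    return total

# Hypothetical cascade: tested -> diagnosed -> linked to care -> retained
cascade = [1000, 800, 600, 450]

# Potential cascade gain from fully optimizing each step, used to
# prioritize which step to target with quality improvement efforts.
gains = {i: optimize_step(cascade, i) - cascade[-1] for i in range(3)}
best = max(gains, key=gains.get)
```

In this sketch, the step with the largest simulated gain is the one the team would prioritize for micro-interventions; the published CAT performs an analogous comparison on routine HMIS data.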

Process mapping

Process mapping facilitates the discussion and drawing of a physical map of how clients pass through services within the implementing unit, highlighting steps that are redundant, represent barriers, or otherwise do not add value to the individual [33, 34]. Through reviewing these maps, teams discuss and reach consensus on current service organization across all components of the system, while identifying target areas for improvement. The target is problem identification and prioritization tailored to the specific implementing organization or unit. Process mapping, like cascade analysis, also reinforces ownership and accountability for system performance among the frontline team. Process mapping is the second step of SAIA and can be understood as a two-part step: the first part is the physical mapping, and the second is the local consensus discussion conducted after the CAT and process maps are completed [19]. At a minimum, process mapping is conducted once, at the first SAIA strategy meeting, but may be revisited and reworked as often as monthly throughout the implementation period.

CQI

The specific actions for this component are fourfold. First, health care teams use data on systems performance and current processes to select a target step and propose a micro-intervention with potential to improve service delivery and outcomes. Once the broader goal and scope are agreed upon, the micro-intervention itself is delineated into discrete tasks, each clearly assigned to a specific team member or members for implementation and reporting at the subsequent meeting. Once the micro-intervention has been implemented for the aggregated data period, the CAT is repeated to determine whether to integrate the micro-intervention into routine processes. After assessment of the intervention’s impact on cascade performance, the team decides together whether to adopt it as part of routine practice, adapt it and test it for a second cycle, or abandon it. The action targets of CQI include current processes and service provision as well as communication among the health care team. All actions are repeated at every SAIA meeting, with the exception of the review of micro-interventions to adopt, adapt, or abandon, which begins only after the initial SAIA strategy meeting. See Table 2 for example micro-interventions from SAIA projects.

Table 2 Example micro-interventions from SAIA projects
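The adopt/adapt/abandon decision loop described above can be sketched as a minimal data structure and decision rule. The `MicroIntervention` fields and the improvement threshold are illustrative assumptions, not part of the published SAIA tooling, which leaves this judgment to the team and facilitator.

```python
# Minimal sketch of one plan-do-study-act cycle for a micro-intervention.
# Field names and the numeric threshold are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class MicroIntervention:
    goal: str
    target_step: int
    tasks: list            # (task, responsible party) pairs
    baseline: float        # cascade-step performance before the change
    result: float = 0.0    # performance after one aggregated data period

def pdsa_decision(mi, improvement_threshold=0.05):
    """Return 'adopt', 'adapt', or 'abandon' after a PDSA cycle."""
    change = mi.result - mi.baseline
    if change >= improvement_threshold:
        return "adopt"     # integrate into routine practice
    if change > 0:
        return "adapt"     # promising; modify and test a second cycle
    return "abandon"       # no improvement; return to prioritization

# Hypothetical example: pre-printing lab request forms raised step
# performance from 60% to 70%, so the team would adopt the change.
mi = MicroIntervention(
    goal="Reduce missed early infant diagnosis sample orders",
    target_step=1,
    tasks=[("pre-print request forms", "clinic nurse")],
    baseline=0.60,
    result=0.70,
)
decision = pdsa_decision(mi)
```

In practice the "study" step is the repeated CAT run described above; this sketch only formalizes the final team decision for illustration.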

Implementation outcomes linked to SAIA

Implementation outcomes, defined as the effects of deliberate and purposive actions to implement new treatments, practices, or services, have three key functions: (1) they serve as indicators of implementation success, (2) they are proximal indicators of implementation processes, and (3) they are important intermediate outcomes [26, 35]. According to the broader SAIA research team, whose perspective was informed by the published evidence from the existing SAIA studies, the multi-component SAIA implementation strategy mapped to six of Proctor’s implementation outcomes [26]: acceptability, adoption, feasibility, fidelity, penetration, and sustainability. The team identified these implementation outcomes as relating to the EBI that SAIA was designed to optimize (e.g., in adapting and testing the effectiveness of SAIA for novel care cascades). However, as the focus of SAIA-related research moves from establishing SAIA’s effectiveness for novel care cascades to testing strategies to spread and/or sustain SAIA, the implementation outcomes of interest expand to include appropriateness and cost, and the focus shifts to improving implementation of SAIA itself as an evidence-based implementation strategy (as well as of the EBI under study) (see Table 1). Operational definitions within the context of SAIA, as well as tools and approaches to measure each outcome, are available in Additional file 3.

Discussion

Based on this review and clarification of the operational definitions of the components of SAIA, the panel of experts mapped the ERIC strategies to each of the four SAIA components. SAIA mapped to 13 distinct ERIC strategies and, as a multi-component implementation strategy, aimed to impact six implementation outcomes: acceptability, adoption, feasibility, fidelity, penetration, and sustainability.

Frontline service providers are actors in the context of some SAIA components and action targets in others. As implemented to date, SAIA relies on an external facilitator to convene meetings and guide teams through SAIA implementation. Fidelity to SAIA facilitation is tracked through routine monitoring, and its quality is assessed through periodic qualitative assessment using an implementation science determinants framework, such as the Consolidated Framework for Implementation Research (CFIR) [19, 27]. The completed SAIA trials used study nurses as facilitators. Ongoing SAIA trials are experimenting with other types of facilitators, including social workers, community health workers, mental health technicians, and medical doctors. Sustainability may require the external facilitator to eventually be phased out, with facilitation instead assigned directly from existing management structures, such as sub-national agencies already tasked with organization/facility oversight and support (an approach that was assessed in the scale-up study of SAIA for PMTCT [18]), or transitioned to a champion among the frontline service providers themselves.

SAIA is adaptable to a variety of care cascades and contexts. Our current work aims to facilitate future adaptations while maintaining reproducibility. Specific work exploring SAIA’s mechanisms of action (and the relative contributions of its individual components) is underway and will strengthen the generalizability of SAIA, including through the use of longitudinal structural equation modeling [36].

Of note, existing data on the service implementing the target EBI, which are key to data-driven systems-level thinking about current performance, varied across settings in availability and accessibility. This required some study teams to work with key stakeholders (Ministries of Health, among others) to introduce or expand data collection forms, or to develop creative ways to collate data across multiple sources. This is particularly critical for the cascade analysis component. Given that many settings in which SAIA is being implemented are transitioning from acute to chronic care systems, this is hardly surprising. Service providers are being tasked not just to generate and supply data ‘up the chain of command’ but to use those data to identify bottlenecks and generate solutions for their systems. Thus, the initial work of SAIA often addresses the perennial challenge of data use by frontline service providers for decision-making [37].

Conclusions

SAIA represents a promising approach to harnessing the systems-level knowledge of service providers and managers at the frontline of care, in both clinical and community settings. To ensure its successful and accurate translation to other clinical areas and geographic regions, the authors have built upon a growing body of SAIA research by detailing its core components and implementation strategies through use of established specification approaches. This work provides clear definitions of the SAIA components using an established taxonomy and maps the SAIA strategy to the implementation outcomes it may activate, in order to facilitate future adaptations and lay the groundwork for future work defining its mechanisms of action.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.


Abbreviations

CAT: Cascade Analysis Tool

CQI: Continuous quality improvement

EBI: Evidence-based intervention

ERIC: Expert Recommendations for Implementing Change

HCW: Health care worker

HMIS: Health management information system

IS: Implementation science

PMTCT: Prevention of mother-to-child transmission of HIV

QI: Quality improvement

SAIA: Systems Analysis and Improvement Approach

References

  1. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50:217–26.

    Article  PubMed  PubMed Central  Google Scholar 

  2. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

    Article  PubMed  PubMed Central  Google Scholar 

  3. Colquhoun H, Leeman J, Michie S, et al. Towards a common terminology: a simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. Implement Sci. 2014;9:51.

    PubMed  PubMed Central  Google Scholar 

  4. Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:40.

    Article  PubMed  PubMed Central  Google Scholar 

  5. Waltz TJ, Powell BJ, Chinman MJ, et al. Expert Recommendations for Implementing Change (ERIC): protocol for a mixed methods study. Implement Sci. 2014;9:39.

    Article  PubMed  PubMed Central  Google Scholar 

  6. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

    Article  PubMed  PubMed Central  Google Scholar 

  7. Rustagi AS, Gimbel S, Nduati R, et al. Implementation and operational research: impact of a systems engineering intervention on PMTCT service delivery in Cote d’Ivoire, Kenya, Mozambique: a cluster randomized trial. J Acquir Immune Defic Syndr. 2016;72:e68–76.

    Article  PubMed  PubMed Central  Google Scholar 

  8. Sherr K, Gimbel S, Rustagi A, et al. Systems analysis and improvement to optimize pMTCT (SAIA): a cluster randomized trial. Implement Sci. 2014;9:55.

    Article  PubMed  PubMed Central  Google Scholar 

  9. Wagner AD, Gimbel S, Asbjornsdottir KH, et al. Cascade analysis: an adaptable implementation strategy across HIV and non-HIV delivery platforms. J Acquir Immune Defic Syndr. 2019;82(Suppl 3):S322–S31.

    Article  PubMed  PubMed Central  Google Scholar 

  10. Subbaraman R, Nathavitharana RR, Mayer KH, et al. Constructing care cascades for active tuberculosis: a strategy for program monitoring and identifying gaps in quality of care. PLoS Med. 2019;16:e1002754.

    Article  PubMed  PubMed Central  Google Scholar 

  11. Ajzen I. The theory of planned behavior. Organizational behavior and human decision processes. 1991;50:197–211.

    Article  Google Scholar 

  12. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–8.

    Article  CAS  PubMed  Google Scholar 

  13. Jarvis P. Adult learning in the social context. 1st ed. London: Routledge; 2011.

    Google Scholar 

  14. Abela J. Adult learning theories and medical education: a review. Malta Med J. 2009;21:11–8.

    Google Scholar 

  15. Eastment MC, Wanje G, Richardson BA, et al. Results of a cluster randomized trial testing the systems analysis and improvement approach to increase HIV testing in family planning clinics. AIDS. 2022;36:225–35.

    Article  PubMed  Google Scholar 

  16. Fabian KE, Muanido A, Cumbe VFJ, et al. Optimizing treatment cascades for mental healthcare in Mozambique: preliminary effectiveness of the Systems Analysis and Improvement Approach for Mental Health (SAIA-MH). Health Policy Plan. 2021;35:1354–63.

    Article  PubMed  Google Scholar 

  17. Gimbel S, Mocumbi AO, Asbjornsdottir K, et al. Systems analysis and improvement approach to optimize the hypertension diagnosis and care cascade for PLHIV individuals (SAIA-HTN): a hybrid type III cluster randomized trial. Implement Sci. 2020;15:15.

    Article  PubMed  PubMed Central  Google Scholar 

  18. Sherr K, Asbjornsdottir K, Crocker J, et al. Scaling-up the Systems Analysis and Improvement Approach for prevention of mother-to-child HIV transmission in Mozambique (SAIA-SCALE): a stepped-wedge cluster randomized trial. Implement Sci. 2019;14:41.

    Article  PubMed  PubMed Central  Google Scholar 

  19. Gimbel S, Rustagi AS, Robinson J, et al. Evaluation of a systems analysis and improvement approach to optimize prevention of mother-to-child transmission of HIV using the consolidated framework for implementation research. J Acquir Immune Defic Syndr. 2016;72(Suppl 2):S108–16.

    Article  PubMed  PubMed Central  Google Scholar 

  20. Wagner AD, Augusto O, Njuguna IN, Gaitho D, Mburu N, Oluoch G, et al. Systems analysis and improvement approach to optimize the pediatric and adolescent HIV cascade (SAIA-PEDS): a pilot study. Implement Sci Commun. 2022.

  21. Lambdin BH, Zibbell J, Wheeler E, Kral AH. Identifying gaps in the implementation of naloxone programs for laypersons in the United States. Int J Drug Policy. 2018;52:52–5.

    Article  PubMed  Google Scholar 

  22. O'Neill SM, Hempel S, Lim YW, et al. Identifying continuous quality improvement publications: what makes an improvement intervention ‘CQI’? BMJ Qual Saf. 2011;20:1011–9.

    Article  PubMed  PubMed Central  Google Scholar 

  23. Lewis CC, Klasnja P, Powell BJ, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.

    Article  PubMed  PubMed Central  Google Scholar 

  24. Lambdin BH, Kral A, Wagner A, Wegner L, Sherr K. Optimizing naloxone distribution to prevent opioid overdose fatalities: results from piloting the systems analysis and improvement approach within syringe service programs. In: Proceedings from the 13th Annual Conference on the Science of Dissemination and Implementation. Washington, D.C.: Implementation Science; 2013. p. S-99.

    Google Scholar 

  25. Davies S, Romano PS, Schmidt EM, Schultz E, Geppert JJ, McDonald KM. Assessment of a novel hybrid Delphi and Nominal Groups technique to evaluate quality indicators. Health Serv Res. 2011;46:2005–18.

    Article  PubMed  PubMed Central  Google Scholar 

  26. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.

    Article  PubMed  Google Scholar 

  27. Beima-Sofie KWA, Soi C, Liu W, Tollefson D, Njuguna IN, Awino E, et al. Providing “a beam of light to see the gaps:” – determinants of implementation of the Systems Analysis and Improvement Approach applied to the pediatric and adolescent HIV cascade in Kenya. Implement Sci Commun. 2022;3:73.

    Article  PubMed  PubMed Central  Google Scholar 

  28. Crocker J, Agostinho M, Amaral F, Asbjornsdottir KA, Coutingo J, Cruz E, et al. Measuring the m in the RE-AIM framework: using a stepped wedge design to evaluate maintenance of the saia-scale PMTCT program post-external support in Mozambique. In: 13th Annual Conference on the Science of Implementation. Washington D.C.: Implementation Science; 2021. p. 49.

  29. Gimbel S, Voss J, Mercer MA, et al. The prevention of mother-to-child transmission of HIV cascade analysis tool: supporting health managers to improve facility-level service delivery. BMC Res Notes. 2014;7:743.

  30. Gimbel S, Aburri N, Zunt A, Nduati R. PCAT; 2018.

  31. How to Improve. https://www.ihi.org/resources/Pages/HowtoImprove/ScienceofImprovementTestingChanges.aspx. Accessed 6 Dec 2022.

  32. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. 2nd ed. San Francisco: Jossey-Bass Publishers; 2009.

  33. Wagner AD, Crocker J, Liu S, et al. Making smarter decisions faster: Systems engineering to improve the global public health response to HIV. Curr HIV/AIDS Rep. 2019;16:279–91.

  34. Antonacci G, Reed JE, Lennox L, Barlow J. The use of process mapping in healthcare quality improvement projects. Health Serv Manage Res. 2018;31:74–84.

  35. Rosen A, Proctor EK. Distinctions between treatment outcomes and their implications for treatment evaluation. J Consult Clin Psychol. 1981;49:418–25.

  36. Cumbe VFJ, Muanido AG, Turner M, et al. Systems analysis and improvement approach to optimize outpatient mental health treatment cascades in Mozambique (SAIA-MH): study protocol for a cluster randomized trial. Implement Sci. 2022;17:37.

  37. Gimbel S, Chilundo B, Kenworthy N, Inguane C, Citrin D, Chapman R, Sherr K, Pfeiffer J. Donor data vacuuming. Med Anthropol Theory. 2018;5.

Acknowledgements

We would like to acknowledge our Ministry of Health and university collaborators in Mozambique, Kenya, and the Democratic Republic of the Congo, and additional stakeholders at Seattle King County Public Health, the Washington State Department of Adult and Juvenile Detention, as well as community-based partner organizations in Washington, California, Mozambique, Kenya, and the Democratic Republic of the Congo.

Funding

This work was supported by grants from the National Institutes of Health, including R01MH113435 (SAIA-SCALE), F32HD088204 and R34AI129900 (SAIA-PEDS), R21AI124399 (mPCAT), K24HD088229 (SAIA-FP), R21MH113691 (SAIA-MH), P30AI027757 (CFAR), R21DA046703 (SAIA-Naloxone), R01HL142412 (SAIA-HTN), 1UG3HL156390-01 (SCALE SAIA-HTN), R01HD0757 and R01HD0757-02S1 (SAIA), K08CA228761 (CCS SAIA), and T32AI070114 (UNC TIDE). Support was provided by the Implementation Science Core of the University of Washington/Fred Hutch Center for AIDS Research, an NIH-funded program under award number AI027757, which is supported by the following NIH Institutes and Centers: NIAID, NCI, NIMH, NIDA, NICHD, NHLBI, NIA, NIGMS, and NIDDK. This work was also supported by the Doris Duke Charitable Foundation and the Rita and Alex Hillman Foundation (SAIA-JUV), and the Thrasher Foundation (SAIA-MAL). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, the Doris Duke Charitable Foundation, the Rita and Alex Hillman Foundation, or the Thrasher Foundation.

Author information

Contributions

All authors participated in the specification process described in the manuscript. SG and KAS conceptualized and drafted the manuscript. KS, BHW, CH, RN, and AW conceptualized and substantially contributed to and revised the manuscript. KB, MB, JC, JC, VC, AD, ME, DG, BL, SP, OU, RSM, AOM, AM, INN, and GW substantively revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sarah Gimbel.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Sarah Gimbel and Kenneth Sherr are members of the Editorial Board of Implementation Science Communications. The authors declare that they have no other competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

SRQR Reporting guideline checklist.

Additional file 2.

a: Cascade Analysis Tool (CAT). b: Process mapping guide. c: Continuous Quality Improvement (CQI) action planning tool and instructions.

Additional file 3.

Operational Definitions of SAIA.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Gimbel, S., Ásbjörnsdóttir, K., Banek, K. et al. The Systems Analysis and Improvement Approach: specifying core components of an implementation strategy to optimize care cascades in public health. Implement Sci Commun 4, 15 (2023). https://doi.org/10.1186/s43058-023-00390-x


Keywords