
Systems Analysis and Improvement Approach to optimize the pediatric and adolescent HIV Cascade (SAIA-PEDS): a pilot study



Children and adolescents lag behind adults in achieving UNAIDS 95-95-95 targets for HIV testing, treatment, and viral suppression. The Systems Analysis and Improvement Approach (SAIA) is a multi-component implementation strategy previously shown to improve the HIV care cascade for pregnant women and infants. SAIA merits adaptation and testing to reduce gaps in the pediatric and adolescent HIV cascade.


We adapted the SAIA strategy components to be applicable to the pediatric and adolescent HIV care cascade (SAIA-PEDS) in Nairobi and western Kenya. We tested whether this SAIA-PEDS strategy improved HIV testing, linkage to care, antiretroviral treatment (ART), viral load (VL) testing, and viral load suppression for children and adolescents ages 0–24 years at 5 facilities. We conducted a pre-post analysis comparing the 6 months before and the 6 months after introduction of the implementation strategy (coupled with an interrupted time series sensitivity analysis), using abstracted routine program data to determine changes attributable to SAIA-PEDS.


Baseline levels of HIV testing and care cascade indicators were heterogeneous between facilities. Per facility, the monthly average number of children/adolescents attending outpatient and inpatient services eligible for HIV testing was 842; on average, 253 received HIV testing services, 6 tested positive, 6 were linked to care, and 5 initiated ART. Among those on treatment at the facility, an average of 15 had a VL sample taken and 13 had suppressed VL results returned.

Following the SAIA-PEDS training and mentorship, there was no substantial or significant change in the ratio of HIV testing (RR: 0.803 [95% CI: 0.420, 1.532]) and linkage to care (RR: 0.831 [95% CI: 0.546, 1.266]). The ratio of ART initiation increased substantially and trended towards significance (RR: 1.412 [95% CI: 0.999, 1.996]). There were significant and substantial improvements in the ratio of VL tests ordered (RR: 1.939 [95% CI: 1.230, 3.055]) but no substantial or significant change in the ratio of VL results suppressed (RR: 0.851 [95% CI: 0.554, 1.306]).


The piloted SAIA-PEDS implementation strategy was associated with increases in health system performance for indicators later in the HIV care cascade, but not for HIV testing and treatment indicators. This strategy merits further rigorous testing for effectiveness and sustainment.



Children and adolescents living with HIV lag behind adults in reaching the UNAIDS 95-95-95 goals for HIV testing, HIV treatment, and viral load (VL) suppression [1]. While World Health Organization guidelines recommend universal HIV testing for children and adolescents seeking outpatient and inpatient care, as well as immediate test-and-treat strategies for all ages, in 2020, just 53% of children living with HIV globally were receiving life-saving antiretroviral therapy (ART), compared to 68% of adults [1]. Maintaining ART adherence and virologic suppression requires continued use of often unpalatable pediatric formulations of medications and regular visits to health facilities for monitoring.

Barriers to HIV care occur at all steps of the cascade, at both the systems and individual levels. At the individual level, children rely heavily on caregivers, while adolescents have emerging autonomy; both face challenges navigating health systems and maintaining engagement and adherence in chronic care. At the systems level, health care providers and clients face staffing shortages, stock-outs of essential supplies, growing responsibility for serving large populations, unclear clinic flow, and documentation and tracking systems that allow gaps in coverage. This combination of individual- and systems-level barriers yields sub-optimal service delivery for children and adolescents [2,3,4,5]. While many strategies focus on individual barriers, fewer have addressed systems-level barriers and health system organization.

The Systems Analysis and Improvement Approach (SAIA) is a multi-component implementation strategy to address health systems organization; SAIA combines three systems engineering [6] tools—flow mapping, cascade analysis [7], and continuous quality improvement—to identify and prioritize gaps in service delivery and identify and test micro-interventions to optimize system performance. SAIA was effective in reducing drop offs in the prevention of mother-to-child transmission of HIV (PMTCT) cascade, specifically in improving ART coverage and infant HIV testing [8]. This flexible implementation strategy has been adapted to different service delivery platforms [9,10,11,12]. Pediatric and adolescent HIV care systems share similar cascade steps, cadres of health care workers, and physical space transitions with PMTCT systems.

In this pilot study, we aimed to define the pediatric and adolescent HIV cascade, characterize the cascade in the absence of the implementation strategy, and pilot and measure the effect of the adapted SAIA-PEDS strategy in Kenya. We assessed the impact of SAIA-PEDS on pediatric and adolescent HIV testing, linkage to care, treatment initiation, VL monitoring, and VL suppression.


Study setting

This pre-post pilot was conducted between July 2017 and June 2018 at six government health facilities in Kenya: three in Nairobi County, one in Homa Bay County, one in Kisumu County, and one in Siaya County. The six facilities were purposively selected to represent diversity in size and level of services, with two County Hospitals, two sub-County hospitals, and two health centers. All facilities provided comprehensive HIV testing and care services, with VL samples sent to centralized laboratories for processing. All invited facilities initially agreed to participate, but due to delays in implementation, one facility in Homa Bay was excluded from analyses, leaving a final sample size of five facilities. During the year prior to the introduction of the implementation strategy, there were two nation-wide health care worker strikes and an initial and a repeat presidential election; these events have been documented to have negatively impacted service delivery across Kenya [13,14,15].

Ethical approval

This study was reviewed and approved by the University of Washington Institutional Review Board, the Kenyatta National Hospital Ethics and Research Committee, and the National Commission for Science, Technology, and Innovation (NACOSTI). Additionally, following ethical approval, the study was reviewed and approved by County and sub-County health offices, and further permission was sought from each facility’s medical superintendent and in-charge prior to facility engagement.

SAIA implementation strategy

SAIA consists of three systems engineering [6] tools that are utilized in a cyclical approach by frontline health care workers and managers to identify and prioritize gaps in service delivery and test micro-interventions to improve care delivery systems: cascade analysis tool [7], flow mapping, and continuous quality improvement [8, 16, 17].

Pediatric/adolescent cascade analysis tool (PedCAT)

The cascade analysis tool (CAT) [7] is a simple Excel-based simulation model with an optimization function. The CAT is populated with routine program data for a specific facility and automatically quantifies the drop-off at each step of the HIV cascade, as well as the additional number of individuals who would complete all steps of the cascade if each single step were individually optimized. The goal of the CAT is to quantify and prioritize gaps in service delivery and to allow frontline health care workers to access and interpret their own data.

The original CAT required adaptation to be applicable to the pediatric and adolescent HIV cascade, yielding the PedCAT. We conducted a physical walk-through of each pilot health facility to characterize health information registers, cards, and other data collection and reporting tools; observe patient flow; and ask each operator of the health system to describe the activities conducted at each step. Following this data mapping activity, an initial tool was created and presented to clinic managers and frontline health care workers to determine whether the service flow modeled in the tool reflected realistic flow patterns, made realistic assumptions, and was sufficiently simple for routine use; this process was similar to member checking in qualitative research. We conducted several rounds of revisions before a final PedCAT was agreed upon (Fig. 1).

Fig. 1

PedCAT component of the SAIA-PEDS implementation strategy with dummy data. The green cells are entered by health care workers using routine program data sources; the white cells are automatically calculated by the Excel sheet. The numbers in red represent the “cascade gain,” the number of individuals who would successfully complete all steps of the cascade if that step, and only that step, was fully optimized. The development and considerations of the PedCAT have been published elsewhere

Flow mapping

Flow mapping, also known as value stream mapping or process mapping, consists of frontline health care workers creating a visual map of their health system, drawing the sequential steps taken by clients, data, or samples; the goal of flow mapping is to identify system inefficiencies and bottlenecks and also visualize system reorganization [6, 18].

Continuous quality improvement (CQI)

CQI has a large body of effectiveness literature supporting its use in a range of settings [19,20,21], and there are diverse ways in which CQI is delivered. In this study, we utilized the Model for Improvement and "plan, do, study, act (PDSA)" cycles, in which health care worker teams address the following questions in a group setting: What are we trying to accomplish? How will we know a change is an improvement? What change can we make that will result in an improvement? Teams then Plan the details of a test of a micro-change, Do the micro-change, Study whether the micro-change impacted an identified indicator, and Act to adapt, adopt, or abandon that micro-change based on the indicator data [22].

Intended use of tools

The three SAIA-PEDS tools are intended for combined use in a cyclical way, with flexibility to more or less heavily utilize tools that health care workers find useful or burdensome in a given local setting.

Training and staffing of implementation strategy

Three study staff members were responsible for training frontline health care workers in the SAIA-PEDS tools; two of them were also responsible for periodic visits to the facilities to coach and mentor frontline health care workers in the use of the tools. The study staff members received intensive training in PedCAT interpretation, flow mapping, and CQI coaching, and prior to study activities, both mentoring staff members had experience in clinical care for children and adolescents in Kenya.

Frontline health care workers were trained together in a half-day offsite session; facility in-charges were responsible for selecting and recruiting at least one representative from each of the following service delivery areas: outpatient, inpatient, HIV testing services, HIV care clinic, and laboratory to attend the training. Training covered the basics of PedCAT interpretation, the basics of CQI with a practical exercise in “plan, do, study, act (PDSA),” and included creating a flow map of a facility’s patient flow. Following this half day training, study staff visited each facility for a facility-wide sensitization meeting, which covered the intent of the implementation strategy and allowed all facility staff to ask questions.

Schedule of follow-up visits and data collection at facilities

The intended schedule for coaching and mentorship visits by study staff to each facility was weekly for the first month, every 2 weeks for the following 2 months, and monthly for the final 3 months. The intended meeting members were the frontline health care workers trained in the initial session, though the in-charge could make substitutions due to staff turnover or transfer. During each 1–2-h coaching and mentorship meeting, study staff guided the facility team through reviewing their micro-changes using PDSA cycles, including the data that facility staff had collected on the indicators used to evaluate the micro-changes. The PedCAT and flow mapping tools were used as needed to identify and prioritize gaps and to brainstorm service flow reorganization.

Data sources and outcome definitions

We considered a range of routine data sources (described in detail elsewhere [7]) with the intent of using easily accessible and accurate data that allowed disaggregation of children (0–9 years), adolescents (10–19 years), and young adults (20–24 years). Ultimately, paper registers and electronic medical records were utilized; two data abstractors per facility were engaged to abstract data from paper registers or electronic medical records, depending on the facility’s data systems. We abstracted anonymous, individual-level, count data aggregated to the calendar day and age band (0–4, 5–9, 10–14, 15–19, and 20–24 years) during data collection. Count data were entered on tablets using Open Data Kit [23]. Daily count data were subsequently aggregated to the month during data cleaning.

Five outcome variables were assessed: HIV testing uptake: # children and adolescents who received HIV testing services (numerator)/# children and adolescents who presented to outpatient or inpatient departments (denominator); Linkage to care: # children and adolescents with new HIV care files (numerator)/# children and adolescents who were reactive in HIV testing (denominator); ART initiation: # children and adolescents starting ART (numerator)/# children and adolescents who were linked to care (denominator); VL monitoring: # children and adolescents with a VL sample collected (numerator)/# children and adolescents due for VL testing (denominator); VL suppression: # children and adolescents with VL < 1000 copies/mL (numerator)/# children and adolescents with VL samples taken (denominator).
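As a minimal sketch, the five indicator ratios defined above can be computed from monthly count data as follows. The field names are hypothetical (not the study's actual register variables), and the example values draw on the baseline monthly averages reported in the Results, with a hypothetical `vl_due` value:

```python
def monthly_ratios(m):
    """Compute the five study outcome ratios from a dict of monthly counts.
    Field names are hypothetical. Assumes non-zero denominators. Ratios may
    legitimately exceed 1.0 because numerators and denominators need not
    contain the same individuals (e.g., in-migration of children diagnosed
    at other facilities)."""
    return {
        "hiv_testing": m["tested"] / m["opd_ipd_eligible"],
        "linkage": m["linked"] / m["tested_positive"],
        "art_initiation": m["art_started"] / m["linked"],
        "vl_monitoring": m["vl_collected"] / m["vl_due"],
        "vl_suppression": m["vl_suppressed"] / m["vl_collected"],
    }

example_month = {
    "opd_ipd_eligible": 842, "tested": 253, "tested_positive": 6,
    "linked": 6, "art_started": 5,
    "vl_due": 12,  # hypothetical guideline-based denominator
    "vl_collected": 15, "vl_suppressed": 13,
}
print(monthly_ratios(example_month))
```

Note that `vl_monitoring` here is 15/12 = 1.25, illustrating how a ratio can exceed 1.0 without implying more than complete coverage.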

All numerator and denominator data were directly abstracted from registers with the exception of the number of children due for a VL sample, which was calculated as a monthly average based on the HIV care guidelines at the time, which indicated six-monthly VL monitoring during the first year of treatment, followed by annual VL monitoring. Of note, the individuals in the numerator and denominator of each outcome were not required to be the same individuals; this was not a longitudinal cohort. As a result, the ratios of numerator to denominator often exceeded one, particularly for indicators where substantial in-migration was common; for example, some facilities had substantial numbers of children diagnosed with HIV at other facilities linking to care at their facility for HIV care services. Conversely, the ratio of numerator to denominator cannot be accurately interpreted as proportions or absolute coverage because some groups of individuals may be systematically missing from denominators for data abstraction simplification; for example, HIV testing uptake denominators include only those children and adolescents accessing care at outpatient and inpatient facilities and would not include those seeking other services (e.g. family planning, specialty clinics). Further details are described elsewhere [7].
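The guideline-based VL-due denominator described above can be approximated as follows. This is an illustrative sketch of the stated logic (six-monthly VL testing during the first year on ART, annual testing thereafter, averaged evenly across 12 months), not the study's actual calculation, and the function and argument names are hypothetical:

```python
def expected_vl_due_per_month(n_first_year_on_art, n_established_on_art):
    """Approximate the fixed monthly number of VL tests due at a facility,
    per the guideline schedule described in the text: 2 tests/year for
    clients in their first year on ART, 1 test/year thereafter, spread
    evenly over 12 months (a hypothetical sketch, ignoring visit-schedule
    heterogeneity)."""
    return (2 * n_first_year_on_art + 1 * n_established_on_art) / 12

# e.g., 30 clients in their first year on ART and 120 established clients
print(expected_vl_due_per_month(30, 120))  # -> 15.0 tests due per month
```

Because this denominator is a fixed monthly average rather than an abstracted count, month-to-month variation in actual visit schedules is not captured, which is one reason the VL monitoring ratio can exceed 1.0.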

Statistical analysis

We considered the baseline period to be the 6 months prior to facility training in SAIA-PEDS (July 2017–December 2017) and the implementation strategy period to be the 6 months following the facility training (January 2018–June 2018). We conducted a simple pre-post analysis and interrupted time series analyses using linear mixed-effects models, including random intercepts and random slopes to account for health facility clustering. Model parameterization details are included in the Appendix. We fit a separate model for each of the five study outcomes. The presented average monthly counts are modeled values: geometric means across the 5 facilities derived from linear mixed-effects models of log-transformed values. Changes were considered substantial if they were 20% greater or 20% less than the null value (relative risks of ≥ 1.2 or ≤ 0.8). All analyses were conducted using Stata 16 (StataCorp. 2019. Stata Statistical Software: Release 16. College Station, TX: StataCorp LLC), and all plots were created using R (R Core Team, 2013).
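To illustrate the reported summary scale, the geometric means and a crude pre-post ratio can be computed from log-transformed monthly counts. This simple sketch omits the facility random intercepts and slopes used in the actual mixed-effects models, so it is not the study's estimator, only the arithmetic behind the reporting scale:

```python
import math

def geometric_mean(counts):
    """Exponentiated mean of log-transformed counts (the scale on which
    the mixed models operate); requires strictly positive counts."""
    logs = [math.log(c) for c in counts]
    return math.exp(sum(logs) / len(logs))

def crude_pre_post_rr(pre_counts, post_counts):
    """Crude relative ratio of post-period to pre-period geometric means.
    A sketch only: the study's models also adjust for facility clustering."""
    return geometric_mean(post_counts) / geometric_mean(pre_counts)

def is_substantial(rr):
    """The study's threshold: a change is 'substantial' if the relative
    risk is at least 20% above or below the null (>= 1.2 or <= 0.8)."""
    return rr >= 1.2 or rr <= 0.8

print(geometric_mean([10, 1000]))            # 100.0 (vs arithmetic mean 505)
print(crude_pre_post_rr([10, 10], [20, 20])) # 2.0
print(is_substantial(1.939))                 # True
```

The geometric mean's insensitivity to single large values (100 vs an arithmetic mean of 505 in the example) is why it is a natural summary for heterogeneous facility counts.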


Baseline indicators

Among the five facilities included in this evaluation analysis, baseline values of the five outcomes were heterogeneous over 6 months both in their numerator and denominator count data, as well as the ratio of the numerator to denominator. Particularly high monthly indicators often coincided with outreach activities or special focus initiatives.

Based on the pre-post analysis, per facility, the monthly average number of children/adolescents attending outpatient and inpatient services eligible for HIV testing was 842; on average, 253 received HIV testing services, 6 tested positive, 6 were linked to care, and 5 initiated ART. Among those on treatment at the facility, an average of 15 had a VL sample taken and 13 had suppressed VL results returned.

Based on the interrupted time series analysis, the overall baseline temporal trend in the ratio of each indicator among the five facilities was neither significantly increasing nor decreasing (HIV testing ratio RR: 0.998 [95% CI: 0.860, 1.158]; linkage to care ratio RR: 1.04 [95% CI: 0.880, 1.235]; ART initiation ratio RR: 0.990 [95% CI: 0.854, 1.148]; VL ordering ratio RR: 1.074 [95% CI: 0.908, 1.271]; VL suppression ratio RR: 1.005 [95% CI: 0.892, 1.132]) (Table 1). Due to the negligible baseline temporal trends in the interrupted time series analysis, we present the simple pre-post analysis as the primary results and the interrupted time series (ITS) as secondary model results.

Table 1 Regression analyses using pre-post model and interrupted time series

Change concepts tested

During the implementation strategy period, a total of 17 change concepts were tested across the five facilities, ranging from two to four changes per facility (Table 2). Eight changes focused on flow reorganization, three on newly utilizing checklists or registers, and one each on patient navigation, visual cues for providers, job aids for providers, and expanded hours of operation; two had insufficient details to be categorized. Flow reorganization changes addressed waiting time barriers and unclear patient pathways; checklists, registers, visual cues, and job aids for providers addressed inconsistent care provision; patient navigation addressed unclear patient flows within complex systems; and expanded hours of operation addressed incompatibility between patient availability and service provision times. The majority (nine) of change concepts focused on HIV testing and counseling services, with three focused on linkage to care, three on HIV care and treatment, and one on VL monitoring; one had insufficient details to be categorized (Table 2). There was moderate alignment between the steps targeted through change concepts and the steps identified as the largest gaps in the facility PedCATs. Across the 6 intervention months at the 5 facilities, the steps most commonly identified as high priority for improvement were HIV testing (13 occurrences), VL suppression (12 occurrences), VL ordering (8 occurrences), ART initiation (1 occurrence), and linkage to care (1 occurrence). Among the 17 change concepts tested, 14 were adopted for further use and 3 were abandoned.

Table 2 Change concepts tested at each facility

Change associated with strategy

Using a pre-post design comparing the 6 months prior to the implementation strategy to the 6 months of the implementation strategy period, there was no substantial or significant change in the ratio of HIV testing (RR: 0.803 [95% CI: 0.420, 1.532]) and linkage to care (RR: 0.831 [95% CI: 0.546, 1.266]). The ratio of ART initiation increased substantially and trended towards significance (RR: 1.412 [95% CI: 0.999, 1.996]). There were significant and substantial improvements in the ratio of VL tests ordered (RR: 1.939 [95% CI: 1.230, 3.055]) but no substantial or significant change in the ratio of VL results suppressed (RR: 0.851 [95% CI: 0.554, 1.306]) (Table 1; Fig. 2).

Fig. 2

Pre-post plots of point estimates (bars) and 95% confidence intervals (gray whiskers) of change in indicators (numerator, denominator, and ratio) for children and adolescents ages 0–24 years. Dotted black line shows null value of relative risk of 1.0

Despite no change in the ratio of HIV testing uptake, both the numerator (those who completed HIV testing) and the denominator (those who presented to in- and out-patient clinics and were eligible for HIV testing) increased substantially during the implementation strategy period, a change that was significant in the denominator only (HIV testing RR: 1.449 [95% CI: 0.627, 3.347]; in- and out-patient clients eligible RR: 1.819 [95% CI: 1.346, 2.457]). In contrast, there was a relatively small change in the numerator and no change in the denominator for linkage to care (RR: 0.841 [95% CI: 0.621, 1.139]; RR: 1.055 [95% CI: 0.659, 1.691], respectively). The substantial but not significant change in the ratio of ART initiation was driven by the increase in the numerator of those who initiated ART (RR: 1.264 [95% CI: 0.850, 1.880]) and the relatively small decrease in those testing positive (RR: 0.841 [95% CI: 0.621, 1.139]) (Table 1; Fig. 3).

Fig. 3

Interrupted time series plots of counts and ratios of indicators for children and adolescents ages 0–24 years; facility specific data in gray solid lines; fitted model result point estimates (in solid black line) and 95% confidence intervals (dotted black line); implementation strategy started at vertical dotted line in January 2018. A Number of outpatient and inpatient clients, B ratio of HIV testing uptake: eligible outpatient and inpatient visits (ineligible removed from denominator), C number with HIV testing services (HTS) completed, D number testing HIV positive, E ratio of clients linked to care: those testing positive, F number linked to care, G ratio of clients initiating antiretroviral therapy (ART) same day among those linked to care, H number initiating ART, I ratio of clients with viral load (VL) ordered: those with VL due (fixed value per month), J number with VL ordered, K ratio of clients with suppressed VL: those with VL ordered, L number with suppressed VL

The VL ordering and VL suppression outcomes were strongly influenced by one facility with particularly high values. With that facility removed, the magnitude of the change in the ratio of VL ordered was attenuated and only trended towards statistical significance (ratio VL ordered RR: 1.408 [95% CI: 0.981, 2.021]), driven by increases in the numerator paired with a constant denominator, while the magnitude of the change in the ratio of VL suppressed was relatively unchanged. Both the numerator (VL suppressed) and denominator (VL ordered) increased substantially and significantly, improvements that were retained when the high-value facility was removed (Table 1). In site-specific sub-analyses, facility 1 (the only facility that tested a change concept focused on VL monitoring) drove the observed association, with the largest or second-largest effect sizes in the VL suppression numerator and denominator.

In the interrupted time series analysis, there were no significant improvements or reductions in the step change at the introduction of the implementation strategy, and no clear pattern of improvements or reductions in the slope change during the implementation strategy period (Table 1; Fig. 3).


In this five-facility pilot study of an adapted version of the SAIA multi-component implementation strategy, we assessed the impact of the implementation strategy on pediatric and adolescent HIV testing and treatment cascade indicators. We observed heterogeneous results: during the implementation strategy period, we observed substantial and significant increases in the number of individuals seeking inpatient and outpatient services, the number of viral load samples ordered, the number of viral loads that were suppressed, and the ratio of viral loads ordered compared to viral loads due. The implementation strategy was associated with substantial, but only trending towards significant, improvements in the ratio of those who initiated ART compared to those testing HIV positive. The implementation strategy was not associated with substantial or significant improvements in HIV testing or linkage to care.

The ratio of HIV testing compared to those seeking care, and the numbers and ratios of those linking to care, were not observed to improve with the implementation strategy, with slight and non-significant decreases in the ratios of testing and linkage to care. This was surprising, given the large number of change concepts that focused on this step of the cascade. There was strong heterogeneity between facilities in eligibility assessment for HIV testing services: some facilities routinely screened for eligibility while others did not, and some used a Ministry of Health register with certain criteria while others used implementing partner registers with different criteria. One facility tested a change concept of using a checklist to begin eligibility screening during the implementation strategy period.

Importantly, a large health care worker strike was resolved at the same time that the implementation strategy was introduced. We consequently observed a substantial and significant increase in the number of individuals seeking inpatient and outpatient services, paired with an increase in HIV testing completion that did not quite keep pace. It is likely that the implementation strategy resulted in an increase in HIV testing coverage, but that this change was proportionally overshadowed by the massive post-strike increase in demand for services overall. Given the data collection simplifications necessary to make the PedCAT tool feasible, including abstracting only the counts of individuals seeking inpatient and outpatient services and not specialty clinics or other entry points within a health center, it is not possible to calculate a true estimate of HIV testing coverage. Had this been feasible, we could have assessed whether this indicator began with high coverage and had little opportunity to increase, as was noted in the original SAIA trial [8]. Other studies have noted that quality improvement can increase HIV testing coverage substantially [24]; it is unlikely that this implementation strategy would be detrimental to these services.

Linkage to care was particularly challenging to assess and heterogeneous between facilities due to issues of migration, lagged windows for linkage, and duplicate data sources. Individuals living with HIV may prefer to link to HIV care at a different clinic due to stigma [25]; this in- and out-migration of individuals is both appropriate to meet patient care needs and complicated for data systems due to the lack of a nation-wide unique identifier system. Facilities with substantially more in-migration may artificially appear to perform better than facilities with more out-migration. Unlike HIV testing, which is assessed same-day cross-sectionally, linkage to care is often operationalized with a 1-month window (as was done in this study), meaning that individuals may be diagnosed with HIV and link to care within separate month windows for data aggregation. Finally, linkage to care is often the first time that health facilities using HIV care electronic medical records enter a patient into their databases. One facility in our study captured linkage to care numbers both in its electronic medical records system and in paper registers, sources that were inconsistent with one another. From a pragmatic standpoint, the numerator for this indicator was one of the most challenging to abstract from routine program data sources due to the manual assessment of the 1-month diagnosis-to-linkage window and the multiple data sources.

ART initiation within 1 month of linkage to care was substantially, but not significantly, higher in the implementation strategy period than the baseline period. There were three change concepts tested that focused on this cascade step (Table 2). Our findings were similar to the original SAIA trial, in which the implementation strategy was associated with a three-fold non-significant improvement in ART initiation for mothers [8]. While not significant, this is promising for a pilot study effect, given the massive impact that prompt ART has on child mortality and morbidity reduction when given prior to symptomatic disease [26,27,28].

The positive association of the implementation strategy with the number of VL ordered, the ratio of VL ordered compared to VL due, and the number of VL samples suppressed was unexpected. VL suppression is affected by many factors, particularly those at the individual and interpersonal levels, and was not expected to respond strongly to a health systems implementation strategy. Additionally, just one change concept focused specifically on VL testing within this pilot. The number of VL tests due was calculated based on national guidelines rather than abstracted from records due to the complexity of direct assessment; this fixed number did not vary monthly, an assumption in line with the lack of predicted seasonality in HIV care visits, but one that may not account for natural heterogeneity in visit schedules. Finally, the relatively long turnaround time for VL samples in the Kenyan centralized laboratory system [29] meant that individuals likely had a VL sample collected and results returned in separate month windows for data aggregation.

This pilot study had numerous strengths. The study indicators aligned well with the UNAIDS 95-95-95 goals, which were also Kenyan national guidelines. It used a quasi-experimental analysis method to assess whether baseline temporal trends were driving the magnitude of implementation strategy impact, it included facilities in diverse regions in Kenya, and it utilized routine program data sources without any primary data collection. It began with an effective multi-component implementation strategy and adapted it in partnership with stakeholders to be relevant and applicable to the pediatric and adolescent cascade. A full qualitative evaluation of the implementation strategy was conducted and presented elsewhere (Wagner & Beima-Sofie, under review).

This study also had several limitations. Firstly, a pre-post analysis is a relatively weak design for determining impact; even an uncontrolled interrupted time series analysis is vulnerable to external temporal changes. In this study, the implementation strategy began at the same time that health service provision resumed after a multi-month strike, which seriously weakens inference about the independent impact of the implementation strategy. However, we assessed changes in the numerator and denominator of each indicator, which partially accounts for changes in demand for and supply of services separately. Secondly, the PedCAT tool was not available at all facilities for the first few months of implementation due to delays in data abstraction, which limited the use of this prioritization tool. Thirdly, it was challenging to maintain fidelity to the intended coaching visit schedule due to competing service provision priorities, potentially affecting the "dose" of the implementation strategy delivered. Fourthly, lack of financial reimbursement to facility providers negatively impacted willingness to participate in CQI meetings. Fifthly, the implementation strategy period of 6 months was relatively short and relatively few change concepts were tested (compared to the original trial), potentially insufficient time and too few change concepts to observe implementation strategy impact on some indicators. Finally, the abstracted data could not accurately be interpreted as proportions or coverage due to incomplete denominators and in- and out-migration, limiting ease of interpretation and alignment with set indicators such as the UNAIDS 95-95-95 goals. Future assessments of this implementation strategy would need to carefully address fidelity and select an evaluation design that is robust to temporal changes.


In conclusion, the adapted SAIA-PEDS implementation strategy was associated with significant and substantial improvement in some pediatric and adolescent HIV cascade indicators, including viral load monitoring, and trended towards a significant impact on ART initiation. Given the critical and urgent nature of pediatric and adolescent HIV testing and treatment, the flexibility of this implementation strategy to meet local contexts’ needs and structures, and its demonstrated impact in other settings, this pilot merits follow-up with a cluster randomized trial for rigorous evaluation in diverse contexts.

Availability of data and materials

The datasets used during the current study are available from the corresponding author on reasonable request.




Acknowledgements

We gratefully acknowledge and thank the Ministry of Health representatives from Kisumu, Siaya, Homa Bay, and Nairobi Counties. We thank the health care workers at each of the study facilities and all of the study participants. We thank Mr. Martin Maina for his essential role in study administration.


Funding

This work was supported by the National Institutes of Health (NIH) National Institute of Allergy and Infectious Disease (NIAID) award R34AI129900 (KS), the National Institute of Child Health and Development (NICHD) F32HD088204 to ADW, and the Fogarty International Center (FIC) R25 TW009345 to ADW. Additional support was provided by the UW Global Center for Integrated Health of Women, Adolescents and Children (Global WACh) and the University of Washington CFAR (P30 AI027757). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the funders. The funders did not have any role in the study’s data collection, management, analysis and interpretation, writing of reports, or decisions to submit reports for publication.

Author information

Contributions

KS, RN, and ADW obtained study funding and served as study principal investigators. ADW, IN, PC, LO, GJS, RN, KS, and SG led protocol development. DG, NM, GO, and PM led the training and collected and managed data. ADW, OA, and NC coded and analyzed the study findings. ADW wrote the first draft of the manuscript. All authors critically read and revised the final manuscript.

Corresponding author

Correspondence to Anjuli D. Wagner.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the Kenyatta National Hospital/University of Nairobi Ethics and Research Committee and the University of Washington Institutional Review Board. All participants were ≥ 18 years of age and provided written informed consent for participation.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Modeling equations


$$\log \left[{\left( count\ or\ ratio\right)}_{ht}\right]=\left({\alpha}_0+{b}_{0h}\right)+\left({\alpha}_1+{b}_{1h}\right)\cdot post$$

where h indexes health facilities and t indexes time in months.

count or ratio is the outcome as an aggregated count or ratio of two counts.

post is 1 for the last 6 months (the implementation strategy period).

\(b_{0h}\) and \(b_{1h}\) are facility-level random intercepts and slopes, respectively.

Therefore, \({e}^{\alpha_1}\) is the step change: the relative change in average counts or ratios from the first 6 months to the 6 months of the implementation strategy period.
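As an illustrative sketch (using hypothetical counts, not study data), the pre-post step change \({e}^{\alpha_1}\) can be computed for a single facility. With one facility and no random effects, the fixed-effect estimate reduces to the ratio of the geometric means of the post- and pre-period counts:

```python
import numpy as np

# Hypothetical monthly counts for one facility (not study data):
# 6 months pre-implementation, 6 months during implementation.
pre_counts = np.array([10, 12, 11, 13, 12, 14], dtype=float)
post_counts = np.array([20, 22, 19, 25, 24, 26], dtype=float)

# alpha_0: average log-count in the pre period;
# alpha_1: difference in average log-counts, post minus pre.
alpha_0 = np.log(pre_counts).mean()
alpha_1 = np.log(post_counts).mean() - alpha_0

# exp(alpha_1) is the step change: the ratio of geometric means.
step_change = np.exp(alpha_1)
```

The full model additionally pools facilities with random intercepts and slopes, which this single-facility sketch omits.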

Interrupted time series analysis

$$\log \left[{\left( count\ or\ ratio\right)}_{ht}\right]=\left({\beta}_0+{b}_{0h}\right)+\left({\beta}_1+{b}_{1h}\right)\cdot time+{\beta}_2\cdot post+{\beta}_3\cdot timepost$$

where h indexes health facilities and t indexes time in months.

count or ratio is the outcome as an aggregated count or ratio of two counts.

time counts the months from 0 to 11.

post is 1 for the last 6 months (the implementation strategy period).

timepost counts the months since the start of the implementation strategy period (month 7); it is zero before that.

\(\beta_0\) is the average log-count (or log-ratio) at the beginning of the study period.

\(\beta_1\) is the slope during the 6 months pre-implementation strategy.

\(\beta_2\) is the immediate level change at the beginning of the implementation strategy period.

\(\beta_3\) is the change in the \(\beta_1\) slope.

\(b_{0h}\) and \(b_{1h}\) are facility-level random intercepts and slopes, respectively.

Therefore, \({e}^{\beta_1}\) is the baseline monthly relative trend; \({e}^{\beta_2}\) is the step change; \({e}^{\beta_3}\) is the relative change of the baseline trend.
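As an illustrative sketch (hypothetical counts, not study data), the segmented-regression design above can be assembled and fit on the log scale. For simplicity this single-facility version uses ordinary least squares and omits the random effects \(b_{0h}\) and \(b_{1h}\):

```python
import numpy as np

# Hypothetical monthly counts: 6 months pre, 6 months post (not study data).
counts = np.array([10, 12, 11, 13, 12, 14,   # pre-implementation
                   18, 20, 21, 23, 24, 26],  # implementation period
                  dtype=float)

time = np.arange(12, dtype=float)                # months 0..11
post = (time >= 6).astype(float)                 # 1 during implementation
timepost = np.where(time >= 6, time - 6, 0.0)    # months since month 7

# Design matrix for: log(count) = b0 + b1*time + b2*post + b3*timepost
X = np.column_stack([np.ones(12), time, post, timepost])
beta, *_ = np.linalg.lstsq(X, np.log(counts), rcond=None)

baseline_trend = np.exp(beta[1])  # e^beta1: baseline monthly relative trend
step_change = np.exp(beta[2])     # e^beta2: immediate level change
trend_change = np.exp(beta[3])    # e^beta3: relative change of the trend
```

In the study analysis, facilities are pooled with random intercepts and slopes (e.g., via a mixed-effects model), which this sketch omits.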

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Wagner, A.D., Augusto, O., Njuguna, I.N. et al. Systems Analysis and Improvement Approach to optimize the pediatric and adolescent HIV Cascade (SAIA-PEDS): a pilot study. Implement Sci Commun 3, 49 (2022).
