
What’s the “secret sauce”? How implementation variation affects the success of colorectal cancer screening outreach

Abstract

Background

Mailed fecal immunochemical testing (FIT) programs can improve colorectal cancer (CRC) screening rates, but health systems vary in how they implement (i.e., adapt) these programs for their organizations. A health insurance plan implemented a mailed FIT program (named BeneFIT), and participating health systems could adapt the program. This multi-method study explored which program adaptations might have resulted in higher screening rates.

Methods

First, we conducted a descriptive analysis of CRC screening rates by key health system characteristics and program adaptations. Second, we generated an overall model by fitting a weighted regression line to our data. Third, we applied Configurational Comparative Methods (CCMs) to determine how combinations of conditions were linked to higher screening rates. The main outcome measure was CRC screening rates.

Results

Seventeen health systems took part in at least 1 year of BeneFIT. The median screening completion rate was 20% (range, 4–28%) in year 1 and 25% (range, 12–35%) in year 2 of the program. Health systems that used two or more adaptations had higher screening rates; no single adaptation clearly led to higher screening rates. In year 1, small systems with just one clinic that used phone reminders (n = 2) met the implementation success threshold (≥ 19% screening rate), while systems with > 1 clinic were successful when offering a patient incentive (n = 4), scrubbing mailing lists (n = 4), or allowing mailed FIT returns with no other adaptations (n = 1). In year 2, larger systems with 2–4 clinics were successful with a phone reminder (n = 4) or a patient incentive (n = 3). Of the 10 systems that implemented BeneFIT in both years, seven improved their CRC screening rates in year 2.

Conclusions

Health systems can choose among many adaptations and successfully implement a health plan’s mailed FIT program. Different combinations of adaptations led to success, with health system size emerging as an important contextual factor.


Background

Colorectal cancer (CRC) screening remains an underutilized preventive health measure despite its effectiveness at reducing mortality and morbidity [1, 2]. Only 60% of US commercially or Medicare-insured adults are up-to-date on screening, and the rate is even lower for Medicaid-insured adults (47%) [3]. Numerous US state and federal programs, health care systems, and insurance plans are trying to improve rates of CRC screening using various population-based approaches [4, 5]. The most successful organizations apply multifaceted, population-based strategies [6].

Some health care systems are working to raise CRC screening rates through screening outreach programs, such as those that mail fecal immunochemical test (FIT) kits to patients due for screening [7,8,9]. Clinic- and health care system-based outreach efforts have increased screening rates, and studies have demonstrated that more screening is associated with reduced CRC incidence and cancer mortality [10,11,12,13,14,15]. Despite the effectiveness of screening outreach, and specifically for mailed FIT programs [16, 17], challenges remain with implementation of such programs in practice [7, 18, 19].

One promising approach that addresses some barriers faced by health systems is mailed FIT outreach initiated by health insurance plans [20]. Health plan-initiated mailed FIT programs can minimize the burden on clinics, and lower program costs by creating efficient ways to implement the programs [6, 20]. Prior research shows that an evidence-based program of CRC outreach in health systems and clinics is typically adapted to fit an organization’s structure and available resources [21,22,23,24,25]. Qualitative findings from this approach have indicated a mailed FIT program can be adapted to the culture and needs of individual health insurance plans [18, 26]. Little information exists, however, on how health systems adapt their programs over time, and which adaptations are most effective for positive outcomes.

To address some of these questions, a collaboration (named BeneFIT) was formed between researchers and health insurance plans to understand the implementation of a health plan-driven mailed FIT program. A health plan in Oregon coordinated and administered the mailing of FIT kits while partnering with the health systems that delivered care to their health plan members. The research team has previously reported on the BeneFIT program’s effectiveness using a research sample from six health systems and found that of those who were mailed an introductory letter, FIT, and reminder postcard, 18.3% completed the FIT, and 20.6% completed any CRC screening [27]. However, FIT completion rates varied greatly (from 10.0 to 21.1%) across the health systems in that research study [27]. A similar mailed FIT intervention in a pragmatic trial (STOP CRC) also showed improved screening rates with substantial variation between health systems [28].

The Oregon BeneFIT mailed outreach program used a collaborative model, in which health systems were able to customize the basic mailing offered by the health plan and choose options for how to implement the program in a way that corresponded with health system organizational constraints and capabilities. This paper examines the major adaptations made to the mailed FIT program during implementation in relation to CRC screening rates. We then identify which combinations of adaptations uniquely distinguished health systems with higher CRC screening rates.

Methods

Setting

The health insurance plan is a non-profit organization that provides Medicaid, Medicare, and dental coverage in Oregon and served about 220,000 enrollees at the time of the study. The mailed FIT program was implemented in collaboration with a research team in 2016 (May to November) and 2017 (May to November). A total of 17 health systems took part in the BeneFIT program offered by the Oregon health insurance plan over the first 2 years the program was implemented. We report here on 2-year CRC screening rates and implementation variations (i.e., adaptations) for these health systems. Six of these health systems had the capacity to provide FIT test results to the research team and to implement the program quickly enough to be included in a prior analysis of the 1-year mailed FIT outcomes [27].

Mailed FIT intervention

The BeneFIT program is described in detail elsewhere [20]. Briefly, health plan staff generated lists of enrollees due for CRC screening for each health system that took part in the program. To be eligible for the mailed FIT program, a member must have been between the ages of 51 and 75 and not have had a health plan claim indicating CRC screening or a screening exclusion (e.g., colon cancer). Health plan staff provided the member lists and FIT kits to a mail vendor that prepared and mailed introductory letters. Enrollees whose introductory letter was returned as undeliverable were removed from the list. The mail vendor mailed remaining enrollees a FIT kit about 4 weeks later, followed by a postcard reminder 2 weeks later.

Within this framework, each health system was allowed by the health plan to customize how they implemented the program. For all health systems, the FIT results came back directly to clinics and were followed up directly by the patient care teams using the clinics’ usual care procedures. The basic program (specifically the mailing elements coordinated by the health plan) was presented to clinic managers, who then determined if they would be able to add clinic-supported adaptations, such as phone call reminders. The adaptations (i.e., differences in implementation) fell into five types:

  • Lists of eligible enrollees scrubbed before mailing the introduction letters: Health systems could review the list of eligible members that the health plan generated and remove patients based on their own patient data. Health system staff either looked for patients who were current for screening according to clinic-based medical records or simply validated that the patients correctly belonged to the clinic’s population [e.g., were regularly seeing one of their providers or had an electronic health record (EHR)]. The health system then returned a “scrubbed” list back to the health plan.

  • Twelve-month visit exclusion: Some clinics chose to have the health plan automatically exclude patients who had not had a clinic visit in the last year. In this case, the health plan staff removed patients without a visit in the last 12 months using the claims database. (Often, this adaptation was chosen simply because clinics could not staff the effort of scrubbing the mailing lists.)

  • Phone call reminders: Some health systems had staff call patients who were mailed an introduction letter and FIT kit to remind them to return the test. The health plan provided the clinic staff with a list of plan members who were mailed an introduction letter and FIT kit.

  • Financial incentives (gift card) offered for completing CRC screening: In some regions or health systems, the health plan offered incentives for completion of CRC screening (either by FIT or by colonoscopy). The incentives ($25 gift cards) were mentioned in the letters that accompanied the FIT kits.

  • Allowing FIT kits to be mailed back (vs. requiring in-person drop off): Some health systems required members to return the completed FIT kits in person to a clinic. Other health systems allowed members to mail back the completed kits in pre-stamped return mailers that were provided when the kits were sent (referred to as a mailed return).

In addition to these five major implementation variations, other health care system characteristics were available for the analysis. Some of the health systems had participated in prior research efforts involving mailed FIT outreach and therefore had some existing FIT mailing workflows and staff experience. The health systems varied in size, both in number of clinics and number of patients they served. Finally, the health plan allowed the program to mail whichever type of FIT was already in use by the health system. All health systems used one of the following three types of FIT: the two-sample Insure® by Clinical Genomics, one-sample Hemosure® by Hemosure, Inc., or one-sample OC-Light® or OC-Auto® by Polymedco.

Study measures

The main outcome for these analyses was the completed CRC screening rate. A screening was considered complete if a claim was submitted indicating that a patient received any type of CRC screening procedure within 6 months of the date the introductory letter was mailed. A CRC screening procedure was defined as any of the following:

  • FIT test or fecal occult blood test (FOBT)

  • FIT-DNA test

  • Flexible sigmoidoscopy

  • Computed tomography (CT) colonography (virtual colonoscopy)

  • Colonoscopy.

The number of FIT kits mailed indicates the number of eligible health plan members who were mailed a FIT kit through the BeneFIT program. All implementation outcomes were tracked internally by health plan staff as they generated lists of eligible patients and worked with health systems and the mailing vendor to conduct the mailing itself [27]. For FIT kits mailed in late 2017, there was a minimum 3-month period for claims to be received by the health plan following the 6-month screening period.
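The outcome rule above (any qualifying screening claim within 6 months of the introductory letter) can be expressed as a simple check. This is a minimal sketch with made-up claim records; the function name, record layout, and the 183-day window approximation are our own illustration, not the health plan's actual claims logic.

```python
from datetime import date, timedelta

# Procedure types that count as CRC screening, per the study's definition.
SCREENING_PROCEDURES = {"FIT", "FOBT", "FIT-DNA", "flexible sigmoidoscopy",
                        "CT colonography", "colonoscopy"}

def screening_completed(letter_date, claims, window_days=183):
    """Sketch of the outcome rule: any CRC screening claim within ~6 months
    of the date the introductory letter was mailed."""
    cutoff = letter_date + timedelta(days=window_days)
    return any(c["procedure"] in SCREENING_PROCEDURES
               and letter_date <= c["date"] <= cutoff
               for c in claims)

# A FIT claim two months after the letter counts as a completed screening.
claims = [{"procedure": "FIT", "date": date(2016, 7, 10)}]
print(screening_completed(date(2016, 5, 15), claims))  # True
```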

Each variable was a potential explanatory factor that could have a plausible connection to the outcome. Health system characteristic variables included health system name, health system size (number of clinics per system), participation in the prior CRC screening study, and FIT test type used by the health system. Intervention variables included the length of participation in BeneFIT, number of adaptations, number of kits mailed, list scrubbing, 12-month visit exclusion, reminder calls, patient incentive, and a mailed return option.

Analysis

This study incorporated a multi-method approach. A descriptive analysis comparing CRC screening completion rates by health system characteristics and interventions was completed using Minitab and Tableau Software. Configurational Comparative Methods (CCMs) analyses were then performed using the R package “cna” to analyze the dataset using Coincidence Analysis (CNA) [29,30,31]. RStudio, R, and Microsoft Excel were also used to support the configurational analysis with CCMs. The configurational analysis examined the combinations of adaptations and health system characteristics that together distinguished the health systems with higher screening rates from those with lower screening rates.

The configurational analyses used a dichotomous outcome for each of three analyses: percent completed year 1, percent completed year 2, and change from year 1 to year 2 (positive or not positive). We set the threshold for our main outcome, CRC screening completion rate, at 19%. This cutoff was determined by tertiles, where we compared cases in the upper two tertiles of the screening rate percent versus cases in the lowest tertile. In the year 1 analysis, there were 17 cases in the overall sample, with 11 cases in the upper two tertiles and 6 cases in the lowest tertile. For year 2, there were 10 cases in the overall sample, with 7 cases in the upper two tertiles and 3 cases in the lowest tertile. In both year 1 and year 2, the 19% cut point separated the upper two tertiles from the lowest tertile, and in both years, there was a sizable performance gap in the outcome across this threshold, a difference of more than 3.5 points in absolute terms. Only health systems that participated in both years were included in the change from year 1 to year 2 analysis.
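The tertile-based dichotomization can be sketched directly: sort the rates, peel off the lowest tertile, and take the smallest rate in the upper two tertiles as the cut point. The screening rates below are hypothetical stand-ins chosen only to reproduce the paper's 17-case, 19%-threshold structure, not the study's actual data.

```python
# Sketch of the tertile-based dichotomization of the year-1 outcome
# (hypothetical screening rates, not the study's actual data).
rates = [4, 8, 12, 14, 15, 16, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 28]

rates_sorted = sorted(rates)
n_lowest = round(len(rates_sorted) / 3)   # ~6 of 17 systems in the lowest tertile
threshold = rates_sorted[n_lowest]        # smallest rate in the upper two tertiles

outcome_present = [r >= threshold for r in rates]  # dichotomous outcome per system
print(threshold, sum(outcome_present))  # 19 and 11 with these illustrative numbers
```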

The configurational analyses produced an overall model with high consistency and coverage that identified combinations of conditions that explained the presence of the outcome. Consistency refers to how often health systems identified by the model had the outcome present (i.e., higher screening rates); coverage accounts for the percent of health systems with higher screening rates explained by the model.
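These two metrics can be made concrete with a minimal sketch. Given, for each health system, whether it matches a candidate configuration and whether the outcome (higher screening rate) is present, consistency and coverage are simple ratios. The six records below are invented for illustration.

```python
# Hypothetical example of consistency and coverage for one configuration.
# Each record: (matches_configuration, outcome_present) for a health system.
systems = [
    (True, True), (True, True), (True, False),    # 3 systems match the configuration
    (False, True), (False, False), (False, True)  # 3 systems do not
]

matched = [outcome for match, outcome in systems if match]
with_outcome = [match for match, outcome in systems if outcome]

consistency = sum(matched) / len(matched)         # matched systems with the outcome
coverage = sum(with_outcome) / len(with_outcome)  # outcome systems the model explains

print(f"consistency = {consistency:.0%}, coverage = {coverage:.0%}")
```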

To achieve data reduction, we used a configurational method to identify candidate factors, described in detail in prior studies [32,33,34]. To summarize, we used the “minimally sufficient conditions” function within the R package “cna” to look across all 17 cases and all 8 factors at once. The consistency threshold was initially set to 100% and the coverage threshold to 15%. We considered all 1-, 2-, 3-, 4-, and 5-condition configurations in our dataset that met this dual threshold. If no configurations met these criteria during the data reduction phase, we iteratively dropped the consistency threshold by increments of 5 percentage points (i.e., from 100 to 95%) and repeated the process of creating a new condition table until configurations emerged that satisfied all criteria.

Next, we sorted the condition table by complexity and coverage and identified the configurations with the highest coverage scores. We began with 1-condition configurations to see if they met the consistency and coverage thresholds and were uniquely distinguished from all other 1-condition configurations. We then proceeded to examine 2-, 3-, 4-, and 5-condition configurations, working upwards to minimize possible redundancy. Using this approach, we reduced the dataset to a smaller subset of candidate factors. We selected final solutions based on high overall model consistency (i.e., as close to 100% as possible, and at least 80%) and coverage (i.e., as close to 100% as possible, and at least 70%).
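The study performed this search with the "minimally sufficient conditions" function of the R package "cna"; as an aid to intuition only, the data-reduction loop can be approximated in a simplified sketch. The factors, values, and five toy cases below are invented, and real CCM software handles model ambiguity and redundancy elimination far more carefully than this enumeration does.

```python
from itertools import combinations, product

# Toy illustration of the data-reduction step: enumerate small configurations of
# binary factors, keep those meeting the dual consistency/coverage thresholds,
# and lower the consistency threshold in 5-point steps if nothing qualifies.
cases = [
    {"phone": 1, "scrub": 1, "incentive": 0, "outcome": 1},
    {"phone": 1, "scrub": 0, "incentive": 0, "outcome": 1},
    {"phone": 0, "scrub": 1, "incentive": 1, "outcome": 1},
    {"phone": 0, "scrub": 0, "incentive": 1, "outcome": 0},
    {"phone": 0, "scrub": 0, "incentive": 0, "outcome": 0},
]
factors = ["phone", "scrub", "incentive"]

def metrics(config):
    """Consistency and coverage of a configuration given as ((factor, value), ...)."""
    matched = [c["outcome"] for c in cases if all(c[f] == v for f, v in config)]
    positives = sum(c["outcome"] for c in cases)
    consistency = sum(matched) / len(matched) if matched else 0.0
    coverage = sum(matched) / positives if positives else 0.0
    return consistency, coverage

cons_thresh, cov_thresh = 1.00, 0.15
solutions = []
while not solutions and cons_thresh >= 0.80:
    for size in (1, 2, 3):                           # 1- to 3-condition configurations
        for names in combinations(factors, size):
            for values in product((0, 1), repeat=size):
                config = tuple(zip(names, values))
                consistency, coverage = metrics(config)
                if consistency >= cons_thresh and coverage >= cov_thresh:
                    solutions.append(config)
    if not solutions:
        cons_thresh -= 0.05                          # drop consistency 5 points, retry

print(len(solutions), "configurations meet the thresholds at consistency", cons_thresh)
```

With these toy cases, single conditions such as phone = 1 already pass at 100% consistency, so the threshold is never lowered.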

Results

In total, 17 health systems (representing 51 clinics) took part in at least 1 year of the BeneFIT program; 13 health systems participated in the first year and 14 in the second year. Ten health systems took part in both program years. Table 1 shows implementation details by health system, organized by decreasing rates of CRC screening in year 2 (2017) and by FIT test type. Most health systems (12 of 17) used OC-Auto® or OC-Light® FIT tests. These health systems tended to be larger, with an average mailing size of 363 kits and 3.4 clinics per system. The health systems that used other types of FIT tests tended to be smaller, with an average mailing size of 153 kits and 2 clinics per system. Small systems, defined as having one clinic, implemented fewer adaptations (mean = 1.75) than larger systems (mean = 2.6), although the median number of adaptations was the same (median = 2).

Table 1 Participation details by health system and FIT test type

Table 2 shows yearly unadjusted CRC screening completion rates by adaptations and system characteristics. In 2016, the median completion rate was 20% among 13 health systems (range, 4–28%). In 2017, the median completion rate was 25% among 14 health systems (range, 12–35%). In both years, the median completion rates increased as the total number of adaptations increased. For the five interventions, scrubbing showed the largest difference in median rates for both years (15% vs 24% for no scrubbing vs scrubbing in 2016; 19% vs 27% for no scrubbing vs scrubbing for 2017). Median screening rates were higher in large systems, for OC-Auto® or OC-Light® FIT tests, and for systems with prior research study experience.

Table 2 Screening completion rates by adaptations and system characteristics

Figure 1 presents a multivariate visualization indicating that screening completion rates were positively associated with the number of adaptations implemented by a health system. Note that all systems tried at least one adaptation, and no system tried all five. The size of each mailing is also indicated in the plot, and systems with larger mailings tended to implement more adaptations. Systems with smaller mailings used fewer adaptations and had lower screening completion rates. To characterize the general relationship between screening completion rates and number of adaptations, we fitted a weighted regression line, with weights determined by the number of mailed kits. The slope of the weighted regression line was 0.04, suggesting that, on average, the screening rate increased by 4 percentage points for each additional adaptation (P = 0.006). Figure 1 also shows that the screening rates were generally higher in the second year of the study.

Fig. 1 Completed screening rates by number of adaptations, with year and mailing size. *Mailing size is the number of kits mailed for each system and is shown by the size of the circles in the plot. Mailing size ranged from 44 to 757 kits
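The weighted least-squares slope described in the text has a closed form. The sketch below computes it for hypothetical systems; the adaptation counts, rates, and kit counts are invented for illustration (only the weighting scheme, kits mailed as weights, follows the study).

```python
# Weighted least-squares slope, with weights given by mailing size
# (hypothetical numbers, not the study's actual data).
adaptations = [1, 1, 2, 2, 3, 4]                  # x: adaptations per system
rates = [0.12, 0.16, 0.20, 0.22, 0.26, 0.30]      # y: screening completion rate
kits_mailed = [44, 120, 200, 350, 500, 757]       # w: weights (kits mailed)

# Weighted means, then slope = sum(w*(x-xbar)*(y-ybar)) / sum(w*(x-xbar)^2).
w_sum = sum(kits_mailed)
x_bar = sum(w * x for w, x in zip(kits_mailed, adaptations)) / w_sum
y_bar = sum(w * y for w, y in zip(kits_mailed, rates)) / w_sum
slope = (sum(w * (x - x_bar) * (y - y_bar)
             for w, x, y in zip(kits_mailed, adaptations, rates))
         / sum(w * (x - x_bar) ** 2 for w, x in zip(kits_mailed, adaptations)))

print(f"weighted slope = {slope:.3f} (rate change per added adaptation)")
```

With these illustrative numbers the slope lands near the 0.04 reported in the study, i.e., roughly 4 percentage points per added adaptation.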

Health system size was not sufficient by itself to account consistently for implementation outcomes. In year 1, for example, while larger systems on average tended to have more implementation success, three of the 11 systems with implementation success were small (only one clinic); moreover, two of the six systems without implementation success in year 1 were in the larger size category (≥ 2 clinics). In year 2, the system with the lowest screening rate overall (12.2%) was in the largest category of health system size (≥ 5 clinics).

Combinations of conditions were formally assessed in the second phase of analysis using the configurational approach. In the year 1 model, 11 of the 17 health systems had screening rates of at least 19%, i.e., had a successful outcome as defined by the model, while 6 did not. The final model for year 1 featured four solution pathways (i.e., four different ways to achieve the outcome). The model had a consistency level of 100% (9/9) and a coverage level of 82% (9/11) (i.e., it explained nine of the 11 health systems with the outcome present). Table 3 lists health systems with the explanatory factors (adaptations or clinic characteristics) that contributed to the solution pathways; the highlighted cells show the combinations that led to a successful outcome. While health system size was not sufficient by itself to explain implementation outcomes, it played a pivotal role in the configurational solutions: size in conjunction with other specific conditions consistently distinguished systems with implementation success. Small health systems (with only one clinic) that used phone reminders represented one solution pathway for higher rates of CRC completion (n = 2). Health systems with more than one clinic, by contrast, were successful if they offered a gift card incentive (n = 4), scrubbed the lists prior to mailing (n = 4), or allowed mailed rather than in-person return but had no other adaptations (n = 1). In the year 1 dataset, two systems had a contradictory configuration, meaning they had identical values for all potential explanatory factors but different outcomes and thus could not be explained by the factors in the analysis.

Table 3 Year 1 CCMs model with four solution pathways

The year 2 configurational analysis (data not shown) included the ten health systems that participated in both years of the study. In year 2, seven of the 10 health systems had the outcome present, while three did not. The final year 2 model featured two solution pathways with no inconsistent cases: (1) offering a gift card incentive (n = 2) or (2) using a phone reminder in a health system with 2–5 clinics (n = 4). The model had a consistency level of 100% and a coverage level of 86% (i.e., it explained six of the seven systems with higher rates with perfect consistency). FIT type did not appear in any of the solution pathways.

Comparison of first and second year CRC screening completion rates

Of the 10 health systems that took part in both years of the program, three systems had lower rates in year 2 than year 1, whereas the remaining seven had positive gains in year 2 over year 1.

The third configurational analysis assessed the change in screening rates from year 1 to year 2 in the 10 systems that participated in both years of the program (Table 4). The results yielded three solutions with 100% consistency (5/5) and 71% coverage (5/7). Improved screening rates were found in health systems that implemented phone reminders (n = 3) in year 2 after not offering them in year 1, that instituted a 12-month visit requirement in year 2 (n = 1) after not requiring it in year 1, or that had participated in the prior research study (n = 2).

Table 4 CRC screening rate change from years 1 to 2, among health systems in both years (n = 10)

Discussion

Our multi-method analysis did not find a single adaptation that improved response rates across all clinics. While no single adaptation was the “secret sauce” for implementation success, our linear regression results do indicate that the centralized mailed FIT program was more effective when multiple adaptations were added to the health plan’s basic program. Health systems achieved higher rates when they were able to combine the health plan’s program with two or more of the following adaptations: a phone call reminder, reviewing the mailing lists (e.g., scrubbing), allowing mailed FIT returns, excluding patients without a recent visit, or offering financial incentives. This finding that more comprehensive efforts yielded higher screening rates supports prior research suggesting that FIT return rates can be increased by delivering more reminders and additional types of reminders [35,36,37].

The configurational analyses identified multiple ways for health systems to achieve higher rates of screening. It found size to be an important contextual factor, with different solutions for larger health systems than those for smaller health systems. Phone call reminders appeared in multiple solutions, consistent with findings from other studies [27]. Systems that added phone calls in year 2, instituted a 12-month visit requirement in year 2, or had been in prior research studies achieved higher screening rates in year 2 over year 1, indicating that a focus on specific process improvements can lead to success.

A comparative analysis of the STOP CRC study, the intervention upon which the BeneFIT program was based, found two conditions that accounted for successful implementation (i.e., percent of eligible patients mailed a FIT): having a centralized process for delivering the intervention and mailing an introductory letter prior to the FIT [38]. These two implementation components indicated a greater clinic capacity to staff the program and internal commitment to the evidence-based research. The BeneFIT study had a similar mailed FIT program, but in a centralized capacity with a vendor mailing all components. Therefore, mailing implementation was no longer an issue, and we were able to look at the effect of health system factors and adaptations on screening completion rates. The variation in BeneFIT results might indicate differing health system commitment to the program or capacity to add additional implementation components. However, while the screening rate was an average of 4 percentage points higher for each additional adaptation, we found different combinations leading to a successful overall result. Therefore, while it is tempting to conclude the “secret sauce” is more adaptations, we need to consider which adaptations are effectively combined. A key benefit of configurational analyses is to help understand which adaptations result in the greatest impact so that low-resource clinics can choose options that are most effective.

Screening rates generally improved over time for the health systems that took part in both years; other literature supports this finding [39, 40]. Staff and patients becoming familiar with the FIT screening process, consistent messaging with patients, and conversations between patients and physicians might have contributed to higher screening rates. Health systems that had participated in the prior STOP CRC research had higher screening rates in year 2, perhaps indicating that they already had staff and workflow familiarity for a mailed FIT program. Baker et al.’s patient-level study of FIT mailing with phone call and text message reminders suggested that prior FIT screening might be a predictor of screening completion [41]. Therefore, those patients screened in year 1 might have been more likely to complete the FITs in year 2 leading to higher system screening rates.

These results have several limitations. We used observational data, with no control group of health systems for comparison. Also, our sample size was not large enough to stratify the FIT test brands (or one-sample vs. two-sample FIT tests) into different groups. Our data analysis is based on claims submitted to the health plan; therefore, we cannot ascertain if health systems became more efficient at submitting claims in year 2. Finally, we could not conclude that there was a direct effect of individual adaptations on outcomes because the health system size and FIT test type variables were confounded with adaptations. Some health systems used multiple adaptations but still had low screening rates that did not improve. These results were possibly related to factors we could not measure, such as populations that are more resistant to the mailed FITs, lab or mailing issues, or FIT processing issues. Some adaptations (such as phone call reminders) could have been variably implemented. Also, this outreach was offered in addition to existing in-clinic screening efforts (such as direct provider outreach) that we do not capture here.

Despite these limitations, these results can offer guidance to health systems on implementation efforts. FIT is a lower-barrier method for achieving higher CRC screening rates, a metric used by several national programs such as UDS reporting, HEDIS measures, and Medicare Star Ratings.

Conclusions

Overall, our results identified several solution paths to implementation of a successful mailed FIT program. Larger and smaller health systems may be able to use different approaches to adapting an outreach intervention offered by their health plan. If a health plan can be flexible in its approach, it could benefit from customizing the approach to CRC screening outreach to particular environments or clinics. Future research might help establish the strength of the causal relationships between specific conditions and CRC screening rates.

Availability of data and materials

The datasets used and/or analyzed for the current study are available from the corresponding author on reasonable request. Templates used for the mailed FIT program materials and implementation workflow are available at the mailedfit.org website.

Abbreviations

CRC:

Colorectal cancer

FIT:

Fecal immunochemical testing

CCMs:

Configurational comparative methods

EHR:

Electronic health record

FOBT:

Fecal occult blood test

CT:

Computed tomography

CNA:

Coincidence analysis

References

1. Fedewa SA, Ma J, Sauer AG, et al. How many individuals will need to be screened to increase colorectal cancer screening prevalence to 80% by 2018? Cancer. 2015;121(23):4258–65.

2. US Preventive Services Task Force, Bibbins-Domingo K, Grossman DC, et al. Screening for colorectal cancer: US Preventive Services Task Force recommendation statement. JAMA. 2016;315(23):2564–75.

3. de Moor JS, Cohen RA, Shapiro JA, et al. Colorectal cancer screening in the United States: trends from 2008 to 2015 and variation by health insurance coverage. Prev Med. 2018;112:199–206.

4. Wilensky JD. Colorectal cancer initiatives in Medicaid agencies – a national review. Prepared for the American Cancer Society: Atlanta, GA; September 2016.

5. Centers for Medicare and Medicaid Services. Medicare Star Ratings. https://www.medicare.gov/find-a-plan/staticpages/rating/planrating-help.aspx. Accessed 22 Apr 2019.

6. National Colorectal Cancer Roundtable. Colorectal Cancer Screening Best Practices Handbook for Health Plans. American Cancer Society, Inc. http://nccrt.org/resource/handbook-health-plans/. Published 2017. Updated 28 Mar 2017. Accessed 2017.

7. Green BB, Fuller S, Anderson ML, Mahoney C, Mendy P, Powell SL. A quality improvement initiative to increase colorectal cancer (CRC) screening: collaboration between a primary care clinic and research team. J Fam Med. 2017;4(3):1115.

8. Levin TR, Jamieson L, Burley DA, Reyes J, Oehrli M, Caldwell C. Organized colorectal cancer screening in integrated health care systems. Epidemiol Rev. 2011;33:101–10.

9. Coronado GD, Vollmer WM, Petrik A, et al. Strategies and opportunities to STOP colon cancer in priority populations: pragmatic pilot study design and outcomes. BMC Cancer. 2014;14:55.

10. Green BB, Wang CY, Anderson ML, et al. An automated intervention with stepped increases in support to increase uptake of colorectal cancer screening: a randomized trial. Ann Intern Med. 2013;158(5 Pt 1):301–11.

11. Levy BT, Xu Y, Daly JM, Ely JW. A randomized controlled trial to improve colon cancer screening in rural family medicine: an Iowa Research Network (IRENE) study. J Am Board Fam Med. 2013;26(5):486–97.

12. Levin TR, Corley DA, Jensen CD, et al. Effects of organized colorectal cancer screening on cancer incidence and mortality in a large community-based population. Gastroenterology. 2018;155(5):1383–91 e1385.

13. Zorzi M, Fedeli U, Schievano E, et al. Impact on colorectal cancer mortality of screening programmes based on the faecal immunochemical test. Gut. 2015;64(5):784–90.

14. Chiu HM, Chen SL, Yen AM, et al. Effectiveness of fecal immunochemical testing in reducing colorectal cancer mortality from the One Million Taiwanese Screening Program. Cancer. 2015;121(18):3221–9.

15. Lin JS, Piper MA, Perdue LA, et al. Screening for colorectal cancer: a systematic review for the U.S. Preventive Services Task Force. U.S. Preventive Services Task Force Evidence Syntheses, formerly Systematic Evidence Reviews. Rockville: Agency for Healthcare Research and Quality (US); 2016.

16. Dougherty MK, Brenner AT, Crockett SD, et al. Evaluation of interventions intended to increase colorectal cancer screening rates in the United States: a systematic review and meta-analysis. JAMA Intern Med. 2018;178(12):1645–58.

17. Jager M, Demb J, Asghar A, et al. Mailed outreach is superior to usual care alone for colorectal cancer screening in the USA: a systematic review and meta-analysis. Dig Dis Sci. 2019;64(9):2489–96.

18. Coronado GD, Schneider JL, Petrik A, Rivelli J, Taplin S, Green BB. Implementation successes and challenges in participating in a pragmatic study to improve colon cancer screening: perspectives of health center leaders. Transl Behav Med. 2017;7(3):557–66.

19. Liles EG, Schneider JL, Feldstein AC, et al. Implementation challenges and successes of a population-based colorectal cancer screening program: a qualitative study of stakeholder perspectives. Implement Sci. 2015;10:41.

20.

    Coury JK, Schneider JL, Green BB, et al. Two Medicaid health plans’ models and motivations for improving colorectal cancer screening rates. Transl Behav Med. 2020;10(1):68-77. https://doi.org/10.1093/tbm/iby094.

  21. 21.

    Carvalho ML, Honeycutt S, Escoffery C, Glanz K, Sabbs D, Kegler MC. Balancing fidelity and adaptation: implementing evidence-based chronic disease prevention programs. J Public Health Manag Pract. 2013;19(4):348–56.

    PubMed  Article  Google Scholar 

  22. 22.

    Bopp M, Saunders RP, Lattimore D. The tug-of-war: fidelity versus adaptation throughout the health promotion program life cycle. J Prim Prev. 2013;34(3):193–207.

    PubMed  Article  Google Scholar 

  23. 23.

    van Daele T, van Audenhove C, Hermans D, van den Bergh O, van den Broucke S. Empowerment implementation: enhancing fidelity and adaptation in a psycho-educational intervention. Health Promot Int. 2014;29(2):212–22.

    PubMed  Article  Google Scholar 

  24. 24.

    Coronado GD, Petrik AF, Vollmer WM, et al. Effectiveness of a mailed colorectal cancer screening outreach program in community health clinics: the STOP CRC cluster randomized clinical trial. JAMA Intern Med. 2018;178(9):1174–81.

    PubMed  PubMed Central  Article  Google Scholar 

  25. 25.

    Barrera M Jr, Berkel C, Castro FG. Directions for the advancement of culturally adapted preventive interventions: local adaptations, engagement, and sustainability. Prev Sci. 2017;18(6):640–8.

    PubMed  PubMed Central  Article  Google Scholar 

  26. 26.

    Coronado GD, Schneider JL, Sanchez JJ, Petrik AF, Green B. Reasons for non-response to a direct-mailed FIT kit program: lessons learned from a pragmatic colorectal-cancer screening study in a federally sponsored health center. Transl Behav Med. 2015;5(1):60–7.

    PubMed  Article  Google Scholar 

  27. 27.

    Coronado GD, Green BB, West II, et al. Direct-to-member mailed colorectal cancer screening outreach for Medicaid and Medicare enrollees: implementation and effectiveness outcomes from the BeneFIT study. Cancer. 2020;126(3):540–8.

    PubMed  Article  Google Scholar 

  28. 28.

    Coronado GD, Petrik AF, Vollmer WM, et al. Effectiveness of a Mailed Colorectal Cancer Screening Outreach Program in Community Health Clinics: The STOP CRC Cluster Randomized Clinical Trial. JAMA Intern Med. 2018;178(9):1174–81. https://doi.org/10.1001/jamainternmed.2018.3629.

  29. 29.

    Ambuehl M, Baumgartner M. cna: causal modeling with coincidence analysis. R package version 2.1.1; 2018.

    Google Scholar 

  30. 30.

    Baumgartner M, Ambühl M. Causal modeling with multi-value and fuzzy-set Coincidence Analysis. Polit Sci Res Methods. 2020;8(3):526–42.

  31. 31.

    Whitaker RG, Sperber N, Baumgartner M, Thiem A, Cragun D, Damschroder L, Miech EJ, Slade A, Birken S. Coincidence analysis: a new method for causal inference in implementation science. Imp Sci. 2020;15:108.

  32. 32.

    Hickman SE, Miech EJ, Stump TE, Fowler NR, Unroe KT. Identifying the implementation conditions associated with positive outcomes in a successful nursing facility demonstration project. Gerontologist. 2020;60(8):1566-74.

  33. 33.

    Yakovchenko V, Miech EJ, Chinman MJ, et al. Strategy configurations directly linked to higher hepatitis C virus treatment starts: an applied use of Configurational Comparative Methods. Med Care. 2020;58(5):e31–8.

    PubMed  PubMed Central  Article  Google Scholar 

  34. 34.

    Petrik AF, Green B, Schneider J, et al. Factors influencing implementation of a colorectal cancer screening improvement program in community health centers: an applied use of Configurational Comparative Methods. In: J Gen Int Med; 2020.

    Google Scholar 

  35. 35.

    Coronado GD, Rivelli JS, Fuoco MJ, et al. Effect of reminding patients to complete fecal immunochemical testing: a comparative effectiveness study of automated and live approaches. J Gen Intern Med. 2018;33(1):72–8.

    PubMed  Article  Google Scholar 

  36. 36.

    Dietrich AJ, Tobin JN, Robinson CM, et al. Telephone outreach to increase colon cancer screening in medicaid managed care organizations: a randomized controlled trial. Ann Fam Med. 2013;11(4):335–43.

    PubMed  PubMed Central  Article  Google Scholar 

  37. 37.

    Brenner AT, Rhode J, Yang JY, et al. Comparative effectiveness of mailed reminders with and without fecal immunochemical tests for Medicaid beneficiaries at a large county health department: a randomized controlled trial. Cancer. 2018;124(16):3346–54.

    PubMed  PubMed Central  Article  Google Scholar 

  38. 38.

    Petrik A, Miech E, Green B, et al. Factors influencing implementation of a colorectal cancer screening improvement program in community health centers: an applied use of Configurational Comparative Methods. J Gen Int Med. 2020;35:815–22.

  39. 39.

    Nielson CM, Vollmer WM, Petrik AF, Keast EM, Green BB, Coronado GD. Factors affecting adherence in a pragmatic trial of annual fecal immunochemical testing for colorectal cancer. J Gen Intern Med. 2019;34(6):978–85.

    PubMed  PubMed Central  Article  Google Scholar 

  40. 40.

    van der Vlugt M, Grobbee EJ, Bossuyt PM, et al. Adherence to colorectal cancer screening: four rounds of faecal immunochemical test-based screening. Br J Cancer. 2017;116(1):44–9.

    PubMed  Article  CAS  Google Scholar 

  41. 41.

    Baker DW, Brown T, Goldman SN, et al. Two-year follow-up of the effectiveness of a multifaceted intervention to improve adherence to annual colorectal cancer screening in community health centers. Cancer Causes Control. 2015;26(11):1685–90.

    PubMed  Article  Google Scholar 

Download references

Acknowledgements

Thank you to Robin Daily for administrative support. The authors would like to acknowledge the clinic staff at all the organizations implementing the mailed FIT program for their perseverance during program roll-out.

Funding

This publication is a product of a Health Promotion and Disease Prevention Research Center grant supported by Cooperative Agreement Number U48DP005013 from the Centers for Disease Control and Prevention. The findings and conclusions in this publication are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.

Author information


Contributions

JC was the lead author of the paper and led paper design and conception, assisted with the data collection, contributed to the interpretation of analysis results, and drafted initial and subsequent drafts of the manuscript. EJM conducted the configurational analysis in the study, contributed to the interpretation of results, and helped write, review, and edit the manuscript, especially sections related to the configurational analysis. PS coded and summarized source data to produce data for analysis and presentation, performed exploratory data analysis, wrote sections of paper pertaining to data summarization and exploratory analysis, produced figures and data for tabular summaries, and reviewed the paper. AP contributed to data collection, interpretation of analytical results, and helped write, review, and edit the manuscript.

KC aided in the curation of primary data used for the study and, as the primary contact for the health plan, helped edit the manuscript. BG and LMB contributed to the acquisition of financial support leading to this publication, to the analytic plan, to the interpretation of study results, and to the review and editing of the article. JS contributed to the planning of the study, to the interpretation of study results, and to the review and editing of the manuscript. GC was the senior member of the research team and contributed to the acquisition of financial support leading to this publication, to the interpretation of study results, and to the editing of the article. All authors read and approved the final manuscript.

Authors’ information

JC is a Senior Research Associate at Oregon Health & Science University (OHSU) where she manages a body of research related to colorectal cancer in community health clinics. Prior to joining OHSU, JC managed the implementation of CareOregon’s mailed FIT program in partnership with collaborating health systems. She has over 20 years of experience working with evidence-based health care research, health communications, and practice implementation. GC is an epidemiologist who champions affordable, long-term solutions to health disparity issues. Her research portfolio includes several cost-effective interventions to improve rates of participation in cancer screening among patients served by community health centers. EM is an implementation researcher with expertise in mixed-methods evaluations of facility-level interventions and is a national expert in conducting research with Configurational Comparative Methods.

Corresponding author

Correspondence to Jennifer Coury.

Ethics declarations

Ethics approval and consent to participate

All study documents were reviewed and approved by the University of Washington Institutional Review Board (Protocol number: 00000472); a waiver of informed consent was obtained given minimal risks imposed to study participants.

Consent for publication

No individual person’s data are provided in any form.

Competing interests

From September 2017 to June 2018, Kaiser Permanente Center for Health Research (Dr. Coronado served as the Principal Investigator) participated in an industry-funded study to compare the clinical performance of an experimental fecal immunochemical test (FIT) to an FDA-approved FIT. This study was funded by Quidel Corporation. From February 2016 to July 2018, Jennifer Coury was contracted with CareOregon, Inc., to improve colorectal cancer screening rates in health plan members, including coordination of a mailed FIT program. All other authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Coury, J., Miech, E.J., Styer, P. et al. What’s the “secret sauce”? How implementation variation affects the success of colorectal cancer screening outreach. Implement Sci Commun 2, 5 (2021). https://doi.org/10.1186/s43058-020-00104-7


Keywords

  • Implementation
  • Colorectal cancer
  • Program adaptation
  • Cancer screening outreach
  • Cancer prevention