
“It’s good to feel like you’re doing something”: a qualitative study examining state health department employees’ views on why ineffective programs continue to be implemented in the USA

Abstract

Background

Mis-implementation, the inappropriate continuation of programs or policies that are not evidence-based or the inappropriate termination of evidence-based programs and policies, can lead to the inefficient use of scarce resources in public health agencies and decrease the ability of these agencies to deliver effective programs and improve population health. Little is known about why mis-implementation occurs, which is needed to understand how to address it. This study sought to understand state health department practitioners’ perspectives about what makes programs ineffective and the reasons why ineffective programs continue.

Methods

Eight state health departments (SHDs) were selected to participate in telephone-administered qualitative interviews about decision-making around ending or continuing programs. States were selected based on geographic representation and on their level of mis-implementation (low and high) categorized from our previous national survey. Forty-four SHD chronic disease staff participated in interviews, which were audio-recorded and transcribed verbatim. Transcripts were consensus coded, and themes were identified and summarized. This paper presents two sets of themes, related to (1) what makes a program ineffective and (2) why ineffective programs continue to be implemented according to SHD staff.

Results

Participants considered programs ineffective if they were not evidence-based or if they did not fit well within the population; could not be implemented well due to program restraints or a lack of staff time and resources; did not reach those who could most benefit from the program; or did not show the expected program outcomes through evaluation. Practitioners described several reasons why ineffective programs continued to be implemented, including concerns about damaging the relationships with partner organizations, the presence of program champions, agency capacity, and funding restrictions.

Conclusions

The continued implementation of ineffective programs occurs due to a number of interrelated organizational, relational, human resources, and economic factors. Efforts should focus on preventing mis-implementation since it limits public health agencies’ ability to conduct evidence-based public health, implement evidence-based programs effectively, and reduce the high burden of chronic diseases. The use of evidence-based decision-making in public health agencies and supporting adaptation of programs to improve their fit may prevent mis-implementation. Future work should identify effective strategies to reduce mis-implementation, which can optimize public health practice and improve population health.


Introduction

Chronic diseases such as cardiovascular disease, cancer, and diabetes cause the majority of deaths worldwide and are costly to individuals, healthcare systems, and communities [1,2,3]. Governmental public health systems have been tasked with addressing the burden of chronic diseases by using evidence-based approaches to implement evidence-based programs and policies (EBPPs) that can improve modifiable chronic disease risk factors [4,5,6,7]. One approach is evidence-based public health, which is characterized by using the best available scientific evidence and information about the characteristics, needs, and preferences of the community to plan, implement, and evaluate programs and policies [8, 9]. Despite its known benefits, evidence-based public health is not used as often as it should be [9,10,11]—practitioners report that as many as 60% of programs implemented are not evidence-based [12, 13].

State health departments (SHDs) are one of three levels of governmental public health agencies in the USA: national, state, and local. The US Constitution grants much of the authority to protect the public’s health to the states. SHDs receive funds from national public health agencies, state-level legislatures, and other funders to implement chronic disease prevention and management programs and policies [14]. SHDs often act as the granting agency for local partners (e.g., local health departments, community-based organizations), which are responsible for delivering programs to individuals and communities. Decision-making in SHDs about implementing programs and policies is complex and varies by state but typically includes top leadership (e.g., department or division directors), middle managers, and programmatic staff. Within an SHD, chronic disease departments vary widely in terms of the level of hierarchy, specific positions and titles, and how governance is shared with local-level agencies [15]. Decision-making is likewise complex and shared among several types of practitioners: those in leadership positions (e.g., division directors, program managers) and those in lower-level positions (e.g., health educators).

Within SHDs, an emerging area of research is focused on mis-implementation, which contributes to our understanding of the underuse of evidence-based public health and strengthens efforts to promote its use [16]. Mis-implementation is defined as the inappropriate continuation of programs or policies that are not evidence-based or the inappropriate termination of evidence-based programs and policies [17]. It is important to study mis-implementation as a unique phenomenon since it may occur through mechanisms beyond the absence of evidence-based public health, although both result in sub-optimal implementation of EBPPs. Cross-sectional studies have quantified the extent to which mis-implementation occurs in public health settings [17, 18] and identified practitioner- and organizational-level correlates of mis-implementation, such as individuals having the skills to modify programs or policies for a new population and the use of economic evaluation in decision-making about programs [18,19,20]. Additionally, a qualitative analysis of the same data presented in this paper identified key characteristics of agency leaders, e.g., being transparent and facilitating bidirectional communication, that can prevent mis-implementation [21]. These studies lay a strong foundation for understanding mis-implementation. To date, however, there has been little focus on understanding mis-implementation from the perspective of public health practitioners, including qualitative research to determine practitioners’ views of mis-implementation and why it occurs. Previous research on staff perspectives of program adoption, implementation, and sustainability has identified multiple factors influencing these outcomes, including the fit of a program with its target population, strong leadership and training opportunities, sufficient agency capacity and funding, and internal and external support for a program [22,23,24,25]. However, additional research is needed to understand practitioners’ views on what influences mis-implementation, which may differ from what supports program implementation.

Thus, the purpose of this study was to understand practitioners’ views about what makes a program ineffective and why ineffective programs sometimes continue to be implemented in public health practice. For this study, we focus on the inappropriate continuation of ineffective programs. Based on our previous work, the inappropriate termination of effective programs is primarily related to the lack of funding available for a program [17, 18], whereas there are many factors at multiple levels that influence the continued implementation of ineffective programs [18]. By focusing on practitioners’ perspectives on mis-implementation, this research can inform future work to develop and identify strategies to prevent or reduce mis-implementation that address the contextual factors that are key drivers of mis-implementation and that are relevant to practitioners.

Methods

This study used a qualitative description approach to understand the perspectives of those with first-hand experience of mis-implementation in governmental public health agencies [26]. A constructivist/interpretivist paradigm guided the development of the interview guide questions, analysis, and interpretation because the study was designed to understand and gain insights about how employees within governmental public health agencies, which differ across the USA, perceive and experience mis-implementation within their workplaces [27].

Interview recruitment

For this study, states were purposively selected based on responses to a previously conducted quantitative survey that investigated implementation decisions in the chronic disease units of SHDs in all US states [18]. The states in this study (n = 8) were chosen to maximize variation in self-reported frequency of mis-implementation (lower and higher), population size, and geographic representation across the four US Census Bureau regions (South, Midwest, Northeast, West). These contextual differences may influence public health practice; thus, it was important to capture a variety of experiences to identify generalizable knowledge about mis-implementation. After selecting states, the research team contacted their chronic disease directors to inform each director about the invitation that the research team would send to their employees. Directors were asked whether they wanted their staff to participate and whether they had contact suggestions beyond those identified for the previously conducted national survey. If a director asked that their employees not be contacted, the research team replaced the state and repeated the approach.

All SHD chronic disease prevention or health promotion program staff in the eight sampled states who participated in our national survey or were recommended by the chronic disease director received an invitation to participate. Administrative staff were ineligible to participate, since they are typically not involved in programmatic decision-making. When fewer potential participants than expected responded, the National Association of Chronic Disease Directors membership list was used to identify and invite additional potential participants. Participants were told that the purpose of the study was to learn about the factors that influence decision-making processes at SHDs to continue or end chronic disease programs. From the initial respondents, we asked for recommendations of additional contacts.

Interviews occurred between February 2019 and June 2019. Participants who completed the interview were offered a $40 Amazon gift card incentive or donation to a health-related non-profit organization from a list of options. This study was approved by the Washington University in St. Louis Institutional Review Board of the Human Research Protection Office (IRB# 201812062).

Interview guide development

The interview guide questions focused on understanding the contextual factors associated with mis-implementation. The questions were developed based on the quantitative findings of the previously conducted national survey [16, 18]. These two data collection efforts, the national survey and the interviews, were designed as part of a larger study to develop an agent-based model to understand mis-implementation from a systems perspective [16, 28]. The national survey used the social-ecological framework to understand potential factors related to mis-implementation. This framework, widely used in public health research, highlights the multi-level, bidirectional influences on program implementation, whereby individual SHD staff characteristics, SHD-level factors, organizational capacity, and the external funding and policy environments all influence program implementation [29]. Interview guide questions were developed to build upon the survey findings and understand the influence of these factors in more depth. For example, interview questions asked why decisions were made within an SHD to continue an ineffective program and how key factors influenced the continuation of ineffective programs.

The interview guide questions were revised with input from the research team and a stakeholder advisory board of public health practitioners. The development process included a pilot test with a member of our advisory board, a former SHD practitioner, to ensure the appropriateness of the length and language of the interview guide. The final interview guide consisted of open-ended questions eliciting the perceptions of public health professionals working in SHDs’ chronic disease programs regarding the decision-making processes, reasons, facilitators, and barriers for continuing programs. The full text of the interview guide questions is available in Additional file 1. The interviews were conducted over the telephone by trained research assistants, and the interview guide was sent to interviewees in advance. Each interview was audio recorded.

Data analysis

Field notes were compiled after each interview to inform the interpretation of interview coding and to guide discussions about whether thematic saturation had been reached. Each recorded interview was professionally transcribed using an online service (rev.com). Transcripts were de-identified by the research team and uploaded to NVivo (version 12).

We used a deductive approach for our thematic analysis [30]. An initial codebook was developed based on the interview guide questions. The initial codebook, particularly the initial set of sub-codes, was informed by the social-ecological framework used to develop the quantitative survey on which these interviews were based (Additional file 2). For example, in the parent code “Decision Making: why an ineffective program was continued,” sub-codes representing the multiple levels of influence on implementation were added to the initial codebook to facilitate deductive coding—i.e., program-level factors such as alternative programs available, agency-level factors such as capacity for implementation, community support, and political will/influence on a program. The codes and sub-codes of the first version were revised throughout the coding of the transcripts.

For the coding process, five research team members coded transcripts randomly assigned to them. All transcripts were consensus coded in pairs: differences between coders were discussed and resolved, and when consensus could not be reached, a third team member facilitated the discussion until it was. After consensus was reached between the pairs of coders on their assigned transcripts, the full team identified and summarized sub-codes to investigate potential overlap among sub-codes and made the needed adjustments. The final codebook consisted of nine codes and many sub-codes. While reviewing the codes and their coding reports, data saturation was considered reached when all codes and sub-codes had a variety of data representing them and few or no new concepts emerged from subsequent interviews. The presented work focuses on the themes from two parent codes: (1) why a program is considered ineffective and (2) what multi-level factors lead to its continuation.

Results

A total of 44 interviews were conducted, with a range of 3 to 9 interviews per state. Interviews lasted between 20 and 68 min (average = 43 min). Nearly all (n = 43) interviewees were female. The average amount of time these practitioners had been in their agency and working in public health was 11 and 15 years, respectively. Most of the practitioners served as program managers or section directors, i.e., these interviewees were mainly middle managers. No discernable differences in themes were identified based on the reported level of mis-implementation or geographic region; thus, themes are presented for the overall sample.

The results are presented separately by major theme (i.e., code) and are summarized in Table 1. Themes appear as subheadings within each section. First, we present practitioners’ conceptualizations of what makes a public health program ineffective, illustrating how practitioners define “ineffective.” Then, we summarize key reasons why practitioners believed ineffective programs continued to be implemented.

Table 1 Overview of themes

Why a program is considered ineffective

Lack of program fit

Practitioners described instances where a program was ill-suited to the populations they serve (i.e., a lack of external validity). As a result, the program, as implemented, did not reach the priority population who could benefit from it and/or did not improve the intended health outcomes. For example, the mode of delivery (e.g., in-person, group-based, or home visits) sometimes made participation in the program difficult, either because of logistical challenges such as transportation or because of other barriers:

…so a lot of families are unwilling to do the program because they’re afraid of the city coming to their home, they’re afraid of landlord retaliation, so the numbers are pretty low. [Participant 1]

Several of the programs described as ineffective in this manner were ones that have research evidence supporting their effectiveness and have been recognized nationally as evidence-based programs. For example, the Diabetes Prevention Program, which uses in-person educational sessions and goal setting to decrease diabetes risk, was deemed ineffective because of the lack of implementation flexibility to reach high-risk population groups who cannot attend multiple in-person sessions.

No measured benefits

Often, programs were described as ineffective because the local data did not show evidence of effectiveness, even though most were considered evidence-based programs, i.e., those that have a strong research base supporting their effectiveness or those that are included in the Community Guide. These programs were described as ineffective overall or as ineffective because specific components of the program did not achieve a given objective.

there hasn’t been really much change … in the past three surveys that were done. And they’re usually done approximately five years apart, so for a long time we haven’t really seen any major change at a population level... [Participant 5]

We’re hearing wonderful things from it but our numbers and the data that we’re seeing isn’t bearing it out. [Participant 6]

Unknown effectiveness

Some practitioners mentioned programs that were deemed ineffective within their departments because there was a lack of evaluation data to measure and understand the programs’ impact on individuals, even though the programs may not have been truly ineffective:

I don’t think it was ineffective. We just couldn’t assess how much it was contributing… we couldn’t to the degree of saying, well, from $220,000 to $300,000 a year, we know that putting in that much amount was getting us this much impact. We couldn’t describe that adequately. [Participant 7]

…sometimes we get stuck in a rut when doing programs with same way, you know, year after year, and maybe not thinking about evaluating outcomes. ‘Cause it’s hard to evaluate [individual] outcomes from [an informational program]. [Participant 8]

One reason it’s ineffective is we cannot analyze and evaluate any kind of result ‘cause we just give it out. [Participant 9]

Staffing constraints

Practitioners noted that programs were often ineffective due to a lack of sufficient staff or dedicated time to support successful implementation or scale-up across program planning and implementation. For example, there was not enough money to cover necessary costs, such as ongoing support for those delivering programs, which resulted in the program being ineffective.

It just comes down to their capacity to actually do the necessary groundwork to make all those necessary connections. [Participant 2]

There has been a lot of movement on the federal level, so even our print materials aren’t necessarily up to date at this point. Just because there’s so much else going on, I don’t have a lot of time to focus on this project. [Participant 3]

Because we don’t have the capacity to go full blown, and really reach a significant number of people. [Participant 4]

Why ineffective programs are continued

Inertia and sunk costs

Practitioners described the ideas of inertia and sunk costs when discussing why ineffective programs are continued. They highlighted that it was difficult to make changes to a program once it was already in place; as such, modifying an ineffective, already implemented program to improve its effectiveness was seen as too difficult. Also, once a program was funded and in place, many were hesitant to end it and lose the money and time already spent on implementation.

And you’re dictated by funding. It’s hard to change the course, especially three years into a five year grant. [Participant 17]

There’s a pretty hefty bureaucratic process to creating and maintaining contracts. So you had this ongoing thing that’s been approved time and time again. Each time you do it it’s just easier to get approved… And the way I kinda think about it is if you spend four months training for a marathon and then the week before you got shin splints, you might still run the marathon just because you put in all the work up to this point. [Participant 12]

Information gaps

The importance of program evaluation was discussed in relation to having adequate information to decide whether or not to continue a program. Some interviewees shared that an evaluation was not built into the original implementation plan, or that one existed but did not capture the right set of outcomes at the right time, given the program’s mechanism of change. For example, only shorter-term outcomes were collected for a program that would have needed longer-term outcomes to demonstrate effectiveness. Practitioners also discussed challenges in using program evaluation data to inform decisions about whether to continue, modify, or discontinue a program. In one instance, there was an evaluation plan, but the data were reviewed too late in the grant period to make any changes to the program.

I think we just needed more time and more information from the field. [Participant 22]

I think for this was the struggle because we hadn’t put in an evaluator to the intervention from the start or the minimum an evaluation plan that would lead back to an evaluator at determined time. It was really hard for us. We didn’t have a baseline. [Participant 7]

Desire to act

Practitioners discussed the idea that it feels good to know that at least something is being done to address a public health concern, even if the program is not effective for its intended outcome and especially when there are no alternative programs that fit the disease, behavior, or population.

…at least we’re doing something or at least we’re out there, that kind of attitude. [Participant 15]

But it made people feel good that they were doing something for a vulnerable population. [Participant 11]

Agency capacity

Several practitioners discussed the impact of agency capacity (e.g., money, time, personnel) on decisions to continue ineffective programs. Some expressed concern that the decision to end a program, ineffective or effective, would put further strain on limited resources. For example, ending a program would force a workgroup to find other salary support for all employees funded by that program or risk those employees losing their jobs.

I think another thing that it somewhat went into our decision making, but it probably is something we could have looked at more, is even our own staffing capacity and how we could better leverage other programs within the state because I think a lot of it, like [Respondent 1] mentioned, we just had so much staff turnover that we weren’t able to really dive in and make the changes in later parts of the grant. [Participant 19]

[The existing program] maintained a position and kept a certain level of funding coming in. [Participant 15]

I mean, I think a lot of the reason that it continued was just because that was a way to pay his salary. If it didn’t continue, then we were going to have to let him go because he couldn’t only work part-time through the other grant that he was through… I mean if we have somebody who’s willing to stay, we try to keep them as much as possible… [Participant 21]

Partnership maintenance

The reason practitioners cited most often for why an ineffective program was continued was that continued implementation of a program, regardless of its evidence base, supported the maintenance of good relationships with community partners such as local health departments. Forcing community partners to modify a program, or ending a program, could mean burning bridges with partners who were often the frontline implementers in communities.

Like they were excited about it, you know, like you don’t want to squash their excitement. [Participant 10]

…there can be incredible pushback to being told what to do, because that’s how people feel in the field. They feel like they’re being told what their community needs and what to do. [Participant 11]

…we don’t want to burn bridges and we want to continue to work with the people that we’ve been working with for a long time. [Participant 12]

Program champions

Additionally, practitioners noted that sometimes ineffective programs were continued due to the presence of program champions who advocated for the program. Program champions could be members of the community receiving the program, partnering organizations who implement the program locally, SHD staff, or policymakers who have influence over funding decisions.

It was hard to convince the commissioner at that time that we shouldn’t have it, … that this was not effective anymore and certainly is not sustainable because the health departments aren’t being paid for these visits. [Participant 13]

It’s a program that was, that had advocates from outside the department that established that even got the program going that convinced people that the program was needed and it was a program that was never going to accomplish it’s large stated goal. [Participant 11]

Discussion

This study sought to understand SHD employees’ perspectives on why programs were ineffective and why some ineffective programs still continued to be implemented. To our knowledge, this is the first study to qualitatively explore why mis-implementation occurs from the perspective of public health practitioners. Themes discussed by practitioners aligned with the social-ecological framework used to guide the conceptualization of the study, in that practitioners described factors at multiple levels (i.e., program characteristics and staff-, agency-, and outer context-level factors) operating to influence mis-implementation. This is consistent with research identifying the multi-level determinants of low-value care, which is similar to the conceptualization of ineffective programs in this study [31]. Programs were typically considered ineffective if they were ill-suited to the population, could not be implemented sufficiently, or failed to reach those who could most benefit from the program. Some ineffective programs continued to be implemented because of internal decision-making dynamics, as a way to preserve relationships with partners, or as a way to keep staff fully funded.

This study extends the existing literature on mis-implementation, which has quantified the prevalence of mis-implementation in state and local health departments [17,18,19,20], identified multi-level correlates of mis-implementation [18], and generated potential strategies from public health practitioners for ending ineffective programs [32]. Importantly, this study begins to fill an important gap in knowledge of public health practitioners’ perceptions of programs as something that should continue or end, as noted by Allen and colleagues [19]. These results provide novel information about how public health practitioners conceptualize ineffective programs—in particular, programs that are not effective despite being evidence-based in other contexts. This indicates that additional work is needed to build robust evidence about effectiveness in diverse settings and about the contextual factors that influence intervention effectiveness, to aid public health practitioners in selecting an evidence-based program that will be effective in their community. Also, the qualitative findings in this study support the quantitative findings in Padek et al. on correlates of inappropriate continuation of ineffective programs [18]. Factors common to both studies include available funding and support from external audiences such as policymakers, program champions, and the general public. New in this study is the finding that ineffective programs are continued to maintain strong partner relationships, with practitioners hesitant to jeopardize future collaborations given the importance of partnerships in public health practice [33]. Additional research is needed to determine how to address key contributors to mis-implementation, such as supporting public health practitioners in navigating difficult conversations and negotiations with their partners about adapting or de-implementing ineffective programs.

These findings reiterate the importance of program-level factors (e.g., program fit, collecting evaluation data) that have been widely recognized in implementation science theories, including the Dynamic Sustainability Framework and the Consolidated Framework for Implementation Research [24, 34,35,36,37]. Of particular relevance to this study, the Dynamic Sustainability Framework “anchors the ultimate benefit of the intervention in terms of its ability to fit within a practice setting” and posits that the best fit of a program is achieved through ongoing evaluation of program effectiveness and of contextual factors, which informs refinements made to a program and ultimately leads to the program’s sustainment [34]. A greater focus on building the skills of public health practitioners to develop, implement, and track effective refinements, i.e., adaptations, is needed [38], which can be facilitated by the longstanding attention to adaptation in implementation science [39, 40]. Adaptation in public health practice is complex: it requires practitioners to have the skills and resources to adapt evidence-based programs based on local evaluation data, top leadership to buy into the value of adaptation, and funders to allow formal adaptations to be implemented. Often, state and local health departments in the USA are funded to deliver a particular program exactly as it was developed, with little flexibility for adapting programs. Recent efforts to support public health researchers and professionals as they plan for, implement, and track adaptations, such as the IM ADAPT tool (www.imadapt.org) and FRAME, have great potential to increase the use, tracking, and assessment of adaptation in public health practice [41, 42].

Practitioners described the influence of higher-level factors on mis-implementation, including agency capacity (e.g., dedicated staff), program champions, and maintaining relationships with external partners. These factors align with those previously identified as influential in the successful implementation of evidence-based programs [22, 23, 25, 43]. For example, a public health program sustainability framework developed by Schell and colleagues, based on a review of empirical research and a concept mapping exercise, identified several domains that correspond with the themes identified in this study, such as funding stability, partnerships, organizational capacity, program evaluation, and program adaptation [44]. These constructs were identified as contributing to the sustainability of evidence-based programs in public health practice; however, the same factors may also contribute to the sustainment of ineffective programs when incentives (e.g., funding) and infrastructures are in place for program implementation [31, 45]. Extending these previous studies, it is noteworthy that practitioners emphasized emotional and relational influences on the continued implementation of ineffective programs, beyond the program and organizational factors identified in previous research. The typical focus of capacity-building efforts for public health practitioners is on individual skill-building and modifying organizational factors to support the successful implementation of EBPPs [46]. Specific to mis-implementation, it may be necessary to address additional concepts in these efforts, like the idea noted by participants that “it’s good to feel like you’re doing something” and how to effectively manage partnerships when changes need to be made to a program about which partnering organizations or program champions feel strongly.

Themes from these interviews highlight how critical it is to prevent mis-implementation. Practitioners noted how difficult it was to make changes to a program or to discontinue it once it was implemented, discussed above as the inertia effect. While evidence-based public health and mis-implementation are distinct concepts, the use of an evidence-based public health framework is a key strategy to prevent mis-implementation [8, 9]. The evidence-based public health approach includes collecting local data to gain insights about the health problem within a community and the contextual drivers of the problem; matching this information with the best available evidence to prioritize program options; implementing the chosen program; and evaluating it [8]. Using an evidence-based public health approach may prevent some of the problems associated with mis-implementation discussed by these practitioners, such as a program not fitting the context of the community, a lack of evaluation data to understand whether or not a program was effective, and the influence of stakeholders such as program champions and community partners on mis-implementation. Leaders within public health agencies play an important role in setting the expectations for using an evidence-based public health approach [19, 47,48,49]. Leadership support may be especially important in preventing mis-implementation, as the expectation and support for ongoing, evidence-based decision-making could ameliorate the effects of inertia and sunk costs on decisions to continue implementing ineffective programs. Previous research has focused on developing, testing, and disseminating strategies to increase the use of evidence-based public health [50,51,52,53,54,55,56], and future research and practice efforts should continue to determine the best ways to support the use of evidence-based public health in governmental public health agencies [57].

If mis-implementation cannot be prevented and programs cannot be adapted, lessons from the emerging area of de-implementation may be useful for reversing mis-implementation by ending ineffective programs [57,58,59,60]. De-implementation research has focused primarily on low-value and ineffective practices in clinical settings [59]. Merging research about de-implementation in public health and social service settings can guide the de-implementation of ineffective programs by funders, SHDs, and contracted local implementing agencies [57]. McKay and colleagues suggest a stepwise approach to de-implementation: identifying the programs that should be de-implemented, assessing the context for de-implementation, actively de-implementing, and evaluating the de-implementation process [57]. As in other aspects of implementation research, stakeholders’ buy-in for de-implementation will be critical [61], given that SHD practitioners were willing to allow ineffective programs to continue in part to maintain relationships with partner organizations and due to the presence of a program champion. Future work should focus on developing strategies that address the social and organizational contexts in order to facilitate de-implementation where mis-implementation is occurring.

Several limitations of this study should be taken into account when interpreting the findings, including limited generalizability due to selection bias, the definition of ineffective programs, and privacy concerns. States were selected based on responses to a national survey, which did not have equal response rates across states, and not all divisions within an SHD were interviewed. Thus, our results may not generalize to states that were not selected or to entire SHDs. In addition, many of our respondents were middle managers, who sit at the intersection of the day-to-day operations of a program and some decision-making power and are therefore well suited to answer questions about program mis-implementation and the organizational-level factors influencing it [62]. However, the perspectives of those closer to the frontlines of program implementation, who were less represented in this study, may differ from those of middle managers; future research should seek to obtain their perspectives and understand differences by position. Also, we did not objectively assess whether or not the programs discussed were ineffective (i.e., that a program lacks research evidence supporting its effectiveness) and instead defined ineffective programs according to the perspectives of the interviewees. Last, although interviews were considered confidential, interviewees sometimes expressed hesitancy in divulging certain information that, if shared, may have had funding or political consequences.

Conclusion

The qualitative results presented in this paper contribute to and extend our understanding of mis-implementation in public health practice. A novel finding of this work is that SHD practitioners described ineffective programs as those that were ineffective within their local communities. As such, careful consideration should be given to what “evidence-based” means. The disconnect between what researchers and federal funders deem as an evidence-based program and what public health practitioners view as evidence-based within their communities could create tension in efforts to support the implementation of programs in public health practice. Additionally, practitioners in SHDs in the USA described multiple levels of factors that contribute to the continuation of ineffective programs—features of the program itself, agency capacity for implementation, relationships with partner organizations, and funding considerations. The description of these factors by practitioners highlights the importance of considering the emotional and relational implications of public health work.

The results from this study can inform the development of strategies to prevent or reduce mis-implementation in a manner that focuses on stakeholder-relevant contextual factors that contribute to mis-implementation. By identifying practitioners’ perceptions about mis-implementation and incorporating them into strategies to prevent mis-implementation and support the use of evidence-based public health, it is more likely that efforts will address salient contextual factors and be relevant to SHD practitioners. Future research is needed to identify effective strategies to address mis-implementation in public health practice and how to integrate them into governmental public health agencies, to optimize public health practice and ultimately improve population health.

Availability of data and materials

The datasets used during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CDC:

Centers for Disease Control and Prevention

EBPPs:

Evidence-based programs and policies

SHD:

State health department

References

  1. Heron M. Deaths: leading causes for 2017. National Vital Statistics Reports [Internet]. 2019;68. Available from: https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_06-508.pdf.
  2. James SL, Abate D, Abate KH, Abay SM, Abbafati C, Abbasi N, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392(10159):1789–858.
  3. Waters H, Graf M. The costs of chronic disease in the U.S. Milken Institute; 2018. Available from: https://milkeninstitute.org/sites/default/files/reports-pdf/ChronicDiseases-HighRes-FINAL.pdf.
  4. American Public Health Association. Supporting research and evidence-based public health practice in state and local health agencies. 2017. Available from: https://www.apha.org/policies-and-advocacy/public-health-policy-statements/policy-database/2018/01/18/supporting-research-and-evidence-based-public-health-practice.
  5. Canadian Institutes of Health Research. Canadian Institutes of Health Research Act. 2019. Available from: https://laws.justice.gc.ca/PDF/C-18.1.pdf.
  6. National Health and Medical Research Council. Research translation. Available from: https://www.nhmrc.gov.au/research-policy/research-translation-and-impact.
  7. National Institute for Health and Care Excellence. What we do. Available from: https://www.nice.org.uk/about/what-we-do.
  8. Brownson RC, Baker EA, Deshpande AD, Gillespie KN. Evidence-based public health. Oxford: Oxford University Press; 2017.
  9. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.
  10. Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003;93(8):1261–7.
  11. Hannon PA, Fernandez ME, Williams RS, Mullen PD, Escoffery C, Kreuter MW, et al. Cancer control planners’ perceptions and use of evidence-based programs. J Public Health Manag Pract. 2010;16(3):E1–8.
  12. Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC. Improving the public health workforce: evaluation of a training course to enhance evidence-based decision making. J Public Health Manag Pract. 2008;14(2):138–43.
  13. Gibbert WS, Keating SM, Jacobs JA, et al. Training the workforce in evidence-based public health: an evaluation of impact among US and international practitioners. Prev Chronic Dis. 2013;10:E148. https://doi.org/10.5888/pcd10.130120.
  14. Federal Grant and Cooperative Agreement Act, Pub. L. No. 95–224 Stat. 3 (1978).
  15. Centers for Disease Control and Prevention. Health department governance. 2018. Available from: https://www.cdc.gov/publichealthgateway/sitesgovernance/index.html.
  16. Padek M, Allen P, Erwin PC, Franco M, Hammond RA, Heuberger B, et al. Toward optimal implementation of cancer prevention and control programs in public health: a study protocol on mis-implementation. Implement Sci. 2018;13(1):49.
  17. Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, et al. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48(5):543–51.
  18. Padek MM, Mazzucca S, Allen P, Rodriguez Weno E, Tsai E, Luke DA, et al. Patterns and correlates of mis-implementation in state chronic disease public health practice in the United States. BMC Public Health. 2021;21(1):101.
  19. Allen P, Jacob RR, Parks RG, Mazzucca S, Hu H, Robinson M, et al. Perspectives on program mis-implementation among U.S. local public health departments. BMC Health Serv Res. 2020;20(1):258.
  20. Furtado KS, Budd EL, Armstrong R, Pettman T, Reis R, Sung-Chan P, et al. A cross-country study of mis-implementation in public health practice. BMC Public Health. 2019;19(1):270.
  21. Moreland-Russell S, Weno ER, Smith R, Padek M, Saliba LF, Brownson R. Leading the way: qualities of leaders in preventing mis-implementation of public health programs. Implement Sci Commun. Under review.
  22. Day RE, Sahota P, Christian MS. Effective implementation of primary school-based healthy lifestyle programmes: a qualitative study of views of school staff. BMC Public Health. 2019;19(1):1239.
  23. Farb H, Sacca K, Variano M, Gentry L, Relle M, Bertrand J. Provider and staff perceptions and experiences implementing behavioral health integration in six low-income health care organizations. J Behav Health Serv Res. 2018;45(1):143–55.
  24. Shelton RC, Brotzman LE, Johnson D, Erwin D. Trust and mistrust in shaping adaptation and de-implementation in the context of changing screening guidelines. Ethn Dis. 2021;31(1):119–32.
  25. Stamatakis KA, Lewis M, Khoong EC, Lasee C. State practitioner insights into local public health challenges and opportunities in obesity prevention: a qualitative study. Prev Chronic Dis. 2014;11:E39.
  26. Neergaard MA, Olesen F, Andersen RS, Sondergaard J. Qualitative description - the poor cousin of health research? BMC Med Res Methodol. 2009;9:52.
  27. Creswell J, Poth C. Qualitative inquiry and research design: choosing among five approaches. Thousand Oaks: Sage; 2018.
  28. Ornstein JT, Hammond RA, Padek M, Mazzucca S, Brownson RC. Rugged landscapes: complexity and implementation science. Implement Sci. 2020;15(1):85.
  29. McLeroy KR, Bibeau D, Steckler A, Glanz K. An ecological perspective on health promotion programs. Health Educ Q. 1988;15(4):351–77.
  30. Miles MB, Huberman AM, Saldana J. Qualitative data analysis: a methods sourcebook. 4th ed. Thousand Oaks: SAGE Publications; 2019.
  31. Augustsson H, Ingvarsson S, Nilsen P, von Thiele Schwarz U, Muli I, Dervish J, et al. Determinants for the use and de-implementation of low-value care in health care: a scoping review. Implement Sci Commun. 2021;2(1):13.
  32. Rodriguez Weno E, Allen P, Mazzucca S, et al. Approaches for ending ineffective programs: strategies from state public health practitioners. Front Public Health. 2021;9:727005. https://doi.org/10.3389/fpubh.2021.727005.
  33. Hann NE. Transforming public health through community partnerships. Prev Chronic Dis. 2005;2(Spec No):A03.
  34. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
  35. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
  36. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.
  37. Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions and practices in public health and health care. Annu Rev Public Health. 2018;39:55–76.
  38. Escoffery C, Lebow-Skelley E, Haardoerfer R, Boing E, Udelson H, Wood R, et al. A systematic review of adaptations of evidence-based public health interventions globally. Implement Sci. 2018;13(1):125.
  39. Baumann AA, Cabassa LJ, Stirman SW. Adaptation in dissemination and implementation science. In: Dissemination and implementation research in health: translating science to practice. 2nd ed. 2017. p. 286–300.
  40. Chambers DA, Norton WE. The adaptome: advancing the science of intervention adaptation. Am J Prev Med. 2016;51(4 Suppl 2):S124–31.
  41. Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.
  42. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.
  43. Kelly P, Hegarty J, Barry J, Dyer KR, Horgan A. A systematic review of the relationship between staff perceptions of organizational readiness to change and the process of innovation adoption in substance misuse treatment programs. J Subst Abus Treat. 2017;80:6–25.
  44. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8:15.
  45. Roczniewska M, von Thiele Schwarz U, Augustsson H, Nilsen P, Ingvarsson S, Hasson H. How do healthcare professionals make decisions concerning low-value care practices? Study protocol of a factorial survey experiment on de-implementation. Implement Sci Commun. 2021;2(1):50.
  46. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27–53.
  47. Allen P, O’Connor JC, Best LA, Lakshman M, Jacob RR, Brownson RC. Management practices to build evidence-based decision-making capacity for chronic disease prevention in Georgia: a case study. Prev Chronic Dis. 2018;15:E92.
  48. Hu H, Allen P, Yan Y, Reis RS, Jacob RR, Brownson RC. Organizational supports for research evidence use in state public health agencies: a latent class analysis. J Public Health Manag Pract. 2019;25(4):373–81.
  49. Jacob RR, Allen PM, Ahrendt LJ, Brownson RC. Learning about and using research evidence among public health practitioners. Am J Prev Med. 2017;52(3 Suppl 3):S304–8.
  50. Armstrong R, Waters E, Dobbins M, Anderson L, Moore L, Petticrew M, et al. Knowledge translation strategies to improve the use of evidence in public health decision making in local government: intervention design and implementation plan. Implement Sci. 2013;8:121.
  51. Armstrong R, Waters E, Moore L, Dobbins M, Pettman T, Burns C, et al. Understanding evidence: a statewide survey to explore evidence-informed public health decision-making in a local government setting. Implement Sci. 2014;9:188.
  52. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:3.1–3.27.
  53. Diem G, Brownson RC, Grabauskas V, Shatchkute A, Stachenko S. Prevention and control of noncommunicable diseases through evidence-based public health: implementing the NCD 2020 action plan. Glob Health Promot. 2016;23(3):5–13.
  54. Pettman TL, Armstrong R, Jones K, Waters E, Doyle J. Cochrane update: building capacity in evidence-informed decision-making to improve public health. J Public Health. 2013;35(4):624–7.
  55. Waters E, Armstrong R, Swinburn B, Moore L, Dobbins M, Anderson L, et al. An exploratory cluster randomised controlled trial of knowledge translation strategies to support evidence-informed decision-making in local governments (the KT4LG study). BMC Public Health. 2011;11:34.
  56. Yousefi Nooraie R, Lohfeld L, Marin A, Hanneman R, Dobbins M. Informing the implementation of evidence-informed decision making interventions using a social network analysis perspective; a mixed-methods study. BMC Health Serv Res. 2017;17(1):122.
  57. McKay VR, Morshed AB, Brownson RC, Proctor EK, Prusaczyk B. Letting go: conceptualizing intervention de-implementation in public health and social service settings. Am J Community Psychol. 2018;62(1-2):189–202.
  58. Helfrich CD, Hartmann CW, Parikh TJ, Au DH. Promoting health equity through de-implementation research. Ethn Dis. 2019;29(Suppl 1):93–6.
  59. Norton WE, Kennedy AE, Chambers DA. Studying de-implementation in health: an analysis of funded research grants. Implement Sci. 2017;12(1):144.
  60. Prusaczyk B, Swindle T, Curran G. Defining and conceptualizing outcomes for de-implementation: key distinctions from implementation outcomes. Implement Sci Commun. 2020;1:1–10.
  61. Minkler M, Salvatore A, Chang C. Participatory approaches for study design and analysis in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2017. p. 175–90.
  62. Birken S, Clary A, Tabriz AA, Turner K, Meza R, Zizzi A, et al. Middle managers’ role in implementing evidence-based practices in healthcare: a systematic review. Implement Sci. 2018;13(1):149.


Acknowledgements

We are grateful to the participants who provided their time and expertise about public health practice. We thank Mackenzie Robinson and Emily Davis for their assistance in conducting some of the interviews. Also, we appreciate the administrative support of Mary Adams and Linda Dix.

Funding

This work was supported by the National Cancer Institute R01CA214530 and P50CA244431, the National Institute of Diabetes and Digestive and Kidney Diseases P30DK092950, and Cooperative Agreement number U48DP006395 from the Centers for Disease Control and Prevention. The findings and conclusions in this paper are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or the Centers for Disease Control and Prevention.

Author information

Contributions

Conceptualization and design: PA, MP, and RCB. Interview guide development: SM, ERW, PA, MP, and RCB. Interview coding and summary: SM, LFS, RS, ERW, PA, and MP. Review of findings: all authors. Writing: SM and LFS. All authors read, edited, and approved the final manuscript.

Corresponding author

Correspondence to Stephanie Mazzucca.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Washington University in St. Louis Institutional Review Board. Participant consent was obtained.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Interview guide questions.

Additional file 2.

Codebook.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mazzucca, S., Saliba, L.F., Smith, R. et al. “It’s good to feel like you’re doing something”: a qualitative study examining state health department employees’ views on why ineffective programs continue to be implemented in the USA. Implement Sci Commun 3, 4 (2022). https://doi.org/10.1186/s43058-021-00252-4
