Refining Expert Recommendations for Implementing Change (ERIC) strategy surveys using cognitive interviews with frontline providers

Abstract

Background

The Expert Recommendations for Implementing Change (ERIC) compilation includes 73 defined implementation strategies clustered into nine content areas. This taxonomy has been used to track implementation strategies over time using surveys. This study aimed to improve the ERIC survey using cognitive interviews with non-implementation scientist clinicians.

Methods

Starting in 2015, we developed and fielded annual ERIC surveys to evaluate liver care in the Veterans Health Administration (VA). We invited providers who had completed at least three surveys to participate in cognitive interviews (October 2020 to October 2021). Before the interviews, participants reviewed the complete 73-item ERIC survey and marked which strategies were unclear due to wording, conceptual confusion, or overlap with other strategies. They then engaged in semi-structured cognitive interviews to describe the experience of completing the survey and elaborate on which strategies required further clarification.

Results

Twelve VA providers completed surveys followed by cognitive interviews. The “Engage Consumer” and “Support Clinicians” clusters were rated most highly in terms of conceptual and wording clarity. In contrast, the “Financial” cluster had the most wording and conceptual confusion. The “Adapt and Tailor to Context” cluster strategies were considered to have the most redundancy. Providers flagged strategies as unclear due to wording (32%), conceptual confusion (51%), or insufficient distinction from other strategies (51%), and outlined ways each could be clarified.

Conclusions

Cognitive interviews with ERIC survey participants allowed us to identify and address issues with strategy wording, combine conceptually indistinct strategies, and disaggregate multi-barreled strategies. Improvements made to the ERIC survey based on these findings will ultimately assist VA and other institutions in designing, evaluating, and replicating quality improvement efforts.

Background

Moving evidence-based practices (EBPs) into routine care settings to improve healthcare quality and outcomes requires the skillful selection of implementation strategies, defined as “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice” [1]. Still, it is estimated that it takes 17 years from the time of development for EBPs to achieve 50% penetration into routine clinical practice [2]. In addition to depriving patients of the best available care, these delays mean that, by the time evidence reaches routine practice, it may already be out of date.

Implementation strategies can vary widely, as can their labels, definitions, and applications. Since meaning often derives from naming, it is semantically important to describe strategies accurately and to understand both their referents and their relationships to other strategies and contexts. It is likewise important that each strategy be clear and distinct from all other strategies in order to understand which strategies, and which combinations of strategies, enhance EBP adoption, implementation, and sustainment. However, implementers rarely justify the selection of certain strategies over others [3], despite Proctor et al.’s [1] guidance from 2013 to thoughtfully select and specify strategies. Failing to characterize strategies appropriately has hampered the advancement of implementation science and its practical applications.

To generate a common nomenclature for implementation strategies and facilitate the standardization of research methods and replication, the Expert Recommendations for Implementing Change (ERIC) study engaged experts in a modified Delphi approach and concept mapping to (1) refine a compilation of implementation strategies and (2) develop conceptually distinct categories of implementation strategies [4]. This led to a compilation of 73 discrete implementation strategies, which were further organized into nine thematic clusters, including financial, infrastructure, clinician support, education, and patient-facing strategies, among others [5].

As part of a Veterans Health Administration (VA) program evaluation of a national hepatitis C virus (HCV) quality improvement (QI) initiative in 2015, we developed a novel survey of ERIC implementation strategies to longitudinally identify the strategies used throughout the course of an initiative [6]. We have previously described our ERIC strategy survey development process [7]. In brief, the ERIC surveys present the 73 strategies by cluster and ask respondents to report, as a binary choice, whether they did or did not use each strategy in the past year. Parenthetical examples are tailored to the EBP of interest (e.g., hepatitis C treatment, advanced liver disease, HIV prevention and care, opioid safety measures). Five years of longitudinal data comprising hundreds of responses have informed our understanding of strategy use and effectiveness.
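
To make the survey structure concrete, the following is a minimal sketch (ours, not the study team’s instrument code) of how an ERIC survey item and its binary annual response might be represented; the field names and the example item are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SurveyItem:
    """One of the 73 ERIC strategies as presented on the survey."""
    cluster: str   # one of the nine ERIC clusters
    label: str     # generic ERIC strategy name
    example: str   # parenthetical example tailored to the EBP of interest

# Hypothetical item illustrating the tailoring described above
item = SurveyItem(
    cluster="Train and educate stakeholders",
    label="Conduct educational meetings",
    example="(e.g., meetings about initiating hepatitis C treatment)",
)

# Each respondent gives a binary answer per strategy per survey year:
# True = used the strategy in the past year, False = did not
responses_2019 = {item.label: True}
```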

While these ERIC surveys have been employed to document strategy use across several clinical areas and to prescribe strategies that may work, it is unclear how non-implementation scientists (specifically, frontline healthcare providers) interpret the survey items. Thus, the overarching goal of this study was to understand how non-implementation scientists interpreted and experienced the ERIC implementation strategy survey.

Methods

Design

This mixed-methods study was approved by the VA Pittsburgh Healthcare System Institutional Review Board (Pro00003422). Interview data were collected between October 2020 and October 2021 from providers focused on improving liver care across VA. Participants were purposively selected based on their experience responding to multiple ERIC strategy surveys over the course of a national quality improvement initiative. Those who agreed provided verbal consent, reviewed an online survey with the 73 strategy items, and participated in a cognitive interview about ERIC strategies. Cognitive interviewing, often used to learn about the perceptions of survey respondents, is a method in which individuals are invited to verbalize thoughts and feelings as they examine information, namely the items on a survey [8]. Qualitative methods followed COREQ guidelines [9].

Participants and data collection

We purposively selected participants who had completed a strategy survey at least three times within seven years in order to gauge the experience of those who had repeatedly engaged with the survey over time. Two pilot interviews were conducted to review and refine the interview guide prior to starting the study. Thirty providers were invited to participate via email; 14 agreed, and 12 completed interviews. Participants completed a 15-min pre-interview survey in SurveyMonkey and a virtual interview via Microsoft Teams lasting 60–90 min. Semi-structured interviews included three parts and were guided by visual displays of the strategies in PowerPoint and Power BI. Participants also provided their degree, role, and experience with quality improvement. All interviews were conducted by a master’s-level, qualitatively trained member of the study team (CL). Field notes were taken by two other team members (SG, BN). Interviews were digitally recorded and transcribed verbatim.

Pre-interview survey development

The pre-interview survey paralleled the typical ERIC strategy survey: it contained all 73 implementation strategies and asked respondents to flag any strategy that was confusing or unclear in its wording, or that seemed similar to, rather than distinct from, other strategies. Participants were presented with both the original generic ERIC strategies and the tailored ERIC strategies. The pre-interview survey displayed the original generic strategies first, in order to achieve the most consistent interpretation, given that participants had responded to advanced liver disease and/or hepatitis C care surveys, each of which had uniquely tailored strategies.

Interview development

As is typical in cognitive interviewing, the interview for this study was developed to accomplish three goals: (1) understand users’ experience with survey completion, (2) evaluate issues with comprehension, and (3) identify multiple embedded and conceptually indistinct strategies, to determine which would be best combined versus disaggregated [8]. Strategy interpretations were generally not “corrected” unless participants were highly confused (as reinforced by the interviewer through “there’s no right or wrong answer”). During interviews, both the generic strategies and the strategies tailored to hepatitis C treatment served as reference points to draw out general perceptions and specific interpretations of strategy details.

Participants were asked semi-structured questions about their experiences with completing the survey. The interviewer followed a semi-structured script and the think aloud method [10] to ask questions about strategy comprehension. This included asking about strategy specifications based on Proctor et al.: “For the strategies that you did report using, could you give further details on what your site did? If yes, what kinds of details (for example who did it, what did they do, who were the targets, when was it done/temporality, how often it was done/dose, outcomes addressed, and justification for doing)?” [1].

Participants were asked to interpret a subset of strategies that were identified a priori by the study team as either (1) similar or potentially overlapping or (2) having multiple embedded strategies. The team reached consensus on ten strategy pairs that were potentially overlapping. For example, “work with educational institutions” and “develop academic partnerships” were considered overlapping strategies. Participants were asked to rate “How clear is the difference to you?” between the two strategies on a 4-point Likert scale (“very unclear,” “unclear,” “clear,” “very clear”), as well as to describe the difference between the strategies in their own words. When participants asked for more details on the strategy descriptions, the interviewer would display the complete original ERIC definition on the screen for the participant to read.
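
As a hedged illustration (not the study’s actual analysis code), the sketch below tallies hypothetical pair-clarity ratings on this 4-point scale; the ratings and the flagging rule are our own assumptions.

```python
from collections import Counter

CLARITY_SCALE = ["very unclear", "unclear", "clear", "very clear"]

# Hypothetical ratings for one pre-identified pair of similar strategies
pair = ("Work with educational institutions", "Develop academic partnerships")
ratings = ["unclear", "clear", "unclear", "very unclear", "clear"]

counts = Counter(ratings)
for level in CLARITY_SCALE:
    print(f"{level:>12}: {counts[level]}")

# An assumed decision rule: flag the pair for possible combination when a
# majority of respondents rate the difference as "unclear" or worse
n_unclear = counts["very unclear"] + counts["unclear"]
print("Flag for combining:", n_unclear > len(ratings) / 2)
```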

To determine whether compound strategies should be considered an integrated process or separate sequential activities, the study team independently read through the strategy survey and then discussed to consensus which strategies should be examined in detail through cognitive interviews. Ten strategies that had two or more components or multiple embedded strategies were divided into parts by their distinct verbs, as sketched below. For example, “capture and share local knowledge” was split into “capture local knowledge” and “share local knowledge,” and participants were asked to rate “How often are these done together?” on a 4-point Likert scale (“never,” “sometimes,” “usually,” “always”). The intent was to understand how often strategies with multiple embedded activities were done together, the timing of the proposed parts, and other relevant details.
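
The verb-based splitting was done by hand by the study team; purely as an illustration of the rule, a sketch like the following could automate the simple cases (the function and regex are our own, hypothetical, and would miss labels that do not follow the “verb and verb object” pattern).

```python
import re

def split_multibarreled(label: str) -> list[str]:
    """Split a compound strategy label on its coordinated verbs, e.g.,
    'Capture and share local knowledge' ->
    ['Capture local knowledge', 'share local knowledge']."""
    match = re.match(r"(\w+) and (\w+) (.+)", label)
    if not match:
        return [label]  # not a recognizable two-verb compound; leave as is
    verb1, verb2, obj = match.groups()
    return [f"{verb1} {obj}", f"{verb2} {obj}"]

print(split_multibarreled("Capture and share local knowledge"))
print(split_multibarreled("Develop and organize quality monitoring systems"))
```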

Data analysis

Analysis included several steps and was conducted in NVivo, Microsoft Excel, and Microsoft Word. First, pre-interview survey responses were summarized to evaluate the frequency of wording, concept, and similarity difficulties. Proportions reported in the text exclude the two pilot interview participants. Second, two coders (CL, MM) used the rigorous and accelerated data reduction (RADaR) technique and content analysis to code and analyze interviews [11]. Rapid coding and analysis allowed us to identify data saturation. Coders used a priori codes based on the interview guides and generated new codes through a general inductive approach [12, 13]. A matrix template was used to organize and manage the data. Coding (CL, MM) was followed by discussion with a third coder (VY) for consensus. Then, all coders collectively identified the final themes.
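
To illustrate the first analytic step, here is a minimal sketch (ours; the study used Excel rather than code, and the flag data below are hypothetical) of counting how many strategies drew each type of concern:

```python
# Hypothetical pre-interview flags: which concern types each respondent
# endorsed for each strategy (pilot participants already excluded)
flags = [
    {"strategy": "Alter incentive/allowance structures", "respondent": "P01",
     "wording": True, "concept": True, "similarity": False},
    {"strategy": "Alter incentive/allowance structures", "respondent": "P03",
     "wording": False, "concept": True, "similarity": False},
    {"strategy": "Conduct educational meetings", "respondent": "P01",
     "wording": False, "concept": False, "similarity": True},
]

n_strategies = 73  # the full survey; this toy example covers only two

for concern in ("wording", "concept", "similarity"):
    flagged = {f["strategy"] for f in flags if f[concern]}
    share = len(flagged) / n_strategies
    print(f"{concern}: {len(flagged)} strategies ({share:.0%}) flagged "
          "by at least one respondent")
```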

Results

Participant characteristics

Twelve cognitive interviews were conducted with VA providers, who reported responding to an average of five of the seven surveys fielded between 2015 and 2020. The 12 participants were geographically diverse and represented a range of disciplines (one MD, four PharmDs, five advanced practice providers, and two RNs). Half had quality improvement training in addition to clinical expertise, but none had prior implementation science or research training.

Survey response process

Who should complete the survey

Most participants (83%) confirmed they were correctly identified as a key informant and felt comfortable reporting on implementation strategies. However, 33% also engaged other informants when responding to the survey. One participant explained, “What we typically do is I go through [the survey] individually, and then I review with our team…and then we made a general consensus” (P02). Nevertheless, participants qualified that response validity and reliability were contingent on how closely someone worked with the clinical effort in question.

How the survey should be introduced

Participants had several suggestions about introducing the survey to a clinical audience. They suggested that explaining how the data would be used would encourage responses. As one participant remarked, “That’s the only way it’s going to make them see how it matters to them” (P11).

The impacts of completing the survey multiple times

All participants completed the annual survey over multiple years, and half said their understanding of the strategy questions increased over time. Most (78%) explained that, if they were unsure about a strategy’s meaning or use, they would report not using that strategy. One participant said, “[I] don’t even know what that really means, so I’m just gonna say no” (P08).

Comprehensiveness of the survey

When asked if there were any activities that were done but not included in the survey, participants did not suggest additional strategies. One participant said, “I don’t know how you would ever miss something” (P09).

Language and wording

Using clinical language

Participants universally recommended minimizing implementation science jargon or “doublespeak” (P11). Many suggested adding more explicit parentheticals to highlight possible minor differences between strategies. This was particularly important for “when you’re dealing with clinical people…you may have to use less implementation science verbiage and sort of translate that into normal English that somebody is going to understand” (P10). One participant considered this tradeoff when adding strategy details: “It might make it longer but it, you might get more accurate responses” (P04). Participants reported “brain fatigue” from the current wording and length but said that, with focus, they could understand the differences between strategies: “If I slow down and really think about it and kind of overanalyze it, because that’s what I tend to do, I think I can tell the difference” (P03). Several participants emphasized the need for the language to reflect the “real world” perspective. For instance, participants’ clinical backgrounds shaped their interpretation of common words such as “visit,” “consultation,” and “technical assistance” in ways that may not have aligned with the intended ERIC definitions. One nurse asked, “What do you mean facilitate?” (P07).

Asking about strategy “use” not “implementation”

Many participants (67%) noted confusion about whether “implemented” referred to starting a new strategy or continuing an ongoing one. As such, some respondents did not endorse strategies they were actively using, because they thought strategies that were ongoing, institutionalized, and in the sustainment phase were not of interest.

Organizing and specifying strategies

Clinicians’ ability to specify strategies

When asked whether they could specify Proctor et al. strategy details, participants confirmed that they could feasibly and confidently provide information about the action, the frequency, and the justification for a strategy; however, they had more difficulty defining who performed and received the strategy, the outcomes that were targeted, and the stage of implementation. Those with more QI experience could better articulate strategy specifics, but everyone alluded to the difficulty of disaggregating how strategies were actually used in complex clinical environments. Interestingly, we did not observe differences in responses based on how many times participants had completed the survey (three vs. five vs. seven times), suggesting a possible plateau effect after three.

Challenges with variable strategy specification

Participants underscored that strategies operated at differing levels and had differing specificity in their descriptions. They noticed that certain strategies could be employed by a single provider, while others required a clinical team or leadership support. Regarding the timing and stage of strategy use, clinicians could readily distinguish pre-implementation from implementation timing but could not easily delineate which strategies were used for sustainment. Likewise, they noted that several strategies had embedded dosing information (e.g., one-time vs. ongoing education), while most did not specify dosing. As such, some strategies were perceived to be more nebulous or dynamic than other, more clearly delineated and standardized, strategies.

Placing less feasible strategies later in the survey

Participants were often frustrated by being asked about strategies that were perceived as out of scope or outside their purview: “It leads to this sense of failure because you have not done something like work with an educational institution and then you start spinning in your brain like, ‘How would I even accomplish that?’” (P11). Specifically, placing “Financial” cluster strategies at the beginning of the survey may have inadvertently discouraged participation because “we don’t have any control over that whatsoever” (P08).

Unintended uses of the survey

We observed several unintended consequences of participating in the survey. First, the survey served as a tool for ongoing tracking of activities and for anticipating future strategy surveys, so that “I didn’t have to rely on just my memory alone” (P03). Second, for a few participants, the survey was an “idea generator” that inspired future implementation: “each time we do the survey…you look at it as, ‘Oh, I have to do this’” (P02). Also, one participant recommended asking “prospective questions…not what did you do in the past, but what do you plan on doing in the future?” (PL2).

Strategy clarity

Strategy clarity varied

According to the pre-interview survey responses, most strategies (90%) had at least one confusing element for one or more respondents, and nearly half (48%) had at least two. Strategies were unclear due to wording (32%), conceptual confusion (51%), or similarity between strategies (51%) for one or more respondents. Table 1 presents the most confusing clusters and strategies as endorsed by participants. Strategies within the “Financial” cluster were the most unclear to this group of VA clinicians, both in terms of language and conceptually (mean total concerns 6.8, range among strategies 0 to 9). Conversely, clarity was highest for strategies in the “Engage patients” (mean total concerns 0.8, range 0 to 2) and “Support clinicians” (mean total concerns 1.2, range 0 to 3) clusters. Wording concerns were most common in the “Provide interactive assistance” cluster, while conceptual concerns were most common in the “Change infrastructure” cluster. Strategies in the “Train and educate stakeholders” and “Adapt and tailor to context” clusters were perceived to have the most overlap with one another. Almost half of the strategies (44%) elicited “Other,” uncategorized confusion, which primarily reflected perceptions of relevance to the VA setting, such as with “Make billing easier”: “I wasn’t aware billing for patient care services could be altered at the local facility level? So, this question of the survey seemed odd” (P03).
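
For transparency about the cluster-level metric, here is a small sketch (with hypothetical per-strategy counts chosen only to reproduce the reported summary figures, not the study data) of how “mean total concerns” and the per-strategy range could be computed:

```python
from statistics import mean

# Hypothetical total-concern counts per strategy within two clusters
cluster_concerns = {
    "Financial": [9, 9, 8, 8, 8, 7, 6, 6, 0],
    "Engage patients": [2, 1, 1, 0, 0],
}

for cluster, counts in cluster_concerns.items():
    print(f"{cluster}: mean total concerns {mean(counts):.1f}, "
          f"range {min(counts)} to {max(counts)}")
```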

Table 1 Survey user validity concerns

Similar strategies

Some strategies should be combined

Of the ten similar strategy pairs selected by the study team, participants suggested combining five and separating three, and were undecided on two (Table 2). Five of the ten pairs were from the same ERIC cluster, while five were from different clusters; this did not impact whether strategies were perceived as similar or different. A nurse reflected that many strategies were “synonyms of each other” and yet could not identify why a certain “word just sounds better” (P07). Beyond the pre-identified pairs of similar strategies, participants universally recommended continuing to “take out any redundancy” (PL2). Two of the ten strategy pairs were difficult to discern because of the lack of detail in the labels and parenthetical examples: “Facilitation” versus “Provide ongoing consultation,” and “Conduct educational meetings” versus “Conduct educational visits.”

Table 2 Similar strategies

Including similar strategies can result in unintended overinterpretation

When asked for side-by-side comparisons, participants often overinterpreted the wording to distinguish strategies in ways that the ERIC group may not have intended. Others recognized there were “subtle differences” between strategies but also said, “I can’t verbalize the difference very well” (P06). In a minority of instances, strategy “definitions made [the differences between strategies] more unclear” (P10). One advanced practice provider described the strategy target as the key to distinguishing between strategies (e.g., was the strategy targeting clinicians as the primary recipients, or targeting clinicians in order to reach patients). The strategies “Facilitate relay of clinical data to providers” and “Audit and provide feedback” were clearly distinct “because you’re trying, you’re going to modify behavior in the second one. OK, ‘collect data’. You’re going to give it to them, and then you’re going to change what they do” (P05).

Patient-facing strategies often overlapped or were unclear

Participants interpreted “Intervene with patients to enhance uptake and adherence” as overlapping with “prepare patients to be active participants” because of their shared patient orientation, although no details of patient activities were described. Notably, as frontline providers, participants wondered about the lack of specificity in the patient-facing strategies. Some acknowledged that strategies “may seem duplicate to us, but you guys are obviously trying to get at two totally different things” (P08).

Multiple embedded or multi-barreled strategies

Participants were asked to comment on their understanding of the composition of ten multi-barreled strategies (i.e., those with multiple embedded strategies). They were also asked whether such strategies should remain as is or be divided into multiple strategies. Participants reported that all multi-barreled strategies included sequential steps in a process. Five strategies were considered to always or usually occur together, while five were considered less likely to co-occur (Table 3). Overall, participants agreed “this is a chain of events that’s going to happen” (PL2), but they were more “hesitant on the timeline” (P07).

Table 3 Multiple embedded strategies

Some embedded strategies should remain as is

Participants recommended that five of the multi-barreled strategies remain together. These strategies focused on data and opinion leaders. For example, “Develop and organize quality monitoring systems” was seen as including two sequential but cohesive steps in one process (develop and then organize systems for monitoring quality). Among the 83% who saw the strategy as “stepping blocks” done together, one commented, “Almost always you need to put those two together to make sure we’re doing things correctly and have a way to measure” (P02). Similarly, the “Develop and implement tools for quality monitoring” strategy was perceived as a stepwise process: “those are done sort of sequentially, but part of the same process…Because you can’t implement something you haven’t developed yet” (P10). Likewise, participants explained that certain multi-barreled strategies should be done together. For instance, one participant clarified (about the strategy “obtain and use feedback”), “You shouldn’t obtain feedback if you’re not going to use it for anything, but I think a lot of times we do. We ask for feedback and then we do nothing with it” (P10). With the strategy “Obtain and use patients/consumers and family feedback,” another participant explained, “somebody in this facility obtains feedback, but I don't know what they do with it” (P06).

Some strategies should be disaggregated into parts

In contrast, the five strategies that could be disaggregated into multiple parts focused on resources and knowledge exchange. Participants noted that these compound strategies were often missing clarifying information, such as an intermediate step, details about who would do each part, or the intended outcomes. For example, one participant thought that “Capture and share local knowledge” may involve an intermediate step to “find out what your audience knows” (P12). In contrast to the obtaining and using feedback strategy, participants recommended that “capture and share local knowledge” be split into its component actions.

Discussion

These cognitive interviews with clinicians identified how ERIC-based surveys can be made more acceptable and understandable for end-users. We identified strengths of the ERIC survey, including its comprehensiveness, its unintended positive consequences, and its ability to gather useful data. We also identified areas of confusion that can be easily addressed through changes to wording and organization. Incorporating feedback, such as adding project-tailored labeling and definitions, may improve the ease and usability of the survey, reduce confusion, and decrease participant burden. These pragmatic improvements to the ERIC survey could ultimately assist VA and other institutions in designing, evaluating, and replicating quality improvement efforts.

The ERIC survey has helped to advance data collection and the science of selecting implementation strategies. We previously demonstrated the face validity of the ERIC survey and identified strategies associated with better performance on EBPs over time [14]. For example, analyses showed that using more strategies was associated with more HCV treatment starts and that some strategies were more impactful early in the initiative [6, 7]. Recent work has also reinforced the survey’s concurrent validity through interviews with respondents about their local activities [15]. These cognitive interviews demonstrated that there were unintended benefits of responding to the survey. Not only did certain uncommon but feasible strategies in the VA context prompt participant interest in QI planning, but the survey format also assisted with within- and cross-year tracking efforts.

While there is no shortage of recent calls for clarity of strategies to improve precision implementation, complete characterization of strategies is possible only when there is a clear taxonomy. Therefore, consistent naming conventions, as pioneered by the ERIC project, are needed, as are discernible core strategy specifications. The ways in which individuals attach meaning to words are grounded in their experience, such that clinicians and implementation scientists interpret strategies differently. We generally found that clinicians were frustrated with implementation terminology, and that certain potentially innocuous terms were shaped by clinical experience. This resulted in some terms being imbued with unanticipated meaning (“visit”) and others being rendered meaningless to clinicians (“facilitation” and “facilitator”). Likewise, we found that specific clusters of ERIC strategies were more confusing to clinicians than others, leading to potential underreporting of strategy use. For example, the patient-facing strategies were easier for clinicians to understand (albeit underspecified), while the differences among the interactive assistance strategies were universally confusing. More steps need to be taken to demystify implementation strategies for frontline providers and, conversely, to engage end-users in developing data collection strategies.

Over the 8 years of fielding ERIC surveys, we have continued to grapple with the ongoing tension between making strategy assessments generic versus tailoring them to a specific context. ERIC was developed to create a generic taxonomy of implementation strategies to further cooperative learning across projects. Yet, strategies are expected to be tailored to the setting, making the adaptation of strategies both its own strategy and an important element of specification. Our findings further support the notion that generic strategy descriptions are poorly understood. There is thus a tension between maintaining universality versus providing specificity to make strategies more relevant and understandable. One solution may be including a project-tailored glossary with definitions that reflect the clinical innovation, setting, and actors [16].

Others have similarly recommended changes to the ERIC taxonomy. One such project, the School Implementation Strategies Translating ERIC Resources Project, made surface-level changes to 52 of the strategies, deeper changes to five, deleted six, and added seven new strategies [17]. Perry et al. refined the definitions of 13 strategies and proposed three new strategies in the context of a primary care cooperative: “assess and redesign workflow, create online learning communities, and engage community resources” [18]. We found additional overarching themes by talking with healthcare workers. Such efforts need to learn from each other to advance the science of implementation.

In strategy assessment, there is a tension between decreasing survey length and being comprehensive. We included all 73 strategies, in part, as a validity check and to avoid omitting potentially important but unanticipated strategies. However, these interviews highlighted ways in which including uncommonly used strategies (in our case, financial strategies) may have inadvertently deterred survey participation. In contrast, deciding on strategy inclusion based on a priori perceptions of feasibility is likely inappropriate, given our findings that respondents’ perceptions of feasibility do not match those of researchers [5]. One potential solution may be presenting strategies that are perceived to be less feasible later in the survey. We have also changed the survey directions to ensure respondents know they are not expected to have used all of the strategies. However, deciding which strategies to include in ERIC surveys requires more study.

One way to manage the large array of strategies is to be thoughtful about their presentation. Respondents were confused by the variable level of specification provided in the stems. ERIC includes multifaceted strategies that combine multiple discrete strategies and strategy bundles, and its strategies are variably specified (e.g., some include the actor and dose while others do not), which impacted interpretation. While there is a push to focus on the mechanisms underlying the strategies [19,20,21,22], we found that clinicians wanted concrete, relatable activities to respond to. The same strategies can be used for different purposes, and different strategies may target the same mechanism of behavior change. Future work should focus on organizing the strategies in ways that are understandable to providers and that address both form and function. Likewise, strategy combinations and sequencing are important elements that are challenging to capture in simple surveys [23, 24]. Though we have addressed this (in part) through annual surveys across implementation efforts, this is not always feasible. Ultimately, strategies likely need to be disaggregated into core components and mechanisms to enhance specificity.

Strengths and limitations

These cognitive interviews with ERIC survey respondents provide novel insights into how these data should be collected. Participants were individuals who had completed at least three ERIC surveys; those with less or no experience with ERIC may have had even more difficulty understanding the survey than presented here. Therefore, the changes that we make will need to be vetted with providers who are “survey naïve” and with those in other disciplines, as we further adapt and refine the survey and associated methods. Selecting staff in more managerial and/or leadership positions may have yielded different results. Recall bias was cited as a limitation of responding to annual surveys and was likewise a limitation here. Given that this work was conducted entirely in the VA, some findings may be less applicable to other settings. For example, financial strategies may be more applicable outside the VA.

Future work

Emerging and existing tools can help lay practitioners enter implementation science, report strategies, and enhance the translation of strategy information across different groups [25,26,27,28]. A pragmatic implementation strategy reporting tool is currently being developed by Rudd and colleagues [29] and may aid strategy use and specification among those without specialized implementation science training. Similarly, Walsh-Bailey et al. have tested pragmatic strategy reporting tools with varying degrees of detail and found them to be largely acceptable, appropriate, and feasible [30]. We will also consider strategy de-implementation reporting in the future [31].

Conclusion

This study identified ways in which ERIC strategy surveys can be improved for use in clinical settings. These findings contribute to the ongoing efforts to correct and improve the inventory of implementation strategies.

Availability of data and materials

Data are available upon reasonable request from the corresponding author.

Abbreviations

EBP: Evidence-based practice
ERIC: Expert Recommendations for Implementing Change
VA: Veterans Health Administration
HCV: Hepatitis C virus
QI: Quality improvement
RADaR: Rigorous and accelerated data reduction

References

  1. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.
  2. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.
  3. Wensing M, Grol R. Knowledge translation in health: how implementation science could contribute more. BMC Med. 2019;17(1):88.
  4. Waltz TJ, Powell BJ, Chinman MJ, Smith JL, Matthieu MM, Proctor EK, et al. Expert Recommendations for Implementing Change (ERIC): protocol for a mixed methods study. Implement Sci. 2014;9:39.
  5. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
  6. Yakovchenko V, Morgan TR, Chinman MJ, Powell BJ, Gonzalez R, Park A, et al. Mapping the road to elimination: a 5-year evaluation of implementation strategies associated with hepatitis C treatment in the Veterans Health Administration. BMC Health Serv Res. 2021;21(1):1348.
  7. Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Kirchner JE, Proctor EK, et al. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample. Implement Sci. 2017;12(1):60.
  8. Willis GB, Artino AR Jr. What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5(3):353–6.
  9. Tong A, Sainsbury P, Craig J. Consolidated Criteria for Reporting Qualitative Research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.
  10. van Someren MW, Barnard YF, Sandberg JAC. The think aloud method: a practical guide to modelling cognitive processes. London: Academic Press; 1994.
  11. Watkins DC. Rapid and rigorous qualitative data analysis: the “RADaR” technique for applied research. Int J Qual Methods. 2017;16(1):1609406917712131.
  12. Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5(1):80–92.
  13. Thomas DR. A general inductive approach for analyzing qualitative evaluation data. Am J Eval. 2006;27(2):237–46.
  14. Rogal SS, Yakovchenko V, Waltz TJ, Powell BJ, Gonzalez R, Park A, et al. Longitudinal assessment of the association between implementation strategy use and the uptake of hepatitis C treatment: year 2. Implement Sci. 2019;14(1):36.
  15. Yakovchenko V, Morgan TR, Miech EJ, Neely B, Lamorte C, Gibson S, et al. Core implementation strategies for improving cirrhosis care in the Veterans Health Administration. Hepatology. 2022;76(2):404–17.
  16. Nathan N, Powell BJ, Shelton RC, Laur CV, Wolfenden L, Hailemariam M, et al. Do the Expert Recommendations for Implementing Change (ERIC) strategies adequately address sustainment? Front Health Serv. 2022;2:94.
  17. Cook CR, Lyon AR, Locke J, Waltz T, Powell BJ. Adapting a compilation of implementation strategies to advance school-based implementation research and practice. Prev Sci. 2019;20(6):914–35.
  18. Perry CK, Damschroder LJ, Hemler JR, Woodson TT, Ono SS, Cohen DJ. Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory. Implement Sci. 2019;14(1):32.
  19. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.
  20. Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11(10):e053474.
  21. Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19(2):e1003918.
  22. McHugh S, Presseau J, Luecking CT, Powell BJ. Examining the complementarity between the ERIC compilation of implementation strategies and the behaviour change technique taxonomy: a qualitative analysis. Implement Sci. 2022;17(1):56.
  23. Yakovchenko V, Miech EJ, Chinman MJ, Chartier M, Gonzalez R, Kirchner JE, et al. Strategy configurations directly linked to higher hepatitis C virus treatment starts: an applied use of configurational comparative methods. Med Care. 2020;58(5):e31–8.
  24. Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17(1):55.
  25. Rogal SS, Powell BJ, Chinman M; Gastroenterology and Hepatology Implementation Research Group. Moving toward impact: an introduction to implementation science for gastroenterologists and hepatologists. Gastroenterology. 2020;159(6):2007–12.
  26. Curran GM. Implementation science made too simple: a teaching tool. Implement Sci Commun. 2020;1:27.
  27. National Cancer Institute, US Department of Health and Human Services. Implementation science at a glance: a guide for cancer control practitioners (NIH publication 19-CA-8055). Bethesda: National Institutes of Health; 2019.
  28. Lane-Fall MB, Curran GM, Beidas RS. Scoping implementation science for the beginner: locating yourself on the “subway line” of translational research. BMC Med Res Methodol. 2019;19(1):133.
  29. Rudd BN, Davis M, Beidas RS. Integrating implementation science in clinical research to maximize public health impact: a call for the reporting and alignment of implementation strategy use with implementation outcomes in clinical research. Implement Sci. 2020;15(1):103.
  30. Walsh-Bailey C, Palazzo LG, Jones SM, Mettert KD, Powell BJ, Wiltsey Stirman S, et al. A pilot study comparing tools for tracking implementation strategies and treatment adaptations. Implement Res Pract. 2021;2:26334895211016028.
  31. Patey AM, Grimshaw JM, Francis JJ. Changing behaviour, ‘more or less’: do implementation and de-implementation interventions include different behaviour change techniques? Implement Sci. 2021;16(1):20.

Acknowledgements

N/A.

Funding

Funding for the investigators’ time was partly supported by VA QUERI grant PEC 19–307 (PI: Rogal) and NIDA grant K23DA048182 (PI: Rogal).

Author information

Contributions

VY, MJC, and SSR conceptualized and designed the study. CL, SG, and BN collected the data. VY, MJC, CL, MM, and SSR analyzed the data. BJP and TJW provided substantial editing. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shari S. Rogal.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the VA Pittsburgh Healthcare System Institutional Review Board. All participants provided informed consent.

Consent for publication

N/A.

Competing interests

The authors declare that they have no competing interests.

Cite this article

Yakovchenko, V., Chinman, M.J., Lamorte, C. et al. Refining Expert Recommendations for Implementing Change (ERIC) strategy surveys using cognitive interviews with frontline providers. Implement Sci Commun 4, 42 (2023). https://doi.org/10.1186/s43058-023-00409-3
