A pragmatic context assessment tool (pCAT): using a Think Aloud method to develop an assessment of contextual barriers to change

Abstract

Background

The Consolidated Framework for Implementation Research (CFIR) is a determinant framework that can be used to guide context assessment prior to implementing change. Though a few quantitative measurement instruments have been developed based on the CFIR, most assessments using the CFIR have relied on qualitative methods. One challenge to measurement is translating conceptual constructs, which are often described in highly abstract, technical language, into lay language that is clear, concise, and meaningful. The purpose of this paper is to document the methods used to develop a freely available pragmatic context assessment tool (pCAT). The pCAT is based on the CFIR and designed for frontline quality improvement teams as an abbreviated assessment of local facilitators and barriers in a clinical setting.

Methods

Twenty-seven interviews using the Think Aloud method (asking participants to verbalize thoughts as they respond to assessment questions) were conducted with frontline employees to improve a pilot version of the pCAT. Interviews were recorded and transcribed verbatim; the CFIR guided coding and analyses.

Results

Participants identified several areas where language in the pCAT needed to be modified, clarified, or made more nuanced to increase its usefulness for frontline employees. Participants found it easier to respond to questions when they had a recent, specific project in mind; potential barriers and facilitators tend to be unique to each specific improvement. Participants also identified concepts that were missing or conflated, leading to refinements that made the pCAT more understandable, accurate, and useful.

Conclusions

The pCAT is designed to be practical, using everyday language familiar to frontline employees. The pCAT is short (14 items), freely available, and requires no research expertise or experience to use. It is designed to draw on the knowledge of the individuals most familiar with their own clinical context. The pCAT has been available online for approximately two years and has generated a relatively high level of interest, indicating the potential usefulness of the tool.

Background

Implementation scientists recognize that determinants (barriers or facilitators) within the local context affect implementation efforts. Assessing context before, during, and/or after implementation is important so that implementers can use this information to identify optimal strategies for addressing barriers and leveraging facilitators [1]. Easy-to-use quantitative context assessment tools rooted in the concepts and evidence base of implementation science need to be developed. Such tools rely on frontline clinicians and staff accurately understanding what is being asked within assessment instruments. However, these individuals are often unfamiliar with the language used in these assessments or with how it applies to their own situation. Assessments should be rooted in theoretical constructs and yet also need to be conceptually clear, using everyday language.

The Consolidated Framework for Implementation Research (CFIR) is a determinant framework designed to identify barriers and facilitators that potentially impact implementation outcomes. Though frameworks like the CFIR seek to provide clarity and consistency in the terms and definitions for each construct, the language used can be highly technical. The dominant approach to identifying barriers and facilitators has relied on researchers conducting assessments based on information elicited through qualitative interviews; that information is analyzed, interpreted, and used to develop tailored strategies, with guidance for local practitioners to help them navigate their context for successful implementation [1,2,3,4,5]. Measurement instruments seek to elicit quantitative assessments of barriers and facilitators because this can be a more efficient way to assess context. However, these instruments are often exceedingly long or require expertise and training to use [6,7,8,9,10,11]. Frontline clinicians and staff who do the work of implementation may misunderstand or misapply questions designed to elicit potential barriers and facilitators; they are often more familiar with quality improvement language [12,13,14,15,16].

Pragmatic measures of context are needed. Glasgow and Riley define pragmatic measures as being important to stakeholders, low burden (usually indicated by a low number of survey items), actionable, and sensitive to change [17]. Stanick et al. add that pragmatic measures are feasible, low cost, and brief [18]. Guided by these principles, an abbreviated pragmatic context assessment tool (pCAT) was developed based on the CFIR. This instrument has been available online (www.CFIRguide.org) and has generated a high level of interest, attracting nearly 50 requests over approximately 18 months (2021–2022). Thus, the purpose of this paper is to document the methods used to develop the pCAT.

Methods

Our research team developed an abbreviated context assessment tool based on CFIR constructs that repeatedly arose as potential barriers or facilitators in implementation [19,20,21,22,23]. This tool was piloted with six frontline improvement teams (see Table 1); the teams collectively comprised 21 individuals who participated in the Learn. Engage. Act. Process. (LEAP) Program [23]. LEAP is a 26-week, virtual, coach-led, structured learning program designed to develop competency in the application of quality improvement methods and techniques for frontline clinicians and staff. The goal was for teams to use the assessment tool to identify potential barriers and facilitators to implementing improvements, so they could better understand the micro-level context within which they were working to improve processes and programs. We had concerns with the piloted version, however, because many responses did not reflect actual barriers and facilitators observed by and reported to the LEAP coaches who worked closely with frontline teams. We took the opportunity to pause, reflect, and update the pCAT.

Table 1 List of CFIR constructs included in Think Aloud survey development

Think Aloud method

The updated version of the pCAT (see Table 1) was incorporated into the interview guide with the goal of engaging individuals using a Think Aloud method [24], which asks participants to verbalize their thoughts as they consider how to respond to questions in the assessment tool. Specifically, as participants responded, we asked them to verbalize their considerations and interpretations and to ask questions or seek clarification, if needed. We encouraged participants to verbally identify areas of disconnect, misinterpretation, and misunderstanding with the language and concepts being used. Interviewees were instructed to read each item aloud and to say everything that came to mind. This included thoughts about the CFIR construct itself, the formatting of the tool, the language used to frame each construct, and their actual response as it related to their local quality improvement context. Interviewees were informed that the interviewer might periodically ask follow-up questions but that capturing stream-of-consciousness interpretation of the tool was the primary goal. Iterative changes to the pCAT were made based on interviewee feedback (see Fig. 1).

Fig. 1 Think Aloud interview procedure

Participants

Participants included members of teams that participated in the LEAP quality improvement learning program after its initial pilot. Potential participants were invited to a telephone interview approximately 6 months after completing LEAP.

Interviews

Interviews lasted about an hour and were conducted from March 2018 through August 2019; all were audio recorded and transcribed verbatim.

Coding and analysis

Qualitative descriptions of barriers and facilitators in the transcripts were coded using CFIR constructs as preliminary codes. Additional codes were developed to capture more specificity when needed (e.g., adding consideration of Time as a subconstruct of Available Resources). As each interview was completed, language in the pCAT was iteratively updated as needed, based on input from each participant.
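
As an illustration of this deductive coding structure, a minimal sketch in Python follows. The nested representation and the add_subconstruct helper are hypothetical and are not part of the NVivo workflow used in this study; only the Available Resources/Time example comes from the analysis described above.

```python
# A minimal sketch of a deductive, CFIR-based codebook that can grow
# subconstructs as analysis proceeds. The dict representation and helper
# are hypothetical illustrations, not the study's NVivo structure.
codebook = {
    "Available Resources": ["Time"],   # subconstruct added during analysis
    "Leadership Engagement": [],       # constructs may start with none
    "Networks & Communications": [],
}

def add_subconstruct(codebook: dict, construct: str, subconstruct: str) -> None:
    """Register a new subconstruct when coding reveals the need for it."""
    subs = codebook.setdefault(construct, [])
    if subconstruct not in subs:
        subs.append(subconstruct)

add_subconstruct(codebook, "Available Resources", "Space")  # hypothetical
print(codebook["Available Resources"])  # ['Time', 'Space']
```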

NVivo 12 Pro was used to facilitate coding [25]. Interviews were conducted by CHR. CHR and LJD examined early interview transcripts independently and participated in consensus discussions to establish initial coding and preliminary findings; all subsequent coding and iterative updates of the pCAT were done by CHR [26]. The Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist was used to guide the reporting of data collection and analysis activities [27].

Human protections

This work was developed as a non-research activity (i.e., conducted under the authority of Veterans Health Administration (VHA) operations without Institutional Review Board approval) and complies with the guidance about authorization of non-research manuscripts outlined in VHA Program Guide 1200.21: VHA Operations Activities That May Constitute Research [28]. All authors attest that the activities that resulted in the production of this manuscript were conducted as part of non-research activities under the authority of the VHA National Center for Health Promotion and Disease Prevention.

Results

Thirty-eight invitations were sent to individuals on 34 teams that participated in LEAP after the initial pilot; 27 interviews were completed (71% response rate). Two interviews included two individuals from the same team at their request; the rest were one-on-one. The average length of the interviews was 47 min (range 27–63 min); all participants successfully completed their interview. Additional file 1 contains the final version of the abbreviated pragmatic context assessment tool (pCAT) based on results from interviews. The pCAT evolved as interviews progressed, based on experiences and input from the first nine people interviewed; the remaining 18 people did not express any challenges in responding to questions and their responses were in line with the intent of each question, indicating stability of the tool. The following sections highlight key themes that influenced changes made to the context assessment tool.
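
As a quick arithmetic check on the reported response rate:

$$\frac{27\ \text{completed interviews}}{38\ \text{invitations}} \approx 0.71 = 71\%$$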

Specificity of the change: question stem

The first task for participants was to describe the change or improvement being implemented. Initially, the guidance was, “Please enter your problem area (area for improvement). This should reflect whatever topic you and your team are currently considering. It does not have to be final (e.g., The majority of patients fail to show up for scheduled orientation)”. However, participants found this guidance too broad and speculative, and they struggled to provide assessments. It was easier for participants when they anchored their responses to a specific, recent, or ongoing improvement or implementation effort as they considered each construct. Participants observed that each construct could be a facilitator for one improvement effort and a barrier for another, affirming that context matters and that knowing what the change is matters. For example, communication may be a facilitator when the implementation involves people from the same service line but becomes a barrier when the change requires communication and cooperation across service lines. Attempting to rate CFIR constructs in the abstract was much more difficult and far less useful than critically assessing the specific context of a specific planned or ongoing implementation.

Thus, we edited the question “stem” to be more specific and concrete. The final guidance read, “We’ve found that it’s best to think concretely about a planned or on-going implementation (as opposed to the more general implementation environment). Include the specifics of the implementation/improvement project here.” We allowed flexibility in interpreting “changes” as either “implementation” or “improvement” because both involve implementing a planned change.

Identifying barriers versus facilitators

For each construct, participants were asked whether they agreed or disagreed with each statement. Agreeing meant the construct was a facilitator; disagreeing meant it was a barrier. Participants could also respond “neutral.” However, participants had difficulty indicating a level of agreement and instead wanted to answer yes/no. To address this, we added explanatory text for Agree (this means the item is a potential facilitator) and Disagree (this means the item is a potential barrier). This change helped participants respond more accurately.

Response options

After introducing explanations for assessing constructs as barriers versus facilitators (or neutral), participants were asked to assess the potential impact on implementation. Choices included three levels of impact (low, moderate, and high). Participants had difficulty differentiating between the three levels and understanding how to assess impact (or influence); they were more comfortable assessing the effect (or consequence). Thus, we simplified the responses to “Weak/no effect” and “Strong effect” options.
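
To make the final response scheme concrete, the minimal sketch below tabulates a single pCAT item using these options. The ItemResponse structure and classify function are hypothetical illustrations of the scheme described above, not code distributed with the pCAT.

```python
from dataclasses import dataclass

# Response options as described above; the class and function names are
# hypothetical, chosen only for this illustration.
AGREEMENT = ("Agree", "Neutral", "Disagree")
EFFECT = ("Weak/no effect", "Strong effect")

@dataclass
class ItemResponse:
    construct: str   # e.g., "Leadership Engagement"
    agreement: str   # one of AGREEMENT
    effect: str      # one of EFFECT

def classify(response: ItemResponse) -> str:
    """Map agreement onto the pCAT's barrier/facilitator framing."""
    if response.agreement == "Agree":
        return "potential facilitator"
    if response.agreement == "Disagree":
        return "potential barrier"
    return "neutral"

r = ItemResponse("Leadership Engagement", "Disagree", "Strong effect")
print(f"{r.construct}: {classify(r)} ({r.effect})")
# Leadership Engagement: potential barrier (Strong effect)
```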

CFIR construct assessments

Six of the ten CFIR constructs in the final version of the pCAT were unchanged from the version initially used in the Think Aloud interviews (Patient Needs & Resources, Networks & Communications, Compatibility, Goals & Feedback, and Reflecting & Evaluating). The remaining four CFIR constructs shifted from a future focus (e.g., “we will have…”) to the current state (e.g., “we have…”). Additional changes are described below.

Relative advantage and tension for change

References to “key people” in these constructs were too vague for respondents. We revised the language to refer to “people here” so respondents could tailor their responses based on their knowledge of the people most relevant for assessing relative advantage; this appeared to resolve difficulties in subsequent interviews.

Leadership engagement

The pCAT initially had a single question about “leaders here.” Participants had difficulty responding to this question without first considering the levels and types of leaders they work with, who may or may not have been involved in the improvement, and then determining what they knew about each leader’s degree of engagement. Based on this feedback, we split CFIR’s “Leadership Engagement” construct into two levels of leadership: (1) “leaders I work with most closely” and (2) “higher level leaders.” This change enabled respondents to answer more accurately.

Available resources

The pCAT Version 1.0 included a single question about “Available Resources.” Based on LEAP coach experiences with LEAP teams prior to our Think Aloud interviews, we separated this single question into three separate questions in pCAT Version 2.0. With this change, respondents had no difficulty answering separate questions about time and space. For “other needed resources,” respondents revealed a range of resources that might be needed, including incentives for program participants and having a discretionary budget. Version 2.0 also incorporated current-state language instead of future-focused language, as described above.

Other suggested improvements

Participants were asked about any additional barriers or facilitators. One participant suggested asking about longer-term sustainment instead of focusing on short-term change. Another participant suggested adding open-text space to allow respondents to explain and justify their responses and to reflect on variation or disagreement among team members.

Discussion

Our Think Aloud approach engaged frontline clinicians in the process of developing an abbreviated practical context assessment tool using plain language. The pCAT comprises 14 questions that assess ten CFIR constructs that range across four of the five framework domains: Innovation Characteristics, Outer Setting, Inner Setting, and Process (a copy is provided in Additional file 1). These constructs are among the most frequently reported as key determinants of implementation outcomes using the CFIR [2, 29]. Some of these constructs are also important for Lean quality improvement principles such as Goals and Feedback (i.e., alignment with objectives), Reflecting and Evaluating (e.g., using data to track outcomes), and Networks and Communications (e.g., open lines of dialogue) [30].

Context assessments are rarely done by practitioners within their own setting [31]. One reason is that measurement instruments often require expertise and are burdensome to apply [18, 31]. In deference to the expertise and knowledge of frontline clinicians within their own setting [32], and in acknowledgement of their limited time, practical context assessment tools are needed. Such tools should provide brief ratings of context to prompt reflection and problem-solving by frontline teams engaged in improvement, and they may help increase response rates for researchers and implementers who rely on these assessments to design strategies for successful implementation [1, 33].

Stanick et al. developed objective criteria by which to assess the pragmatism of a measurement instrument [18], dividing the criteria into “stakeholder-facing” and “objective” categories. We applied each of the five objective criteria, each of which uses a six-point rating scale (−1 to +4; see Table 2). Based on these objective criteria, the pCAT is relatively pragmatic, with scores of +3 or +4 on every criterion.
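
As an illustration of how such a rating exercise could be tallied, the sketch below scores a measure on five criteria using the six-point scale; the criterion labels are placeholders rather than the labels from Stanick et al. (see Table 2), and treating “relatively pragmatic” as a minimum of +3 on every criterion is simply our reading of the result reported above.

```python
# Hypothetical tally of five objective pragmatic criteria, each rated on
# the six-point scale (-1 to +4). Labels are placeholders; the scores
# mirror the report that the pCAT earned +3 or +4 on every criterion.
SCALE = range(-1, 5)  # -1, 0, +1, +2, +3, +4

ratings = {
    "objective criterion 1": 4,
    "objective criterion 2": 3,
    "objective criterion 3": 4,
    "objective criterion 4": 3,
    "objective criterion 5": 4,
}

assert all(score in SCALE for score in ratings.values())

is_relatively_pragmatic = min(ratings.values()) >= 3  # our operationalization
print(is_relatively_pragmatic)  # True
```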

Table 2 Objective pragmatic rating criteria

The pCAT is available online [30]. It requires no specialized training to administer and can be completed electronically or on paper. The pCAT has limitations. First, it is an abbreviated assessment and is not designed to comprehensively assess all CFIR constructs; though construct coverage is limited, the constructs included align with the updated version of the CFIR [34]. Second, the pCAT does not provide guidance about what respondents should do with the information elicited. Within the LEAP program [23], coaches worked with teams and highlighted the value of identifying barriers and facilitators when implementing changes so that barriers can be avoided or minimized and facilitators can be leveraged for success. Waltz et al. list recommended strategies that may best address each CFIR construct that manifests as a barrier [1]; Table 3 lists the most highly endorsed implementation strategies for each of the ten constructs in the pCAT. Third, each CFIR construct is assessed with a single question, and the pCAT does not follow a psychometric paradigm of development; it is offered as a brief practical tool for use by frontline teams, coaches, or facilitators to encourage collective understanding of local barriers and facilitators and to generate discussion about potential strategies. Finally, the content and structure of the final version are based on the experiences of 27 individuals engaged in a quality improvement learning program; all respondents were frontline clinicians who were members of quality improvement teams embedded in a VHA medical center-based weight management program.

Table 3 List of implementation strategies recommended to address pCAT constructs

Conclusion

The pragmatic context assessment tool (pCAT) is designed as an abbreviated pragmatic approach to assess barriers and facilitators in clinical settings. It is short (14 items), available online (www.cfirguide.org), and is designed to draw on the expertise and knowledge of people who work at the frontline and are most familiar with their own clinical context.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available because the qualitative and quantitative data were highly processed to support this study and to protect the identities of the individuals and locations that participated. These data are, however, available from the corresponding author on reasonable request.

Abbreviations

CFIR:

Consolidated Framework for Implementation Research

LEAP Program:

Learn. Engage. Act. Process.

pCAT:

Pragmatic context assessment tool

VHA:

Veterans Health Administration

References

  1. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):1–5.

  2. Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2015;11(1):1–3.

  3. Krause J, Van Lieshout J, Klomp R, Huntink E, Aakhus E, Flottorp S, Jaeger C, Steinhaeuser J, Godycki-Cwirko M, Kowalczyk A, Agarwal S. Identifying determinants of care for tailoring implementation in chronic diseases: an evaluation of different methods. Implement Sci. 2014;9(1):1–2.

  4. McEvoy R, Ballini L, Maltoni S, O’Donnell CA, Mair FS, MacFarlane A. A qualitative systematic review of studies using the normalization process theory to research implementation processes. Implement Sci. 2014;9(1):1–3.

  5. Bergström A, Ehrenberg A, Eldh AC, Graham ID, Gustafsson K, Harvey G, Hunter S, Kitson A, Rycroft-Malone J, Wallin L. The use of the PARIHS framework in implementation research and practice—a citation analysis of the literature. Implement Sci. 2020;15(1):1–51.

  6. Weiner BJ, Mettert KD, Dorsey CN, Nolen EA, Stanick C, Powell BJ, Lewis CC. Measuring readiness for implementation: a systematic review of measures’ psychometric and pragmatic properties. Implement Res Pract. 2020;1:2633489520933896.

  7. Powell BJ, Mettert KD, Dorsey CN, Weiner BJ, Stanick CF, Lengnick-Hall R, Ehrhart MG, Aarons GA, Barwick MA, Damschroder LJ, Lewis CC. Measures of organizational culture, organizational climate, and implementation climate in behavioral health: a systematic review. Implement Res Pract. 2021;2:26334895211018864.

  8. Dorsey CN, Mettert KD, Puspitasari AJ, Damschroder LJ, Lewis CC. A systematic review of measures of implementation players and processes: Summarizing the dearth of psychometric evidence. Implement Res Pract. 2021;2:26334895211002470.

  9. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8(1):1–20.

  10. Clinton-McHarg T, Yoong SL, Tzelepis F, Regan T, Fielding A, Skelton E, Kingsland M, Ooi JY, Wolfenden L. Psychometric properties of implementation measures for public health and community settings and mapping of constructs against the consolidated framework for implementation research: a systematic review. Implement Sci. 2016;11(1):1–22.

  11. Lennox L, Maher L, Reed J. Navigating the sustainability landscape: a systematic review of sustainability approaches in healthcare. Implement Sci. 2018;13(1):1–7.

  12. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, Boynton MH, Halko H. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):1–2.

  13. Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implement Sci. 2014;9(1):118.

  14. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.

  15. Glasgow RE. Critical measurement issues in translational research. Res Soc Work Pract. 2009;19(5):560–8.

  16. Tinc PJ, Gadomski A, Sorensen JA, Weinehall L, Jenkins P, Lindvall K. Applying the Consolidated Framework for implementation research to agricultural safety and health: barriers, facilitators, and evaluation opportunities. Saf Sci. 2018;107:99–108.

  17. Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013;45(2):237–43.

  18. Stanick CF, Halko HM, Nolen EA, Powell BJ, Dorsey CN, Mettert KD, Weiner BJ, Barwick M, Wolfenden L, Damschroder LJ, Lewis CC. Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale. Transl Behav Med. 2021;11(1):11–20.

  19. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8(1):1–7.

  20. Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the telephone lifestyle coaching (TLC) program: organizational factors associated with successful implementation. Transl Behav Med. 2017;7(2):233–41.

  21. Goodrich DE, Lowery JC, Burns JA, Richardson CR. The phased implementation of a national telehealth weight management program for veterans: mixed-methods program evaluation. JMIR Diabetes. 2018;3(4):e9867.

  22. Damschroder LJ, Reardon CM, AuYoung M, Moin T, Datta SK, Sparks JB, Maciejewski ML, Steinle NI, Weinreb JE, Hughes M, Pinault LF. Implementation findings from a hybrid III implementation-effectiveness trial of the Diabetes Prevention Program (DPP) in the Veterans Health Administration (VHA). Implement Sci. 2017;12(1):1–4.

  23. Damschroder LJ, Yankey NR, Robinson CH, Freitag MB, Burns JA, Raffa SD, Lowery JC. The LEAP Program: quality improvement training to address team readiness gaps identified by implementation science findings. J Gen Intern Med. 2021;36(2):288–95.

  24. Charters E. The use of think-aloud methods in qualitative research an introduction to think-aloud methods. Brock Educ J. 2003;12(2). https://doi.org/10.26522/brocked.v12i2.38.

  25. QSR International Pty Ltd. (2018) NVivo (Version 12). https://www.qsrinternational.com/nvivo-qualitative-dataanalysis-software/home.

  26. Leavy P, editor. The Oxford handbook of qualitative research. USA: Oxford University Press; 2014.

  27. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

  28. Veterans Health Administration. Program Guide 1200.21 VHA Operations Activities That May Constitute Research [Internet]. 2020 [accessed 5/20/2022]. Available from: https://www.research.va.gov/resources/policies/ProgramGuide-1200-21-VHA-Operations-Activities.pdf.

  29. Means AR, Kemp CG, Gwayi-Chore MC, Gimbel S, Soi C, Sherr K, Wagenaar BH, Wasserheit JN, Weiner BJ. Evaluating and optimizing the consolidated framework for implementation research (CFIR) for use in low-and middle-income countries: a systematic review. Implement Sci. 2020;15(1):1–9.

  30. The Consolidated Framework for Implementation Research – Technical Assistance for users of the CFIR framework [Internet]. n.d. [accessed 5/20/2022]. Available from: https://cfirguide.org/.

  31. Jones EL, Dixon-Woods M, Martin GP. Why is reporting quality improvement so hard? A qualitative study in perioperative care. BMJ Open. 2019;9(7):e030269.

  32. Veazie S, Peterson K, Bourne D, Anderson J, Damschroder L, Gunnar W. Implementing high-reliability organization principles into practice: a rapid evidence review. J Patient Saf. 2022;18(1):e320–8.

  33. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–5.

  34. Damschroder L, Reardon CM, Widerquist MA, Lowery JC. The Updated Consolidated Framework for Implementation Research: CFIR 2.0. (under review). n.d.

  35. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):1–4.


Acknowledgements

The authors would like to acknowledge the support of our operational partners at the VHA National Center for Health Promotion and Disease Prevention and all of the Learn. Engage. Act. Process. (LEAP) Program participants who took part in the Think Aloud interviews.

Funding

This work was supported by a program grant, Award # QUE 15–286, from the United States Department of Veterans Affairs, the Quality Enhancement Research Initiative.

Author information

Contributions

CHR collected the data. CHR and LJD wrote the first draft. CHR and LJD reviewed and commented on subsequent drafts of the manuscript. The author(s) read and approved the final manuscript.

Authors’ information

The authors have extensive experience applying the CFIR qualitatively across a range of studies. We are researchers embedded within and employed by the United States Veterans Health Administration (VHA), the largest integrated healthcare system in the USA. VHA has over 1000 medical centers, community-based outpatient clinics, and other entities, and serves 9.6 million enrolled US military Veterans. LJD was the lead developer of the CFIR; she has collaborated extensively with research teams across healthcare settings, including dozens of studies outside VHA. With nearly 20 years of experience in management consulting and other non-research settings, LJD brings a practical lens to implementation research. LJD and CHR helped lead development of the LEAP quality improvement learning program that engages frontline teams in hands-on execution of a Plan-Do-Study-Act cycle of change. The earliest forms of context assessment were used with LEAP teams. CHR was one of the first LEAP coaches, working closely with frontline teams. CHR also has 20 years of qualitative analysis experience and led data collection through semi-structured interviews as well as coding and analysis.

Corresponding author

Correspondence to Claire H. Robinson.

Ethics declarations

Ethics approval and consent to participate

This work was developed as a non-research activity (i.e., conducted under the authority of VHA operations without IRB approval) and complies with the guidance about authorization of non-research manuscripts outlined in VHA Program Guide 1200.21: VHA Operations Activities That May Constitute Research [28]. All authors attest that the activities that resulted in the production of this manuscript were conducted as part of non-research activities under the authority of the VHA National Center for Health Promotion and Disease Prevention.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.


Supplementary Information

Additional file 1. Final version of the pragmatic context assessment tool (pCAT).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Robinson, C.H., Damschroder, L.J. A pragmatic context assessment tool (pCAT): using a Think Aloud method to develop an assessment of contextual barriers to change. Implement Sci Commun 4, 3 (2023). https://doi.org/10.1186/s43058-022-00380-5

