Randomized controlled trial of tailored audit with feedback in VHA long-term care settings

Abstract

Background

The Long-Term Care QUERI program supported implementation of the Life-Sustaining Treatment Decisions Initiative in US Veterans Health Administration (VHA) long-term care settings. Using audit with feedback, the program worked with eleven Community Living Centers (CLCs) and twelve Home-Based Primary Care (HBPC) programs to increase rates of completed life-sustaining treatment (LST) templates. We distributed monthly feedback reports to site champions showing the number of Veterans with appropriate documentation. Although feedback reports are a common implementation tool, little is known about the most effective ways to design, distribute, and support them. We sought to test whether tailoring the reports with site-specific tips, or adding national comparator data, improved their effectiveness.

Methods

We conducted a cluster randomized controlled trial comparing monthly feedback reports containing site-tailored tips and national comparator data with our original feedback reports, which included only graphical and numerical data. CLC and HBPC team members were invited to complete brief quarterly surveys to determine whether they had received and used the feedback reports. The outcome for CLCs was the percent of residents with a completed LST template any time prior to the 14th day of their stay. The outcome for HBPC programs was the percent of Veterans with a completed LST template by their second HBPC visit.

Results

The response rate to the survey ranged between 6.8 and 19.3% of staff members across the CLC and HBPC sites with 12.8–25.5% of survey respondents reporting that they had seen the feedback reports. The linear regression models showed no significant association between receiving the enhanced feedback reports and having a higher documentation completion rate.

Conclusions

Feedback reports tailored with site-specific tips based on baseline context assessments and qualitative findings, or with national comparator data, did not affect the number of Veterans with a completed LST template. Having a higher proportion of CLC or HBPC team members view the reports was also not associated with an increase in LST template completion. These findings suggest that tailored audit with feedback may not have been effective at the program level, although the proportion of respondents who reported seeing the reports was small.

Background

The US Veterans Health Administration (VHA) National Center for Ethics in Health Care released the Life-Sustaining Treatment Decisions Initiative (LSTDI) in 2017, with a mandate for full implementation within 18 months of release [1]. The LSTDI was created to ensure that Veterans’ goals, values, and preferences for life-sustaining treatments (LSTs) are elicited, documented in the electronic health record, and honored across the VA system. The associated life-sustaining treatment (LST) template and order set is the standardized, durable electronic health record tool used to document goals of care conversations, and orders generated through it are the only method of documenting LST decisions, such as whether the patient desires full cardiac and pulmonary resuscitation in case of arrest [2]. Because LST orders can only be generated through the template, this requirement effectively ensures its use in inpatient settings.

The Long-Term Care (LTC) Quality Enhancement Research Initiative (QUERI) program was funded in 2015 to support implementation of the LSTDI in VHA long-term care settings. LTC QUERI worked with VA-owned and operated nursing homes, called Community Living Centers (CLCs), and Home-Based Primary Care (HBPC) programs across two VHA regional networks called Veterans Integrated Service Networks (VISNs) [3]. By contacting LTC leadership at each facility in the two VISNs, we identified site champions within each program who agreed to serve as liaisons for our work.

Our implementation strategy for the overall project was audit with feedback [3]. We selected this strategy because it has been extensively studied as an approach to modifying behavior among specific groups of providers [4, 5]. Additionally, audit with feedback has been shown to improve the richness of physician documentation, and our overall program goal was to support documentation of LST preferences in the electronic medical record [6, 7].

Audit with feedback involves aggregating clinical or other performance data, both over time and, in the case of unit or team feedback, across individual performance, and providing the aggregated data summary to individual practitioners, teams, or healthcare organizations. It has been shown to have a positive but modest effect, increasing the likelihood of achieving a desired behavior change by about 4 percentage points at the median [8]. Despite a strong body of literature, little is known about how to optimize the effectiveness of feedback interventions [9, 10]. Using comparator data and providing specific statements tailored to local conditions have been shown to improve the effectiveness of feedback interventions [8, 11]. Our initial feedback reports were developed through a user-centered design process [12].

Our objective in this sub-study was to test different approaches to tailoring feedback reports.

Methods

In this paper, we report the results of a sub-study cluster randomized controlled trial of tailored feedback reports conducted as part of the Long-Term Care QUERI (LTCQ) program. The protocol papers for the LTCQ program [3, 13] and the main results for the Community Living Center (nursing home) component of the study [14] were published previously. The main results for the Home-Based Primary Care component are currently under review.

Overall study

The entire LTCQ program was conducted between October 2015 and August 2020. In April 2018, we started sending quarterly feedback reports to site champions showing the number of Veterans who had a completed LST template while receiving care in a CLC or HBPC program, and we increased the frequency to monthly in October 2018 (Fig. 1: Timeline of feedback report distribution). The feedback reports were sent by LTCQ staff to the site champion each month via email, with a reminder that the reports could be shared with leadership and/or other team members. These reports provided no comparators or other contextual information. A total of eleven CLCs and twelve HBPC programs participated in the LTCQ program over the course of the entire study. Other details about the overall study are available in the previously published papers [3, 13].

Fig. 1 Timeline of feedback report distribution

Intervention description for sub-study

We conducted a cluster randomized trial of two types of tailored feedback reports, with a midpoint crossover, between September 2019 and August 2020. In the first type, we provided tips for overcoming barriers and making use of facilitators identified through previous data collection (see the “Tailoring methods” section below). In the second type, we provided performance comparators based on national data for the type of program, either Community Living Centers (VA nursing homes) or Home-Based Primary Care. We provide an example of each type of tailored feedback report in Additional File 1. All sites participating in the LTCQ feedback report study were included in the randomized trial.

Rationale

Our hypotheses were informed by the Clinical Performance Feedback Intervention Theory (CP-FIT) [15]. For the tip-tailored feedback reports, we hypothesized that reports tailored with tips related to areas of organizational concern would differ from non-tailored feedback reports in the following ways: greater distribution of the report, more individuals reading it, more individuals expressing understanding of it, more discussion of it among providers, and, ultimately, an increased proportion of Veterans with completed LST templates. We attempted to increase actionability by providing additional prompts to the clinical champions and other team members to help them overcome identified barriers and use identified facilitators to increase the proportion of documented LST templates among eligible Veterans. We kept the tips brief to limit the cognitive burden of reading and understanding the feedback reports.

For the comparator-tailored feedback reports, we hypothesized that providing comparator data would motivate the individual or team receiving the report to improve performance if it fell below the comparison level. Based on CP-FIT, an underlying mechanism is also actionability, by communicating room for improvement, as well as social influence, by comparing performance to others in the same type of setting. The decision to include comparator data was motivated by the theory of planned behavior, which emphasizes the potential for social pressure to influence changes in professional practice [16].

We were interested in testing the effect of the tip-tailored reports compared to the comparator-tailored reports and, overall, the tailored reports compared to the non-tailored reports used throughout the study.

Feedback reports were sent by email from the research team’s point of contact (one individual) for each VISN to the champion at each site, who was encouraged to share the reports with their respective care teams.

The trial lasted 12 months. The sites were divided into two groups as shown in Fig. 2. Group 1 received the tailored feedback reports (intervention) for the first 6 months, while Group 2 functioned as the control group, receiving the non-tailored feedback reports (as initially designed for the full study). Within Group 1, each site received the tip-tailored reports for 3 months and then the comparator-tailored reports for 3 months. We crossed over from intervention to control, and vice versa, at the midpoint of the trial. We used a crossover design because all sites were participating in the full LTCQ program and we did not want to create an imbalance between sites. Our hypothesis was that the tailored feedback reports would result in an increase in the number of Veterans with documented LST templates. Our overall program goal was to support implementation of the LSTDI, and we felt it was important for all sites to receive the enhanced feedback reports. We exposed each site to the tip-tailored reports followed by the comparator-tailored reports because, if our overall hypothesis that tailored and non-tailored feedback reports differed was supported, we were interested in separating the effect of one type of enhancement from the other.

Fig. 2 Trial Design

Tailoring methods

In one of the two VISNs involved in the sub-study, we asked CLC and HBPC team members to complete the Organizational Readiness to Change Assessment (ORCA) survey to serve as a baseline context assessment. In the other VISN, we conducted qualitative interviews based on the Consolidated Framework for Implementation Research (CFIR) for the baseline context assessment. In both cases, these were conducted over the first 18 months of the overall study. We used two different methods based on the strengths and preferences of the research team, which was geographically divided into two groups, one for each VISN.

The ORCA assesses respondents’ beliefs and attitudes about the strength of evidence for a specific evidence-based intervention (EBI), the favorability of the organizational context for change, and the capability to facilitate the implementation of the EBI based on individuals’ perceptions of factors within these domains [17, 18]. The ORCA scales and sub-scales are rated on a scale from 1 (strongly disagree) to 5 (strongly agree) [17, 18]. We analyzed these data descriptively for each site.

We conducted qualitative interviews with the site champions from the second VISN to assess barriers and facilitators to implementing the LSTDI, as well as to gauge use and distribution of the feedback reports. We used the CFIR to code the interview data, assessing salient and frequent themes emerging from the interviews, coded into CFIR constructs. The CFIR is one of the most widely used determinant frameworks to assess and describe the context of potential barriers and facilitators to implementation [19,20,21]. CFIR constructs are rated for valence (+/−) and strength (1 or 2), with +2 indicating a strong positive influence on implementation, +1 a weak to moderate positive influence, −2 a strong negative influence, and −1 a weak to moderate negative influence [22].

We modified the reports in September 2019 to create one version providing tips tailored to each site (tip-tailored feedback reports) based on data from either the ORCA surveys or the CFIR interviews, and another version providing LST template comparator data based on the entire VA (Additional File 1 – Feedback Report Examples). The LTCQ team, along with a co-investigator who also served as a CLC director, carried out the tailoring.

Our feedback reports were tailored by delivering key messages/tips focused on areas of organizational concern identified in the baseline ORCA survey results and the qualitative interviews coded using the CFIR (see Additional File 2). Eleven CLC and HBPC programs from one VISN received tips based on baseline ORCA survey results, and twelve CLC and HBPC programs from the other VISN received tips based on qualitative interviews coded using the CFIR. For each site, members of the research team (AES and JK) identified constructs with high vs. low ORCA scores and high vs. low CFIR ratings.

Each tip-tailored report included one positive tip related to an area of organizational concern in which the site scored highly and one reinforcing tip related to an area in which site scores were lower; a sketch of this selection step follows. A total of seventeen tips were generated for use across all sites, and each site received six positive and six reinforcing tips over the course of the trial. Positive tips addressed ORCA and CFIR constructs related to knowledge and beliefs about the intervention/compatibility, strong leadership, strong staff culture, strong clinical champions, positive networks and communications, available resources, and self-efficacy. Reinforcing tips addressed constructs related to lack of leadership support, lack of resources, lack of clinical champion self-efficacy, and lack of staff buy-in.
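To make the selection step concrete, the following is a minimal sketch of how a positive and a reinforcing tip could be drawn from one site’s construct ratings. The construct names, ratings, selection rule, and tip text are all hypothetical illustrations, not the actual seventeen tips or the authors’ exact procedure.

```r
# Hypothetical CFIR construct ratings for one site (valence x strength)
cfir_ratings <- c(leadership_engagement = 2, networks_communication = 1,
                  available_resources = -1, champion_self_efficacy = -2)

# Illustrative rule: positive tip targets the strongest facilitator,
# reinforcing tip targets the strongest barrier
positive_construct    <- names(which.max(cfir_ratings))
reinforcing_construct <- names(which.min(cfir_ratings))

tips <- list(
  leadership_engagement  = "Leadership supports LST conversations; keep sharing your results with them.",
  champion_self_efficacy = "Consider pairing newer staff with experienced LST template users."
)
cat(tips[[positive_construct]], tips[[reinforcing_construct]], sep = "\n")
```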

The comparator-tailored report included national comparator data on LST template completion rates. These reports added a line to the bar chart showing the median monthly performance of all CLC or HBPC sites nationally across the VA. As the LSTDI was still in the early implementation phase, we used median performance as the comparator so that lower-performing sites would not disengage from the feedback or view improvement as unattainable [23]. We did not modify the feedback report delivery mechanism or other components of the intervention during the trial.
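As an illustration of the comparator-tailored layout, the sketch below draws a monthly bar chart with a dashed national-median line. It assumes ggplot2; the paper does not specify the charting tool, and all values are hypothetical.

```r
library(ggplot2)

# Hypothetical site-level completion rates for six months of the trial
site_monthly <- data.frame(
  month        = factor(c("Sep", "Oct", "Nov", "Dec", "Jan", "Feb"),
                        levels = c("Sep", "Oct", "Nov", "Dec", "Jan", "Feb")),
  pct_complete = c(42, 45, 51, 49, 55, 58)
)
national_median <- 50  # hypothetical national median for this program type

ggplot(site_monthly, aes(x = month, y = pct_complete)) +
  geom_col(fill = "steelblue") +
  # dashed line marks the national median comparator described in the text
  geom_hline(yintercept = national_median, linetype = "dashed") +
  annotate("text", x = 0.7, y = national_median + 3,
           label = "National median", hjust = 0) +
  labs(x = "Month", y = "Veterans with completed LST template (%)")
```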

Evaluation methods

We used a randomized design with crossover controls to assess the effect of tailoring overall, and we planned to assess differences between tip and comparator tailoring if we found an overall difference between tailored and non-tailored reports.

Every 3 months during the trial, we asked CLC and HBPC team members to complete a brief survey (the Feedback Uptake Scale [24]) in REDCap, reporting on receipt and understanding of the feedback report. The data from this survey allowed us to assess some measures of implementation fidelity, including whether intended recipients (the care teams) received and understood the feedback. The survey asked staff whether they received the report, how much of it they read, how well they felt they understood the information in it, how useful they found it, and whether they discussed it with any other staff at their unit or facility [10].

Site champions and CLC and HBPC leadership determined who would be invited to participate by providing the LTCQ team with a list of staff email addresses. Some sites wanted to include all CLC or HBPC team members, while others wanted to include only team members involved in completing LST templates. The number of CLC team members invited to complete the survey ranged from 13 to 107, with a median of 48 across the eleven programs. The number of HBPC team members invited ranged from 12 to 60, with a median of 26 across the twelve programs.

For both the CLC and HBPC analyses, we aggregated the months from September 2019 through February 2020 and compared them with the months from March 2020 through August 2020, using segmented regression to implement an interrupted time series analysis [25]. LST templates are stored as specific data elements, called LST health factors, in the VA’s Corporate Data Warehouse (CDW). Data for the completed templates from CDW were merged with census data for Veterans from the CLCs and HBPC programs participating in LTC QUERI. The data were aggregated using SAS version 9, and the regression models were run in R version 4.0 using the linear model function.
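For readers unfamiliar with the method, the following is a generic sketch of segmented regression for an interrupted time series in the style of Wagner et al. [25], with a level-change and a slope-change term at the trial midpoint. The data and variable names are hypothetical, not the study’s dataset; the study’s actual model specification is described below.

```r
# Twelve hypothetical monthly completion rates, Sep 2019 - Aug 2020
monthly <- data.frame(
  time = 1:12,
  pct  = c(40, 42, 41, 45, 44, 46, 47, 49, 50, 52, 51, 54)
)
monthly$period2    <- as.integer(monthly$time > 6)  # second 6-month interval
monthly$time_after <- pmax(0, monthly$time - 6)     # months since crossover

# period2 estimates the level change at the midpoint;
# time_after estimates the change in slope after it
its_fit <- lm(pct ~ time + period2 + time_after, data = monthly)
summary(its_fit)
```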

We analyzed data from 11 CLCs; seven were in Group 1 and four in Group 2 (Fig. 3). The percent of CLC residents with a completed LST template any time prior to the 14th day of the stay and prior to discharge was modeled as the outcome variable. We used 14 days as the timepoint because CLC residents have multiple encounters with providers after admission, providing numerous opportunities to complete the LST template.

Fig. 3 Consort Diagram of Intervention and Control Sites

We included data from 12 HBPC sites; five were in Group 1 and seven in Group 2. For HBPC sites, the outcome modeled was the percent of Veterans with a completed LST template at any time up to their second HBPC visit. The second visit was chosen because the admission visit is often lengthy, with numerous assessments, so the LST template may not be addressed until the second visit.
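To make the two outcome definitions concrete, the sketch below computes each from minimal hypothetical data. It assumes dplyr, and every column name (admit_date, template_date, visit_number, and so on) is a hypothetical stand-in for the merged CDW and census fields, which the paper does not enumerate.

```r
library(dplyr)

# CLC outcome: percent of residents with a completed LST template
# before the 14th day of the stay (hypothetical admissions data)
clc_residents <- data.frame(
  site          = c("A", "A", "B", "B"),
  admit_date    = as.Date(c("2019-09-02", "2019-09-10",
                            "2019-09-05", "2019-09-20")),
  template_date = as.Date(c("2019-09-08", NA, "2019-09-30", "2019-09-25"))
)
clc_outcome <- clc_residents %>%
  mutate(done_by_day14 = !is.na(template_date) &
           as.numeric(template_date - admit_date) < 14) %>%
  group_by(site) %>%
  summarise(pct_complete = 100 * mean(done_by_day14), .groups = "drop")

# HBPC outcome: percent of Veterans with a completed LST template
# by their second HBPC visit (hypothetical visit-level data)
hbpc_visits <- data.frame(
  site               = "C",
  veteran_id         = c(1, 1, 2, 2),
  visit_number       = c(1, 2, 1, 2),
  template_completed = c(FALSE, TRUE, FALSE, FALSE)
)
hbpc_outcome <- hbpc_visits %>%
  filter(visit_number <= 2) %>%
  group_by(site, veteran_id) %>%
  summarise(done = any(template_completed), .groups = "drop") %>%
  group_by(site) %>%
  summarise(pct_complete = 100 * mean(done), .groups = "drop")
```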

We used a linear model, fit separately for the CLC and HBPC analyses, to estimate the effect of three independent variables: an indicator for being in the intervention group, an indicator for the second 6 months of the study, and an interaction term capturing the proportion of survey respondents who had seen the reports during the second 6 months of the study.
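A minimal sketch of this model, fit with R’s lm() as the paper describes, follows. The data frame (one row per site per 6-month interval) and its column names are hypothetical; prop_seen stands in for the proportion of a site’s survey respondents who reported seeing the reports.

```r
# Hypothetical site-by-interval analysis data
analysis <- data.frame(
  pct_complete = c(48, 55, 51, 60, 45, 52),             # outcome (%)
  intervention = c(1, 0, 0, 1, 1, 0),                   # tailored reports?
  second_half  = c(0, 1, 0, 1, 0, 1),                   # Mar-Aug 2020
  prop_seen    = c(0.20, 0.15, 0.25, 0.10, 0.18, 0.22)  # saw the reports
)

# Intervention indicator, second-interval indicator, and the interaction
# of the second interval with the proportion who saw the reports
fit <- lm(pct_complete ~ intervention + second_half + second_half:prop_seen,
          data = analysis)
summary(fit)
```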

Results

Table 1 shows the average response rates to the Feedback Uptake Scale survey across each time interval and cluster. Overall, the response rates ranged between 6.8 and 19.3% across the CLC and HBPC sites.

Table 1 Response rates to quarterly surveys

Table 2 shows the overall number of people at each site who responded to the surveys, the number and percent who saw the feedback reports, and the average proportion of Veterans who had completed LST templates. The proportion of respondents who reported that they had seen the feedback reports was between 12.8 and 25.5% for CLC and HBPC sites.

Table 2 Respondent reports of receiving feedback reports and proportion of Veterans with LST templates (percentages are based only on reports received)

Figures 4 (CLC sites) and 5 (HBPC sites) show the distribution of Veterans with completed LST templates in each 6-month interval in the control and intervention groups. There was a wide distribution in the completion rate of LST templates across the 11 CLC and 12 HBPC sites. Overall, the proportions of completed LST templates were higher in the second 6-month interval, with no significant difference between the intervention and control groups (Figs. 4 and 5).

Fig. 4 CLC Veterans with Completed LST Templates by Intervention and Control Groups

Fig. 5 HBPC Veterans with Completed LST Templates by Intervention and Control Groups

The linear regression models for the CLC and HBPC sites modeled the respective outcomes with an indicator for being in the intervention group in the given 6-month interval, an indicator for the second 6 months of the study, and an interaction term capturing the proportion of survey respondents who had seen the reports during the second 6 months of the study; sites in the intervention group in one 6-month interval were in the non-intervention group in the other. Table 3 shows the results for the two linear regression models. The models showed no significant association between receiving the tailored feedback reports in a given 6-month interval and having a higher proportion of residents or Veterans with a completed LST template. The second 6-month interval was also not significantly associated with a higher proportion of completed LST templates, although both models showed an increase in that interval. The interaction term for the proportion of survey respondents who had seen the reports in the second 6-month interval was likewise not significant. Because we found no significant difference overall between the tailored and non-tailored feedback reports, we did not analyze the differences between the two types of tailored reports.

Table 3 Linear regression model results

Discussion

The results of the regression models indicate that neither the enhanced feedback reports nor having a higher proportion of survey respondents viewing the reports was associated with an increase in the proportion of Veterans with completed LST templates. Although the change was not statistically significant, the passage of time appears to have increased the proportion of eligible Veterans with completed LST templates, which likely reflects the overall success of LSTDI implementation by the end of the intervention period. Prior research has found that it takes time to change practice, so a longer observation period might have revealed a larger impact of the tailored feedback reports on completed LST templates in the targeted CLCs and HBPC programs [26]. As we note in the Challenges and limitations section, we were extremely time constrained.

Prior research has found that feedback is most effective when delivered by a supervisor or respected colleague, which is why we identified site champions from each program to be the liaisons for the feedback reports [27]. However, the feedback reports were rarely distributed widely by the site champions. The surveys suggested that, in general, site champions did not spend a great deal of time with the feedback reports, and most did not distribute them to team members or others in their programs. We cannot tell from the surveys why this was the case, but it clearly reduced the probability that any action would be taken based on the feedback reports. In some interviews conducted as part of the larger study, we learned that some site champions were reluctant to share the feedback reports out of concern that they might demotivate staff whose performance appeared poor. This is in line with prior research finding that feedback perceived to be punitive is often less effective [28]. Others were concerned about taking staff time to review feedback reports given the general pressures of their work.

We recommend that future projects planning to use site champions to distribute feedback reports assess the champions’ quality improvement capabilities and beliefs about data, which prior work has found to influence engagement with feedback interventions [29]. Although the LTCQ team had been working with the site champions for close to 2 years at the time of this RCT, more time spent building relationships with them may have mitigated skepticism and enhanced feedback acceptability [30]. The feedback reports were delivered to site champions via email; also providing an opportunity for champions to discuss the data with LTCQ staff might have supported use of the reports [31]. In our analysis of the overall project findings, a more intensive intervention that included coaching and frequent engagement with site champions and other staff was more effective in increasing the proportion of eligible Veterans with completed LST templates [14].

Challenges and limitations

There are some limitations to this work. Only a small number of CLCs and HBPC sites were included out of the nearly 100 VA CLCs and several hundred HBPC teams nationally, and limited statistical power may have prevented us from detecting an effect of the tailored feedback reports. The characteristics of different VAMC sites vary significantly. We examined only 1 year of data, beginning 2 years after the LTCQ team started sending feedback reports to sites; the feedback reports may have played a larger role in encouraging use of the LST template in those earlier years. As more Veterans have a completed LST template, there is a declining proportion of the population that remains to be reached, especially in CLCs [14]. We note that the crossover design may have led to “contamination” between control and intervention sites: the effects of the enhanced feedback reports received by the first intervention group could have carried into the control period if the reports were of more interest to staff after the tailoring period [32]. Based on the limited survey data, which showed that the reports were not distributed widely to staff, we do not believe this was a factor.

Furthermore, our study time frame was interrupted by the COVID-19 pandemic, which took attention and resources away from tasks such as LST template completion. During this time, many VA CLCs were converted to COVID units and/or had many staff members detailed to inpatient COVID units, so CLC staffing levels decreased. HBPC teams were affected by staffing shortages as well as restrictions on in-home visits. It is reasonable to assume that these disruptions led to a decreased emphasis on the LSTDI and LST template completion. Another notable limitation is how few CLC and HBPC team members completed the quarterly surveys; we can hypothesize that non-response indicates team members did not receive the feedback reports from their site champion.

Our findings about whether the feedback reports actually reached the intended recipients, the care teams providing direct care to Veterans in the two settings, raise an important concern. Many reports of feedback intervention studies do not discuss whether or how reports were assessed as reaching their intended recipients. It is difficult to construct a cogent theory of how feedback can work if the intended recipients do not receive it. Ours is one of a small number of studies of feedback interventions that attempted to measure receipt of the feedback report, as well as whether it was understood. We suggest that this is an important gap in the audit with feedback literature that should be addressed in future studies.

Conclusions

Our randomized controlled trial of tailored audit with feedback showed that tailored feedback reports did not significantly enhance processes or outcomes compared to simple feedback reports. This occurred in the context of poor response to the surveys assessing whether staff received the feedback reports; even among those who responded, few reported having received the reports. We had hypothesized that reports tailored to areas of site-specific organizational concern, and reports providing comparator data, would be distributed more widely to CLC and HBPC staff members than non-tailored feedback reports. The tailored reports did not lead to an increased proportion of Veterans with completed LST templates during the study period. Future research and implementation science teams should consider how to support and encourage wider distribution of tailored audit with feedback to achieve the desired behavior change. Measuring whether feedback reports reach their intended audience is critical.

Availability of data and materials

Data will be available from the authors on request.

Abbreviations

CFIR: Consolidated Framework for Implementation Research
CLC: Community Living Centers
EBI: Evidence-Based Intervention
HBPC: Home-Based Primary Care
LST: Life-sustaining treatments
LSTDI: Life-Sustaining Treatment Decisions Initiative
LTC: Long-term care
LTCQ: Long-Term Care QUERI
ORCA: Organizational Readiness to Change Assessment
QUERI: Quality Enhancement Research Initiative
VISN: Veterans Integrated Service Networks

References

1. Department of Veterans Affairs. Life-sustaining treatment decisions: eliciting, documenting and honoring patients’ values, goals and preferences. Washington, DC: VHA Handbook 1004; 2017.
2. Foglia MB, Lowery J, Sharpe VA, Tompkins P, Fox E. A comprehensive approach to eliciting, documenting, and honoring patient wishes for care near the end of life: the Veterans Health Administration’s life-sustaining treatment decisions initiative. Jt Comm J Qual Patient Saf. 2019;45(1):47–56.
3. Sales AE, Ersek M, Intrator OK, Levy C, Carpenter JG, Hogikyan R, et al. Implementing goals of care conversations with veterans in VA long-term care setting: a mixed methods protocol. Implement Sci. 2016;11(1):132.
4. Colquhoun HL, Brehaut JC, Sales A, Ivers N, Grimshaw J, Michie S, et al. A systematic review of the use of theory in randomized controlled trials of audit and feedback. Implement Sci. 2013;8:66.
5. Hysong SJ. Meta-analysis: audit and feedback features impact effectiveness on care quality. Med Care. 2009;47(3):356–63.
6. Lorenzetti DL, Quan H, Lucyk K, Cunningham C, Hennessy D, Jiang J, et al. Strategies for improving physician documentation in the emergency department: a systematic review. BMC Emerg Med. 2018;18(1):36.
7. Hanson JM, Johnson G, Clancy MJ. The effect of audit and feedback on data recording in the accident and emergency department. J Accid Emerg Med. 1994;11(1):45–7.
8. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
9. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, et al. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9(1):14.
10. Sales AE, Fraser K, Baylon MAB, O’Rourke HM, Gao G, Bucknall T, et al. Understanding feedback report uptake: process evaluation findings from a 13-month feedback intervention in long-term care settings. Implement Sci. 2015;10(1):20.
11. Brehaut JC, Colquhoun HL, Eva KW, Carroll K, Sales A, Michie S, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–41.
12. Landis-Lewis Z, Kononowech J, Scott WJ, Hogikyan RV, Carpenter JG, Periyakoil VS, et al. Designing clinical practice feedback reports: three steps illustrated in Veterans Health Affairs long-term care facilities and programs. Implement Sci. 2020;15(1):7.
13. Carpenter J, Miller SC, Kolanowski AM, Karel MJ, Periyakoil VS, Lowery J, et al. Partnership to enhance resident outcomes for community living center residents with dementia: description of the protocol and preliminary findings. J Gerontol Nurs. 2019;45(3):21–30.
14. Carpenter JG, Scott WJ, Kononowech J, Foglia MB, Haverhals LM, Hogikyan R, et al. Evaluating implementation strategies to support documentation of veterans’ care preferences. Health Serv Res. 2022;57(4):734–43.
15. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, et al. Clinical Performance Feedback Intervention Theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):40.
16. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.
17. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4:38.
18. Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care. 1998;7(3):149–58.
19. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
20. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
21. Skolarus TA, Lehmann T, Tabak RG, Harris J, Lecy J, Sales AE. Assessing citation networks for dissemination and implementation research frameworks. Implement Sci. 2017;12(1):97.
22. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51.
23. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation: a 35-year odyssey. Am Psychol. 2002;57(9):705–17.
24. Sales AE, Fraser K, Baylon MAB, O’Rourke HM, Gao G, Bucknall T, et al. Understanding feedback report uptake: process evaluation findings from a 13-month feedback intervention in long-term care settings. Implement Sci. 2015;10:20.
25. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309.
26. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41.
27. Edes T, Shreve S, Casarett D. Increasing access and quality in Department of Veterans Affairs care at the end of life: a lesson in change. J Am Geriatr Soc. 2007;55(10):1645–9.
28. Hysong SJ, Best RG, Pugh JA. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1(1):9.
29. Desveaux L, Ivers NM, Devotta K, Ramji N, Weyman K, Kiran T. Unpacking the intention to action gap: a qualitative study understanding how physicians engage with audit and feedback. Implement Sci. 2021;16(1):19.
30. Cooke LJ, Duncan D, Rivera L, Dowling SK, Symonds C, Armson H. The Calgary Audit and Feedback Framework: a practical, evidence-informed approach for the design and implementation of socially constructed learning interventions using audit and group feedback. Implement Sci. 2018;13(1):136.
31. Desveaux L, Rosenberg-Yunger ZRS, Ivers N. You can lead clinicians to water, but you can’t make them drink: the role of tailoring in clinical performance feedback to improve care quality. BMJ Qual Saf. 2023;32(2):76–80.
32. Reich NG, Milstone AM. Improving efficiency in cluster-randomized study design and implementation: taking advantage of a crossover. Open Access J Clin Trials. 2014;6:11–5.

Acknowledgements

Views expressed in this paper are those of the authors and not of the Department of Veterans Affairs or the VA Office of Research and Development. We are grateful for the work of all the site coordinators, as well as the staff who continue to carry out the important work of caring for Veterans in VA long-term care settings and who participated in this project. We are also grateful for the thoughtful reviews provided by reviewers, which have significantly strengthened this report.

Funding

This work was supported by the Department of Veterans Affairs Quality Enhancement Research Initiative program (Project ID: QUE 15–288). The views expressed in this paper are those of the authors and do not necessarily represent the position or policy of the Department of Veterans Affairs or the United States government.

Author information

Contributions

JK, WS, ZLL, and AES were all involved in the design, conduct, and analysis of the RCT. All authors revised the manuscript for important intellectual content. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jennifer Kononowech.

Ethics declarations

Ethics approval and consent to participate

The protocol for this work was reviewed by the VA Ann Arbor Healthcare System’s Institutional Review Board and was determined to be Exempt under Category 2.

Consent for publication

Not applicable.

Competing interests

The authors declare that Anne Sales is co-Editor-in-Chief of Implementation Science Communications. She was not involved in the review of this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Feedback Report Examples.

Additional file 2.

Tips for Tip-Tailored Feedback Reports.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Kononowech, J., Scott, W., Landis-Lewis, Z. et al. Randomized controlled trial of tailored audit with feedback in VHA long-term care settings. Implement Sci Commun 4, 129 (2023). https://doi.org/10.1186/s43058-023-00510-7
