
The effectiveness of champions in implementing innovations in health care: a systematic review

Abstract

Background

Champions have been documented in the literature as an important strategy for implementation, yet their effectiveness has not been well synthesized in the health care literature. The aim of this systematic review was to determine whether champions, tested in isolation from other implementation strategies, are effective at improving innovation use or outcomes in health care.

Methods

The JBI systematic review method guided this study. A peer-reviewed search strategy was applied to eight electronic databases to identify relevant articles. We included all published articles and unpublished theses and dissertations that used a quantitative study design to evaluate the effectiveness of champions in implementing innovations within health care settings. Two researchers independently completed study selection, data extraction, and quality appraisal. We used content analysis and vote counting to synthesize our data.

Results

After screening the titles and abstracts of 7566 records and the full texts of 2090 articles, we included 35 studies in our review. Most of the studies (71.4%) operationalized the champion strategy as the presence or absence of a champion. In a subset of seven studies, five found associations between exposure to champions and increased use of best practices, programs, or technological innovations at an organizational level. In the other subsets, the evidence on champions' effects on innovation use by patients or providers, or on improving outcomes, was either mixed or scarce.

Conclusions

We identified a small body of literature reporting an association between the use of champions and increased instrumental use of innovations by organizations. However, more research is needed to determine causal relationships between champions and innovation use and outcomes. Although no adverse effects of using champions have been reported, opportunity costs may be associated with their use. Until more evidence becomes available about the effectiveness of champions at increasing innovation use and outcomes, the decision to deploy champions should consider the needs and resources of the organization and include an evaluation plan. To further our understanding of champions' effectiveness, future studies should (1) use experimental study designs in conjunction with process evaluations, (2) describe champions and their activities, and (3) rigorously evaluate the effectiveness of champions' activities.

Registration

Open Science Framework (https://osf.io/ba3d2). Registered on November 15, 2020.

Introduction

Evidence-based practice (EBP) refers to the development and provision of health services according to the best research evidence, health care providers' expertise and patients' values and preferences [1]. Adoption of EBP by organizations can create safer practices, improve patient outcomes and decrease health care costs [2]. Best practices and technologies can both be defined as innovations [3, 4]. However, several authors have reported that health services and practices are not always based on best evidence [5,6,7,8,9]. Braithwaite and colleagues summarized that 60% of health services in the USA, England and Australia follow best practice guidelines, about 30% of health services are of low value, and 10% of patients globally experience iatrogenic harm [10].

To implement innovations, research evidence must be synthesized, adapted and applied in a specific health care context, and this adoption must be evaluated [11]. The adoption of innovations is improved when devoted individuals, often referred to as champions, facilitate implementation [3, 12, 13]. Champions are individuals (health care providers, management [14, 15], or lay persons [16, 17]) who volunteer or are appointed to enthusiastically promote and facilitate implementation of an innovation [13, 18, 19]. There is confusion and overlap between the concept of champion and other concepts, such as opinion leaders, facilitators, linking agents, change agents [19, 20], coaches and knowledge brokers [19]. Some studies have attempted to clarify these different roles that are intended to facilitate implementation [19, 20]. Despite this, these terms are sometimes used synonymously, while at other times treated as different concepts [19, 21]. Hence, we sought to only examine champions in this study.

There are at least four recently published reviews that reported on the effectiveness of champions [21,22,23,24]. In 2016, Shea and Belden [24] performed a scoping review (n = 42) to collate the characteristics and impacts of health information technology champions. They reported that in a subset of studies (24 qualitative and three quantitative), 23 of the 27 studies found that champions had a positive impact during the implementation of health information technology [24]. In 2018, Miech and colleagues [21] conducted an integrative review (n = 199) of the literature on champions in health care. They reported a subset of 11 quantitative studies (four studies that randomly allocated the presence and absence of champions and seven studies that reported an odds ratio) that evaluated the effectiveness of champions [21]. They reported that, despite some mixed findings in this subset, use of champions generally influenced adoption of innovations [21]. In 2020, Wood and colleagues [23] conducted a systematic review (n = 13) on the role and efficacy of clinical champions in facilitating implementation of EBPs in settings providing drug and alcohol addiction and mental health services. They reported that champions influenced health care providers' use of best practices or evidence-based resources in four qualitative studies [23]. In 2021, Hall and colleagues [22] performed a systematic review and meta-analysis of randomized controlled trials (RCTs; n = 12) that evaluated the effectiveness of champions, as part of a multicomponent intervention, at improving guideline adherence in long-term care homes. They concluded from three RCTs that there is low-certainty evidence suggesting that the use of champions may improve staff adherence to guidelines in long-term care settings [22].

According to Tufanaru and colleagues [25], synthesizing the effectiveness of an intervention requires summarizing quantitative studies using a systematic process. As described above, two of the previous reviews discussing champions' effectiveness were primarily composed of qualitative studies [23, 24]. Synthesizing qualitative studies may highlight relationships that exist between champions and aspects of implementation but does not inform champions' effectiveness based on the definition outlined by Tufanaru and colleagues [25]. Furthermore, some of the previous reviews examining champions' effectiveness were limited by the following: (1) type of innovation (i.e. health information technology [24]); (2) setting (i.e. long-term care settings [22] or health care settings providing mental health and addiction treatment [23]); or (3) study design/effect size (i.e. only including experimental design studies [21, 22] or studies reporting odds ratios [21]). Moreover, as some of the previous reviews sought to examine other aspects pertaining to champions in addition to champions' effectiveness, they utilized study designs (i.e. integrative review [21], scoping review [24]) that did not require some of the conventional steps for systematic reviews outlined in the JBI manual [25] and the Cochrane handbook [26]. For example, grey literature was not included, or the methodological quality of included studies was not appraised, in the two cited reviews [21, 24].

To build on the four reviews describing champions' effectiveness [21,22,23,24], we conducted a systematic review to determine whether the use of champions, tested in isolation from other implementation strategies, is effective at increasing the use of innovations across health care settings and innovation types. Our review is rooted in a post-positivist paradigm [27] because it focused on the relationships between measurable components of champions and implementation and emphasized the rigour attributed to study design (e.g. experimental studies are more rigorous than quasi-experimental and observational studies). The research questions were as follows: (1) How are champions described and operationalized in the articles that evaluate their effectiveness? (2) What are the effects of champions on the uptake of innovations (knowledge use) by patients, providers and systems/facilities? (3) What are the effects of champions on patient, provider and system/facility outcomes?

Methods

The research team followed the JBI approach to conducting a systematic review of effectiveness [25] and used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [28] and the Synthesis Without Meta-analysis (SWiM) reporting guideline for systematic reviews [29]. The research team registered the review in the Open Science Framework as part of a broader scoping review [30]. See Additional files 1 and 2 for the PRISMA and SWiM checklists, respectively.

Search strategy and study selection

Search strategy

WJS devised a search strategy with a health sciences librarian for a larger scoping review that aimed to describe champions in health care. A second health sciences librarian assessed the search strategy using the Peer Review of Electronic Search Strategies (PRESS) checklist [31]. The search strategy (outlined in Additional file 3) consisted of Boolean phrases and Medical Subject Headings (MeSH) terms for the following concepts and their related terms: champions, implementation and health care/health care context. Eight electronic databases (Business Source Complete, CINAHL, EMBASE, Medline, Nursing and Allied Health, PsycINFO, ProQuest Thesis and Dissertations, and Scopus) were searched from inception to October 26, 2020, to identify relevant articles. Further, WJS identified and assessed additional records retrieved from the reference lists of included studies and of synthesis studies captured by the search strategy, and from forward citation searches of included studies.
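
Purely as an illustration of the Boolean structure described above (related terms ORed within each concept, and the three concepts ANDed together), the query logic can be sketched in a few lines of Python. The specific keywords below are assumptions for demonstration only; the actual peer-reviewed strategy appears in Additional file 3.

    # Illustrative sketch only: the Boolean structure combining the three search concepts.
    # The keywords below are assumptions; the peer-reviewed strategy is in Additional file 3.
    concepts = {
        "champions": ['champion*'],
        "implementation": ['implement*', '"knowledge translation"', 'adopt*'],
        "health care": ['"health care"', 'healthcare', 'hospital*', 'clinic*'],
    }

    # OR the related terms within each concept, then AND the three concepts together.
    query = " AND ".join("(" + " OR ".join(terms) + ")" for terms in concepts.values())
    print(query)
    # (champion*) AND (implement* OR "knowledge translation" OR adopt*)
    #   AND ("health care" OR healthcare OR hospital* OR clinic*)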

Eligibility criteria

Inclusion

We included all published studies and unpublished theses and dissertations that used a quantitative study design to evaluate the effectiveness of individuals explicitly referred to as champions at either increasing the use of innovations or improving patient, provider, or system/facility outcomes within a health care setting. English-language articles were included regardless of date of publication or type of quantitative study design.

Exclusion

Synthesis studies, qualitative studies, study protocols, conference abstracts, editorials and opinion papers, case studies, studies not published in English, articles without an available full text, and articles not about knowledge translation or EBP were excluded.

Study selection

We used Covidence [32] to deduplicate records; WJS and MDV independently assessed the titles and abstracts of these deduplicated records. Records were included if the title and abstract mentioned champions within health care. All potentially relevant articles and articles with insufficient information were included for full-text screening. WJS and MDV independently assessed the inclusion of full-text articles in accordance with the eligibility criteria detailed above. Discrepancies were resolved through consensus or, if necessary, through consultation with a third senior research team member (ML, IDG, JES). WJS and MDV piloted the eligibility criteria on 100 records and 50 full-text articles.

Data extraction

WJS and MDV extracted data in duplicate using a standardized and piloted extraction form created by the research team in DistillerSR [33]. The following data were extracted: (1) study characteristics: first author, year of publication, study design, country, setting, details on the innovation being implemented, study limitations, funding, and conflicts of interest; (2) study participant demographics: sample size, age, sex and gender identity, and professional role; (3) champion demographics: number of champions, age, sex and gender identity, and professional role; (4) operationalization of champions: quantitative measures relative to the presence or influence of champions and the reliability and validity of these measures; and (5) study outcomes: the dependent variable evaluated with use of champions, the method of measurement of the dependent variable, the reliability and validity of measure(s), the statistical analysis/approach undertaken, and the statistical results and their significance at a p-value of 0.05 or less. WJS and MDV resolved discrepancies through discussion or through consultation of a senior research team member. WJS contacted authors for missing data integral to the analysis (e.g. to clarify statistical test results when exact values were not reported in a figure in an article).

Quality appraisal

WJS and MDV independently appraised study methodological quality using five JBI critical appraisal tools for (1) case–control studies [34], (2) cohort studies [34], (3) cross-sectional studies [34], (4) quasi-experimental (non-randomized experimental) studies [25] and (5) randomized controlled trials [25]. Non-controlled before-and-after studies and interrupted time series were assessed using the critical appraisal tool for quasi-experimental studies [25]. Discrepancies were resolved through consensus. Each question response was attributed a score according to a scoring system from a recently published JBI systematic review [35] (Yes = 2; Unclear = 1; and No = 0). A quality score between 0 and 1 was calculated for each included study by dividing the study's total score by the maximum possible score across the applicable questions. According to this quality score, the research team classified each study as either weak (quality score < 0.5), moderate (quality score between 0.5 and 0.74), or strong (quality score between 0.75 and 1) [36]. Studies were included in the data synthesis regardless of their quality score. We also examined the total percentage of “Yes” responses for each critical appraisal question to determine the areas of variability in quality between studies with the same study design.
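
To make the scoring arithmetic concrete, the following minimal Python sketch reproduces the scheme described above (Yes = 2, Unclear = 1, No = 0; a 0-1 score classified as weak, moderate, or strong). The example responses are hypothetical and do not correspond to any included study.

    # Sketch of the JBI-based quality scoring described above (Yes = 2, Unclear = 1, No = 0).
    RESPONSE_SCORES = {"Yes": 2, "Unclear": 1, "No": 0}

    def quality_score(responses):
        # Return a 0-1 score: total points divided by the maximum possible points.
        total = sum(RESPONSE_SCORES[r] for r in responses)
        maximum = 2 * len(responses)  # every applicable question answered "Yes" (2 points)
        return total / maximum

    def classify(score):
        # Map a 0-1 score to the weak/moderate/strong bands used in this review.
        if score < 0.5:
            return "weak"
        if score < 0.75:
            return "moderate"
        return "strong"

    # Hypothetical example: an eight-item appraisal.
    responses = ["Yes", "Yes", "Unclear", "Yes", "No", "Yes", "Yes", "Unclear"]
    score = quality_score(responses)          # (2 + 2 + 1 + 2 + 0 + 2 + 2 + 1) / 16 = 0.75
    print(round(score, 2), classify(score))   # 0.75 strong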

Data synthesis

Through visually examining the data in tables, we found methodological and topic heterogeneity amongst the included studies (apparent from the varying types of innovations and study outcomes), which warranted a narrative synthesis of the data. WJS used both inductive and deductive content analysis [37] to aggregate study outcomes into categories, as detailed below. Two senior research team members (IDG and JES) evaluated and confirmed the accuracy of the categorization. WJS deductively categorized each extracted study outcome as either innovation use or outcomes, as described by Straus and colleagues [38]. We specifically defined innovation use in this study as comprising (1) conceptual innovation use: an improvement in knowledge (enlightenment) or attitude towards an innovation (best practices, research use, or technology) (often referred to as conceptual knowledge use [38]); and (2) instrumental innovation use: the use or adoption of an innovation (instrumental knowledge use [38]). WJS further categorized study outcomes as patient, provider, or system/facility outcomes. Examples of patient outcomes included changes in patients' health status and quality of life. Provider outcomes included provider satisfaction with practice. System/facility outcomes included system-level indicators such as readmission rates, lengths of stay and access to training [38]. Differing from Straus and colleagues [38], we also stratified innovation use into patient, provider and system/facility innovation use according to the level of analysis and intended target for change in the study. Patient and provider innovation use was defined as uptake of an innovation by patients and providers [38]. System/facility innovation use was defined as the adoption of an innovation throughout a whole system or facility; this included, for example, adoption of programs that entailed changes in work culture, policies and workflows [39,40,41]. Moreover, WJS used inductive content analysis to further categorize study outcomes within their respective category of innovation use or outcome according to the type of practice or technology being implemented. For example, the implementation of transfer boards, hip protectors and technology were grouped together, as these innovations pertain to the introduction of new equipment in clinical practice. Study outcomes that could not be coded according to the above classifications were grouped into an “other outcomes” category (e.g. whether formal evaluations were more likely to be conducted).

To answer research question 1, we inductively organized the measures used to identify exposure to champions into three categories: (1) studies that used a single dichotomous (“Yes or No”) or Likert scale, (2) studies that appointed a champion for their study and (3) studies that used more nuanced measures of champion exposure. To answer research questions 2 and 3, we used a predetermined set of vote-counting rules used in published systematic reviews [42,43,44], as outlined in Table 1. As previously suggested by Grimshaw and colleagues [45], we reported the study design, sample sizes, significant positive, significant negative and non-significant relationships, and the magnitude of effect (if reported by the study) for all the studies. We also performed a sensitivity analysis to determine whether the number of categories for which a conclusion could be made, or the conclusion for any category, would change when weak studies were removed from the analysis [43, 46]. Lastly, we conducted a sub-group analysis of the data to evaluate whether our conclusions would change, or whether conclusions differed, between studies that used a dichotomous (presence/absence) measure and studies that appointed champions or used more nuanced measures of the champion strategy.

Table 1 Vote-counting rules
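
The decision rules themselves are given in Table 1 and are not reproduced here. As a minimal sketch of how vote counting of this kind can be operationalized, the Python fragment below illustrates the logic; the minimum of four studies per category echoes the sub-group analysis reported in the ‘Results’ section, while the two-thirds majority threshold is an assumption chosen for illustration and is not taken from Table 1.

    from collections import Counter

    # Per-study findings for one category of innovation use or outcome,
    # coded by direction and statistical significance.
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NON_SIGNIFICANT = "non-significant"
    MIXED = "mixed"

    def vote_count(findings, min_studies=4, majority=2 / 3):
        # Illustrative rule only; the thresholds are assumptions, not those of Table 1.
        if len(findings) < min_studies:
            return None  # too few studies: report trends only, draw no conclusion
        counts = Counter(findings)
        if counts[POSITIVE] / len(findings) >= majority:
            return "overall positive relationship"
        if counts[NEGATIVE] / len(findings) >= majority:
            return "overall negative relationship"
        return "not consistently related (mixed evidence)"

    # Hypothetical category resembling the system/facility subset (5 of 7 positive).
    print(vote_count([POSITIVE] * 5 + [MIXED, NON_SIGNIFICANT]))
    # -> overall positive relationship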

Results

Search results

As demonstrated in the flowchart (Fig. 1), the database search identified 6435 records and the additional citation analysis identified 3946 records. Duplicates (n = 2815) were removed using Covidence [32], leaving 7566 records to screen. After titles and abstracts were screened, 2097 articles were identified as potentially meeting the eligibility criteria. The full texts of 2090 of these articles were reviewed (seven articles could not be retrieved), with 35 studies (37 articles) [39,40,41, 47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80] meeting all the inclusion criteria (Additional file 4 lists the excluded full-text articles and reasons for exclusion).

Fig. 1 PRISMA 2020 flow diagram

Characteristics of included studies

The included studies in our systematic review were primarily conducted in the last 10 years (2010–2020), with the highest proportion of studies conducted in North America (n = 28) and in acute care/tertiary settings (n = 20). The number of health care settings per article ranged from one to 1174 settings and sample sizes ranged from 80 to 6648 study participants. Table 2 summarizes study characteristics, and Table 3 provides more detailed descriptions of each study.

Table 2 Summary of included studies (n = 35)
Table 3 Description of included articles

Methodological quality

Of the 35 included studies, 19 (54.3%) were rated as strong [47, 48, 52, 58,59,60,61,62,63,64,65, 67, 68, 73,74,75,76,77,78,79,80], 11 (31.4%) were rated as moderate [39,40,41, 49, 50, 54, 56, 57, 70,71,72] and 5 (14.3%) were rated as weak [51, 53, 55, 66, 69] (see Additional file 5). Lower methodological quality was generally attributed to the lack of description of study participants and settings, the lack of reliable and valid measures used to assess exposure to champions and study outcomes, and the lack of processes for random allocation and concealment of participant allocation to groups.

Description and operationalization of champions

Overall, there was a scarcity of demographic information reported on the champions. None of the included studies reported the age of the champions, and only one study reported the sex of the champion [80]. Nine studies identified the profession of the champions as either nursing or medicine [49, 51, 54, 55, 66, 70, 72, 74, 75].

Most studies (n = 25 of 35, 71.4%) operationalized champions as survey respondents' perceived presence or absence of a champion, measured by single dichotomous (“Yes/No”) or Likert items. Tables 5 and 6 detail the operationalization of champions for each included study.

Four of the 35 studies (11.4%) described the appointment of champions in their study setting [54, 72, 73, 80]. The number of champions in these studies ranged from one [80] to six [54]. Two of these studies described the activities performed by the champions: (1) training nurses in Kangaroo mother care and providing educational videos to mothers of neonatal intensive care patients [73] and (2) creating and implementing a protocol on appropriate urinary catheter use and auditing urinary catheter use [80]. The other two studies detailed the training provided to champions but not their activities [54, 72].

The remaining 6 of 35 studies (17.1%) operationalized champions using five unique subscales (two studies used the same subscale) that assessed the presence of a champion who possessed or performed particular attributes, roles, or activities [50, 59, 61, 68, 77,78,79]. Overall, these measures demonstrate that champions can perform differing roles and activities, from enthusiastically promoting or role modelling behaviour towards a particular innovation to broader leadership roles (e.g. managing or acquiring resources). In 4 of the 6 studies (66.7%) [59, 61, 68, 79], the champion subscale used had acceptable internal consistency (α ≥ 0.70 [97]); one study (16.7%) reported that the champion subscale used had low internal consistency (α = 0.43) [50]. In 2 of the 6 studies (33.3%), the authors conducted an exploratory factor analysis and reported that the champion items loaded highly onto a single factor [68, 77, 78]. The champion subscales were part of five larger questionnaires that measured another construct: (1) organizational readiness to adopt electronic health technologies [56, 63]; (2) organizational factors affecting adoption of electronic mail [45], transfer boards [72, 73] and e-health usage [54]; and (3) sustainability of pharmacy-based innovations [74]. Furthermore, none of the included studies reported evaluating whether the champions' activities were perceived to be helpful by the individuals who were intended to use the innovation. Also, none of the included studies assessed whether there was adequate exposure to champions to produce an effect.

Categorization of study outcomes

Across all 35 studies, we extracted and categorized 66 instances in which the relationship between champions and innovation use or a patient, provider, or facility/system outcome was evaluated. Some studies evaluated the relationships between champions and more than one dependent variable. Table 4 presents the relationships between champions and innovation use and/or the resulting impact of innovation use pertaining to patients, providers and systems/facilities for each of the included studies.

Table 4 Summary of champions’ effectiveness in increasing innovation use and improving outcomes

Champions’ effectiveness in increasing innovation use

Twenty-nine studies evaluated the effectiveness of champions in increasing innovation use: five studies evaluated the effectiveness of champions in increasing conceptual innovation use [61, 64, 65, 68, 75, 77, 78], and 25 studies evaluated the effectiveness of champions in increasing instrumental innovation use [39,40,41, 47,48,49,50, 52, 54, 55, 57,58,59, 62, 63, 66, 67, 70,71,72,73,74,75,76, 80]. One study evaluated both conceptual and instrumental innovation use [75]. Based on our vote-counting rules, we were able to draw conclusions about the relationships between the use of champions and the following three categories: (1) providers' knowledge of and attitudes towards an innovation (conceptual innovation use); (2) providers' use of an innovation (instrumental innovation use); and (3) systems/facilities' establishment of processes that encourage use of best practices, programs and technology throughout an organization (instrumental innovation use). A description of each conclusion relative to these three categories of innovation use is detailed below. Table 5 presents the study outcomes organized into their respective innovation use categories, along with the statistical analyses and approaches, test statistics and measures of magnitude supporting our conclusions.

Table 5 Champions’ effectiveness in increasing patient, provider and system/facility’s innovation use

Champions’ effectiveness in increasing provider conceptual innovation use

Four studies evaluated the effectiveness of champions in improving providers' attitudes towards and awareness of new technology or equipment (conceptual innovation use) [61, 64, 65, 68, 77, 78]. One of the 4 studies used a quasi-experimental design [77, 78], while the other three studies were cross-sectional observational studies [61, 64, 65, 68]. Two of the 4 studies (50%) reported that champions were effective in increasing provider conceptual innovation use [61, 64, 65], and 2 of the 4 studies (50%) reported mixed findings regarding the effectiveness of champions in increasing provider conceptual innovation use [68, 77, 78]. Therefore, our findings suggest that the use of champions in these four studies [61, 64, 65, 68, 77, 78] was, overall, not consistently related to providers' conceptual innovation use of new technology or equipment.

Champions’ effectiveness in increasing provider instrumental innovation use

Seventeen studies evaluated the effectiveness of champions in increasing health care providers' use of innovations (instrumental innovation use) [47,48,49, 52, 54, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76, 80]. One of the 17 studies was a cluster RCT [52], 2 of the 17 studies used a quasi-experimental design [54, 80], and the remaining 14 studies were observational studies [47,48,49, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76]. Eight of the 17 studies (47.1%) reported that champions were effective in increasing providers' use of innovations [49, 52, 57, 62, 67, 72, 76, 80]. Six of the 17 studies (35.3%) reported mixed findings regarding the effectiveness of champions in increasing providers' use of innovations [47, 48, 54, 58, 63, 66]. Two of the 17 studies (11.8%) reported that no relationship exists between champions and providers' use of innovations [55, 70], and one of the 17 studies (5.9%) reported that champions decreased providers' use of an innovation [74]. Therefore, our findings suggest that the use of champions in these 17 studies [47,48,49, 52, 54, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76, 80] was, overall, not consistently related to providers' use of best practice or technological innovations.

Champions’ effectiveness in increasing system/facility instrumental innovation use

Seven studies evaluated the effectiveness of champions in increasing systems/facilities’ adoption of technology, best practices and programs (instrumental innovation use) [39,40,41, 50, 59, 71, 75]. One of the 7 studies used a quasi-experimental design [39], while the remaining studies used observational study designs [40, 41, 50, 59, 71, 75]. Five of the 7 (71.4%) studies reported that champions were effective in increasing the formation of policies and processes and increasing uptake of technology at hospitals [59] and nursing homes [39], best practices in public health and pediatric practices [75] and programs in primary care clinics [41, 71]. One of the 7 (14.3%) studies reported that mixed findings exist regarding the effectiveness of champions in increasing the adoption of a depression program in primary care clinics [40] and 1 of the 7 (14.3%) studies reported that champions had no effect in increasing uptake of electronic mail at academic health science centres [50]. Therefore, across these seven studies [39,40,41, 50, 59, 71, 75], the use of champions was overall related to increased use of technological innovations, best practices and programs by systems/facilities.

Champions’ influence on outcomes

Ten studies evaluated the effectiveness of champions at improving outcomes. Six of the 10 studies evaluated the effectiveness of champions in improving patient health status or experience (patient outcomes) [41, 51, 53, 57, 60, 76]. One of the 10 studies evaluated the effectiveness of champions in improving providers' satisfaction with the innovation [77, 78], and three studies evaluated the effectiveness of champions in improving system/facility-wide outcomes such as quality indicators [56], the establishment of organizational training programs [69], or sustainability of programs [79]. Based on our vote-counting rules, we could draw conclusions only about the relationship between the use of champions and patient outcomes (see Table 6).

Table 6 Champions’ effectiveness on patient, provider and system/facility’s outcomes

Champions’ influence on patient outcomes

Six studies evaluated the effectiveness of champions in improving patient outcomes [41, 51, 53, 57, 60, 76]. All six studies used observational study designs. Three of the 6 studies (50%) reported that champions were effective in decreasing adverse patient outcomes [51, 53] or improving patients' quality of life [60], while the other three studies (50%) reported that champions did not have a significant effect on patients' standardized depression scale scores [41], patients' laboratory tests and other markers related to diabetes [76], or their satisfaction with health services [57]. Therefore, across these six studies [41, 51, 53, 57, 60, 76], the use of champions was, overall, not consistently related to improvements in patient outcomes.

Champions’ effectiveness on innovation use and outcomes

Three of the 35 studies evaluated the effectiveness of champions in increasing both innovation use and outcomes [41, 57, 76]. In these three studies, the use of champions improved health care providers’ use of best practices [57, 76] and the uptake of a depression program by facilities [41] but did not impact patient outcomes.

Sensitivity analysis and sub-group analysis of data

We found that when weaker quality studies were removed, the number of categories for which we could draw conclusions, and their respective conclusions, did not change. Further, our conclusions did not change when we examined study findings across studies (n = 25 of 35, 71.4%) that operationalized champions using dichotomous (presence/absence) measures. We could not draw conclusions, but observed trends, across studies that used more nuanced measures or appointed champions for their study (n = 10 of 35, 28.6%), because the categories of innovation use and outcomes in this subset included fewer than four studies. In this subset of studies, a positive trend was suggested between use of champions and improvement in provider instrumental innovation use, according to three quasi-experimental studies [54, 72, 80].

Discussion

Summary of study findings

In this review, we aimed to summarize how champions are described and operationalized in studies that evaluate their effectiveness. Secondly, we assessed whether champions are effective at increasing innovation use or improving patient, provider and system/facility outcomes.

Description and operationalization of champions

We found that most studies evaluating the effectiveness of champions operationalized exposure to champions using a single-item scale that asked whether participants perceived the presence or absence of a champion. Furthermore, we found that minimal demographic information was provided about the champions in the included studies. Our findings add to the review by Miech and colleagues [21], revealing four subscales measuring champions [50, 59, 77,78,79] in addition to the three champion subscales [40, 68, 100] cited in their review. Our results reinforce Miech and colleagues' [21] claim that more nuanced measures are needed to examine champions, as our review also found only champion subscales and did not locate a full instrument intended to measure the champion construct.

Champions’ effectiveness

Our review demonstrates that causal relationships between deployment of champions and improvements in innovation use and outcomes in health care settings cannot be drawn from the included studies because of the methodological issues (i.e. lack of description of champions, lack of valid and reliable measures and use of observational study designs) present in most of these studies. Hall and colleagues also found low-certainty evidence pertaining to champions' effectiveness in guideline implementation in long-term care [22]. When we tried to make sense of the evidence, we found that across seven studies, champions were related to increased use of innovations at an organizational level [39,40,41, 50, 59, 71, 75]. Our findings indicate that champions do not consistently improve providers' attitudes and knowledge across four studies [61, 64, 65, 68, 77, 78], providers' use of innovations across 17 studies [47,48,49, 52, 54, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76, 80], or patient outcomes across six studies [41, 51, 53, 57, 60, 76]. We found only one study suggesting that the use of champions is associated with decreased provider instrumental innovation use [74], and none of the studies reported that the use of champions resulted in worse outcomes or harms. Damschroder and colleagues [101] reported that a single champion may be adequate in facilitating the implementation of technological innovations, but a group of champions composed of individuals from different professions may be required to encourage providers to change their practices. Furthermore, the many mixed findings pertaining to the effectiveness of champions could be related to the lack of (1) description of the champions; (2) fidelity of the champion strategy; (3) evaluation of champions' activities and level of exposure to champions; and (4) assessment of confounding contextual factors affecting champions' performance. According to Shaw and colleagues [102], champions can undertake many roles and activities, and assuming that champions operate in a similar manner may make comparisons difficult if these distinctions are not clarified.

Our results draw similar conclusions to the four cited published reviews on champions [21,22,23,24]. However, as detailed in this ‘Discussion’ section, our review (1) synthesized more quantitative evidence across varying health care settings and innovation types to reinforce the conclusions made in the past reviews; (2) highlighted areas where adequate research has been conducted on champions, innovation use and outcomes; (3) identified four additional scales used in champion effectiveness studies not cited in previous reviews; and (4) provided implications of our findings for research and the deployment of champions.

Implications of study findings

One implication of our study findings is that they provide a summary of 35 studies that evaluated the effectiveness of champions across varied health care settings and innovation types. Furthermore, we identified areas in which the effectiveness of champions was not well examined: (1) patients' innovation use, (2) organizational conceptual innovation use, (3) provider outcomes and (4) system/facility outcomes. In addition, our findings suggest that individuals who are thinking of mobilizing champions should begin by reflecting on their intended implementation goal (innovation use or outcomes by patients, providers, or systems/facilities). If the goal is to increase organizational use of innovations, then there is some evidence to support the position that the use of champions may be beneficial. However, if the goal is to improve innovation use by providers and patients, or to improve outcomes, individuals implementing EBP should be mindful when using champions until more conclusive evidence exists to support their effectiveness for these goals. Although there is a lack of evidence suggesting that the use of champions can be harmful, there are opportunity costs that come with deploying champions (e.g. clinician time and sometimes monetary costs) that may be better spent deploying a different implementation strategy. Furthermore, our findings imply that future effectiveness studies should examine whether champions perform distinct roles or activities depending on the innovation type or level of implementation (i.e. system/facility or individual (providers or patients)). Differentiating between the several types of champions present in implementation requires future studies to provide more detailed descriptions of the champion strategy. One way to achieve this objective is through the development and use of valid, reliable and pragmatic tools that evaluate champions' activities and exposure to champions. A second means is through the conduct of process evaluations in conjunction with experimental studies. Straus and colleagues [38] defined process evaluations as qualitative or mixed methods studies that are intended to describe the process of implementation and the experiences of the individuals involved in implementation. Michie and colleagues [103] also highlighted that triangulating qualitative data with the findings of experimental studies would increase the validity of conclusions that an observed change is due to the applied knowledge translation strategy. Lastly, process evaluations may also help inform the optimal dose of champions required to achieve an implementation goal [38].

Limitations

Limitations of our review

Apart from theses and dissertations, we did not consider other grey literature in this study. Moreover, our eligibility criteria excluded studies that were not written in English. Further, our conclusions, made through vote counting, do not consider the effect size or the sensitivity of each individual study in estimating these effect sizes [45]. We tried to mitigate this limitation by reporting both the effect sizes and the sample sizes for each study. Moreover, as we only included studies that explicitly called the individual a champion, our review excluded studies that deployed individuals who could have performed similar roles or activities as a champion but were not labelled champions.

Limitations in the primary studies

The methodological, outcome measure and topic heterogeneity across the included studies did not allow us to conduct a meta-analysis to calculate the magnitude of champions' effectiveness. The lack of description or evaluation of the champions' attributes, roles and activities in most of the studies makes it difficult to decipher why the findings on champions' effectiveness were primarily mixed. In addition, the minimal use of both experimental research designs and reliable and valid measures to assess exposure to champions across the included studies makes it impossible to draw causal conclusions. Lastly, the included studies were mostly conducted in North American or European countries; hence, these findings may not be generalizable to other regions.

Conclusions

We aimed to evaluate the effectiveness of champions in improving innovation use and patient, provider and system/facility outcomes in health care settings. In 5 of 7 studies, champions and use of innovations by systems/facilities were positively associated. The effectiveness of champions in improving innovation use by providers and patients, or in improving outcomes, was either inconclusive or unexamined. There was little evidence that champions were harmful to implementation. To mitigate the uncertainty related to champions' effectiveness, their deployment should be accompanied by a plan that (1) outlines how the use of champions will achieve goals or address barriers to implementation; (2) defines and evaluates the fidelity of champions' activities; and (3) evaluates champions' effectiveness.

Availability of data and materials

The search strategy, the list of excluded articles, the quality assessment and sensitivity analysis are provided as additional files. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

EBP:

Evidence-based practice

MeSH:

Medical Subject Headings

PRISMA:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PRESS:

Peer Review of Electronic Search Strategies

RCT:

Randomized controlled trial

References

  1. Melnyk B, Fineout-Overholt E. Evidence-based practice in nursing & healthcare: a guide to best practice. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins; 2011.

  2. Varnell G, Haas B, Duke G, Hudson K. Effect of an educational intervention on attitudes toward and implementation of evidence-based practice. Worldviews Evid Based Nurs. 2008;5(4):172–81.

  3. Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press; 2003.

  4. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.

  5. Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Q. 1998;76(4):517–63.

  6. Korenstein D, Falk R, Howell EA, Bishop T, Keyhani S. Overuse of health care services in the United States: an understudied problem. Arch Intern Med. 2012;172(2):171–8.

  7. Seddon ME, Marshall MN, Campbell SM, Roland MO. Systematic review of studies of quality of clinical care in general practice in the UK, Australia and New Zealand. Qual Health Care. 2001;10(3):152–8.

  8. Runciman WB, Hunt TD, Hannaford NA, Hibbert PD, Westbrook JI, Coiera EW, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. Med J Aust. 2012;197(2):100–5.

  9. Squires JE, Cho-Young D, Aloisio LD, Bell R, Bornstein S, Brien SE, et al. Inappropriate use of clinical practices in Canada: a systematic review. CMAJ. 2022;194(8):E279–96.

  10. Braithwaite J, Glasziou P, Westbrook J. The three numbers you need to know about healthcare: the 60–30-10 challenge. BMC Med. 2020;18:1–8.

  11. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24.

  12. Harvey G, Loftus-Hills A, Rycroft-Malone J, Titchen A, Kitson A, McCormack B, et al. Nursing theory and concept development or analysis: getting evidence into practice: the role and function of facilitation. J Adv Nurs. 2002;37(6):577–88.

  13. Titler MG, Everett LQ. Translating research into practice. Crit Care Nurs Clin North Am. 2001;13(4):587–604.

  14. Ploeg J, Skelly J, Rowan M, Edwards N, Davies B, Grinspun D, et al. The role of nursing best practice champions in diffusing practice guidelines: a mixed methods study. Worldviews Evid Based Nurs. 2010;7(4):238–51.

  15. Hendy J, Barlow J. The role of the organizational champion in achieving health system change. Soc Sci Med. 2012;74(3):348–55.

  16. Lee SJC, Higashi RT, Inrig SJ, Sanders JM, Zhu H, Argenbright KE, et al. County-level outcomes of a rural breast cancer screening outreach strategy: a decentralized hub-and-spoke model (BSPAN2). Translational Behavioral Medicine. 2017;7(2):349–57.

  17. Jennings G. Introducing learning disability champions in an acute hospital. Nurs Times. 2019;115(4):44–7.

  18. Luz S, Shadmi E, Admi H, Peterfreund I, Drach-Zahavy A. Characteristics and behaviours of formal versus informal nurse champions and their relationship to innovation success. J Adv Nurs. 2019;75(1):85–95.

  19. Cranley LA, Cummings GG, Profetto-McGrath J, Toth F, Estabrooks CA. Facilitation roles and characteristics associated with research use by healthcare professionals: a scoping review. BMJ Open. 2017;7(8):1–18.

  20. Thompson GN, Estabrooks CA, Degner LF. Clarifying the concepts in knowledge transfer: a literature review. J Adv Nurs. 2006;53(6):691–701.

  21. Miech EJ, Rattray NA, Flanagan ME, Damschroder L, Schmid AA, Damush TM. Inside help: an integrative review of champions in healthcare-related implementation. SAGE Open Med. 2018;6:2050312118773261.

  22. Hall AM, Flodgren GM, Richmond HL, Welsh S, Thompson JY, Furlong BM, et al. Champions for improved adherence to guidelines in long-term care homes: a systematic review. Implement Sci Commun. 2021;2(1):85.

  23. Wood K, Giannopoulos V, Louie E, Baillie A, Uribe G, Lee KS, et al. The role of clinical champions in facilitating the use of evidence-based practice in drug and alcohol and mental health settings: a systematic review. Implement Res Pract. 2020;1:1–11.

  24. Shea CM, Belden CM. What is the extent of research on the characteristics, behaviors, and impacts of health information technology champions? A scoping review. BMC Med Inform Decis Mak. 2016;16:2.

  25. Tufanaru C, Munn Z, Aromataris E, Campbell J, Hopp L. Chapter 3: Systematic reviews of effectiveness JBI Manual for Evidence Synthesis: Joanna Briggs Institute; 2020 [Available from: https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-04.

  26. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane Handbook for Systematic Reviews of Interventions version 6.2: Cochrane, 2021; 2021 [Available from: www.training.cochrane.org/handbook.

  27. Mackenzie N, Knipe S. Research dilemmas: Paradigms, methods and methodology. Issues Educ Res. 2006;16:1–11.

  28. Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372:n160.

  29. Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368:l6890.

  30. Santos WJ, Demery Varin M, Graham I, Lalonde M, Squires J. Champions as a knowledge translation strategy within a health care context: a scoping review protocol: Open Science Framework; 2020 [Available from: https://osf.io/yjcv2/.

  31. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016;75:40–6.

  32. Covidence. World-class systematic review management: Covidence; 2020 [Available from: https://www.covidence.org/reviewers.

  33. Evidence Partners. Better, faster systematic reviews: evidence partners; 2020 [Available from: https://www.evidencepartners.com/products/distillersr-systematic-review-software/.

  34. Moola S, Munn Z, Tufanaru C, Aromataris E, Sears K, Sfetcu R, et al. Chapter 7: Systematic reviews of etiology and risk Joanna Briggs Institute: Joanna Briggs Institute; 2020 [Available from: https://wiki.jbi.global/display/MANUAL/Chapter+7%3A+Systematic+reviews+of+etiology+and+risk.

  35. Fernandez R, Ellwood L, Barrett D, Weaver J. Safety and effectiveness of strategies to reduce radiation exposure to proceduralists performing cardiac catheterization procedures: a systematic review. JBI Evid Synth. 2020;19(1):4–33.

  36. Estabrooks CA, Cummings GG, Olivo SA, Squires JE, Giblin C, Simpson N. Effects of shift length on quality of patient care and health provider outcomes: systematic review. Qual Saf Health Care. 2009;18(3):181–8.

  37. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  38. Straus SE, Tetroe J, Bhattacharyya O, Zwarenstein M, Graham ID. Chapter 3.5 Monitoring knowledge use and evaluating outcomes. In: Straus SE, Tetroe J, Graham ID, editors. Knowledge translation in health care: moving from evidence to practice. 2nd ed. Chichester: Wiley BMJIBooks; 2013. p. 227–36.

  39. Sharkey S, Hudak S, Horn SD, Barrett R, Spector W, Limcangco R. Exploratory study of nursing home factors associated with successful implementation of clinical decision support tools for pressure ulcer prevention. Adv Skin Wound Care. 2013;26(2):83–92.

  40. Chang E, Rose D, Yano EM, Wells K, Metzger ME, Post EP, et al. Determinants of readiness for primary care-mental health integration (PC-MHI) in the VA health care system. J Gen Intern Med. 2012;28(3):353–62.

  41. Whitebird RR, Solberg LI, Jaeckels NA, Pietruszewski PB, Hadzic S, Unutzer J, et al. Effective implementation of collaborative care for depression: what is needed? Am J Manag Care. 2014;20(9):699–707.

  42. Squires JE, Estabrooks CA, Gustavsson P, Wallin L. Individual determinants of research utilization by nurses: a systematic review update. Implement Sci. 2011;6:1.

  43. Squires JE, Hoben M, Linklater S, Carleton HL, Graham N, Estabrooks CA. Job satisfaction among care aides in residential long-term care: a systematic review of contributing factors, both individual and organizational. Nurs Res Pract. 2015;2015:157924.

  44. Dilig-Ruiz A, MacDonald I, Demery Varin M, Vandyk A, Graham ID, Squires JE. Job satisfaction among critical care nurses: a systematic review. Int J Nurs Stud. 2018;88:123–34.

  45. Grimshaw J, McAuley LM, Bero LA, Grilli R, Oxman AD, Ramsay C, et al. Systematic reviews of the effectiveness of quality improvement strategies and programmes. Qual Saf Health Care. 2003;12(4):298–303.

  46. Deeks JJ, Higgins JPT, Altman DGe. Chapter 10: analysing data and undertaking meta-analyses. Cochrane, 2021: Cochrane, 2021; 2021 [Available from: www.training.cochrane.org/handbook.

  47. Albert SM, Nowalk MP, Yonas MA, Zimmerman RK, Ahmed F. Standing orders for influenza and pneumococcal polysaccharide vaccination: correlates identified in a national survey of U.S. Primary care physicians. BMC Fam Pract. 2012;13:22.

  48. Alidina S, Goldhaber-Fiebert SN, Hannenberg AA, Hepner DL, Singer SJ, Neville BA, et al. Factors associated with the use of cognitive aids in operating room crises: a cross-sectional study of US hospitals and ambulatory surgical centers. Implement Sci. 2018;13:1–12.

  49. Anand KJS, Eriksson M, Boyle EM, Avila-Alvarez A, Andersen RD, Sarafidis K, et al. Assessment of continuous pain in newborns admitted to NICUs in 18 European countries. Acta Paediatr. 2017;106(8):1248–59.

  50. Ash J, Goslin LN. Factors affecting information technology transfer and innovation diffusion in health care. Portland: Innovation in Technology Management. The Key to Global Leadership. PICMET '97; 1997. p. 751–754.

  51. Ben-David D, Vaturi A, Solter E, Temkin E, Carmeli Y, Schwaber MJ. The association between implementation of second-tier prevention practices and CLABSI incidence: a national survey. Infect Control Hosp Epidemiol. 2019;40(10):1094–9.

  52. Bentz CJ, Bayley KB, Bonin KE, Fleming L, Hollis JF, Hunt JS, et al. Provider feedback to improve 5A’s tobacco cessation in primary care: a cluster randomized clinical trial. Nicotine Tob Res. 2007;9(3):341–9.

  53. Bradley EH, Curry LA, Spatz ES, Herrin J, Cherlin EJ, Curtis JP, et al. Hospital strategies for reducing risk-standardized mortality rates in acute myocardial infarction. Ann Intern Med. 2012;156(9):618–26.

  54. Campbell J. The effect of nurse champions on compliance with keystone intensive care unit sepsis-screening protocol. Crit Care Nurs Q. 2008;31(3):251–69.

  55. Ellerbeck EF, Bhimaraj A, Hall S. Impact of organizational infrastructure on beta-blocker and aspirin therapy for acute myocardial infarction. Am Heart J. 2006;152(3):579–84.

  56. Foster GL, Kenward K, Hines S, Joshi MS. The relationship of engagement in improvement practices to outcome measures in large-scale quality improvement initiatives. Am J Med Qual. 2017;32(4):361–8.

  57. Goff SL, Mazor KM, Priya A, Moran M, Pekow PS, Lindenauer PK. Organizational characteristics associated with high performance on quality measures in pediatric primary care: a positive deviance study. Health Care Manage Rev. 2019;46(3):196–205.

  58. Granade CJ, Parker Fiebelkorn A, Black CL, Lutz CS, Srivastav A, Bridges CB, et al. Implementation of the standards for adult immunization practice: a survey of U.S. health care providers. Vaccine. 2020;38(33):5305–12.

  59. Hsia TL, Chiang AJ, Wu JH, Teng NNH, Rubin AD. What drives E-Health usage? Integrated institutional forces and top management perspectives. Comput Human Behav. 2019;97:260–70.

  60. Hung DY, Glasgow RE, Dickinson LM, Froshaug DB, Fernald DH, Balasubramanian BA, et al. The chronic care model and relationships to patient health status and health-related quality of life. Am J Prev Med. 2008;35(5 SUPPL.):S398–406.

  61. Kabukye JK, de Keizer N, Cornet R. Assessment of organizational readiness to implement an electronic health record system in a low-resource settings cancer hospital: a cross-sectional survey. PLoS One. 2020;15(6):e0234711.

  62. Kenny DJ. Nurses’ use of research in practice at three US Army hospitals. Nurs Leadersh. 2005;18(3):45–67.

  63. Khera N, Mau LW, Denzen EM, Meyer C, Houg K, Lee SJ, et al. Translation of clinical research into practice: an impact assessment of the results from the Blood and Marrow Transplant Clinical Trials Network Protocol 0201 on Unrelated Graft Source Utilization. Biol Blood Marrow Transplant. 2018;24(11):2204–10.

  64. Korall AMB, Godin J, Feldman F, Cameron ID, Leung PM, Sims-Gould J, et al. Validation and psychometric properties of the commitment to hip protectors (C-HiP) index in long-term care providers of British Columbia, Canada: A cross-sectional survey. BMC Geriatr. 2017;17(1):103.

  65. Korall AMB, Loughin TM, Feldman F, Cameron ID, Leung PM, Sims-Gould J, et al. Determinants of staff commitment to hip protectors in long-term care: a cross-sectional survey. Int J Nurs Stud Adv. 2018;82:139–48.

  66. Lago P, Garetti E, Boccuzzo G, Merazzi D, Pirelli A, Pieragostini L, et al. Procedural pain in neonates: the state of the art in the implementation of national guidelines in Italy. Paediatr Anaesth. 2013;23(5):407–14.

    Article  PubMed  Google Scholar 

  67. Papadakis S, Gharib M, Hambleton J, Reid RD, Assi R, Pipe AL. Delivering evidence-based smoking cessation treatment in primary care practice. Can Fam Physician. 2014;60(7):E362–71.

    PubMed  PubMed Central  Google Scholar 

  68. Paré G, Sicotte C, Poba-Nzaou P, Balouzakis G. Clinicians’ perceptions of organizational readiness for change in the context of clinical information system projects: insights from two cross-sectional surveys. Implement Sci. 2011;6:15.

    Article  PubMed  PubMed Central  Google Scholar 

  69. Patton R, O’Hara P. Alcohol: signs of improvement. The 2nd national Emergency Department survey of alcohol identification and intervention activity. Emerg Med J. 2013;30(6):492–5.

    Article  PubMed  Google Scholar 

  70. Shea CM, Reiter KL, Weaver MA, Albritton J. Quality improvement teams, super-users, and nurse champions: a recipe for meaningful use? J Am Med Inform Assoc. 2016;23(6):1195–8.

    Article  PubMed  PubMed Central  Google Scholar 

  71. Sisodia RC, Dankers C, Orav J, Joseph B, Meyers P, Wright P, et al. Factors associated with increased collection of patient-reported outcomes within a large health care system. JAMA Netw Open. 2020;3(4):e202764.

    Article  PubMed  PubMed Central  Google Scholar 

  72. Slaunwhite JM, Smith SM, Fleming MT, Strang R, Lockhart C. Increasing vaccination rates among health care workers using unit “champions” as a motivator. Can J Infect Control. 2009;24(3):159–64.

    PubMed  Google Scholar 

  73. Soni A, Amin A, Patel DV, Fahey N, Shah N, Phatak AG, et al. The presence of physician champions improved Kangaroo Mother Care in rural western India. Acta Paediatr. 2016;105(9):e390–5.

    Article  PubMed  PubMed Central  Google Scholar 

  74. Strasser SM. Smoking cessation counseling for cystic fibrosis patient caregivers and significant others: perceptions of care center directors and nurse coordinators. [PhD Dissertation]. Alabama: The University of Alabama; 2003.

  75. Tierney CD, Yusuf H, McMahon SR, Rusinak D, O’Brien MA, Massoudi MS, et al. Adoption of reminder and recall messages for immunizations by pediatricians and public health clinics. Pediatrics. 2003;112(5):1076–82.

    Article  PubMed  Google Scholar 

  76. Ward MM, Yankey JW, Vaughn TE, BootsMiller BJ, Flach SD, Welke KF, et al. Physician process and patient outcome measures for diabetes care: relationships to organizational characteristics. Med Care. 2004;42(9):840–50.

    Article  PubMed  Google Scholar 

  77. Weiler MR, Lavender SA, Crawford JM, Reichelt PA, Conrad KM, Browne MW. Identification of factors that affect the adoption of an ergonomic intervention among Emergency Medical Service workers. Ergonomics. 2012;55(11):1362–72.

    Article  PubMed  Google Scholar 

  78. Weiler MR, Lavender SA, Crawford JM, Reichelt PA, Conrad KM, Browne MW. A structural equation modelling approach to predicting adoption of a patient-handling intervention developed for EMS providers. Ergonomics. 2013;56(11):1698–707.

    Article  PubMed  Google Scholar 

  79. Westrick SC, Breland ML. Sustainability of pharmacy-based innovations: the case of in-house immunization services. J Am Pharm Assoc. 2009;49(4):500–8.

    Article  Google Scholar 

  80. Zavalkoff S, Korah N, Quach C. Presence of a physician safety champion is associated with a reduction in urinary catheter utilization in the pediatric intensive care unit. PLoS ONE. 2015;10(12):e0144222.

    Article  PubMed  PubMed Central  CAS  Google Scholar 

  81. Zmud RW. Diffusion of modern software practices: influence of centralization and formalization. Manage Sci. 1982;28(12):1421–31.

    Article  Google Scholar 

  82. Zmud RW, Apple LE. Measuring technology incorporation/infusion. J Prod Innov Manage. 1992;9(2):148–55.

    Article  Google Scholar 

  83. Yano E, Fleming B, Canelo I, Lanto A, Yee T, Wang M. National survey results for the primary care director module of the VHA clinical practice organizational survey. Sepulveda, CA: VA HSR&D Center for the Study of Healthcare Provider Behavior; 2008.

    Google Scholar 

  84. Centers for Disease Control Prevention. Measuring healthy days: Population assessment of health-related quality of life: Centers for Disease Control Prevention; 2000 [Available from: https://stacks.cdc.gov/view/cdc/6406.

  85. Centers for Disease Control Prevention. Health-related quality of life surveillance: U.S., 1993–2002: Centers for Disease Control Prevention; 2005 [Available from: https://www.cdc.gov/mmwr/preview/mmwrhtml/ss5404a1.htm.

  86. Moriarty DG, Kobau R, Zack MM, Zahran HS. Tracking healthy days—a window on the health of older adults. Prev Chronic Dis. 2005;2(3):A16.

    PubMed  PubMed Central  Google Scholar 

  87. Estabrooks CA. Research utilization in nursing: An examination of formal structure and influencing factors. [PhD Thesis]. Alberta: University of Alberta; 1997.

  88. Anasetti C, Logan BR, Lee SJ, Waller EK, Weisdorf DJ, Wingard JR, et al. Peripheral-blood stem cells versus bone marrow from unrelated donors. N Engl J Med. 2012;367(16):1487–96.

    Article  CAS  PubMed  Google Scholar 

  89. Holt DT, Armenakis AA, Feild HS, Harris SG. Readiness for organizational change: The systematic development of a scale. J Appl Behav Sci. 2007;43(2):232–55.

    Article  Google Scholar 

  90. Eby LT, Adams DM, Russell JE, Gaby SH. Perceptions of organizational readiness for change: factors related to employees’ reactions to the implementation of team-based selling. Hum Relat. 2000;53(3):419–42.

    Article  Google Scholar 

  91. Rafferty A, Simons R, editors. An empirical examination of the relationship between change readiness perceptions and types of change. Proceedings of the Academy of Management Meeting; 2001 Aug 3- 8; Washington: Academy of Management; 2001.

  92. Patton R, Strang J, Birtles C, Crawford M. Alcohol: a missed opportunity. A survey of all accident and emergency departments in England. Emerg Med J. 2007;24(8):529–31.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  93. Dishaw MT, Strong DM. Extending the technology acceptance model with task–technology fit constructs. Inf Manag. 1999;36(1):9–21.

    Article  Google Scholar 

  94. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf Syst Res. 1991;2(3):192–222.

    Article  Google Scholar 

  95. Goodman RM, McLeroy KR, Steckler AB, Hoyle RH. Development of level of institutionalization scales for health promotion programs. Health Educ Q. 1993;20(2):161–78.

    Article  CAS  PubMed  Google Scholar 

  96. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–13.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  97. Lewis CC, Mettert KD, Stanick CF, Halko HM, Nolen EA, Powell BJ, et al. The psychometric and pragmatic evidence rating scale (PAPERS) for measure development and evaluation. Implementation Res Pract. 2021;2(1):77.

    Google Scholar 

  98. Mullins ME, Kozlowski SW, Schmitt N, Howell AW. The role of the idea champion in innovation: the case of the Internet in the mid-1990s. Comput Human Behav. 2008;24(2):451–67.

    Article  Google Scholar 

  99. Hays CE, Hays SP, Deville JO, Mulhall PF. Capacity for effectiveness: the relationship between coalition structure and community impact. Eval Program Plann. 2000;23(3):373–9.

    Article  Google Scholar 

  100. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4:38.

    Article  PubMed  PubMed Central  Google Scholar 

  101. Damschroder L, Banaszak-Holl J, Kowalski C, Forman J, Saint S, Krein S. The role of the "champion’’ in infection prevention: results from a multisite qualitative study. Qual Saf Health Care. 2009;18(6):434–40.

    Article  CAS  PubMed  Google Scholar 

  102. Shaw EK, Howard J, West DR, Crabtree BF, Nease DE Jr, Tutt B, et al. The role of the champion in primary care change efforts: from the State Networks of Colorado Ambulatory Practices and Partners (SNOCAP). J Am Board Fam Med. 2012;25(5):676–85.

    Article  PubMed  PubMed Central  Google Scholar 

  103. Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:40.

    Article  PubMed  PubMed Central  Google Scholar 

Download references

Acknowledgements

We would like to thank Marie-Cécile Domecq (health research librarian for the Faculty of Nursing at the University of Ottawa) for providing guidance and advice to WJS regarding the search strategy and Tamara Rader (health research librarian at the Canadian Agency for Drugs and Technologies in Health) for peer reviewing the search strategy. IDG is a recipient of a CIHR Foundation Grant (FDN# 143237). JES holds a University Research Chair in Health Evidence Implementation.

Funding

None.

Author information

Contributions

WJS, IDG, ML and JES participated in the conception of the study. All authors contributed to the development of the study design. WJS developed and ran the search strategy. WJS and MDV conducted the screening, data extraction and methodological quality assessments. WJS completed the synthesis with input and critical revisions from IDG, JES and ML. WJS drafted the manuscript. All team members critically reviewed and revised the manuscript for content, approved the version to be published, and agree to be accountable for all aspects of the work.

Corresponding author

Correspondence to Janet E. Squires.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests. Janet E. Squires is an Associate Editor of Implementation Science; she was not involved in the peer review of this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA 2020 Checklists.

Additional file 2.

Synthesis Without Meta-analysis (SWiM) in Systematic Reviews Reporting Guideline Checklist.

Additional file 3.

All Accessed Databases and Peer Review Assessment.

Additional file 4.

Excluded Articles and Reasons for Exclusion.

Additional file 5.

Quality Appraisal Assessments.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Santos, W.J., Graham, I.D., Lalonde, M. et al. The effectiveness of champions in implementing innovations in health care: a systematic review. Implement Sci Commun 3, 80 (2022). https://doi.org/10.1186/s43058-022-00315-0

