A landscape assessment of the activities and capacities of evidence-to-policy intermediaries (EPI) in behavioral health

Abstract

Background

A significant gap exists between the production of research evidence and its use in behavioral health policymaking. Organizations providing consulting and support activities for improving policy represent a promising source for strengthening the infrastructure to address this gap. Understanding the characteristics and activities of these evidence-to-policy intermediary (EPI) organizations can inform the development of capacity-building activities, leading to strengthened evidence-to-policy infrastructure and more widespread evidence-based policymaking.

Methods

Online surveys were sent to 51 organizations from English-speaking countries involved in evidence-to-policy activities in behavioral health. The survey was grounded in a rapid evidence review of the academic literature regarding strategies used to influence research use in policymaking. The review identified 17 strategies, which were classified into four activity categories. We administered the surveys via Qualtrics and calculated descriptive statistics, scale scores, and internal consistency statistics using R.

Results

A total of 31 individuals completed the surveys from 27 organizations (53% response rate) in four English-speaking countries. EPIs were evenly split between university (49%) and non-university (51%) settings. Nearly all EPIs conducted direct program support (mean = 4.19/5 [sd = 1.25]) and knowledge-building (4.03 [1.17]) activities. However, engagement with traditionally marginalized and non-traditional partners (2.84 [1.39]) and development of evidence reviews using formal critical appraisal methods (2.81 [1.70]) were uncommon. EPIs tend to be specialized, focusing on a group of highly related strategies rather than incorporating multiple evidence-to-policy strategies in their portfolios. Inter-item consistency was moderate to high, with scale α’s ranging from 0.67 to 0.85. Ratings of respondents’ willingness to pay for training in one of three evidence dissemination strategies revealed high interest in program and policy design.

Conclusions

Our results suggest that evidence-to-policy strategies are frequently used by existing EPIs; however, organizations tend to specialize rather than engage in a breadth of strategies. Furthermore, few organizations reported consistently engaging with non-traditional or community partners. Focusing on building capacity for a network of new and existing EPIs could be a promising strategy for growing the infrastructure needed for evidence-informed behavioral health policymaking.

Background

The integration of evidence into health-related policymaking is a stated priority for multiple national and local governments and an articulated goal of researchers working to narrow the “evidence-to-practice gap.” A number of centers, collaboratives, and initiatives have emerged over the last decade with the aim of supporting evidence-informed policymaking, including the World Health Organization-supported European Advisory Committee on Health Research [1], Health Evidence Network, and Evidence-informed Policy Network (EVIPNet); the U.S. Commission on Evidence-Based Policymaking [2]; and similar efforts in Norway [3] and Anglophone countries [4,5,6]. Papers have outlined research agendas to expand the field of implementation science (IS) into the policy realm [7,8,9,10]. Some concerns about applying traditional implementation frameworks to health policymaking, raised by those inside and outside of IS, focus on the irreducible complexity and non-linearity of policymaking [11] and the prohibitive costs of implementing behavioral health innovations with the infrastructure required to maintain fidelity [12]. Translation is needed between evidence production and its use in policymaking [13], but there is very little discussion of the types of organizations already positioned to play this role in local, state, and national governments.

In this paper, we investigate the distinct role that evidence-to-policy intermediaries (EPIs) play compared to other types of intermediary organizations focused on mental and substance use services (hereafter termed behavioral health) as described in the field [14, 15], particularly in the strategies used to narrow the evidence-to-policy gap while ensuring the application of evidence is responsive to local political conditions. We present cross-sectional data from a survey of behavioral health EPIs to provide the field with a snapshot of these organizations, including translational techniques currently in use, sources of funding and financial stability, and the perceived value of IS to their work.

A substantial gap remains between the production of research evidence and the inclusion of evidence in policymaking. This gap is particularly acute with regard to behavioral health research [16, 17], where state-level use of research in policymaking and funding of evidence-based treatments has declined in recent years in the USA [18]. Effective translation of research into policy at this level is critical, as states represent the primary funders of behavioral health services in the USA, and thus exercise tremendous influence over the standards and implementation requirements of behavioral health programs and practices [19].

The substantial time lag between the publication of research and the uptake of findings into policy is well known [20]. This lag is generally attributed to misaligned incentives between researchers (narrow view, long timeline) and policymakers (broad view, short timeline) [11], the general lack of policymaker capacity to independently gather and apply scientific findings to policy [21], and political influences on what knowledge is prioritized or considered [22]. Given these barriers, long-term, mutually trusting relationships between researchers and policymakers are the most frequently cited facilitator of the use of evidence in policy. The need for organizations to play a skilled role in the translation of research evidence, as part of a system ecology of translation, is also noted by multiple IS and knowledge translation frameworks [23]. The Interactive Systems Framework is one example of a model that calls for competencies in the translation of evidence and facilitation of its use at the decision-making level [24].

Several papers have explored and begun to catalog the competencies necessary to effectively translate research evidence [25, 26], but little is known about the prevalence of these competencies in the wider ecology of political decision-making. “Intermediary” is a commonly used term for organizations that bridge the worlds of research-based knowledge and real-world systems [14, 15]. In behavioral health, existing efforts to describe the competencies of these organizations have focused on their ability to select, adapt, and support the flexible implementation of programs [14, 15]. In contrast, little is known about the activities of organizations disseminating evidence with the aim of improving or influencing behavioral health policy in particular [27,28,29]. In this article, we use the term evidence-to-policy intermediaries (EPIs) to refer to this role, whether it is performed alongside or apart from the activities of other types of behavioral health intermediary organizations.

EPIs already play a critical role in the knowledge production-to-application ecology. A better understanding of how these organizations operate, their current capacities, and their funding infrastructure is a step towards understanding how to better support evidence-informed policymaking with policymakers and community partners [30].

Current study

The current study surveyed leaders of EPI organizations to identify common strategies used to integrate research evidence into behavioral health policy and system-level decision-making. The study sought to quantify the degree to which EPIs use any of seventeen specific strategies identified in the knowledge translation literature. The authors were also interested in the perceptions of EPI leadership regarding whether the academic literature on evidence translation helps EPI organizations advance their translational work.

Information from this study will help behavioral health policy researchers understand how EPIs provide support and guidance to policymakers and healthcare systems. Findings may elevate the role intermediary organizations play in influencing policy while highlighting areas where targeted capacity building could help such organizations avoid overspecializing in a narrow set of skills and instead develop a suite of tools for engaging policymakers on complex needs.

Methods

Sample population

The sample included leaders within evidence-to-policy intermediary (EPI) organizations operating in English-speaking countries: the USA, Canada, the UK, and Australia. To guide the initial search, we defined EPIs as organizations providing behavioral health information to system leaders and policymakers. We then searched for EPIs through Internet searches, via the Society for Implementation Research Collaboration listserv, and by word of mouth. Internet searches used combinations of the terms evidence-based, policy, center, support, implementation, mental health, and behavioral health. Organizations were included in the sample pool if descriptions of center activities included providing system leaders and policymakers with evidence syntheses, policy analyses, reports with policy/system recommendations, or stakeholder engagement activities for the purpose of informing policy. For eligible organizations, we sought contact information for individuals likely to be knowledgeable about policy-supporting activities, intentionally selecting those in leadership positions such as executive leaders, program directors, and implementation specialists. When Internet searches instead returned information about specific individuals, additional searches were conducted to locate that individual’s academic or research “home base” and determine whether that agency qualified as an EPI. After compiling this list of organizations, up to two key leaders per center were identified from Internet searches of each organization’s website. This approach yielded 83 individuals from 44 intermediary organizations. Snowball sampling identified leaders representing seven additional organizations, for a total of 51 organizations and 90 individuals. The majority of the centers identified may be characterized as “arms-length” intermediaries, which operate outside the realm of both national government infrastructure and existing service delivery systems [27].

Data collection

Surveys were sent via email using the Qualtrics platform. Prospective participants were notified that each completed survey would result in a $5 donation from the research team to a U.S. national non-profit organization serving homeless youth. Recruited individuals indicated an interest in participating by clicking a web link in the recruitment email, which began the online consent process, followed by the survey. Prospective participants were contacted up to three times, with email follow-up occurring 1 week and 2 weeks after the initial survey distribution to individuals yet to complete the survey. The study was approved by the University of Washington Institutional Review Board.

Survey development

To generate a list of evidence-to-policy intermediary activities, the research team conducted a literature review, supplemented by a review of presentations from the AcademyHealth Annual Conference on the Science of Dissemination and Implementation to inform search terms. The literature review followed the PRISMA-ScR guidelines [31]. Search terms included combinations of policy AND research use, evidence use, research dissemination, and knowledge brokering. Databases used in this search included Web of Science, PubMed, and Academic Search Complete.

Database searches were conducted in August 2020 and returned 581 titles for review. One of the authors reviewed all of the titles and, when needed, abstracts to select publications for further review. Articles that described specific strategies used to increase the use of evidence in decision-making were included. Fifty titles met the criteria predetermined by the authors. Of these, one [32] was particularly useful as a source of specific evidence translation strategies. Two authors (LA and SCW) reviewed the documents and coded unique strategies, which were then reviewed until the authors achieved consensus. The third author reviewed and agreed with the final list of strategies prior to survey development. Seventeen evidence-to-policy strategies were identified and included in the survey. Three strategies with modestly stronger empirical support, as judged by the authors (tailored evidence synthesis, researcher brokering, and design-focused policymaking), were selected for more in-depth questioning.

Measures

The survey included three major sections: (1) questions regarding the funding portfolios supporting evidence-to-policy activities; (2) the degree to which intermediary organizations used strategies from the published literature in their evidence-to-policy work, perceived benefits from these strategies, and were willing to pay for training on these strategies; and (3) participant perceptions regarding the relevance of the field of IS to their organization’s work.

Funding portfolios

Participants were asked to report the proportion of their organization’s funding portfolio across six categories: (1) national government research grants, (2) foundation research grants, (3) government research contracts, (4) government service contracts, (5) foundation service contracts, and (6) organizational fundraising and private sources of funding. The mean proportion of the funding portfolio was calculated for each category, along with the number of funding sources identified per organization.
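To make the portfolio calculation concrete, the following R sketch (R being the analysis environment named under “Analytic strategy”) computes the mean proportion per category and the number of funding sources per organization; the data frame and its values are hypothetical stand-ins, not study data.

```r
# Hypothetical portfolio data: one row per organization, six funding
# categories expressed as proportions that sum to 1 within each row.
portfolio <- data.frame(
  natl_research_grants   = c(0.50, 0.00, 0.20),
  foundation_grants      = c(0.00, 0.10, 0.00),
  gov_research_contracts = c(0.25, 0.00, 0.30),
  gov_service_contracts  = c(0.25, 0.60, 0.50),
  foundation_contracts   = c(0.00, 0.30, 0.00),
  private_fundraising    = c(0.00, 0.00, 0.00)
)

colMeans(portfolio)     # mean share of each funding category
rowSums(portfolio > 0)  # number of funding sources per organization
```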

Evidence-to-policy strategies

The 17 unique evidence translation strategies identified from the literature review were classified into four categories: (1) evidence synthesis, (2) evidence dissemination, (3) activities connecting researchers to decision-makers, and (4) capacity building and implementation support. One author drafted the initial survey questions, which were iteratively refined with each co-author until consensus on language was achieved.

Each category was converted into a scale, with the identified strategies serving as indicators, and Cronbach’s alpha (α) was calculated for each scale. One indicator strategy, “Training policymakers to critically appraise research,” was dropped from the capacity building and implementation support scale due to poor inter-item consistency; the scale α subsequently increased to 0.72. After this adjustment, the evidence synthesis scale consisted of four strategies, the evidence dissemination scale comprised three, the connecting researchers to decision-makers scale comprised six, and the capacity building scale included three. Inter-item consistency was moderate to high across the four scales, with α ranging from 0.67 to 0.85.
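As a hedged illustration of this scale-building step (the item names and responses below are hypothetical, not the study’s data), Cronbach’s alpha and the item-drop diagnostics that flag a weak indicator can be computed in R with the psych package:

```r
library(psych)  # provides alpha() for internal consistency analysis

# Hypothetical 5-point responses to four capacity-building items
capacity <- data.frame(
  leadership_training = c(5, 4, 4, 3, 5, 4),
  codesign            = c(4, 4, 5, 3, 4, 5),
  service_improvement = c(5, 5, 4, 4, 5, 4),
  critical_appraisal  = c(1, 5, 2, 4, 1, 3)  # poorly aligned item
)

# alpha() may warn that the misaligned item correlates negatively
# with the total scale; that warning is itself diagnostic here.
res <- psych::alpha(capacity)
res$total$raw_alpha       # alpha with all four items retained
res$alpha.drop$raw_alpha  # alpha if each item were dropped in turn
```

An item whose removal raises alpha (as the constructed critical_appraisal column does here) is a candidate for exclusion, mirroring the decision described above.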

Participants were asked to indicate how central each of the 17 strategies, classified into the four categories identified above, was to the organization’s mission. Responses were recorded via 5-point Likert scales, ranging from “1 = not at all” to “5 = very much” for each item. The evidence synthesis category (1) consisted of four items, with an example strategy item reading, “Summarizing research from multiple studies, like a formal evidence review.” The evidence dissemination category (2) included three strategies, an example of which read, “Conducting and providing a review of the research evidence following the request of a decision-maker or policymaker.” The connecting researchers to decision-makers category (3) included seven items, an example of which read, “Collaborating with traditionally marginalized and non-traditional partners to conduct research or translate evidence.” Finally, the capacity building and implementation support category (4) included four items to be rated, an example of which read, “Facilitating co-design of new programs, systems, or processes using evidence.”

Participants were then asked to estimate the likelihood that funders would support each of the four categories of strategies identified above. Responses were coded via 5-point Likert scales ranging from “1 = not at all” to “5 = very likely.”

To obtain greater depth of information about evidence-to-policy strategies while keeping the survey as brief as possible, the authors selected three evidence translation strategies for additional examination. Strategies were chosen based on the judgment of the authors (two of whom are active researchers in behavioral health policy and dissemination) regarding which strategies are better represented in the research literature. These included (1) request-driven evidence reviews, (2) connecting researchers to policymakers, and (3) structured policy and program design. Participants were provided a detailed paragraph defining each of these activities and asked to rate the degree to which (i) their organization conducted the activity regularly, (ii) the method was highly effective at influencing policymaking, and (iii) the participant was interested in learning more about the method. Responses were coded via 5-point Likert scales ranging from “1 = strongly disagree” to “5 = strongly agree.” Participants were additionally asked to estimate how much money their organization would be willing to pay to have a staff member attend a training on the method, ranging from $0 to $500. Willingness to pay for training served as an indicator of how much value the respondent placed on the skill set.

Perceptions of implementation science (IS)

Four questions assessed participants’ perceptions of the field of IS on a 0–100 scale: (1) how familiar are you with the field of IS with regard to your organization’s policy translation work, (2) how relevant is IS to your day-to-day policy translation work, (3) how actionable are the research findings from the field of IS to your policy translation work, and (4) how relevant is IS to your overall policy translation work.

Qualitative items

Respondents were asked to provide additional comments and feedback regarding the three in-depth evidence-to-policy strategies (request-driven evidence reviews, connecting researchers to policymakers, and structured policy and program design): Please provide any additional thoughts or comments related to [strategy]. In the IS use section, participants were asked to provide an open-ended response to the question: What aspect(s) of IS has/have been most useful to your practical work in influencing policymaking?

Analytic strategy

Quantitative analysis

Descriptive statistics, scale scores, significance tests, and internal consistency statistics were calculated using R. Independent-samples t-tests and chi-square tests were used to compare results between U.S.-based and international organizations, as well as between university-based and non-university-based organizations.
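A brief R sketch of these comparisons, using hypothetical group vectors rather than study data (t.test() defaults to the Welch unequal-variance test; chisq.test() is applied here to a 2×2 contingency table):

```r
# Hypothetical scale scores by organizational setting
university     <- c(3.2, 2.5, 4.0, 3.8, 2.9)
non_university <- c(4.1, 3.9, 4.5, 3.0, 4.2)

# Independent-samples (Welch) t-test comparing group means
t.test(university, non_university)

# Hypothetical 2x2 table: region by presence of a funding source
funding_tab <- matrix(c(12, 8, 5, 9), nrow = 2,
                      dimnames = list(region = c("US", "non-US"),
                                      source = c("present", "absent")))
chisq.test(funding_tab)  # chi-square test of independence
```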

Qualitative analysis

We used an open-coding, inductive content analysis to extract themes from the qualitative data [33]. Two authors (X and X) examined the qualitative responses separately and then compared codes and organizing themes, discussing the coding until they reached consensus. The coding was then shared with the third author, who reviewed and provided feedback on the codes. Extracted items were discussed among the co-authors and iteratively refined until consensus was reached on how to describe each unique code and theme.

Results

Participants

Thirty-one participants from 27 agencies responded to the survey (participant response rate = 37%; agency response rate = 53%). Half of all respondents (52%) represented directors or executive leadership of their agency. Nearly one-quarter (23%) were employed as managers or program directors, and one-quarter (26%) served as program officers, implementation specialists, or professors.

EPI characteristics

Representation was evenly split between EPIs based at universities (48%) and those without a university affiliation (52%). More than two-thirds (71%) of agencies were based in the USA, with additional organizations representing Canada (13%), Australia (10%), and the UK (7%). Seven organizations (23%) identified as research centers, four (13%) identified as training or best practices support centers, and three agencies (10%) served as consultation and/or information management support centers. Two EPIs (6%) identified as both research and training support centers, while four organizations (13%) served as both training and consultation support centers. Five agencies (16%) identified as a combination of research, training, and consultation centers, while six EPIs (19%) self-identified outside of these categories (e.g., conveners of policymakers).

Agency funding

Funding mechanisms were diverse across EPIs. On average, government service contracts made up the largest share (29%) of EPI funding portfolios, followed by private fundraising (19%) and national government research grants (18%). Foundation research grants (7%) were the least frequently cited sources of funding. While funding was diverse between organizations (Table 1), EPIs tended to rely heavily on one or two funding sources for their operations. Overall, EPIs were supported by an average of 2.2 funding sources, with up to four sources reported by a small handful of agencies.

Table 1 Funding resources

While national government research grants were the third most frequent source of revenue overall, they were present in the portfolios of university-based agencies at three times the rate of non-university agencies (27.3% vs. 9.1%, p < 0.01). Non-university-based agencies reported more government and foundation service contracts than university-based centers, but these differences did not reach statistical significance. Non-U.S.-based organizations were funded by foundation service contracts at roughly double the rate of U.S.-based organizations (21.7% vs. 11.5%), had a higher proportion of government research contracts (17.5% vs. 6.9%, p = 0.04), and drew a greater share of their budgets from private fundraising (22.5% vs. 16.9%). U.S.-based organizations, in turn, were more likely to be supported by national government research grants (21.7% vs. 10.0%, p = 0.04) and by non-national government service contracts (35.0% vs. 16.3%, p < 0.01). Foundation research grants, while comprising relatively small proportions of EPI budgets overall, made up a higher percentage of U.S.-based organization budgets (9.5%) than non-U.S. budgets (2.0%, p = 0.05).

Likelihood of funding for EPI strategies

Participants were asked to estimate the degree to which funders would financially support each of the four evidence use scales. Capacity-building and implementation support had the highest score (mean 3.77 out of 5), although none of the categories exceeded “likely.” EPIs based outside of universities had higher mean scores for each category, but only one comparison reached statistical significance: compared to their university-based counterparts (2.40), non-university EPIs were more likely to believe that activities in the evidence synthesis category could be funded (3.81, p < 0.01) (see Table 1).

Evidence-to-policy items and scales

We observed moderate variation (range of 1.79, 5-point scale) in the reported frequency of use among the individual items. Four items scored at 4.0 or above (“very much—this is a core activity of ours”). These core strategies included (1) direct service improvement activities, m = 4.19; (2) facilitating co-design of new programs, systems, or processes, m = 4.03; (3) training and supporting leadership planning skills to acquire and implement new knowledge, m = 4.03; and (4) conducting interactive workshops designed for specific audiences to facilitate the use of evidence in planning, m = 4.00 (see Table 2).

Table 2 Evidence translation strategies

Conversely, four strategies scored below a threshold of 3.0 (“somewhat—we do this a little”). The strategies least employed by EPIs were (1) maintaining a database of research-based information, m = 2.40; (2) developing evidence reviews using formal critical appraisal methods, m = 2.68; (3) collaborating with traditionally marginalized and non-traditional partners to conduct research or translate evidence, m = 2.84; and (4) training and coaching policymakers to do a critical appraisal of the research literature, m = 2.87.

We observed a large range in correlations among the four evidence use scales (r = −0.02 to 0.70), suggesting it was uncommon for organizations to be involved in all four types of evidence-to-policy translation. We observed nearly zero correlation between the capacity building and evidence synthesis scales (−0.02) and between the capacity building and evidence dissemination scales (0.09). This suggests that organizations involved predominantly in evidence synthesis and dissemination activities (e.g., developing and sharing evidence reviews) are highly unlikely to be engaged in capacity-building activities (e.g., co-designing policies, working with marginalized populations). We observed a moderate correlation (0.37) between the connecting researchers to policymakers and evidence synthesis scales, as well as a moderate correlation between the capacity building and connecting researchers to policymakers scales (0.34). Evidence synthesis and evidence dissemination were also moderately correlated (0.64). The strongest correlation was observed between evidence dissemination and connecting researchers to policymakers (0.70) (Table 3).
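For readers who want to reproduce this kind of specialization analysis, a minimal R sketch with hypothetical per-organization scale means (real scale scores would come from averaging the Likert items within each category):

```r
# Hypothetical mean scale scores, one row per organization
scales <- data.frame(
  synthesis     = c(4.5, 4.0, 2.0, 1.5, 3.0),
  dissemination = c(4.0, 4.5, 2.5, 2.0, 3.5),
  connecting    = c(3.5, 4.0, 3.0, 2.5, 3.0),
  capacity      = c(2.0, 1.5, 4.5, 5.0, 3.0)
)

# Pearson correlations among the four evidence-to-policy scales;
# near-zero or negative off-diagonal values suggest organizations
# specialize rather than span all four activity types.
round(cor(scales, use = "pairwise.complete.obs"), 2)
```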

Table 3 Scale scores

Perceived value of strategies

Non-university-based organizations viewed request-driven evidence reviews (RDERs) as more effective at influencing policy than university-based organizations (3.94 vs. 3.40, p < 0.05). However, university-based EPIs expressed a greater willingness to pay for training in RDERs ($262 vs. $128, p < 0.05). U.S. organizations expressed a stronger interest in learning about RDERs (mean 3.74 vs. 2.17, p < 0.05) than their international counterparts. As reviewed below, the qualitative results suggest this is likely due to non-U.S. organizations having more expertise in conducting RDERs than U.S. organizations. Other comparisons did not reveal meaningful or statistically significant differences between U.S. and non-U.S. or university and non-university organizations (see Table 4).

Table 4 Perceptions of implementation science

Perceptions of implementation science

EPIs expressed generally positive views towards the discipline of IS. Out of a possible score of 100, three items scored above 80: (1) familiarity with IS regarding the organization’s policy translation work, (2) relevance of IS to day-to-day policy translation activities, and (3) relevance of IS to overall policy translation work. Non-U.S. organizations held more positive views of IS compared to U.S. organizations on all four indicators. The lowest-scored item in this section (62.5/100) was the prompt IS research findings are actionable to policy translation work (Table 4).

Qualitative results

Request-Driven Evidence Reviews (RDERs)

Qualitative analyses were treated as exploratory, intended to guide the development of hypotheses for future studies about how evidence-to-policy strategies are currently used and viewed among real-world EPIs. As the prompts for the qualitative questions were neutral (e.g., please add any additional thoughts), we view comments about the value of, or concerns with, these strategies as meaningful signals of respondents’ experiences in the field. Eleven participants responded to the open-ended question about the value of RDERs. The timing of incorporating RDERs into an EPI strategy was a prominent theme among respondents. Three participants viewed RDERs as a valuable strategy for influencing policymaking in the immediate term, noting that RDERs represented a core function of their agency’s activities. Two participants noted that they were not currently conducting RDERs but thought they might adopt this strategy in the future.

Funding was a concern raised by two participants, who noted that both current budgets and potential funders constrain the ability or willingness to center RDERs among an EPI’s core activities. Additionally, one respondent expressed a concern that RDERs are only one piece of a complex policymaking puzzle, limiting their independent effectiveness. Another respondent suggested that the impact of RDERs may only be relevant to policymakers who “have already bought into the idea of evidence-informed decision making and that are part of the process of the review.”

Two participants expressed interest in comparing and improving the methods of producing RDERs, such as “staying on top of improvements and changes in these methods,” and mentioned that they “would be interested in learning how others field policymaker [RDER] requests in an efficient and rigorous manner.” A desire to meet with other organizations to share best practices and contrast RDER methods was also noted.

Connecting researchers to policymakers

Ten participants provided comments about the connecting researchers to policymakers strategy. Multiple agencies reported this strategy as a core element of their work (n = 4), with one participant noting, “This is a key area for our business, as our customers (government and health and human services agencies) frequently find it difficult to communicate with academic researchers, but are motivated to understand and incorporate research findings into policy and practice.” Meanwhile, others noted that such connecting activities occur informally or might be something that “happens organically at events we organize on specific topics.”

Responses suggested that researcher-policymaker connections occurred either because they were required by funders or because they happened unpredictably. One organization reported such activities being a requirement of their EPI’s funding source, while another noted the strategy was used based on the relational networks and connections to researchers, policy designers, and implementation specialists cultivated by employees of the organization. Two participants noted that existing relationships with policymakers were also beneficial for informing research ideas.

A concern about researcher connecting activities included the risk of bias. One participant noted the potential for politicization of such work. Another response highlighted the subjectivity of relational networks, which may complicate attempts to establish a centralized list of stakeholders from which to connect researchers and policymakers. An additional comment expressed a caveat around the strategic effectiveness of connecting activities, noting, “making these connections is a good step, but [I am] not sure if it’s highly effective in the fabric of complex decision-making.”

Structured policy and program design

Seven participants commented on structured policy and program design. Two respondents noted that these activities were woven into their existing methods and organizational practices but lacked formal structure. One respondent, commenting on their efforts to translate research evidence into policy recommendations, noted, “it would have been helpful to actually learn from policymakers what was most helpful to them.”

Respondents expressed interest in learning more about how other similarly situated centers were attempting policy design. One respondent noted, “I would be very interested in how to influence policy in a more structured, systematic way and relying less on the informal interactions.” Another participant noted a desire to compare their use of this strategy with that of other stakeholders.

Practical use of implementation science in policymaking

Seventeen participants (54.8%) provided comments regarding the most useful facets of IS to their practical work influencing policymaking. A core theme emerging from the responses was the strength of IS frameworks for facilitating knowledge translation. One respondent wrote, “implementation science…allows for a more thoughtful and successful translation of research into practice.” Another commented, “findings of implementation science have been helpful to us in prioritizing our [policy] recommendations.”

Additional comments highlighted the usefulness of systems thinking and systematic approaches to knowledge translation (n = 5), strategies for effective program implementation (n = 3), practical implementation studies and tools (n = 5), and the inclusion of multiple voices in the policymaking space (n = 3). In contrast, one participant saw IS as not sufficiently addressing real-world conditions, contending “research sometimes focuses more on describing characteristics of successful implementation efforts that are not necessarily highly changeable.”

Discussion

EPIs play a role distinct from other types of behavioral health intermediary organizations described in the field. This is particularly true regarding the methods used to narrow the evidence-to-policy gap while ensuring the application of evidence is responsive to local political conditions. We aimed to illuminate the EPI landscape with a snapshot of activities undertaken to influence behavioral health policy. A key finding is that EPI organizations did not engage in activities across the range of translation categories. Instead, EPIs tended to specialize in either synthesis/dissemination activities or system improvement/capacity-building activities.

Few organizations offer the range of research translation services envisioned by implementation models such as the Interactive Systems Framework (ISF) [24]. The ISF, for example, argues for the need for three “systems” that “optimally work together for successful dissemination and implementation of prevention innovations” (p. 178). An example of an effort with overlapping capacities is the Rapid Synthesis and Translation Process (RSTP) supported by the Division of Violence Prevention at the Centers for Disease Control and Prevention. The RSTP is conceptualized as a synthesis activity, but one with significant stakeholder (practitioner, system-level partner, and policy) engagement. System practitioners select the topics for research synthesis as well as how findings are presented (actionable, concrete, brief). Service leaders and practitioners may then apply review findings to their work (e.g., worksheets, tools, and resources used to improve practice) [34].

The two least endorsed EPI activities in our study were evidence synthesis and collaborating with traditionally marginalized communities. Most EPIs in our survey appeared to be working in service improvement and capacity building and likely had a primary audience of policymakers and service providers. This has at least two implications for policy development. Increasingly, the field of health services research is concerned with the need to attend to equity in the processes of conducting research and translating research findings. The National Academy of Medicine, for example, recently published a white paper revisiting existing recommendations for improvement and calling for more community engagement and partnership [35]. As that paper argues, community and patient engagement is critical because ownership in the policymaking process engenders better ideas for service improvement and trust in the healthcare system. EPIs can play a critical role for policymakers by engaging communities in policy formation and implementation [36], as well as by supporting the integration of information arising from the research evidence base with community perspectives [37, 38]. Integrating this information can be facilitated by an increased capacity to conduct synthesis activities in which core principles and elements are extracted from the research literature and applied to fit local needs and priorities [39].

The need for interconnected supports and the limited value of any single evidence-to-policy activity was echoed by comments in the qualitative portion of the study. Respondents generally viewed responsive evidence reviews positively but felt that their impact was contingent on external conditions. Similarly, respondents felt that connecting researchers to policymakers had bidirectional benefits but could be problematic if used to support political positions. “Arms-length” EPIs may be more successfully positioned to maintain intellectual independence than other types of embedded or informal intermediary relationships [27].

Funding availability is a key factor affecting which activities EPIs undertake. A topic not often addressed in the IS literature is the real-world market for behavioral health intermediaries in general, and policy intermediaries in particular. Franks and Bory’s [40] review of intermediaries found most centers began as service training entities (a service most governments recognize as valuable), gradually building long-term relationships with government organizations that provided a pathway for more influence on policy formation. In the present study, most EPIs (73.5%) secured funding from more than one source, with one-third (32.3%) of agencies incorporating three or more of the six funding sources within their portfolio. Notably, EPI organizations and activities are unlikely to be funded by national government research grants. This is consistent with research demonstrating the low frequency of NIH grants focused on behavioral health policy [41]. EPI activities are more often supported by government service grants and contracts, with some support from foundations. We observed a meaningful difference between U.S. and non-U.S. funding sources, with more private fundraising and government contracts (non-competitive support) outside of the U.S. This is consistent with the growing literature on intermediaries operating outside the U.S., which face acute challenges of scarce or unpredictable funding [42]. Optimism among respondents for funding any of the EPI activities was low, particularly for evidence synthesis, suggesting there may be more difficulty funding activities that support policy formation compared to service improvement. This may be because governments and policymakers feel urgency around action and have a lower appetite for funding planning activities [43].

Overall, the findings provide a reason to be optimistic about the state of the “synthesis” and “capacity” building systems for behavioral health policymaking. We found that among our sample, the frequency of EPI activities was reasonably high. This provides a solid foundation for building a community of practice among entities doing IS-informed work. The high endorsement of IS in general suggests these organizations are active users of the research literature and are likely to adopt methods found to be effective for evidence-informed policymaking. However, respondents felt that IS could provide more meaningful guidance in this specific practice area.

Finally, there is a business case to be made for EPIs to establish a diverse portfolio of evidence translation strategies. For example, both university-based and non-university agencies were willing to pay for training. EPIs incorporating these methods may open an untapped revenue stream that diversifies their funding portfolio while simultaneously expanding their sphere of influence.

Limitations

The study is limited to agencies identified through Internet searches, listserv dissemination, and snowball sampling. The sample almost certainly underrepresents organizations doing similar work. A post hoc estimate suggests that about 70–80% of respondents came from direct requests following an Internet search or because the authors were already aware of the organization; consequently, the results overrepresent “arms-length” intermediaries that are more likely to be connected to implementation science or health services research. We recognize that the definition of EPI is porous, and future research might examine how these types of organizations differ from and overlap with research-practice partnerships, university-community partnerships, and other similarly oriented efforts. Furthermore, as respondents exclusively came from Anglophone countries, we cannot generalize findings to the strategies or activities undertaken outside of these political environments. Other types of organizations may have different perspectives regarding what works to successfully integrate evidence-based information into policy and may have differing levels of interest in being part of a practice community.

Conclusion

EPIs play a critical role in the knowledge production-to-application ecology of behavioral health policymaking. However, while EPIs are employing the research translation strategies found in the IS literature, such use remains highly specialized, as do the funding portfolios of these agencies. Polyspecialization is likely needed to harness translational impact. A particular need exists for EPIs to work with communities affected by policies, especially around behavioral health and social welfare. Additionally, EPIs use IS to translate research into policy; however, the perceived actionability of IS for agencies’ own policy translation methods was limited. Finally, there is a business case to be made for translating IS methods that could be income-generating for training organizations seeking to build the capacity of EPIs.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available in the Open Science Framework repository, https://doi.org/10.17605/OSF.IO/Z683E.

Abbreviations

IS: Implementation science

EPI: Evidence-to-policy intermediary

U.S.: United States

RDER: Request-driven evidence review

ISF: Interactive Systems Framework

RSTP: Rapid Synthesis and Translation Process

References

  1. World Health Organization. European Advisory Committee on Health Research (EACHR) Copenhagen 2022. https://www.euro.who.int/en/data-and-evidence/evidence-informed-policy-making/european-advisory-committee-on-health-research-eachr.

  2. Evidence-Based Policymaking Commission Act of 2016, H.R.1831. 2016. https://www.dol.gov/sites/dolgov/files/ETA/wioa/pdfs/Evidence-Based_Policymaking_Commission-brief.pdf.

  3. Malterud K, Bjelland AK, Elvbakken KT. Evidence-based medicine - an appropriate tool for evidence-based health policy? A case study from Norway. Health Res Policy Syst. 2016;14(1):1–9.

  4. Tangney P, Howes M. The politics of evidence-based policy: a comparative analysis of climate adaptation in Australia and the UK. Environ Plann C Gov Policy. 2016;34(6):1115–34.

  5. Sax Institute. CIPHER 2022. https://www.saxinstitute.org.au/our-work/cipher/.

  6. Global Commission on Evidence to Address Societal Challenges. The evidence commission report: a wake-up call and path forward for decision-makers, evidence intermediaries, and impact-oriented evidence producers. McMaster Health Forum; 2022.

  7. Emmons KM, Chambers DA. Policy implementation science - an unexplored strategy to address social determinants of health. Ethn Dis. 2021;31(1):133–8.

  8. Nilsen P, Ståhl C, Roback K, Cairney P. Never the twain shall meet? A comparison of implementation science and policy implementation research. Implement Sci. 2013;8:1–12.

  9. Nutley SM, Walter I, Davies HT. Using evidence: how research can inform public services. 2007.

  10. Bullock HL, Lavis JN, Wilson MG, Mulvale G, Miatello A. Understanding the implementation of evidence-informed policies and practices from a policy perspective: a critical interpretive synthesis. Implement Sci. 2021;16(1):1–24.

  11. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):1–11.

  12. Massey OT, Vroom EB. The role of implementation science in behavioral health. In: Foundations of behavioral health. 2020. p. 101–18.

  13. Van Enst W, Driessen PP, Runhaar HA. Towards productive science-policy interfaces: a research agenda. JEAPM. 2014;16(1):1450007.

  14. Metz A, Bartley L. Active implementation frameworks for program success. Zero to Three. 2012;32(4):11–8.

  15. Proctor E, Hooley C, Morse A, McCrary S, Kim H, Kohl PL. Intermediary/purveyor organizations for evidence-based interventions in US child mental health: characteristics and implementation strategies. Implement Sci. 2019;14(1):1–14.

  16. Purtle J, Lewis M. Mapping “trauma-informed” legislative proposals in US Congress. Adm Policy Ment Health. 2017;44(6):867–76.

  17. Purtle J, Nelson KL, Horwitz SM, McKay MM, Hoagwood KE. Determinants of using children’s mental health research in policymaking: variation by type of research use and phase of policy process. Implement Sci. 2021;16(1):1–15.

  18. Bruns EJ, Kerns SE, Pullmann MD, Hensley SW, Lutterman T, Hoagwood KE. Research, data, and evidence-based treatment use in state behavioral health systems, 2001–2012. Psychiatr Serv. 2016;67(5):496–503.

  19. Bumbarger B, Campbell E. A state agency-university partnership for translational research and the dissemination of evidence-based prevention and intervention. Adm Policy Ment Health. 2012;39(4):268–77.

  20. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

  21. Haynes A, Rowbotham SJ, Redman S, Brennan S, Williamson A, Moore G. What can we learn from interventions that aim to increase policy-makers’ capacity to use research? A realist scoping review. Health Res Policy Syst. 2018;16(1):1–27.

  22. Purtle J, Dodson EA, Nelson K, Meisel ZF, Brownson RC. Legislators’ sources of behavioral health research and preferences for dissemination: variations by political party. Psychiatr Serv. 2018;69(10):1105–8.

  23. Kreuter MW, Bernhardt JM. Reframing the dissemination challenge: a marketing and distribution perspective. Am J Public Health. 2009;99(12):2123–7.

  24. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41(3):171–81.

  25. Mallidou AA, Atherton P, Chan L, Frisch N, Glegg S, Scarrow G. Core knowledge translation competencies: a scoping review. BMC Health Serv Res. 2018;18(1):1–15.

  26. Yeung E, Scodras S, Salbach NM, Kothari A, Graham ID. Identifying competencies for integrated knowledge translation: a Delphi study. BMC Health Serv Res. 2021;21(1):1–18.

  27. Bullock HL, Lavis JN. Understanding the supports needed for policy implementation: a comparative analysis of the placement of intermediaries across three mental health systems. Health Res Policy Syst. 2019;17(1):1–13.

  28. Purtle J, Nelson KL, Bruns EJ, Hoagwood KE. Dissemination strategies to accelerate the policy impact of children’s mental health services research. Psychiatr Serv. 2020;71(11):1170–8.

  29. Hoagwood KE, Purtle J, Spandorfer J, Peth-Pierce R, Horwitz SM. Aligning dissemination and implementation science with health policies to improve children’s mental health. Am Psychol. 2020;75(8):1130–45.

  30. MacKillop E, Quarmby S, Downe J. Does knowledge brokering facilitate evidence-based policy? A review of existing knowledge and an agenda for future research. Policy Polit. 2020;48(2):335–53.

  31. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

  32. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):1–12.

  33. Hsieh H, Shannon S. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  34. Thigpen S, Puddy RW, Singer HH, Hall DM. Moving knowledge into action: developing the rapid synthesis and translation process within the interactive systems framework. Am J Community Psychol. 2012;50(3):285–94.

  35. O’Kane M, Agrawal S, Binder L, Dzau V, Gandhi TK, Harrington R, et al. An equity agenda for the field of health care quality improvement. NAM Perspect. 2021;2021:1–20.

  36. Cusworth Walker S, Vick K, Gubner NR, Herting JR, Palinkas LA. Accelerating the conceptual use of behavioral health research in juvenile court decision-making: study protocol. Implement Sci Commun. 2021;2(1):1–10.

  37. Bruns EJ, Walker JS, Bernstein A, Daleiden E, Pullmann MD, Chorpita BF. Family voice with informed choice: coordinating wraparound with research-based treatment for children and adolescents. J Clin Child Adolesc Psychol. 2014;43(2):256–69.

  38. Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, Wells KB. Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Adm Policy Ment Health. 2008;35(1–2):21–37.

  39. Greenhalgh T, Robert G, Bate P, Macfarlane F, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

  40. Franks RP, Bory CT. Who supports the successful implementation and sustainability of evidence-based practices? Defining and understanding the roles of intermediary and purveyor organizations. New Dir Child Adolesc Dev. 2015;149:41–56.

  41. Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007–2014. Implement Sci. 2016;11(2):1–8.

  42. Partridge ACR, Mansilla C, Randhawa H, Lavis JN, El-Jardali F, Sewankambo NK. Lessons learned from descriptions and evaluations of knowledge translation platforms supporting evidence-informed policy-making in low- and middle-income countries: a systematic review. Health Res Policy Syst. 2020;18(1):1–22.

  43. Hammond KR. Human judgment and social policy: irreducible uncertainty, inevitable error, unavoidable injustice. Oxford University Press; 1996.

Acknowledgements

The authors would like to thank Ronnie Rubin and the individuals who participated in this survey for their assistance with the study.

Funding

Not applicable.

Author information

Contributions

LA drafted the article, collected and assembled the data, and contributed to the analysis and interpretation of the data. SCW drafted the article, was responsible for the conception and design of the study, collected and assembled the data, and contributed to the analysis and interpretation of the data. JP was responsible for the conception and design of the study as well as the analysis and interpretation of the data. All authors provided critical revisions for important intellectual content and approved the final manuscript.

Corresponding author

Correspondence to Lars Almquist.

Ethics declarations

Ethics approval and consent to participate

The study (#00011657) was approved by the University of Washington Institutional Review Board.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Almquist, L., Walker, S.C. & Purtle, J. A landscape assessment of the activities and capacities of evidence-to-policy intermediaries (EPI) in behavioral health. Implement Sci Commun 4, 55 (2023). https://doi.org/10.1186/s43058-023-00432-4
