A scoping review of the evaluation and effectiveness of technical assistance

Abstract

Background

Although the benefits of evidence-based practices (EBPs) for advancing community outcomes are well-recognized, challenges with the uptake of EBPs are considerable. Technical assistance (TA) is a core capacity building strategy that has been widely used to support EBP implementation and other community development and improvement efforts. Yet despite growing reliance on TA, no reviews have systematically examined the evaluation of TA across varying implementation contexts and capacity building aims. This study draws on two decades of peer-reviewed publications to summarize the evidence on the evaluation and effectiveness of TA.

Methods

Guided by Arksey and O’Malley’s six-stage methodological framework, we used a scoping review methodology to map research on TA evaluation. We included peer-reviewed articles published in English between 2000 and 2020. Our search involved five databases: Business Source Complete, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Education Resources Information Center (ERIC), PsycInfo, and PubMed.

Results

A total of 125 evaluation research studies met the study criteria. Findings indicate that publications have increased over the last two decades, signaling a growth in the recognition and reporting of TA. Technical assistance is being implemented across diverse settings, often serving socially vulnerable and under-resourced populations. Most evaluation research studies involved summative evaluations, with TA outcomes mostly reported at the organizational level. Only 5% of the studies examined sustainability of TA outcomes. This review also demonstrates that there is a lack of consistent standards regarding the definition of TA and the level of reporting across relevant TA evaluation categories (e.g., cadence of contact and directionality).

Conclusions

Advances in the science and practice of TA hinge on understanding what aspects of TA are effective and when, how, and for whom these aspects of TA are effective. Addressing these core questions requires (i) a standard definition for TA; (ii) more robust and rigorous evaluation research designs that involve comparison groups and assessment of direct, indirect, and longitudinal outcomes; (iii) increased use of reliable and objective TA measures; and (iv) development of reporting standards. We view this scoping review as a foundation for improving the state of the science and practice of evaluating TA.

Introduction

Although the benefits of evidence-based practices (EBPs) for advancing community outcomes are well-recognized, there are considerable challenges to the use of EBPs in practice, including inaccessible EBP research and publications, resource scarcity, inadequate organizational or leadership support, and limited staff capacity or motivation to engage in EBP efforts [1,2,3,4,5]. Consequently, many EBPs are poorly disseminated, implemented, and sustained across organizational and community settings [2, 6,7,8,9]. Recent efforts to reduce barriers to EBPs highlight the critical role of active, collaborative approaches in supporting EBP dissemination and implementation efforts. Technical assistance (TA) is one such approach used worldwide in both public and private sectors [10,11,12].

Technical assistance refers to an individualized, hands-on approach to capacity building in organizations and communities [13, 14]. This approach involves the provision of tailored guidance by a TA specialist to meet the specific needs of a site(s) through collaborative communication between the TA provider and site(s) or TA recipient(s) [15]. TA services often include a combination of activities such as coaching, consulting, modeling, facilitation, professional development, site visits, and referral to informational resources [16, 17]. The delivery format can vary along multiple dimensions: individualized–group, onsite–virtual, active (high intensity)–passive (low intensity), and peer-to-peer–directed [17]. In addition to supporting the implementation or improvement of an innovation, such as an EBP program, practice, or policy, TA can enhance overall system capacities by empowering staff and improving general organizational or systems processes [13, 18, 19]. As a predominant approach to organizational and community improvement, it is also a global strategy for addressing larger-scale, longstanding, and emerging social issues [20], particularly in child welfare, youth development, education, and community health improvement.

Despite its widespread use, identifying and measuring the impacts of TA is challenging due to a lack of consensus regarding the essential features of TA, inherent variability of tailored services, and minimal use of a framework to systematically plan, implement, and evaluate TA [13, 16, 17, 21]. Variations in setting and population characteristics and differences in recipient organizational goals further complicate measuring TA outcomes. Evaluation studies on the impact of TA are sparse relative to the prevalence of TA use, and findings on the effects of TA on program and system-level outcomes are mixed [22].

While previous reviews have examined important links between TA practices and setting outcomes, they are often limited to a particular domain (e.g., global health [21]) or implementation goal (e.g., uptake of EBP [23]). West and colleagues reviewed the scientific literature on evaluations of TA between 2000 and 2010 to examine its effectiveness in furthering global health [21]. Based on a synthesis of 23 articles, they reported an increasing number of scholarly evaluations of TA but limited evidence of TA effectiveness. The review identified challenges associated with TA provision related to cost effectiveness, managing the growing amount of scientific and technical knowledge, and sustaining global TA supports. The authors concluded that evaluating the quality, process, cost-effectiveness, and impact of TA is an integral component of TA and encouraged more rigorous evaluations of TA efforts. Dunst and colleagues [23] conducted a quantitative analysis to examine the effects of TA on adopting evidence-based and evidence-informed practices. Inclusive of 25 studies and evaluations, their review focused on relating 25 core TA elements (e.g., decision-making, TA resources, and provider feedback) to evaluation outcomes (e.g., adoption and use of targeted practice). They only included TA literature with between-groups or between-condition comparisons to permit effect size calculations. Broadly, results showed that a subset of core TA elements was related to between-group and between-condition differences in effect sizes for TA outcomes. More intensive TA had more robust effects on targeted outcomes compared to less intensive TA. Evaluations that monitored fidelity of both TA practices and intervention practices had larger effect sizes than those that were less attentive to those two core elements.

Though prior studies have contributed valuable insights to the field of TA, to our knowledge, no review has comprehensively examined outcomes of TA across varying implementation contexts and capacity building aims. Further, no reviews have systematically synthesized how evaluators conduct evaluations of TA (e.g., formative versus process versus summative evaluation). The increasing number of TA evaluation studies calls for scoping reviews that summarize TA practices and knowledge as well as illuminate trends.

The aims of our scoping review are to (i) document the methodology of evaluation research about TA and (ii) summarize findings associated with TA. Through this review, we seek to identify practical opportunities for improving the implementation, evaluation, and study of TA. Additionally, this scoping review provides important concepts and evidence for furthering capacity building in implementation science frameworks. For example, TA is a key mechanism in the Interactive Systems Framework for Dissemination and Implementation, which reflects the role of a support system in building the capacity of the delivery system [24]. TA is also a core element in the Evidence-Based System for Innovation Support (EBSIS) framework, which emphasizes the need for support to be evidence-based to effectively achieve targeted implementation outcomes [10]. We view the current study as a foundation to improving the state of the science and practice of evaluating TA.

Methods

We use a scoping review methodology to map existing research on TA evaluation. A scoping review is designed to identify knowledge gaps, describe the body of literature, clarify concepts, or investigate research conduct [25]. Like a systematic review, a scoping review involves a structured, predefined process that is systematic, transparent, and reproducible, which includes steps to reduce error and increase the reliability of findings [25].

Our review was guided by Arksey and O’Malley’s [26] methodological framework, which includes six stages: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies; (4) charting the data; (5) collating, summarizing, and reporting results; and (6) consulting with relevant stakeholders. Additionally, we incorporated suggested methodological enhancements to the six-stage framework (e.g., using an iterative team-based approach to select studies and extract data, incorporating a quantitative and qualitative summary of data, and employing consultation throughout the review process [27, 28]). The study protocol is available via the corresponding author.

Stage 1: Identifying the research question

The development of our research questions began with a collaborative dialog among our research team members, who are TA providers and researchers with expertise in TA and implementation science. We used an iterative process to formulate and refine research questions based on research literature and practice-based experience. We identified the following research questions:

Research question 1 (RQ1): How has TA been evaluated in the scientific literature? (1a and 1b)

RQ1a: What measurement approaches have been used to assess TA?

RQ1b: How have TA outputs and outcomes been conceptualized, and what are notable trends?

Research question 2 (RQ2): To what extent has TA provision resulted in sustainable improvements in organizations and communities?

Stage 2: Identifying relevant studies

Databases and search strategy

The research team generated an initial set of keyword searches based on the research questions and the research team’s collective experience with TA literature. We piloted the initial set of keywords using two databases, PubMed and PsycInfo. This pilot search was limited to (i) English-only articles (the fluent language of the researchers), (ii) publication time frame (January 2000 to June 2020), and (iii) peer-reviewed articles. We examined titles, abstracts, and index terminology to refine the search terms and ensure that we captured relevant literature for review. This process produced the final search terms: “technical assistance” AND “assessment” OR “effectiveness” OR “evaluat*” OR “impact” OR “measurement” OR “outcome*” OR “output*” OR “questionnaire” OR “result*” OR “scale” OR “tool.” Then, we entered the final search terms into three additional databases relevant to the evaluation of TA: Education Resources Information Center (ERIC), Business Source Complete, and Cumulative Index to Nursing and Allied Health Literature (CINAHL).
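To make the Boolean logic of the final search string explicit, the following illustrative Python sketch assembles the same terms into a single query that can be pasted into each database's advanced-search interface. The parenthesized OR block reflects the intended grouping, and the variable and function names are ours, not part of the published protocol.

```python
# Illustrative only: assembles the review's final Boolean search string so the
# same query can be reused across PubMed, PsycInfo, ERIC, Business Source
# Complete, and CINAHL.
TA_TERM = '"technical assistance"'
EVALUATION_TERMS = [
    '"assessment"', '"effectiveness"', '"evaluat*"', '"impact"',
    '"measurement"', '"outcome*"', '"output*"', '"questionnaire"',
    '"result*"', '"scale"', '"tool"',
]

def build_query(ta_term, eval_terms):
    """Require the TA term AND at least one evaluation-related term (OR block)."""
    return f"{ta_term} AND ({' OR '.join(eval_terms)})"

print(build_query(TA_TERM, EVALUATION_TERMS))
```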

Eligibility criteria

We used the Population-Concept-Context (PCC) framework for scoping reviews [29] to establish the eligibility criteria (see Table 1). The PCC framework is an adaptation for non-experimental research conceptually rooted in the PICO (population, intervention, comparison, outcome) [30] framework for identifying components of clinical evidence in systematic reviews.

Table 1 Eligibility criteria

Stage 3: Selecting studies

Our literature search strategy used the three-phase process outlined by the Joanna Briggs Institute [28]. First, we finalized the search strings and eligibility criteria. Then, we utilized Microsoft Excel to organize, deduplicate, and code articles. We employed a reference manager (EndNote X9) to extract and convert abstracts of relevant articles into a Microsoft Excel database. For study selection, research team members pilot screened 2% of article titles and abstracts from the five identified databases. During this process, two reviewers independently coded articles as “include,” “exclude,” or “unsure, send to full-text review” using the eligibility criteria. The overall inter-rater reliability (IRR) was 0.90. Unresolved inter-rater discrepancies were presented to the research team for consensus coding. We used this initial pilot screening process to develop three screening questions:

  • Does the study objective indicate an evaluation of TA directly or of a program involving TA?

  • Does the article include TA-specific outputs or outcomes?

  • Does the article reflect the use of TA for systems-level capacity building/improvement?

We then used the screening questions to identify the final list of articles.
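The pilot screening above relied on two reviewers coding each record independently, with an overall IRR of 0.90; the article does not state which agreement statistic was used. The minimal sketch below assumes Cohen's kappa (via scikit-learn) and uses hypothetical screening decisions to show how such a dual-coder check could be run.

```python
# Minimal sketch of a dual-coder agreement check, assuming IRR is computed as
# Cohen's kappa (the statistic is not specified in the article). All data are
# hypothetical.
from sklearn.metrics import cohen_kappa_score

# 0 = exclude, 1 = include, 2 = unsure (send to full-text review)
reviewer_a = [1, 0, 0, 2, 1, 0, 1, 0, 0, 1]
reviewer_b = [1, 0, 0, 1, 1, 0, 1, 0, 2, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
percent_agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Percent agreement: {percent_agreement:.2f}")
# Disagreements (here, records 4 and 9) would go to the full team for consensus coding.
```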

Stage 4: Charting the data

We referred to the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist and explanation guide [31] and the JBI qualitative data extraction instrument [28] to develop a standardized instrument for extracting information in accordance with the study research questions. Table 2 provides the categories that guided the coding of each article. We used an iterative process that involved piloting and refining the standardized form during the review of full-text articles. Four researchers (VS, ZJ, AM, and JK) reviewed, coded, and compared 10% of the articles to ensure coding consistency across dyads. Pairs of researchers (VS and ZJ; AM and JK) then independently reviewed and coded full-text articles using the eligibility criteria. Articles with discrepant dyad ratings were brought to the larger research team for a final decision. We used the PRISMA flow diagram model (Fig. 1) to report the final study inclusion and exclusion numbers.

Table 2 Data charting form: sample attributes
Fig. 1 PRISMA flow diagram

Stage 5: Collating, summarizing, and reporting the results

In this stage, we prepared a quantitative and qualitative summary of data. The quantitative summary specifies the number of studies according to variables of interest (e.g., number or percentage of articles reporting TA outputs versus TA outcomes and number of articles utilizing each evaluation method identified). The qualitative summary is organized by the research questions. It includes an overview of concepts, describes the types of evidence available, and identifies themes and trends.
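As a rough illustration of the quantitative summary described here, the sketch below tabulates counts and percentages of articles per coded category; the spreadsheet name and column names are hypothetical stand-ins for the data charting form.

```python
# Minimal sketch of the quantitative summary: counts and percentages of
# articles per coded category. File and column names are hypothetical.
import pandas as pd

charted = pd.read_excel("ta_charting_form.xlsx")  # one row per included article

for column in ["evaluation_type", "data_collection_method", "reported_level"]:
    counts = charted[column].value_counts(dropna=False)
    percentages = (counts / len(charted) * 100).round(1)
    summary = pd.DataFrame({"n": counts, "percent": percentages})
    print(f"\n{column}\n{summary}")
```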

Stage 6: Consulting with relevant stakeholders

The final stage of the scoping review involves consulting with relevant stakeholders to inform and validate the study findings. We utilized the consultative approach suggested by Peters and colleagues [28] to elicit feedback from experts and stakeholders throughout the study. Specifically, we discussed various topics throughout the scoping review, including the research questions, search terms, search criteria, target databases, data extraction variables, results, and study implications. Five subject matter experts, along with TA providers from the American Institutes for Research (a national TA center) and a Center of Excellence (Community Anti-Drug Coalitions of America; CADCA), gave consultative feedback.

Results

Study characteristics

This scoping review includes 125 peer-reviewed articles published between January 2000 and June 2020 (see Fig. 2 and Additional file 1). The USA was the predominant study setting, representing 89% (n=112) of included articles. Study sample sizes ranged from 3 to 865,370, reflecting the number of participating individuals, programs, organizations, community coalitions, states, or countries. Approximately half of the studies (52%) used a descriptive research design. Other research designs included quasi-experimental (21%), experimental (13%), and correlational designs (13%). About 12% of studies explicitly defined technical assistance (see Table 3 for the definitions of TA provided).

Fig. 2 Trend line of TA articles published between January 2000 and June 2020

Table 3 Definitions of technical assistance (TA) used in TA evaluation research studies

Applications of TA

Overall, the reasons for implementing TA were diverse (see Table 4). The most common reason for TA was to support the implementation of evidence-based practice or initiatives (41%). One-fifth (20%) of the articles indicated a combination of reasons. Evaluation capacity building (7%), coalition building (4%), improvement (4%), and workforce development (3%) were the next most cited reasons for TA. We aggregated less commonly noted reasons into a category entitled  “other” (21%), which included objectives such as needs assessment, knowledge sharing/dissemination, and tool development. TA was used in multiple areas of practice, with substance use, mental health, child welfare and youth development, public education, HIV prevention, and healthcare improvement most frequently noted. 

Table 4 Reasons for and frequency of applications of TA

Concerning the type of TA provided, nearly half of the studies (49%) involved a combination of TA activities (e.g., individual coaching, training, webinars, communities of practice). One-third of studies (32%) involved a singular TA activity (e.g., coaching, training, or other), and 18% of studies did not specify the type of TA provided.

Research question 1: How has TA been evaluated in the scientific literature?

We examined Research question 1 through two questions, one regarding the methods used to measure TA and the other regarding the nature of TA outputs and outcomes. In the following section, we summarize the results from our scoping review.

RQ1a: What measurement approaches have been used to assess TA?

The majority of evaluation research studies were summative evaluations (72%). Process and formative evaluations were less common, comprising 15% of the studies jointly. Slightly over a tenth (13%) of studies employed a combination of the three types of evaluation.

A range of data collection methods was reported for measuring TA, including survey (26%), document review (16%), interview (15%), and observation (2%). The most common approach involved a combination of measurement methods (e.g., survey and document review) (38%).

Quantitative data were reported more frequently than qualitative data, 51% and 22%, respectively. A quarter of studies (26%) reported using both quantitative and qualitative TA data. Concerning data perspective, subjective data—such as respondents rating TA outcomes—were reported more frequently (42%) than objective data (21%, e.g., number of TA visits, availability of a comprehensive plan to address a need). Approximately two-fifths (37%) of studies reported both data perspectives. See Table 5 for a detailed summary of TA measurement approaches.

Table 5 Frequency of measurement approaches for assessing technical assistance

RQ1b: How have TA outputs and outcomes been conceptualized, and what are associated trends?

Outputs reflect the implementation of program activities that are directly salient to process and formative evaluation. In our scoping review, TA outputs were the activities or mechanics of TA delivery. The most frequently reported TA outputs were reach and modality. Reported in 78% of studies, reach measures the number of units (e.g., individuals, organizations) receiving TA.

Modality is the medium for TA delivery. Slightly over half of studies (54%) provided TA using a combination of mediums (e.g., in-person, phone, and virtual). In-person-only mediums (17%) were more common than phone/virtual exclusive modalities (6%).

Cadence of contact refers to the schedule of TA services (e.g., routine, as-needed, or a fixed number of sessions) and was reported in 73% of the studies. A quarter of TA services were provided through a blended schedule involving routine and as-needed support. Aside from the blended schedule, as-needed (22%) service provision was more common than a routine (8%) or fixed-number (17%) service schedule.

Duration of engagement reflects the total period of TA services, which is a broad indicator of dosage. As reported in 66% of studies, the duration of engagement ranged widely—from 2 days to 6 years. Directionality describes the source initiating TA contact (i.e., provider, recipient, or bi-directional). TA services were largely provider-initiated (21% proactive TA) or bi-directional (20%), and only 9% were recipient-initiated (reactive TA). Notably, half of the studies (50%) did not report directionality.

Lastly, satisfaction refers to feelings of fulfillment with TA and was reported in 18% of studies. Overall, respondents reported moderately high to high satisfaction with TA. In a small handful of studies where satisfaction was lower, recipients noted inadequate provider subject matter expertise, insufficient knowledge about the target setting, or inappropriate length of TA services (e.g., sessions too long or short). See Table 6 for a detailed summary of the TA outputs.

Table 6 Frequency of technical assistance output variables

TA outcomes refer to the effect(s) or result(s) of TA services. TA outcomes were reported at the individual, programmatic/organizational, and community levels, and they included the use of both qualitative and quantitative data. Individual-level outcomes primarily related to behavioral change (19%), impact on knowledge (11%), and impact on skills (7%). All of the studies examining impact on knowledge reported that TA increased or improved recipient knowledge (e.g., [56,57,58]). Eighty-nine percent of studies examining impact on skills reported increased recipient skills associated with TA (e.g., [46, 59, 60]). Sixty-three percent of the studies examining behavior change (15 of 24 articles) reported a positive impact of TA (e.g., [61,62,63]). Other less frequently noted individual-level outcomes pertained to change in self-efficacy (4%), attitudes (2%), and motivation (2%).

Organizational-level outcomes were represented in 54% of studies, with 17% of these focused on particular programs within the organization. Overall, studies indicate a positive association between the use of TA and organizational-level outcomes, particularly concerning performance or service delivery quality (e.g., [64,65,66]), program/EBP implementation (e.g., [18, 56, 57, 62, 67,68,69]), evaluation capacity [70,71,72,73,74], and collaboration among stakeholders [46, 57, 75].

Studies reporting on the differential impact of TA attributed variations to organizational size, age, staff experience, staff buy-in, and availability of financial incentives for participation. For example, one study indicated that larger firms are more likely to report increased market share, sales, and profits due to TA compared to smaller firms [76]. Another study reported that better healthcare quality was associated with healthcare providers who did not receive financial incentives and TA compared to an incentivized group [77], raising questions about the value of supplementing TA with extrinsic rewards.

Several studies examined the relationship between TA dosage (number of TA hours or calls) and organizational-level outcomes. Most of these studies reported positive findings (e.g., [66, 78, 79]). However, two studies reported no association [42, 80]. One study [81] reported both significant and nonsignificant associations, which varied by capacity areas examined (e.g., evaluation, sustainability).

Community-level interventions pertain to capacity building efforts in a geographically defined area(s) such as a city, county, region, state, province, or country. Community-level outcomes were reported in 16% of the studies, most relating to child welfare and HIV prevention. Sample outcomes included associations between TA dose and pandemic preparedness [82], community readiness and levels of collaboration [34], TA and collaboration level or team functioning [22, 83, 84], and TA and service or program quality [64, 85]. Results largely reflected partial gains in community capacity (e.g., public health preparedness, development of a plan of collaborative agreement, access to resources, and partnerships). Commonly cited limitations within community-level studies were small sample size, limited generalizability, and lack of a control group. These articles also tended to provide sparse descriptions of TA (e.g., activities and reach). See Table 7 for a detailed summary of TA outcomes.

Table 7 Frequency of individual, organizational, and community level outcomes

Research question 2: To what extent has TA provision resulted in sustainable improvements in organizations and communities?

We defined “sustainable improvements” as positive changes resulting from TA that were maintained beyond the period of TA services. The degree to which gains associated with TA are sustained over time was reported in 5% of studies. In these cases, improvements associated with TA were largely not sustained, with the effects of TA disappearing after a period of time (e.g., 1 year). One experimental study [86] found that gains associated with TA were not sustained except in the group that received the greatest dose of implementation support (i.e., general training and TA). Leadership engagement and staff commitment were identified as critical to sustaining gains associated with TA [87]. Additionally, recipients noted the importance of ongoing TA for sustaining improvements.

Discussion

This scoping review draws on two decades of peer-reviewed publications to summarize the evidence on the evaluation and effectiveness of TA. Findings suggest that TA can effectively build system capacity across diverse settings to enhance implementation. As a capacity building strategy, TA is often delivered to organizations serving socially vulnerable (e.g., persons with serious mental illness, addiction, and HIV) and under-resourced populations. TA delivery to programs supporting vulnerable populations holds promise for advancing health equity and social justice. The increasing number of published articles per year over the two decades signals a growing recognition, application, and reporting of TA.

Knowing how well TA is implemented, which features of TA are most successful for capacity building, and the overall effectiveness of TA relies on quality evaluation research. Although a critical appraisal of the quality of evaluative research on TA was not a focus of this review, we would be remiss if we did not acknowledge overarching methodological gaps that limit our ability to draw meaningful insights across TA literature. Findings from our scoping review support assertions that TA delivery rarely involves systematic planning, implementation, and evaluation methods [13, 23]. Further, we encountered a general lack of definitional clarity, rigorous evaluation research designs, and effective reporting standards in the literature. Increasing transparency and reporting quality of TA research is essential for maximizing impact. In the following sections, we reflect on four aspects core to enhancing the evaluation of TA: defining, designing, measuring, and reporting TA. See Table 8 for a summary of recommendations for enhancing each of these four areas.

Table 8 Summary of recommendations to advance the evaluation and effectiveness of TA

Main insight 1: A need for a standard definition of TA

Our synthesis indicates two significant definitional limitations in evaluation studies of TA. First, studies rarely include an explicit TA definition (only 12% of examined studies). Second, among studies that do include TA definitions, definitions are highly variable, reflecting different understandings of TA’s purpose, process, and provision. For example, some definitions reflect a general aim for TA (e.g., “to build the capacity of individuals or organizations” [32]), while other definitions offer a more specific aim (e.g., “to facilitate knowledge and skill acquisition” [44]). In terms of implementation, some TA definitions encompass a variety of processes or activities (e.g., a “multi-tiered approach” [32]; “different types of activities including community-friendly manuals, on-site consultation, regional workshops, train-the-trainers models, and interactive Web-based systems” [43]). Others use less specific language (e.g., “support to help…” [36]; “tailored or targeted support to…” [38]). Relatively few definitions reference who is providing TA (e.g., “external expertise” [12]; “an outside entity” [52]). While these differences may appear semantic or inconsequential, the lack of a consistent definition of TA creates challenges for identifying relevant research and best practices and reduces comparability across studies.

Perhaps the most significant challenge is simply identifying reliable standards of what is and is not TA. Specifically, what are the necessary and sufficient conditions (purposes, activities, and processes) that allow researchers or practitioners to claim TA practice? For example, is TA practice inclusive of training? If so, in what instances and why? Relatedly, what do we mean by “tailored” and “targeted” services? And when non-tailored resources (e.g., informational websites and guides) are provided to all recipients (sometimes referred to as “universal” services), is it appropriate to conceptualize those activities as part of TA? Additionally, when is it appropriate to invoke TA terminology over alternative terminologies such as coaching, consulting, or counseling, which also address capacity building? Are members internal to an organization who provide capacity building supports considered TA providers, or are TA providers inherently external to an organization? These questions are foundational to reliable and valid TA measurement.

To disentangle these complex questions and establish a standard definition for TA, we suggest a consensus method, such as the Delphi technique [88], with a panel of expert TA practitioners, researchers, and TA recipients. It may be useful to identify overlapping features across existing definitions that can serve as a foundation for future consensus. Drawing from the 15 definitions offered in our synthesis, we observed the following defining features of TA:

  • Aim is to increase capacity

  • Services target the systems-level (e.g., organization, community)

  • Supports are individualized (i.e., targeted/tailored)

  • Supports are provided by a subject matter expert or specialist

These characteristics can serve as a starting place for developing a reliable, standard definition for TA.

Main insight 2: A need for more robust and rigorous evaluation research designs

More robust evaluation research designs are needed to (i) establish causal relationships between TA implementation and outcomes; (ii) understand the sustainability of TA outcomes, including what contributes to sustained outcomes; and (iii) elucidate the direct and indirect impact of TA. In our scoping review, we observed a reliance on descriptive methodologies (52%) and modest use of experimental designs (13%). While descriptive studies have merit, particularly in explaining the process of TA delivery, experimental designs can identify causal links between TA implementation and outcomes. For example, one relationship that has been examined but remains inconclusive is between TA intensity (i.e., dose and degree of tailoring) and gains in capacity. Some studies have reported a positive relationship (e.g., [66, 78, 79]), while other studies indicate no significant relationship (e.g., [42, 80]). The relationship between TA intensity and outcomes warrants further research and is best examined using experimental approaches to establish causality.
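As an illustration of the kind of dose-response analysis this question invites, the sketch below regresses a follow-up capacity score on TA hours while adjusting for baseline capacity, using entirely simulated data. It sketches the analytic idea only and is not a substitute for the experimental or longitudinal designs recommended above.

```python
# Simulated sketch of a TA dose-response analysis: regress follow-up capacity
# on TA hours, adjusting for baseline capacity. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sites = 80

baseline = rng.normal(50, 10, n_sites)     # baseline capacity score
ta_hours = rng.uniform(0, 40, n_sites)     # dose of TA received
followup = baseline + 0.3 * ta_hours + rng.normal(0, 5, n_sites)

df = pd.DataFrame({
    "baseline_capacity": baseline,
    "ta_hours": ta_hours,
    "followup_capacity": followup,
})

model = smf.ols("followup_capacity ~ ta_hours + baseline_capacity", data=df).fit()
print(model.params)                 # estimated capacity gain per additional TA hour
print(model.pvalues["ta_hours"])    # evidence for a dose-response relationship
```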

We found that only 5% of the scoping review studies examined the sustainability of TA outcomes. Longitudinal study designs, including baseline measures of dependent variables, are essential to understanding which TA outcomes are sustainable over time and for how long. This understanding is essential for funders, researchers, and practitioners evaluating expected returns on current or future TA investments. However, several threats to validity warrant particular attention when measuring impact over time, including maturation, effects of history, instrumentation, selection, attrition, and regression to the mean. We encourage future research to pursue longitudinal studies, including control groups or matching techniques, to compare TA outcomes over time.

Lastly, the majority of evaluation studies have been designed to examine the direct impact of TA on recipient systems, such as changes in organizational capacity for implementing an evidence-based practice or staff capacity. The downstream impact of TA has received less attention, perhaps due to inherent measurement challenges associated with conducting evaluation research in complex settings. According to the Interactive Systems Framework for Dissemination and Implementation (ISF), TA is a support system element designed to build delivery system capacity, which, in turn, enhances implementation toward a set of desired outcomes [24, 89]. As such, intervention (e.g., programmatic) outcomes are most appropriately monitored and measured in relation to the delivery system—that is, the setting/system receiving direct TA services. However, evaluation research designs that examine both direct and indirect outcomes of TA are needed to better understand both immediate benefits of TA to the capacity of the delivery system and downstream benefits of TA, including what intervention outcomes can be appropriately attributed to TA. Design research may be a useful approach for systematically examining downstream effects of TA when TA involves multiple activities (e.g., coaching, training, and communities of practice). Originating in the field of education, this approach involves developing formative experiments to test and refine interventions occurring in real-world rather than controlled settings [90,91,92]. Unlike hypothesis testing, which targets a limited number of variables, design research examines all aspects of an intervention to develop a profile of the intervention in practice. In this way, the most effective components or characteristics of TA for a particular setting and target population can be determined.

Main insight 3:  A need for more reliable and objective measures of TA processes and outcomes

The scoping review revealed that subjective data (e.g., self-ratings of change in knowledge resulting from TA) were reported roughly twice as often as objective data (e.g., knowledge-based assessment). Further, fewer than half of the studies included both subjective and objective TA data. Subjective data are valuable for understanding recipient engagement, which can serve as one indicator of increased capacity [70]. Self-report data can also offer ease and efficiency for assessing TA outcomes [76]. However, self-report data are subject to social desirability and reference bias and may not reflect an actual change in knowledge or skills.

We recommend that researchers utilize self-report measures to assess recipient attitudes and beliefs, particularly regarding TA satisfaction, self-efficacy, and commitment to change. For outcomes about knowledge, skills, behavior change, and change in system-level policies and practices, we suggest prioritizing objective data, such as a knowledge assessment, demonstration of skill-based competencies, or tangible observations of practice change. When feasible, a mixed-methods approach that captures subjective and objective data is optimal, allowing for data triangulation. For instance, Clark et al. [67] utilized a mixed-methods approach to measuring TA outcomes by employing observational assessments of TA recipients’ teaching skills and using structured interviews. Similarly, Chinman et al. [80] measured the adherence, quality, and dosage of TA and used a program performance interview.

Specifically, in relation to surveys, we observed the absence of a widely used, psychometrically validated, and reliable instrument for assessing TA implementation and effectiveness. These measures exist for related capacity building strategies (e.g., training and communities of practice). Developing psychometrically sound instruments for assessing TA is a critical step toward enhancing measurement validity and reliability. An instrument to assess TA effectiveness might reflect two broad process constructs: TA techniques (e.g., responsive, client-centric, and proactive) and the TA relationship (e.g., trust, collaboration, communication [13]), and also include items assessing TA outputs and outcomes.
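As one concrete starting point for the psychometric work recommended above, the sketch below computes Cronbach's alpha for a hypothetical TA process survey whose items span TA techniques and the TA relationship. It illustrates an internal-consistency check only, not a full validation, and all data are invented.

```python
# Illustrative internal-consistency check (Cronbach's alpha) for a hypothetical
# TA survey. This is a first step toward psychometric evaluation, not a
# validated instrument.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert-type ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses (5 recipients x 4 items, 1-5 scale)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```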

Main insight 4: A need for reporting standards

Widespread variation in the reporting of TA implementation, measurement, and outcomes constrained our ability to draw insights across studies. Our findings suggest that a majority of reported TA outcomes cannot be directly attributed to a specific TA activity or hierarchy of activities. Nearly half of the studies we examined included two or more TA activities (e.g., individualized coaching and training; process frameworks such as Getting To Outcomes®), often in a single measure of TA. Another 18% did not describe the activities that constituted TA. This failure to consistently describe and isolate TA activities makes it difficult to determine how particular TA practices produce positive outcomes. Undoubtedly, the heterogeneity of the reporting of TA is a byproduct of the diverse definitions for TA—linking back to Main insight 1.

As a result, practitioners may be overinvesting in ineffective activities or underinvesting in effective activities (e.g., providing individual coaching when expert training is sufficient to produce outcomes). Moreover, a lack of reporting clarity severely limits practitioners’ ability to replicate positive findings. Studies that fail to meaningfully describe a TA intervention (e.g., modality, dosage, cadence, duration, and reach) may prohibit the scaling of effective interventions. For the TA research literature to meaningfully contribute to effective TA practice, it must articulate a clear explanation of which TA activities make an impact, how and when this happens, and for how long the impact occurs. We have developed a Logic Model for TA Effectiveness that we use in our practice as a skeletal frame to guide TA planning, implementation, and evaluation (available via the corresponding author). This logic model specifies the theory of change for a set of TA activities using the domains of inputs, processes, outputs, and outcomes. The Logic Model for TA Effectiveness may be a valuable tool for developing reporting standards.

Lastly, we recommend collective investment from funders, authors, reviewers, and editors in developing minimum reporting standards for TA evaluation research studies and that TA recipients and providers participate in the process. We offer the following reporting checklist as a starting point:

  • Provide an explicit conceptual and operational definition for TA, and upon availability, utilize a standard definition.

  • State the specific aim(s) and targeted direct and indirect outcomes for utilizing TA (e.g., to implement an evidence-based practice/intervention, coalition building, and workforce development).

  • Provide detailed descriptions of TA activities (e.g., coaching, training, and tools, or a combination of these), including data relating to core mechanics of TA (e.g., modality, reach, duration of engagement, directionality, and frequency of contact). Additionally, describe the methods of measuring TA activities (e.g., measurement tools and procedures).

  • Where possible, report (i) the effect of specific TA activities to disaggregate attributions, in addition to the total effect; (ii) both direct and indirect outcomes of TA; and (iii) longitudinal outcomes.

Consistent reporting of TA interventions and outcomes will help build the theory of change for TA.

Study limitations

While we sought to be comprehensive with this review, our search parameters may have missed evaluation research articles. Our search strategy included five interdisciplinary databases where TA literature is commonly published. A search of other bibliographic databases may yield other relevant studies. Further, we limited searches within each database to peer-reviewed articles, potentially skewing data toward academic research and away from practice. We conducted a pilot search to establish 12 search terms to identify relevant studies. Although we used an iterative process to determine the final set of search terms, other key terms may exist that are linked to articles not identified in our search. Additionally, we reported that most evaluation research studies involved TA delivered in the USA. This trend could be a byproduct of limiting articles to English (the language of the research team). Including articles published in other languages would plausibly reveal a broader set of studies. Lastly, we coded only outcomes that were explicitly associated with TA. Articles that bundled TA outcomes with other capacity building outcomes were excluded when clear attributions to TA outcomes could not be delineated. As such, this scoping review is a conservative representation of the number of evaluation studies involving TA.

This study aimed to describe the TA evaluation literature rather than formally assess the quality of TA evaluation. Like research across any topic, the breadth and depth of methodological descriptions were highly variable across studies. We summarize TA methods and findings as they were reported in each article, regardless of reporting quality. We did not seek out new information or clarification from the authors. As such, study methods may be more robust in practice than they appear in our results. We encourage authors to adhere to reporting standards that will advance the study, practice, and theory of TA.

Although this scoping review includes two decades of evaluation research, it primarily reflects findings from studies published prior to the COVID-19 pandemic. Pandemic responses may have a lasting impact on TA provision. In fact, telework is forecasted to remain a sustained fixture across industries [93, 94]. As such, we anticipate that exclusively virtual TA will play a more prominent role in the immediate future of TA than findings from this review may suggest.

Conclusion

TA is a time and resource-intensive approach to organizational and community capacity building that has grown in use across diverse settings over the past two decades. Advances in the science and practice of TA hinge on understanding which aspects of TA are effective, and when, how, and for whom these aspects of TA are effective. Addressing these core questions requires (i) a widely adopted standard definition for TA; (ii) more robust and rigorous evaluation research designs that involve comparison groups and assessment of direct, indirect, and longitudinal outcomes; (iii) increased use of reliable and objective measures of TA outcomes; and (iv) the development of reporting standards. We view this scoping review as a foundation for improving the state of the science and practice of evaluating TA.

Availability of data and materials

The datasets used during the current study are available from the corresponding author on request.

Abbreviations

TA:

Technical assistance

EBP:

Evidence-based practice

EBSIS:

Evidence-based System for Innovation Support

RQ:

Research question

PCC:

Population-Concept-Context

PICO:

Population, Intervention, Comparison, Outcome

IRR:

Inter-rater reliability

PRISMA-ScR:

Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews

CQI:

Continuous Quality Improvement

ISF:

Interactive Systems Framework for Dissemination and Implementation

References

  1. Kneale D, Rojas-Garcia A, Raine R, Thomas J. The use of evidence in English local public health decision-making: a systematic scoping review. Implement Sci. 2017;12:53.

  2. Hailemariam M, Montgomery TB, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14:57.

  3. Green AE, Aarons GA. A comparison of policy and direct practice stakeholder perceptions of factors affecting evidence-based practice implementation using concept mapping. Implement Sci. 2011;6:104.

  4. Peterson JC, Rogers EM, Cunningham-Sabo L, Davis SM. A framework for research utilization applied to seven case studies. Am J Prev Med. 2007;33:21–34.

  5. Scaccia JP, Cook BS, Lamont A, Wandersman A, Castellow J, Katz J, et al. A practical implementation science heuristic for organizational readiness: R=MC2. J Community Psychol. 2015;43:484–501.

  6. Stirman SW, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17.

  7. Holt CL, Chambers DA. Opportunities and challenges in conducting community-engaged dissemination/implementation research. Transl Behav Med. 2017;7(3):389–92.

  8. August GJ, Bloomquist ML, Lee SS, Realmuto GM, Hektner JM. Can evidence-based prevention programs be sustained in community practice settings? The Early Risers’ advanced-stage effectiveness trial. Prev Sci. 2006;7:151–65.

  9. Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to- effectiveness transition. Am J Public Health. 2003;93:1261–7.

  10. Wandersman A, Chien V, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50:445–59.

  11. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Ann Rev Public Health. 2018;39:27–53.

  12. Forman S, Olin S, Hoagwood K, Crowe M, Saka N. Evidence-based interventions in schools: developers’ views of implementation barriers and facilitators. School Mental Health. 2009;1:26–36.

  13. Katz J, Wandersman A. Technical assistance to enhance prevention capacity: a research synthesis of the evidence base. Prev Sci. 2016;17:417–28.

  14. Chinman M, Hannah G, Wandersman A, et al. Developing a community science research agenda for building community capacity for effective prevention interventions. Am J Community Psychol. 2005;35:143–57.

  15. CDC Healthy Schools: Training and Professional Development. Centers for Disease Control and Prevention. (2019). https://www.cdc.gov/healthyschools/trainingtools.htm. Accessed 23 Feb 2022.

  16. Dunst CJ, Annas K, Wikie H, Hamby D. Scoping review of the core elements of technical assistance and frameworks. World J Educ. 2019;9:109–22.

  17. Baumgartner S, Cohen A, Meckstroth A. Providing TA to local programs and communities: lessons from a scan of initiatives offering TA to human services programs. Washington, DC: US Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation; 2018.

  18. Olson JR, Coldiron JS, Parigoris RM, Zabel MD, Matarrese M, Bruns EJ. Developing an evidence-based technical assistance model: a process evaluation of the national training and technical assistance center for children, youth, and family mental health. J Behav Health Serv Res. 2020;47:312–30.

  19. Mitchell RE, Stone-Wiggins B, Stevenson JF, Florin P. Cultivating capacity: outcomes of a statewide support system for prevention coalitions. J Prev Interv Community. 2004;27(2):67–87.

  20. Lyons J, Dunleavy Hoag S, Orfield C, Streeter S. Designing technical-assistance programs: considerations for funders and lessons learned. Foundation Rev. 2016;8:68–78.

  21. West GR, Clapp SP, Averill EM, Cates W Jr. Defining and assessing evidence for the effectiveness of technical assistance in furthering global health. Global Public Health. 2012;7(9):915–30.

  22. Chilenski SM, Perkins DF, Olson J, Hoffman L, Feinberg ME, Greenberg M, et al. The power of a collaborative relationship between technical assistance providers and community prevention teams: a correlational and longitudinal study. Eval Program Plann. 2016;4:19–29.

  23. Dunst CJ, Annas K, Wilkie H, Hamby D. Review of the effects of technical assistance on program, organization and system change. Intern J Eval Res Educ. 2019;8:330–43.

  24. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;1(3):171–81.

  25. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18:143.

  26. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32.

  27. Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:1–9.

  28. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping reviews (2020 version). In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis, JBI; 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-12.

  29. Peters MD. In no uncertain terms: The importance of a defined objective in scoping reviews. JBI Database System Rev Implement Rep. 2016;14:1–4.

  30. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;23(3):A12–3.

  31. Tricco AC, Lillie E, Zarin W, O'Brien KK, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–73.

  32. Bonney T, Welter C, Jarpe-Ratner E, Conroy LM. Understanding the role of academic partners as technical assistance providers: results from an exploratory study to address precarious work. Intern J Environ Res Public Health. 2019;16(20):3903.

  33. Fixsen D, Blase K, Horner R, Sugai G. Scaling up evidence-based practices in education. SISEP scaling up brief; 2009.

  34. Chilenski SM, Welsh J, Olson J, Hoffman L, Perkins DF, Feinberg ME. Examining the highs and lows of the collaborative relationship between technical assistance providers and prevention implementers. Prev Sci. 2018;19(2):250–9.

  35. Cerully JL, Collins RL, Wong EC, Yu J. The Mental Health Association of San Francisco Partner Organizations Meet Their Goals in Stigma Reduction Efforts: Results of a Qualitative Evaluation of the Technical Assistance Process. Rand Health Quarterly. 2016;5(3).

  36. Mitchell RE, Florin P, Stevenson JF. Supporting community-based prevention and health promotion initiatives: developing effective technical assistance systems. Health Educ Behav. 2002;29(5):620–39.

  37. Stevenson JF, Florin P, Mills DS, Andrade M. Building evaluation capacity in human service organizations: A case study. Evaluation and Program Planning. 2002;25(3):233-43.

  38. Chiappone A, Smith TM, Estabrooks PA, Rasmussen CG, Blaser C, Yaroch AL. Technical assistance and changes in nutrition and physical activity practices in the National Early Care and Education Learning Collaboratives Project, 2015–2016. Prev Chronic Dis. 2018;15:E47.

  39. Report of the NAEYC on early childhood education professional development: training and technical assistance glossary. Washington, DC: National Association for the Education of Young Children (NAEYC) and National Association of Child Care Resource and Referral Agencies; 2011.

  40. Wolff T. A practitioner's guide to successful coalitions. Am J Community Psychol. 2001;29(2):173-91.

  41. Wandersman A, Florin P. Community interventions and effective prevention. Am Psychol. 2003;58(6-7):441.

  42. Duffy JL, Prince MS, Johnson EE, Alton FL, Flynn S, Faye AM, et al. Enhancing teen pregnancy prevention in local communities: capacity building using the interactive systems framework. Am J Community Psychol. 2012;50(3):370–85.

  43. Hunter SB, Chinman M, Ebener P, Imm P, Wandersman A, Ryan GW. Technical assistance as a prevention capacity building tool: a demonstration using the Getting to Outcomes® framework. Health Ed Behav. 2009;36(5):810–28.

  44. Livet M, Yannayon M, Sheppard K, Kocher K, Upright J, McMillen J. Exploring provider use of a digital implementation support system for school mental health: a pilot study. Adm Policy Ment Health. 2018;45(3):362–80.

  45. Leeman J, Calancie L, Hartman MA, Escoffery CT, Herrmann AK, Tague LE, Moore AA, Wilson KM, Schreiner M, Samuel-Hodge C. What strategies are used to build practitioners’ capacity to implement community based interventions and are they effective?: a systematic review. Implementation Science. 2015;10(1):1–5.

  46. Moreland-Russell S, Adsul P, Nasir S, Fernandez ME, Walker TJ, Brandt HM, et al. Evaluating centralized technical assistance as an implementation strategy to improve cancer prevention and control. Cancer Causes Control. 2018;29(12):1221–30.

  47. Chinman M, Hunter SB, Ebener P, Paddock SM, Stillman L, Imm P, Wandersman A. The getting to outcomes demonstration and evaluation: an illustration of the prevention support system. Am J Community Psychol. 2008;41(3):206-24.

  48. Segre LS, O’Hara MW, Fisher SD. Perinatal depression screening in Healthy Start: an evaluation of the acceptability of technical assistance consultation. Community mental health journal. 2013;49(4):407-11.

  49. Sullivan WP. Technical assistance in community mental health: A model for social work consultants. Research on Social Work Practice. 1991;1(3):289-305.

  50. Spadaro AJ, Grunbaum JA, Dawkins NU, Wright DS, Rubel SK, Green DC, et al. Training and technical assistance to enhance capacity building between Prevention Research Centers and their partners. Prev Chronic Dis. 2011;8(3).

  51. Anderson LA, Bruner LA, Satterfield D. Diabetes control programs: new directions. The Diabetes Educator. 1995;21(5):432-8.

  52. Rushovich BR, Bartley LH, Steward RK, Bright CL. Technical assistance: a comparison between providers and recipients. Human Service Organizations Manag Leadership Governance. 2015;39(4):362–79.

  53. Sokol DD, Stiegert KW. Exporting knowledge through technical assistance and capacity building. J Compet Law Econ. 2010;6(2):233-51.

  54. Yazejian N, Iruka IU. Associations among tiered quality rating and improvement system supports and quality improvement. Early Childhood Research Quarterly. 2015;30:255-65.

  55. Young BR, Leeks KD, Bish CL, Mihas P, Marcelin RA, Kline J, Ulin BF. Community-University Partnership Characteristics for Translation: Evidence From CDC's Prevention Research Centers. Frontiers in Public Health. 2020;8:79.

  56. Coleman K, Phillips KE, Van Borkulo N, Daniel DM, Johnson KE, Wagner EH, et al. Unlocking the black box: supporting practices to become patient-centered medical homes. Medical Care. 2014;52:S11–7.

  57. Kegler MC, Redmon PB. Using technical assistance to strengthen tobacco control capacity: evaluation findings from the tobacco technical assistance consortium. Public Health Rep. 2006;121(5):547–56.

  58. Rogers SJ, Ahmed M, Hamdallah M, Little S. Garnering grantee buy-in on a national cross-site evaluation: the case of ConnectHIV. Am J Eval. 2010;31(4):447–62.

  59. Furukawa MF, King J, Patel V. Physician attitudes on ease of use of EHR functionalities related to meaningful use. Am J Manag Care. 2015;21(12):e684–92.

  60. Mayberry RM, Daniels P, Yancey EM, Akintobi TH, Berry J, Clark N, et al. Enhancing community-based organizations’ capacity for HIV/AIDS education and prevention. Eval Program Plann. 2009;32(3):213–20.

  61. Farrell AF, Collier-Meek MA, Furman MJ. Supporting out-of-school time staff in low resource communities: a professional development approach. Am J Community Psychol. 2019;63(3-4):378–90.

  62. Jadwin-Cakmak L, Bauermeister JA, Cutler JM, Loveluck J, Sirdenis TK, Fessler KB, et al. The health access initiative: a training and technical assistance program to improve health care for sexual and gender minority youth. J Adolesc Health. 2020;67(1):115–22.

  63. Ruzek JI, Landes SJ, McGee-Vincent P, Rosen CS, Crowley J, Calhoun PS, et al. Creating a practice-based implementation network: facilitating practice change across health care systems. J Behav Health Serv Res. 2020;47(4):449–63.

  64. Li Y, Spector WD, Glance LG, Mukamel DB. State “technical assistance programs” for nursing home quality improvement: variations and potential implications. J Aging Soc Policy. 2012;24(4):349–67.

  65. Reibstein R. Does providing technical assistance for toxics use reduction really work? A program evaluation utilizing toxics use reduction act data to measure pollution prevention performance. J Clean Prod. 2008;16(14):1494–506.

  66. Ryan AM, Bishop TF, Shih S, Casalino LP. Small physician practices in New York needed sustained help to realize gains in quality from use of electronic health records. Health Aff. 2013;32(1):53–62.

  67. Clark NM, Cushing LS, Kennedy CH. An intensive onsite technical assistance model to promote inclusive educational practices for students with disabilities in middle school and high school. Res Pract Persons Severe Disabil. 2004;29(4):253–62.

  68. Lee JG, Ranney LM, Goldstein AO, McCullough A, Fulton-Smith SM, Collins NO. Successful implementation of a wellness and tobacco cessation curriculum in psychosocial rehabilitation clubhouses. BMC Public Health. 2011;11(1):1–1.

  69. Sugarman JR, Phillips KE, Wagner EH, Coleman K, Abrams MK. The safety net medical home initiative: transforming care for vulnerable populations. Med Care. 2014;52:S1.

  70. Beach LB, Reidy E, Marro R, Johnson AK, Lindeman P, Phillips G, et al. Application of a multisite empowerment evaluation approach to increase evaluation capacity among HIV services providers: results from Project Pride in Chicago. AIDS Educ Prev. 2020;32(2):137–S5.

  71. Compton DW, MacDonald G, Baizerman M, Schooley M, Zhang L. Using evaluation capacity building (ECB) to interpret evaluation strategy and practice in the United States National Tobacco Control Program (NTCP): A preliminary study. Can J Program Eval. 2008;23(3):199.

  72. Dancy-Scott N, Williams-Livingston A, Plumer A, Dutcher GA, Siegel ER. Enhancing the capacity of community organizations to evaluate HIV/AIDS information outreach: a pilot experiment in expert consultation. Inf Serv Use. 2016;36(3-4):217–30.

  73. Gibbs DA, Hawkins SR, Clinton-Sherrod AM, Noonan RK. Empowering programs with evaluation technical assistance. Health Promot Pract. 2009;10(1_suppl):38S–44S.

  74. Treiber J, Cassady D, Kipke R, Kwon N, Satterlund T. Building the evaluation capacity of California’s local tobacco control programs. Health Promot Pract. 2011;12(6_suppl_2):118S–24S.

  75. Kauff JF, Clary E, Lupfer KS, Fischer PJ. An evaluation of SOAR: implementation and outcomes of an effort to improve access to SSI and SSDI. Psychiatr Serv. 2016;67(10):1098–102.

  76. Solomon G, Perry VG. Looking out for the little guy: the effects of technical assistance on small business financial performance. J Market Dev Competitive. 2011;5(4):21–31.

  77. Ryan AM, McCullough CM, Shih SC, Wang JJ, Ryan MS, Casalino LP. The intended and unintended consequences of quality improvement interventions for small practices in a community-based electronic health record implementation project. Med Care. 2014;52(9):826–32.

  78. Leake R, Green S, Marquez C, Vanderburg J, Guillaume S, Gardner VA. Evaluating the capacity of faith-based programs in Colorado. Res Soc Work Pract. 2007;17(2):216–28.

  79. Oliva G, Rienks J, Chavez GF. Evaluating a program to build data capacity for core public health functions in local maternal child and adolescent health programs in California. Matern Child Health J. 2007;11(1):1.

  80. Chinman M, Acosta J, Ebener P, Malone PS, Slaughter ME. Can implementation support help community-based settings better deliver evidence-based sexual health promotion programs? A randomized trial of Getting To Outcomes®. Implement Sci. 2015;11(1):1–6.

  81. Chinman M, Acosta J, Ebener P, Burkhart Q, Malone PS, Paddock SM, et al. Intervening with practitioners to improve the quality of prevention: one-year findings from a randomized trial of assets-getting to outcomes. J Prim Prev. 2013;34(3):173–91.

  82. Johnson LE, Clará W, Gambhir M, Fuentes RC, Marín-Correa C, Jara J, et al. Improvements in pandemic preparedness in 8 Central American countries, 2008-2012. BMC Health Serv Res. 2014;14(1):1–9.

  83. Gross JM, McCarthy CF, Verani AR, Iliffe J, Kelley MA, Hepburn KW, et al. Evaluation of the impact of the ARC program on national nursing and midwifery regulations, leadership, and organizational capacity in East, Central, and Southern Africa. BMC Health Serv Res. 2018;18(1):1–1.

  84. Gothro A, Hanno ES, Bradley MC. Challenges and solutions in evaluation technical assistance during design and early implementation. Eval Rev. 2020;46(1):10–31.

  85. Matsuoka S, Obara H, Nagai M, Murakami H, Chan LR. Performance-based financing with GAVI health system strengthening funding in rural Cambodia: a brief assessment of the impact. Health Policy Plan. 2014;29(4):456–65.

  86. Valdivia M. Business training plus for female entrepreneurship? Short and medium-term experimental evidence from Peru. J Dev Econ. 2015;113:33–51.

  87. Chinman M, Hannah G, McCarthy S. Lessons learned from a quality improvement intervention with homeless veteran services. J Health Care Poor Underserved. 2012;23(3):210–24.

  88. Vernon W. The Delphi technique: a review. Int J Ther Rehabil. 2009;16(2):69–76.

  89. Domlyn AM, Scott V, Livet M, Lamont A, Watson A, Kenworthy T, et al. R = MC2 readiness building process: a practical approach to support implementation in local, state, and national settings. J Community Psychol. 2021;49(5):1228–48.

  90. Brown A. Design experiments: theoretical and methodological challenges in creating complex interventions. J Learn Sci. 1992;2:141–78.

  91. Collins A. Toward a design science of education. In: Scanlon E, O’Shea T, editors. New Directions in Educational Technology. Berlin: Springer-Verlag; 1992.

  92. Collins A, Joseph D, Bielaczyc K. Design research: theoretical and methodological issues. J Learn Sci. 2004;13(1):15–42.

  93. Bennett R. A year into the COVID-19 pandemic: what have we learned about workplaces and what does the future hold? [Internet]. The National Law Review. 2022 [cited 11 March 2022]. Available from: https://www.natlawreview.com/article/year-covid-19-pandemic-what-have-we-learned-about-workplaces-and-what-does-future#google_vignette

  94. Lund S, Madgavkar A, Manyika J, Smit S, Ellingrud K, Robinson O. The future of work after COVID-19 [Internet]. McKinsey & Company. 2021 [cited 11 March 2022]. Available from: https://www.mckinsey.com/featured-insights/future-of-work/the-future-of-work-after-covid-19

Acknowledgements

This scoping review was made possible with the consultative support of researchers and TA providers from the Wandersman Center, CADCA, and American Institutes for Research. In particular, we gratefully acknowledge Dr. Rohit Ramaswamy and Pam Imm for their review of the manuscript and guidance on the scoping review process.

Funding

This material is based upon work supported in part by the Office of National Drug Control Policy. Opinions and points of view expressed in this document are those of the authors and do not necessarily reflect the official position of, or a position endorsed by, the Office of National Drug Control Policy.

Author information

Authors and Affiliations

Authors

Contributions

VS, AM, ZJ, and JG contributed to the concept, design, analysis, and synthesis of this study. AW and VS facilitated stage 6 of the scoping review. All authors contributed to the review and approval of the manuscript.

Corresponding author

Correspondence to Victoria C. Scott.

Ethics declarations

Ethics approval and consent to participate

This study was a scoping review in which the units of analysis were published articles rather than human participants. As such, the study did not require ethics approval or participant consent.

Consent for publication

All authors consent to the publication of this article.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Summary of articles included in scoping review.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Scott, V.C., Jillani, Z., Malpert, A. et al. A scoping review of the evaluation and effectiveness of technical assistance. Implement Sci Commun 3, 70 (2022). https://doi.org/10.1186/s43058-022-00314-1

Keywords