Development of the Technical Assistance Engagement Scale: a modified Delphi study

Abstract

Background

Technical assistance (TA) is a tailored approach to capacity building that is commonly used to support implementation of evidence-based interventions. Despite its widespread use, measurement tools for assessing critical components of TA are scant. In particular, the field lacks an expert-informed measure for examining relationship quality between TA providers and recipients. TA relationships are central to TA and significantly associated with program implementation outcomes. The current study addresses this gap in TA measurement tools by providing a scale for assessing TA relationships.

Methods

We utilized a modified Delphi approach involving two rounds of Delphi surveys and a panel discussion with TA experts to garner feedback and consensus on the domains and items that compose the TA Engagement Scale.

Results

TA experts represented various U.S. organizations and TA roles (e.g., provider, recipient, researcher), with 25 respondents in the first survey and 26 respondents in the second survey. The modified Delphi process resulted in a scale composed of six domains and 22 items relevant and important to TA relationships between providers and recipients.

Conclusion

The TA Engagement Scale is a formative evaluation tool intended to help TA providers identify strengths and areas for growth in the provider-recipient relationship and to communicate about ongoing needs. As a standard measurement tool, it offers a step toward more systematic collection of TA data, supports the generation of a more coherent body of TA evidence, and enables comparisons of TA relationships across settings.

Background

Evidence-based interventions and practices (EBIs) are critical for advancing population health and community well-being. However, both the uptake and quality implementation of EBIs remain persistent challenges to improving health outcomes globally. A host of contextual factors contribute to implementation-related outcomes, including stakeholder perceptions of the EBI, setting capacity for EBI adoption, and the general functioning and climate of the implementation setting [1,2,3]. A growing body of literature provides evidence that technical assistance improves implementation outcomes [4,5,6,7,8].

Technical assistance (TA) is a tailored approach to organizational and community capacity building that is chiefly used to support implementation of EBIs [9]. TA involves tailored guidance by a TA specialist (or TA organization) to members of a setting (organization, community) regarding one or more specific practice areas (e.g., needs assessment, program monitoring). TA delivery frequently entails an assortment of activities (e.g., coaching, consultation, resource sharing [10]) that vary by recipient need, with direct provider-recipient interaction as a hallmark feature. Thus, the relationship between a TA provider and recipient(s) is essential for successful TA [11,12,13], particularly for intensive models of TA [13]. A provider's ability to effectively build relationships with recipients is recognized as a core TA competency [14, 15].

In a research synthesis of the TA evidence base, TA relationships (defined broadly as "human encounters between TA providers and recipients") were discussed in approximately 50% of articles ([16], p. 418). The synthesis affirmed the significance of provider-recipient relationships to TA, noting trust, collaboration, and a strengths-based orientation as the most commonly reported relationship attributes. When TA providers establish rapport, recipients view them as trusting, respectful, patient, and motivating, underscoring the importance of the recipient-provider relationship [17,18,19]. Collaborative TA relationships are positively associated with implementation-related outcomes, including implementation adherence [19, 20] and high-quality team functioning, a proximal outcome linked to implementation effectiveness [21].

The value of soliciting client feedback on a professional client-provider relationship is an established best practice across a variety of professional fields (e.g., clinical therapy/counseling, coaching, consulting). Most providers have access to robust measurement scales for this purpose. For example, clinicians, consultants, and coaches can select from an assortment of field-tested measures to obtain client feedback on the working relationship (e.g., Therapeutic Bonds Scale [22], Consulting Effectiveness Survey [23], Executive Coaching Survey [24]). It is similarly beneficial to assess TA provider-recipient relationships to monitor and improve TA quality. However, TA providers and centers have sparse options for measuring relationship quality; as such, those interested in measuring relational elements of TA have resorted to developing their own measures. For example, Chilenski and colleagues [21] developed a 7-item instrument to measure collaboration, one specific and important feature of TA relationships. The field of TA needs an expert-informed measure of TA provider-recipient relationship quality, particularly an instrument that assesses the multiple dimensions of TA relationships. The purpose of the current study was to fill this gap by obtaining subject matter expert input to develop the TA Engagement Scale, which assesses the quality of engagement (relationship) between TA providers and recipients.

Methods

Initial scale development

We began with a literature review to determine how previous research in TA and related fields (i.e., clinical therapy/counseling, consulting, coaching) measured provider-client relationship quality. We reviewed measures at the domain and item level and retained the most common domains and associated items across each field. The review generated an initial set of domains and items, which we categorized using the International Coaching Federation (ICF) framework. We used the ICF framework to organize the domains and items because of the similarities between coaching and TA; no equivalent framework exists for TA [25, 26].

Next, we solicited input from TA subject matter experts (henceforth, experts) through four meetings. Experts were TA providers and researchers from three organizations identified via convenience sampling. These initial discussions focused on the adequacy of the domains (e.g., did the domains adequately reflect the most salient features of TA relationship quality?). As we obtained feedback, we revised the pool of domains and items accordingly. Altogether, the literature review and initial expert input led to a preliminary, comprehensive set of 14 domains and 75 items. In what follows, we describe our approach to obtaining expert input and consensus on the domains and items of the TA Engagement Scale using a modified Delphi process. This combination of literature review, preliminary expert input, and Delphi process grounded the scale in both TA research and practice.

Participants

The TA research team consisted of a university faculty member serving as principal investigator (PI; VS) and two doctoral students (JT, ZJ). We utilized convenience and snowball sampling, wherein TA experts across the United States acquainted with the PI were invited to participate in the Delphi study. Additionally, we asked prospective experts to share contact information for any TA provider, recipient, or researcher who might be interested in participating. Inclusion criteria were: i) a minimum of one year of experience with TA and ii) English proficiency. TA providers, researchers, and recipients from six organizations participated in the Delphi process. The TA research team did not participate in the Delphi surveys and was not included in the data analysis.

Procedures

The Delphi method is a systematic approach for eliciting and aggregating opinion on a topic from a panel of experts [27]. It has commonly been used to identify the current state of knowledge on a subject, reach resolution on controversial topics, and develop measurement and indicator tools [28,29,30]. Typically, respondents engage in several rounds of surveys in which they share feedback on a series of questions. Between rounds, the researcher analyzes and consolidates experts' feedback so that items with more consensus proceed to the next round and items with less consensus are eliminated. In prior literature, the proportion of experts required for consensus has ranged from 50 to 97%, with a median threshold of 75% [29]; however, there is no definitive, agreed-upon consensus threshold for Delphi studies [31].

The TA research team utilized the Accurate Consensus Reporting Document (ACCORD) for a modified Delphi process [32] to ensure accurate and systematic reporting of the Delphi method. The completed ACCORD document is provided in the supplementary material. In the current study, the Delphi consensus-building process consisted of two surveys administered to subject matter experts (i.e., TA providers, recipients, and researchers) using Qualtrics, a web-based survey platform. Surveys were not piloted prior to administration to the Delphi experts; however, feedback about the survey content was received during discussions with TA experts (see Initial scale development section). The two-round survey design was informed by the research team's pre-existing work (preliminary feedback from TA experts). Before administration of the first Delphi survey, the team hosted two orientation sessions with prospective experts to introduce the goals of the national study and communicate expectations for participation. An orientation session was recorded and shared with prospective experts who were unable to attend a live session. Each round of the TA Engagement Scale survey was emailed to interested experts (described below), who then consented to participate in the study.

For Survey Rounds #1 and #2, we asked experts, "To what extent are the following domains (and items) and their respective definitions relevant to interactions between TA provider and recipient?" Response options ranged from 1 (not at all relevant) to 4 (completely relevant). The consensus threshold was held at 70%: items were retained if 70% or more of experts agreed they were relevant. The 70% threshold was discussed with experts in the orientation session, and experts agreed it was reasonable and aligned with prior Delphi study methods [29, 33, 34]. After each domain and item, we provided a comment box for experts to leave an open-ended response with any comments, suggestions, or concerns. Additionally, an open-comment box was included at the end of the survey for any other input (e.g., comments about the measure overall, suggested items, general questions). The inclusion of open-ended text boxes is a common practice in Delphi surveys [35]. Survey responses were confidential but not anonymous. The first survey was administered over a three-week period from August 21 to September 8, 2023, with reminder emails sent weekly. After we received feedback on Survey #1, the TA research team summarized the results into a report that was shared with the experts. On September 22, 2023, we held a one-hour discussion with the panel of TA experts to discuss their Survey Round #1 feedback and to clarify questions.
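To make the retention rule concrete, the sketch below (not part of the study protocol) shows how per-item agreement against the 70% threshold could be computed. All item names and ratings are hypothetical; it assumes a rating of 3 corresponds to "mostly relevant" and 4 to "completely relevant."

```python
# Minimal sketch of the Round 1 retention rule, using hypothetical data.
# Each expert rates an item from 1 (not at all relevant) to 4 (completely
# relevant); an item is retained if >= 70% of experts rate it 3 or 4.

RELEVANT = {3, 4}   # assumed: "mostly relevant" and "completely relevant"
THRESHOLD = 0.70    # Round 1 consensus threshold

def agreement(ratings: list[int]) -> float:
    """Proportion of experts rating the item as mostly or completely relevant."""
    return sum(r in RELEVANT for r in ratings) / len(ratings)

# Hypothetical ratings from 25 experts for two illustrative items.
items = {
    "provider_listens": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
    "provider_on_time": [2, 1, 3, 2, 2, 1, 3, 2, 2, 3, 1, 2, 2, 3, 2, 1, 2, 2, 3, 2, 2, 1, 2, 3, 2],
}

for name, ratings in items.items():
    pct = agreement(ratings)
    decision = "retain" if pct >= THRESHOLD else "remove"
    print(f"{name}: {pct:.0%} agreement -> {decision}")
```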

The second TA Engagement Scale survey was administered to the experts over two weeks, from October 6 to October 20, 2023, with reminder emails sent weekly. In Survey Round #2, we again asked experts to rate the relevance of the refined scale domains and items. Additionally, we asked experts to prioritize the items within each domain based on their relative importance to the domain's reflection of TA relationships. We raised the consensus threshold to 85% in Survey Round #2 to increase confidence and agreement in the retained items and domains. The 15-percentage-point increase was based on practices reported in existing Delphi studies [35, 36]. This national TA Delphi study was approved by the University of North Carolina at Charlotte Institutional Review Board (IRB-23-0463). The study protocol is unregistered.

Data analysis

Data from Survey Round #1 were analyzed in September 2023. We followed a four-step approach to analyze the quantitative and qualitative data. First, we used a 70% agreement threshold to determine which domains and items were retained versus removed: if 70% or more respondents indicated that a domain/item was mostly or completely relevant, the domain/item was retained; domains and items below the 70% threshold were removed. The second and third steps were based on qualitative feedback provided by experts in the open-ended responses, which were analyzed thematically as in other Delphi studies [28, 33]. In the second step, we assessed whether items needed to be relocated to another domain based on expert input (i.e., experts stated in open-ended responses that an item was best represented in another domain). In the third step, we reviewed domains and items flagged by experts as redundant (items with similar wording or attributes). If a domain was redundant with another domain, we merged them and revised the domain definition as needed. If an item was redundant with another item, we kept the item with the higher agreement (see the sketch below). Lastly, based on the first three steps, we determined whether each item was retained, removed, or relocated.
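The redundancy rule in the third step can be expressed compactly. The sketch below is illustrative only: the item names, agreement scores, and flagged pairs are hypothetical stand-ins for the expert feedback described above.

```python
# Sketch of the Round 1 redundancy rule under assumed inputs: for each pair of
# items flagged as redundant by experts, keep the item with the higher
# agreement score and drop the other.

# Hypothetical agreement scores (proportion of experts rating the item relevant).
agreement = {
    "builds_rapport": 0.92,
    "establishes_rapport": 0.84,   # flagged as redundant with builds_rapport
    "shares_resources": 0.88,
    "provides_materials": 0.90,    # flagged as redundant with shares_resources
}

# Pairs flagged as redundant in open-ended feedback (hypothetical).
redundant_pairs = [("builds_rapport", "establishes_rapport"),
                   ("shares_resources", "provides_materials")]

removed = set()
for a, b in redundant_pairs:
    # Remove the item with the lower agreement in each redundant pair.
    removed.add(min((a, b), key=lambda item: agreement[item]))

retained = [item for item in agreement if item not in removed]
print("retained:", retained)   # ['builds_rapport', 'provides_materials']
```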

Data from Survey Round #2 were analyzed between October and December 2023. In the second round, we followed a five-step approach. First, we used an 85% agreement threshold to strengthen the consensus criteria; domains and items that did not meet the threshold were removed. Second, we determined whether an item needed to be relocated based on qualitative feedback. Third, we determined whether domains or items were redundant with other domains and items, respectively; if so, we merged the domains and retained the items with the higher agreement. Fourth, we used a rank-ordering system in which respondents indicated each item's relative importance from least to most important; the average ranking of each item determined the rank order, and the lowest-ranked items in a domain were removed. Finally, based on the prior steps, we determined whether each item was retained, removed, or relocated.
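The averaging in the rank-ordering step is simple arithmetic. The sketch below illustrates it with hypothetical rankings from four experts for a three-item domain; it assumes higher numbers indicate greater importance, which is a presentational choice rather than a detail reported in the study.

```python
# Sketch of the Round 2 rank-ordering step, using hypothetical data. Each
# expert ranks the items within a domain; average rank determines the order,
# and the lowest-ranked item in the domain is removed.

from statistics import mean

# Hypothetical rankings from four experts for a three-item domain
# (1 = least important, 3 = most important).
rankings = {
    "expert_1": {"item_a": 3, "item_b": 1, "item_c": 2},
    "expert_2": {"item_a": 2, "item_b": 1, "item_c": 3},
    "expert_3": {"item_a": 3, "item_b": 2, "item_c": 1},
    "expert_4": {"item_a": 3, "item_b": 1, "item_c": 2},
}

items = ["item_a", "item_b", "item_c"]
avg_rank = {i: mean(r[i] for r in rankings.values()) for i in items}

# Order items from most to least important by average rank.
ordered = sorted(items, key=avg_rank.get, reverse=True)
print("average ranks:", avg_rank)            # item_a: 2.75, item_c: 2.0, item_b: 1.25
print("removed (lowest ranked):", ordered[-1])  # item_b
```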

Results

At the beginning of the modified Delphi process, the TA Engagement Scale included 75 items classified across 14 domains. After the expert input and consensus process, the final scale was reduced to 22 items across six domains. A summary of scale modifications across the modified Delphi survey rounds is available in Fig. 1. A detailed description of the results for each survey round follows.

Fig. 1 Scale modifications across modified Delphi survey rounds. Note: Decisions made during Delphi panel discussions are reflected under Delphi Round 1

Survey round #1

We sent the first survey to 32 TA experts. Twenty-five experts responded to Survey Round #1 (78% response rate). We asked experts to indicate their role in TA (i.e., provider, recipient, researcher, other), with the option to select multiple roles. Respondents were largely TA providers (n = 24), along with researchers (n = 12) and recipients (n = 6). See Table 1 for expert characteristics.

Table 1 Delphi participant characteristics

Survey Round #1 invited feedback on 14 domains and 75 items. All domains reached the 70% consensus threshold. However, qualitative expert feedback indicated redundancy between several pairs of domains: Mutual Affirmation and Empathy, Mutual Investment and Collaboration, Responsiveness and Effective Communication, and Focused Facilitation and Accountability. In response, we merged each pair and revised the definitions (see Table 2).

Table 2 Round #1 domain revisions

At the item-level, we first removed items that did not achieve a 70% consensus threshold (n = 4). Then, an additional 34 items were removed due to qualitative feedback indicating the items were redundant with other items, were less clear compared to similar items, and/or were rated lower in comparison to duplicate items. Seven items were relocated as a result of domain-level revisions and qualitative feedback. Finally, three new items were added to better reflect aspects of the Contextually-Minded, Proactive, and Trust domains; experts indicated that the full range of components of the domain were not captured in the original set of items. We ultimately retained 37 of the original 75 items and added three new items for a total of 40 items (see Table 3 for detailed information about the number of domains and items removed, retained, modified, or added in round one).

Table 3 Round #1 item revisions

We shared a report summarizing findings from Survey #1 and invited experts to a one-hour panel discussion to review the results and to discuss questions emerging from expert feedback. A total of 17 experts attended. As a result of the panel discussion, we removed the Client-Centered domain; experts noted that client-centeredness was more appropriately represented across domains than as a separate domain (that is, nearly every item pertained to the TA provider being client-centered). Additionally, the definition of Professionalism and its items were revised to better capture the issue of privacy in TA relationships.

After completion of Survey Round #1 and the discussion panel, the TA Engagement Scale included nine domains and 40 items.

Survey round #2

We sent the second survey to 32 experts. Twenty-six experts responded to Survey Round #2 (81% response rate). Similar to the first round, respondents were providers (n = 22), researchers (n = 9), and recipients (n = 5), with some indicating more than one TA role.

Survey Round #2 included nine domains and 40 items. All domains reached the 85% inclusion threshold. Based on qualitative expert feedback, we merged Tailored, Contextually-Minded, and Proactive into a single domain, resulting in six domains: Professionalism, Trust, Collaboration, Communication, Tailored, and Accountability (see Table 4).

Table 4 Round #2 domain revisions

At the item level, Survey Round #2 invited respondents to rank order and rate the importance of the items within each domain. We retained 22 of the 40 items, removing 18 due to failure to meet the 85% consensus threshold, item redundancy (see Note 1), or low rank (see Table 5 for additional detail). We shared a report of Survey Round #2 findings with the experts. At the conclusion of the Delphi process, we retained six domains and 22 items (see Table 6 for the final scale).

Table 5 Round #2 item revisions
Table 6 Final domains and items on the TA engagement scale

Discussion

In a seminal paper on the support system, Wandersman and colleagues [18] present a model for strengthening the science and practice of implementation support, the Evidence-Based System for Innovation Support (EBSIS). Their work rests on the premise that it is not only important to be evidence-based about community health interventions (e.g., EBIs); it is also important to be evidence-based about the approaches used to support implementation of EBIs, such as TA. Research on the support system is underdeveloped and modest relative to research on the delivery system [37, 38], and tools and methods (e.g., scales, frameworks) to assess TA quality and effectiveness are limited and critically needed [9]. This study contributes to implementation research and practice by providing an expert-informed measurement tool to assess TA relational quality.

We used a modified Delphi approach to develop the Technical Assistance (TA) Engagement Scale, a 22-item formative evaluation tool designed to assess TA provider-recipient relationships. Through the Delphi study, we retained six domains: Professionalism, Trust, Collaboration, Communication, Tailored, and Accountability. Five of these domains resemble the relational domains reported in the TA literature synthesis by Katz and Wandersman [16], reinforcing a core set of qualities important to TA relationships: Professionalism (Respect), Trust (Trust), Collaboration (Collaboration), Tailored (Adjusting to Readiness), and Accountability (Roles/Responsibilities). One relational domain that emerged as salient but was not noted in that synthesis [16] is Communication. TA experts showed consistently high consensus on the relevance of Communication to TA relationships (96-100% agreement at the item and domain levels), suggesting a facet of interpersonal relationships to which TA providers should particularly attend. Aligned with this expert input, effective communication is listed as a key practice for effective TA [10].

The TA Engagement Scale critically advances the practice of TA by providing TA providers and recipients with an expert-informed instrument for monitoring TA engagement quality. It enables providers and recipients to examine and develop their relationship collaboratively and intentionally. The measure increases TA providers' ability to make data-informed, mid-course adjustments to TA delivery. Further, use of the instrument can signal the provider's high regard for the TA relationship and thereby bolster relationship quality.

In addition to advancing TA practice, the TA Engagement Scale can contribute to developments in the science of TA. A standard TA measurement tool is a step toward more systematic collection of TA data and is essential to generating a coherent body of evidence. Consistent use of a TA measurement scale across studies will allow TA researchers and evaluators to better compare TA relationships across settings and to examine how TA relationships correlate with targeted outcomes.

Use of the TA engagement scale & considerations for research and practice

The TA Engagement Scale is intended for administration by TA providers to TA recipients on a periodic basis (e.g., monthly, quarterly, semi-annually) to monitor and improve provider-recipient relationship quality. When the instrument is administered, TA recipients rate the extent to which each scale item is present in their relationship with the TA provider using a 5-point frequency scale (5 = Always, 4 = Often, 3 = Sometimes, 2 = Rarely, 1 = Never). The TA provider reviews the recipients' responses to identify relational strengths and areas for improvement and is encouraged to discuss the feedback with TA colleagues and/or TA recipients. Importantly, this tool is for TA relationship monitoring and improvement (formative evaluation); it is not intended as a performance assessment or as a measure of a TA provider's performance for workplace employee evaluation.
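The scale itself prescribes no scoring algorithm. As one illustrative way a provider might summarize responses, the sketch below averages hypothetical item ratings by domain and flags lower-scoring domains for follow-up discussion; the cutoff of 4 and all domain/item names are assumptions, not part of the published measure.

```python
# Illustrative scoring sketch (not prescribed by the scale): summarize one
# recipient's responses by domain to surface relational strengths and areas
# for growth. Domain and item names are hypothetical placeholders.

from statistics import mean

# 5-point frequency scale: 5 = Always ... 1 = Never.
responses = {
    "Trust":         {"keeps_confidences": 5, "follows_through": 4, "is_honest": 5},
    "Communication": {"responds_promptly": 3, "explains_clearly": 2},
    "Collaboration": {"invites_input": 4, "shares_decisions": 4},
}

for domain, items in responses.items():
    score = mean(items.values())
    # Assumed cutoff: domain means below 4 are flagged for discussion.
    flag = "strength" if score >= 4 else "discuss with recipient"
    print(f"{domain}: mean {score:.1f} ({flag})")
```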

We designed the TA Engagement Scale with several goals in mind: i) to provide an expert-informed measure that captures multiple dimensions (domains) of TA relationships, ii) to bridge the science and practice of TA through a scale development process involving an in-depth crosswalk of research literature and TA expert input, and iii) to create a user-friendly measure of TA engagement that serves as a practical implementation tool. Given the scale's relative brevity, TA providers can administer it regularly with little time burden on recipients (< 12 min to complete), making it a practical tool for assessing engagement over time and aligning with calls for more pragmatic approaches to implementation monitoring and tailoring [39, 40].

With modifications, the TA Engagement Scale can be used for group TA (i.e., TA involving one or more TA providers and more than one TA recipient). The required adaptations are minor: revising the scale instructions to reflect group TA and revising the subject of items, for example, changing "My TA provider is responsive to my expressed needs." to "My TA provider(s) are responsive to our expressed needs." In collaboration with a national TA center, we have begun to pilot the TA Engagement Scale in group TA formats. Of note, our modified Delphi study focused primarily on developing the scale for dyadic provider-recipient relationships; there may be important relationship dynamics in group TA settings that are not captured by the current version of the TA Engagement Scale. Research on the use of the scale in group TA is needed to discern whether relational domains beyond the six identified here are important for group TA.

While the TA Engagement Scale can be used to assess TA relationship quality across in-person, virtual, and hybrid modes of TA, it may hold particular value for virtual TA. The COVID-19 pandemic catalyzed an acceleration in remote TA, as provider-recipient meetings and trainings shifted from in-person to online modalities to accommodate social distancing mandates and travel restrictions [41, 42]. The increased reliance on remote TA raises new questions about how TA provider-recipient relationships are formed and maintained in virtual spaces, including how virtual TA relationships compare to hybrid (in-person/virtual) and exclusively in-person TA relationships. Remote TA carries unique considerations: virtual settings can present more distractions (e.g., email, social media, multitasking, place-based disruptions) and technological challenges, and professionals who provide remote services, such as TA providers, require specialized preparation to effectively hold virtual spaces and engage remote recipients [41, 43]. However, the influence of remote delivery on TA relationships is less understood; this association merits research as remote TA has become common practice.

Study limitations and future directions

Though this scale is informed by TA experts through a multi-stage approach, it has yet to be psychometrically validated. Additional research is needed to establish the measure's psychometric properties (e.g., test-retest reliability, internal consistency). A next step in the scale development process is to administer the scale in practice and conduct a confirmatory factor analysis to ensure that the included items measure the constructs we intend to measure [44]. We utilized convenience sampling to recruit the experts, and the majority of respondents were TA providers. It is possible that TA recipient opinions and perceptions are underrepresented, as the subset of TA recipients was smaller than the other groups (TA providers, researchers). However, given that experts were geographically dispersed and drawn from multiple organizations and backgrounds, we expect that the results of our Delphi process reflect general and diverse perspectives on which aspects of TA relationships are most central.
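As an illustration of one such future psychometric check, the sketch below computes Cronbach's alpha, a standard internal-consistency statistic, on hypothetical item-response data. It does not represent a validation of the TA Engagement Scale.

```python
# Minimal sketch of an internal-consistency check (Cronbach's alpha) on
# hypothetical item-response data: rows = recipients, columns = items within
# one domain. Illustrative only; the scale has not yet been validated.

from statistics import pvariance

def cronbach_alpha(rows: list[list[int]]) -> float:
    k = len(rows[0])                                   # number of items
    item_vars = [pvariance(col) for col in zip(*rows)] # per-item variances
    total_var = pvariance([sum(row) for row in rows])  # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses from five recipients to a three-item domain.
data = [
    [5, 4, 5],
    [4, 4, 4],
    [3, 2, 3],
    [5, 5, 4],
    [2, 3, 2],
]
print(f"alpha = {cronbach_alpha(data):.2f}")  # ~0.91 for this toy data
```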

The main purpose of the modified Delphi TA study was development of a TA provider-recipient relationship measurement scale. The Delphi study identified six domains highly relevant to TA relationships. We did not seek expert input about the relative importance of these domains over the life course of a TA relationship or across stages of program implementation; the salience of these domains may vary over time. For example, trust may require time to cultivate and thus be positively correlated with relationship length. Collaboration may dwindle over time as TA recipients become more capable and self-reliant. In fact, an association has been reported between the salience of collaboration and implementation stage [16]. Systematic research is needed to better understand the relative importance of each of the six relational domains over the life course of TA engagement.

Conclusion

The quality of a TA provider-recipient relationship is central to TA and positively associated with program implementation outcomes. Developed through a modified Delphi approach, the TA Engagement Scale is a research- and expert-informed formative evaluation tool designed to advance the science and practice of TA. It offers TA providers a practitioner-friendly measure for monitoring and improving their relationships with TA recipients. As a standard TA measurement tool, it enables more systematic collection of TA data and, thereby, the generation of a more coherent body of evidence. The TA Engagement Scale can be used to assess relationship quality across multiple TA delivery modalities (virtual, in-person, and hybrid).

Availability of data and materials

The dataset generated and analyzed during the current study is not publicly available due to confidentiality reasons but is available from the corresponding author upon reasonable request.

Notes

  1. If two items were redundant, we kept the item with the higher rank and agreement.

Abbreviations

ACCORD: Accurate Consensus Reporting Document
EBI: Evidence-Based Intervention
ICF: International Coaching Federation
RAND: Research and Development Corporation
TA: Technical Assistance

References

  1. Shrestha R, Karki P, Altice FL, Dubov O, Fraenkel L, Huedo-Medina T, et al. Measuring acceptability and Preferences for Implementation of Pre-Exposure Prophylaxis (PrEP) using conjoint analysis: an application to primary HIV prevention among high risk drug users. AIDS Behav. 2018;22(4):1228–38.

  2. Scaccia JP, Cook BS, Lamont A, Wandersman A, Castellow J, Katz J, et al. A practical implementation science heuristic for organizational readiness: R = MC2. J Community Psychol. 2015;43(4):484–501.

  3. Scott VC, Gold SB, Kenworthy T, Snapper L, Gilchrist EC, Kirchner S, et al. Assessing cross-sector stakeholder readiness to advance and sustain statewide behavioral integration beyond a State Innovation Model (SIM) initiative. Transl Behav Med. 2021;11(7):1420–9.

  4. Olsen AA, Wolcott MD, Haines ST, Janke KK, McLaughlin JE. How to use the Delphi method to aid in decision making and build consensus in pharmacy education. Curr Pharm Teach Learn. 2021;13(10):1376–85.

  5. Kegler MC, Redmon PB. Using technical assistance to strengthen tobacco control capacity: evaluation findings from the tobacco technical assistance consortium. Public Health Rep. 2006;121(5):547–56.

  6. Jadwin-Cakmak L, Bauermeister JA, Cutler JM, Loveluck J, Kazaleh Sirdenis T, Fessler KB, et al. The health access initiative: a training and technical assistance program to improve health care for sexual and gender minority youth. J Adolesc Health Off Publ Soc Adolesc Med. 2020;67(1):115–22.

  7. Sugarman JR, Phillips KE, Wagner EH, Coleman K, Abrams MK. The safety net medical home initiative: transforming care for vulnerable populations. Med Care. 2014;52(11 Suppl 4):S1–10.

  8. Leeman J, Calancie L, Hartman MA, Escoffery CT, Herrmann AK, Tague LE, Moore AA, Wilson KM, Schreiner M, Samuel-Hodge C. What strategies are used to build practitioners’ capacity to implement community-based interventions and are they effective?: a systematic review. Implement Sci. 2015;10:80. https://doi.org/10.1186/s13012-015-0272-7. PMID: 26018220; PMCID: PMC4449971.

  9. Scott VC, Jillani Z, Malpert A, Kolodny-Goetz J, Wandersman A. A scoping review of the evaluation and effectiveness of technical assistance. Implement Sci Commun. 2022;3(1):70.

  10. Dunst CJ, Annas K, Wilkie H, Hamby DW. Scoping review of the core elements of technical assistance models and frameworks. World J Educ. 2019;9(2):109–22.

  11. Skelton SM. Situating my positionality as a Black woman with a dis/ability in the provision of equity-focused technical assistance: a personal reflection. Int J Qual Stud Educ. 2019;32(3):225–42.

  12. Mitchell RE, Florin P, Stevenson JF. Supporting community-based prevention and health promotion initiatives: developing effective technical assistance systems. Health Educ Behav Off Publ Soc Public Health Educ. 2002;29(5):620–39.

  13. Fixsen D, Blase K, Horner R, Sugai G. Intensive technical assistance. Chapel Hill, NC: FPG Child Development Institute, University of North Carolina at Chapel Hill; 2009.

  14. Labas L, Lavallee S, Downs J, Gallik P, (Eds.). Technical Assistance Competencies for Maine’s Early Childhood Workforce. Orono: University of Maine Center for Community Inclusion and Disability Studies; 2017.

  15. Implementation Support Practitioner Profile | NIRN. Available from: https://nirn.fpg.unc.edu/resources/implementation-support-practitioner-profile. [cited 2024 Mar 14].

  16. Katz J, Wandersman A. Technical assistance to enhance prevention capacity: a research synthesis of the evidence base. Prev Sci Off J Soc Prev Res. 2016;17(4):417–28.

  17. Hunter SB, Chinman M, Ebener P, Imm P, Wandersman A, Ryan GW. Technical assistance as a prevention capacity-building tool: a demonstration using the getting to outcomes framework. Health Educ Behav Off Publ Soc Public Health Educ. 2009;36(5):810–28.

  18. Wandersman A, Chien VH, Katz J. Toward an evidence-based system for innovation support for implementing innovations with quality: tools, training, technical assistance, and quality assurance/quality improvement. Am J Community Psychol. 2012;50(3–4):445–59.

  19. Yazejian N, Metz A, Morgan J, Louison L, Bartley L, Fleming WO, et al. Co-creative technical assistance: essential functions and interim outcomes. Evid Policy. 2019;15(3):339–52.

  20. Spoth R, Guyll M, Lillehoj CJ, Redmond C, Greenberg M. Prosper study of evidence-based intervention implementation quality by community–university partnerships. J Community Psychol. 2007;35(8):981–99.

  21. Chilenski SM, Perkins DF, Olson J, Hoffman L, Feinberg ME, Greenberg M, et al. The power of a collaborative relationship between technical assistance providers and community prevention teams: a correlational and longitudinal study. Eval Program Plann. 2016;1(54):19–29.

  22. Saunders SM, Howard KI, Orlinsky DE. The Therapeutic Bond Scales: Psychometric characteristics and relationship to treatment effectiveness. Psychological Assessment: A Journal of Consulting and Clinical Psychology. 1989;1(4):323–30.

  23. Appelbaum SH, Steed AJ. The critical success factors in the client-consulting relationship. J Manag Dev. 2005;24(1):68–93.

  24. de Haan E, Duckworth A, Birch D, Jones C. Executive coaching outcome research: the contribution of common factors such as relationship, personality match, and self-efficacy. Consult Psychol J Pract Res. 2013;65(1):40–57.

  25. Le LT, Anthony BJ, Bronheim SM, Holland CM, Perry DF. A technical assistance model for guiding service and systems change. J Behav Health Serv Res. 2016;43(3):380–95.

  26. Motes P, Hess P, editors. Index. In: Collaborating with Community-Based Organizations Through Consultation and Technical Assistance [Internet]. New York, NY: Columbia University Press; 2007. p. 197–206. Available from: https://www.degruyter.com/document/doi/10.7312/mote12872-013/html.

  27. de Villiers MR, de Villiers PJT, Kent AP. The Delphi technique in health sciences education research. Med Teach. 2005;27(7):639–43.

  28. Brady SR. The Delphi method. In: Jason LA, Glenwick DS, editors. Handbook of methodological approaches to community-based research: qualitative, quantitative, and mixed methods. Oxford University Press; 2015. https://doi.org/10.1093/med:psych/9780190243654.003.0007. [cited 2024 Mar 27].

  29. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401–9.

  30. Niederberger M, Spranger J. Delphi technique in health sciences: a map. Front Public Health. 2020;8:457. https://doi.org/10.3389/fpubh.2020.00457.

  31. Boulkedid R, et al. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PloS One. 2011;6(6):e20476. https://doi.org/10.1371/journal.pone.0020476.

  32. Gattrell WT, Logullo P, Van Zuuren EJ, Price A, Hughes EL, Blazey P, et al. ACCORD (ACcurate COnsensus Reporting Document): a reporting guideline for consensus methods in biomedicine developed via a modified Delphi. PLOS Med. 2024;21(1):e1004326.

  33. Dragostinov Y, Harðardóttir D, McKenna PE, Robb DA, Nesset B, Ahmad MI, Romeo M, Lim MY, Yu C, Jang Y, Diab M. Preliminary psychometric scale development using the mixed methods Delphi technique. Methods Psychol. 2022;7:100103.

  34. Hepworth LR, Rowe FJ. Using Delphi methodology in the development of a new patient-reported outcome measure for stroke survivors with visual impairment. Brain Behav. 2018;8(2):e00898.

  35. Hsu CC, Sandford B. The Delphi technique: making sense of consensus. Pract Assess Res Eval. 2007;12(10). Available from: http://pareonline.net/getvn.asp?v=12&n=10.

  36. Strøm M, Lönn L, Bech B, Schroeder TV, Konge L, Aho P, Back M, Bicknell C, Björses K, Brunkwall J, Dake M. Assessment of competence in EVAR procedures: a novel rating scale developed by the Delphi technique. Eur J Vasc Endovasc Surg. 2017;54(1):34–41.

  37. Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12:1–9.

  38. Wandersman A, Scheier LM. Strengthening the science and practice of implementation support: evaluating the effectiveness of training and technical assistance centers. Eval Health Prof. 2024;47(2):143–53. https://doi.org/10.1177/01632787241248768. PMID: 38790113.

  39. Robinson CH, Damschroder LJ. A pragmatic context assessment tool (pCAT): using a Think Aloud method to develop an assessment of contextual barriers to change. Implement Sci Commun. 2023;4(1):3.

  40. Stanick CF, Halko HM, Nolen EA, Powell BJ, Dorsey CN, Mettert KD, et al. Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Transl Behav Med. 2021;11(1):11–20.

  41. Cross-Technology Transfer Center (TTC) Workgroup on Virtual Learning. Providing behavioral workforce development technical assistance during COVID-19: adjustments and needs. Transl Behav Med. 2022;12(1):ibab097.

  42. Capacity building for household surveys: providing technical assistance in the face of COVID-19. 2021. Available from: https://blogs.worldbank.org/opendata/capacity-building-household-surveys-providing-technical-assistance-face-covid-19. [cited 2024 Mar 14].

  43. Greenhalgh T, Payne R, Hemmings N, Leach H, Hanson I, Khan A, et al. Training needs for staff providing remote services in general practice: a mixed-methods study. Br J Gen Pract. 2023;74(738):e17–26.

  44. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health. 2018;6:149.

Acknowledgements

The authors would like to acknowledge the following individuals and/or organizations for their contributions to the development of the TA Engagement Scale: Build Initiative (Sherri Killins Stewart); Department of Education; Department of Health and Human Services, Administration for Children and Families (Annalisa Mastri, Lauren Antelo); National Center for Safe and Supportive Learning Environments (Brianna Cunniff, Elizabeth Chagnon, Frank Rider, Jeanne Poduska, Rob Mayo); Other/No Organizations (Kathleen Guarino, Laura Hurwitz, Nicole Denmark, Pam Imm); Wandersman Center (Amber Watson, Andrea Lamont, Brittany Cook, Abraham Wandersman).

Funding

No sources of funding were obtained for the current study.

Author information

Contributions

VS, JT, and ZJ conceptualized the study design. VS, JT, and ZJ administered study materials, analyzed study data, and were all major contributors to the development of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Victoria C. Scott.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the University of North Carolina at Charlotte Institutional Review Board (IRB-23-0463). All participants consented to participate in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Scott, V.C., Temple, J. & Jillani, Z. Development of the Technical Assistance Engagement Scale: a modified Delphi study. Implement Sci Commun 5, 84 (2024). https://doi.org/10.1186/s43058-024-00618-4
