Evaluation of a brief virtual implementation science training program: the Penn Implementation Science Institute

Abstract

Background

To meet the growing demand for implementation science expertise, building capacity is a priority. Various training opportunities have emerged to meet this need. To ensure rigor and achievement of specific implementation science competencies, it is critical to systematically evaluate training programs.

Methods

The Penn Implementation Science Institute (PennISI) offers 4 days (20 h) of virtual synchronous training on foundational and advanced topics in implementation science. Through a pre-post design, this study evaluated the sixth PennISI, delivered in 2022. Survey measures included 43 implementation science training evaluation competencies grouped into four thematic domains (e.g., items related to implementation science study design grouped into the “design and analysis” competency category), course-specific evaluation criteria, and open-ended questions to evaluate change in knowledge and suggestions for improving future institutes. Mean composite scores were created for each of the competency domains. Descriptive statistics and thematic analysis were completed.

Results

One hundred four (95.41% response rate) and 55 (50.46% response rate) participants completed the pre-survey and post-survey, respectively. Participants comprised a diverse cohort primarily affiliated with US-based academic institutions, and most self-reported novice or beginner-level knowledge of implementation science at baseline (81.73%). In the pre-survey, all mean composite scores for implementation science competencies were below one (i.e., below beginner-level). Participants reported high value from the PennISI across standard course evaluation criteria (e.g., mean score of 3.77/4.00 for overall quality of the course). Scores for all competency domains increased to between beginner-level and intermediate-level following training. In both the pre-survey and post-survey, competencies related to “definition, background, and rationale” had the highest mean composite score, whereas competencies related to “design and analysis” received the lowest score. Qualitative themes captured overall impressions of the PennISI, the didactic content, the institute’s structure, and suggestions for improvement. Prior experience with or knowledge of implementation science influenced many themes.

Conclusions

This evaluation highlights the strengths of an established implementation science institute, which can serve as a model for brief, virtual training programs. Findings provide insight for improving future program efforts to meet the needs of the heterogeneous implementation science community (e.g., different disciplines and levels of implementation science knowledge). This study contributes to ensuring rigorous implementation science capacity building through the evaluation of programs.

Background

Implementation science offers an opportunity to systematically close the research-to-practice gap — perhaps one of the greatest current challenges of public health and clinical practice. Over the past two decades, the field has evolved to include a comprehensive repository of approaches that has enabled the advancement of the equitable adoption of evidence-based practices [1,2,3]. Recognizing the value of implementation science, the numbers of funding mechanisms (e.g., NIH PAR-22–105), targeted journals (e.g., Implementation Science, Implementation Science Communications, Implementation Research and Practice, and Global Implementation Research and Applications), and academic conferences (e.g., AcademyHealth/NIH Conference on the Science of Dissemination and Implementation in Health) have increased in recent years. To continue this momentum and guarantee the future growth of the field, the development of a robust bench of implementation scientists and practitioners is a priority.

Fortunately, multiple implementation science training programs characterized by different objectives and designs have emerged. A review conducted in 2022 identified 74 capacity building initiatives for dissemination and implementation science [4]. Programs vary in format (e.g., virtual and in-person), duration (e.g., 2-year programs and brief 3-day institutes), target audience (e.g., funded researchers and students), cost (e.g., open-access and tuition-based programs), and sponsor (e.g., academic institutions and the National Institutes of Health) [4,5,6]. Such initiatives have contributed to the growth of the field through the development of specific competencies critical for conducting rigorous implementation science. Despite this diverse set of educational initiatives, however, the demand for training in implementation science has outpaced the supply [7, 8].

Current capacity building initiatives may not meet the needs of all learners. Levels of interest in and intended application of implementation science vary: some learners seek an awareness of the field but have no plans to lead implementation science projects. Other learners include individuals with an understanding of implementation science who seek to incorporate relevant concepts into their projects (i.e., they require foundational training) and individuals with expertise in implementation science who seek to advance the discipline through their work (i.e., “implementation scientists” requiring advanced training) [9]. The training needs and competencies for these different phenotypes will differ. In addition, some current training opportunities are not accessible. Some programs limit participants through selective applications (e.g., few spots for fellows), targeted area of application (e.g., mental health), time commitment (e.g., semester-long course), and audience (e.g., researchers rather than practitioners) [4,5,6]. Further, the perception in the field that specific training programs produce “card-carrying” implementation scientists can exacerbate concerns about gatekeeping [9]. There is a need to continue building capacity in implementation science, with attention to the development of innovative programs that target a wider range of educational and accessibility needs, and to systematically evaluate the impact of such programs.

The Penn Implementation Science Institute (PennISI) is a novel implementation science training opportunity. At the time of its inception, the PennISI offered one of the first brief implementation science training programs, thus filling a gap in educational offerings [6]. As implementation science training offerings continue to develop, systematic evaluation of such efforts is needed to ensure rigor through the achievement of specific competencies and an understanding of the strengths and weaknesses of programs. This paper describes the PennISI to offer a potential model for the field. Following the movement in the field to use established competencies [4, 5, 10,11,12], this study also evaluates the impact of the PennISI on advancing thematic implementation science competencies.

Methods

Given its purpose as a program evaluation, this project was deemed exempt by the Institutional Review Board at the University of Pennsylvania (Penn). The Consensus-Based Checklist for Reporting of Survey Studies (CROSS) [13] guided the reporting of this evaluation (Additional File 1).

Penn Implementation Science Institute

To help fill the gap in implementation science training, individuals at Penn developed the Penn Implementation Science Institute (PennISI). Briefly, the PennISI offered one of the first brief training programs through an institution other than the National Institutes of Health. Facilitated by the Penn Implementation Science Center (PISCE@LDI) and the Penn Master of Science in Health Policy Research (MSHP) program, the PennISI aims to provide participants with the tools to design and execute rigorous implementation research. Course Directors developed the curriculum based on other exemplar training programs (e.g., the Implementation Research Institute) and key foundational concepts in implementation science. Of note, the leadership team consisted of three faculty members, including clinician-scientists, who balanced responsibilities across research, clinical work, and education, reflecting the limited capacity to devote time solely to curriculum development. Launched in 2017, the PennISI originally hosted an in-person institute. Due to the COVID-19 pandemic, the institute pivoted to a virtual format, which increased capacity for participant attendance. Further, the curriculum evolved each year based on participants’ feedback and emerging areas in the field (e.g., health equity and implementation science).

Currently, the PennISI is intended for scholars at all career levels interested in learning more about the foundations of implementation science for application to future research. Applications are accepted on a first-come, first-served basis until capacity is reached. The PennISI is a credit-bearing course; all participants pay the equivalent of half a credit unit, with the option of enrolling for a full credit unit by completing additional assignments. Given the cost of attendance ($2950 in 2022), limited full and partial tuition scholarships are available for Penn affiliates and scholars from low- and middle-income countries. To earn a scholarship, participants must explain how the PennISI will contribute to their career development.

The 2022 PennISI included 4 days (20 h) of virtual programming. Through a combination of didactic lectures, small group discussions (groups are randomly assigned at the start of the PennISI and remain the same throughout the week), expert panels, and optional office hours (2 h daily), the institute covers both foundational and advanced topics in implementation science (see Table 1 for a detailed description of the 2022 PennISI). Topics include the following: introduction to implementation science, models and frameworks, study design and methods, behavioral economics, global health application, health equity, implementation outcomes, implementation strategies, dissemination, quality improvement, grant writing, de-implementation, and implementation science in the real world (e.g., examples of research programs and stories of success in implementation science). The institute began with a keynote lecture by Dr. Wynne Norton, Program Director of Implementation Science at the National Cancer Institute. Diverse Penn-based and external implementation science experts facilitate all sessions and small group discussions. Guided by the Consolidated Framework for Implementation Research (CFIR) [14], a novel activity completed in the small group discussions involves collective brainstorming of barriers, facilitators, and implementation strategies for the implementation of human papillomavirus vaccination (see Additional File 2). To prepare for each day, participants read assigned journal articles and write brief discussion posts on the topics (see Additional File 3 for the list of “greatest hits” articles). No formal evaluation of knowledge retention is completed. All course activities (e.g., lectures and small group discussions) were facilitated via Zoom, an online videoconferencing platform, and Canvas, an online academic course platform. All course materials, recorded lectures, and supplemental content (e.g., additional readings) are available on Canvas for one month after the conclusion of the PennISI. Additional informal consultation related to participants’ project ideas occurs ad hoc. The 2022 PennISI also included a 3-h debrief with the visiting global health participants facilitated by one of the core facilitators (A.E.V.P.). This manuscript evaluates the sixth annual PennISI, held in the summer of 2022, which enrolled 109 participants.

Table 1 Overview of PennISI components

Procedure

A pre-post survey design was used. Established educational competencies for dissemination and implementation research training programs [15] guided the development of the survey instruments. These competencies include 43 items grouped into four thematic domains (definition, background, and rationale; theory and approaches; design and analysis; and practice-based considerations) with four response options (no expertise in this area, beginner, intermediate, and advanced) (see Additional File 4 for each category and corresponding items). Example items per competency domain include (1) “Define what is and what is not D&I research” for definition, background, and rationale; (2) “Describe a range of D&I strategies, models, and frameworks” for theory and approaches; (3) “Identify and measure outcomes that matter to stakeholders, adopters, and implementers” for design and analysis; and (4) “Determine when engagement in participatory research is appropriate with D&I research” for practice-based considerations. These competencies are considered the “gold standard” for the evaluation of dissemination and implementation science training programs [4], so their use ensured a systematic, comprehensive evaluation and will enable future comparison across training programs. Of note, these competencies did not inform the initial development of the PennISI curriculum, as they were identified post hoc for use in the evaluation. Instruments were supplemented with items related to participant characteristics, including demographic information and prior experience with dissemination and implementation science (e.g., submission of a related manuscript). Response options related to participants’ positions and the areas in which participants apply implementation science were specified based on the team’s collective knowledge and experience. To elicit additional information about satisfaction with the PennISI, the post-survey included items provided by Penn to evaluate all university courses (e.g., “Overall rating/quality of course”) and open-ended questions eliciting feedback on the PennISI (e.g., “Please comment on any strengths, weaknesses or suggestions for future improvement.”). The final pre-survey and post-survey instruments included 55 and 58 items, respectively (Additional File 4).
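To make the instrument structure concrete, the following is a minimal, illustrative sketch of how the competency domains and four-point response scale described above could be represented for analysis. It is not the authors' instrument file: the domain names, response options, and example items come from the text, while the overall structure is an assumption; the full item lists are in Additional File 4.

```python
# Illustrative representation of the competency instrument (assumed structure,
# not the study's actual data dictionary). Each domain is mapped to one example
# item quoted in the text; the remaining items appear in Additional File 4.
COMPETENCY_RESPONSE_OPTIONS = (
    "no expertise in this area",
    "beginner",
    "intermediate",
    "advanced",
)

COMPETENCY_DOMAINS = {
    "definition, background, and rationale":
        "Define what is and what is not D&I research",
    "theory and approaches":
        "Describe a range of D&I strategies, models, and frameworks",
    "design and analysis":
        "Identify and measure outcomes that matter to stakeholders, adopters, and implementers",
    "practice-based considerations":
        "Determine when engagement in participatory research is appropriate with D&I research",
}

# Four-point self-rated scale and four thematic domains, as described above.
assert len(COMPETENCY_RESPONSE_OPTIONS) == 4
assert len(COMPETENCY_DOMAINS) == 4
```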

All surveys were administered via REDCap [16]. To maintain confidentiality, participants were not required to provide identifying information (e.g., name or email). In addition, neither the pre-survey nor the post-survey was required. To encourage participation, the pre-survey distribution email included the following description: “The purpose is to allow us to evaluate the effectiveness of our program.” The pre-survey was distributed 4 days and again 1 day before the start of the PennISI, and the post-survey was distributed on the last day of the PennISI with a follow-up reminder 12 days after the PennISI.

Data analysis

Records that included data for any of the survey items were included in the analysis, and missing data were not imputed (i.e., the number of records in each calculation may differ depending on the response rate for that individual item). Duplicate participant entries were deleted when identifiable information was provided. Participant affiliations and positions were condensed into thematic categories (e.g., universities and academic medical centers represented the “US-based academic institution” category; doctoral students and undergraduate students represented the “pre-doctoral” category; master’s students were kept separate given the participation of doctoral-level clinical fellows enrolled in master’s-level programs at the University of Pennsylvania). Given the self-reported nature of the questions, some respondents’ specified positions in the “other” category may have overlapped with existing categories, but reported items were preserved rather than grouped into categories. To calculate the mean scores for each of the competencies, the response options were coded from 0 to 3 (i.e., 0 = no expertise in this area to 3 = advanced). Mean composite scores were created for each of the competency domains by pooling data for all items within the respective categories (e.g., the 10 items in the definition-related domain contributed to that domain’s mean score). Descriptive statistics on the pre-survey and post-survey data were calculated. All analyses were completed in Stata Statistical Software, Release 15.1 (StataCorp, College Station, TX, 2017). Given the brevity of responses, an informal inductive analysis of the open-ended responses was conducted, in which prevalent themes were identified by one investigator (A.E.V.P.).
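As a concrete illustration of this scoring approach, the following is a minimal Python/pandas sketch; the analyses themselves were conducted in Stata, so this is illustrative only, and the long-format column names ("participant_id", "domain", "item", "response") are assumptions.

```python
# Sketch of the composite-score calculation described above: responses are
# coded 0-3 and all item responses within a domain are pooled into a mean
# (and median) per domain, without imputing missing data.
import pandas as pd

RESPONSE_CODES = {
    "no expertise in this area": 0,
    "beginner": 1,
    "intermediate": 2,
    "advanced": 3,
}

def composite_scores(long_df: pd.DataFrame) -> pd.DataFrame:
    """Mean and median composite score per competency domain.

    `long_df` has one row per participant-item with columns
    ["participant_id", "domain", "item", "response"] (assumed layout).
    """
    # Map response labels to 0-3; unrecognized or missing responses become NaN
    # and are excluded rather than imputed, mirroring the approach above.
    coded = long_df.assign(score=long_df["response"].map(RESPONSE_CODES))
    return coded.groupby("domain")["score"].agg(["mean", "median", "count"])

# Example usage with hypothetical pre- and post-survey exports:
# pre_scores = composite_scores(pre_long_df)
# post_scores = composite_scores(post_long_df)
```

In this layout, participants with missing items simply contribute fewer rows to a domain's pool, so the per-domain counts may differ, consistent with the decision not to impute missing data.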

Results

Participant characteristics

Nearly all participants completed the pre-survey (95.41%) (Table 2). Notably, the cohort was racially diverse, with 33.65% of participants identifying as White, 15.38% as Black or African American, 14.42% as Asian, 1.92% as American Indian or Alaskan Native, and 1.92% as Native Hawaiian or Other Pacific Islander. The majority of individuals were affiliated with an academic institution based in the USA (87.76%), and faculty (Assistant Professor, Associate Professor, and Professor) comprised approximately half of the cohort (47.11%). Additional participant positions varied, including academic trainees (e.g., postdoctoral fellows and doctoral students), clinicians (e.g., pediatricians), academic-based staff (e.g., research staff), and community partners (e.g., state health department). Fourteen attendees received a scholarship for participation, four of whom were participants from low- and middle-income countries. Most participants classified their implementation science experience level as novice (24.04%) or beginner (57.69%), and two individuals reported expert-level experience. The content areas in which participants apply implementation science were distributed fairly equally, with behavioral and mental health the most common area of application (27.88%). Eighteen participants indicated that they had not yet applied implementation science to their research. Further, most participants had not yet submitted an implementation science-related grant (74.04%). Among those with implementation science experience, 21 individuals had submitted a related manuscript. Fifty-five participants completed the post-survey (overall response rate = 50.46%), and their demographic characteristics reflected those of the pre-survey respondents (Table 2) (e.g., the greatest proportion of post-survey respondents were affiliated with US-based academic institutions, specifically assistant professors). The post-survey did not collect information about experience with D&I.

Table 2 Respondent characteristics

Overall impact

Overall, participants indicated high value from the PennISI, as all course evaluation criteria received mean scores of at least 3.55 (median score 4.00; 4.00 as the highest possible score) (Additional File 5). The commitment of the course directors received the highest, nearly perfect rating (mean score of 3.98; median score of 4.00). The overall rating and quality of the course had a mean of 3.77 (median score of 4.00). Further, participants reported high educational value and amount of information learned, with a mean score of 3.68 (median score of 4.00). The item with the lowest mean score related to the appropriateness and challenge of the workload and material (mean score of 3.55; median score of 4.00).

Implementation science competencies

At baseline, mean scores for all of the implementation science competency domains were below beginner-level (i.e., composite score < 1) (Table 3). In the pre-survey, participants reported the highest level of knowledge related to definition, background, and rationale (mean score 0.82; median score 0.80), followed by practice-based competencies (mean score 0.69; median score 0.58), theory and approaches (mean score 0.66; median score 0.64), and design and analysis (mean score 0.54; median score 0.43). The individual competency with the greatest proportion of participants reporting “no expertise in this area” related to concepts of de-adoption and de-implementation study design (70.59%) (Additional File 6). Conversely, competencies with the highest baseline level of knowledge related to the impact of disseminating, implementing, and sustaining effective interventions as well as the importance of incorporating perspectives from different stakeholder groups (20.59% indicating no expertise for each of the two items). On the other end of the spectrum, four participants indicated advanced-level knowledge on various competency items (e.g., “Differentiate between D&I research and other related areas, such as efficacy research and effectiveness research”) (Additional File 6).

Table 3 Pre-post scores for implementation science competencies by theme

Mean scores for all implementation science competency domains increased in the post-survey to between beginner-level and intermediate-level expertise (i.e., composite scores between 1 and 2) (Table 3). Similar to the baseline data, participants reported the highest level of knowledge related to definition, background, and rationale (mean score 1.51; median score 1.50), followed by practice-based competencies (mean score 1.43; median score 1.33), theory and approaches (mean score 1.41; median score 1.43), and design and analysis (mean score 1.30; median score 1.21). All individual items had seven or fewer participants indicating no expertise in the specific competency (range = 0–7), with the majority of these items reflecting advanced-level competencies per the Padek taxonomy (Additional File 6). For the items related to identifying and measuring outcomes and to identifying the potential impact of disseminating, implementing, and sustaining effective interventions, all participants reported at least beginner-level expertise (Additional File 6). One participant reported an inability to define what is and what is not D&I research. Acknowledging the potential bias from the lower post-survey response rate, participants indicated advanced-level knowledge for 22 individual competency items (e.g., “Determine when engagement in participatory research is appropriate with D&I research”).

Qualitative themes

Table 4 provides an overview of and illustrative quotations for the themes that emerged from the participants’ post-survey open-ended responses.

Table 4 Post-survey qualitative themes

Overall impressions

Most participants expressed positive opinions of the PennISI overall. Respondents described the training program as “phenomenal” and “amazing.” Many of the responses attributed the value of the course to the PennISI instructors, specifically their enthusiasm, expert knowledge, and commitment to the program. Although the majority of participants reported high value from the PennISI, opinions varied depending on participants’ prior level of implementation science knowledge and experience, as described further in the following sections.

PennISI content

Participants commented on different aspects of the content included in the PennISI. With regard to the didactic topics, respondents appreciated the focus on health equity and the applied examples; some participants desired more opportunities for application. However, opinions on the content overall varied. Some participants found the material too introductory, while others perceived the material as too advanced for an introductory implementation science training program. Relatedly, a common theme emerged concerning the appropriateness of the material for the intended level of the audience. Participants who were new to the field of implementation science felt overwhelmed by the amount and level of material and sensed that the target audience was future grant applicants. As a result, some respondents reported limited engagement, reduced value gained from the institute, and difficulty keeping up with the material.

PennISI structure

Participants discussed various components of the PennISI structure. Overall, participants appreciated the virtual nature of the institute because the format increased accessibility and facilitated the provision of course materials online. However, some participants expressed that a virtual institute cannot fully replace the engagement of an in-person program. Regarding engagement, respondents praised the PennISI’s communication, specifically the amount and quality of communication from the instructors, the level of engagement in the Zoom chat, and the ability to ask questions throughout the institute (e.g., in lectures, office hours, and small group discussions). Further, although participants appreciated the inclusion of office hours, some reported scheduling conflicts that prevented attendance or feeling rushed during the sessions. Finally, opinions on the small group discussions varied. Overall, participants seemed to prefer the discussion that incorporated a case example (e.g., the activity to identify CFIR determinants as a group) and appreciated the opportunity to clarify course content. However, perceptions of the value gained from and the structure of the small group discussions differed. Some participants felt that the range of implementation science experience and knowledge in the group, as well as career stage, influenced engagement in these sessions (e.g., more knowledgeable participants dominated the conversation).

Suggestions for improvement

Participants suggested various ideas for improving the PennISI in the future, most of which related to the diverse implementation science experience levels of the cohort. First, participants recommended dividing the PennISI into two institutes: an introduction to implementation science and advanced topics in implementation science. Short of creating two separate institutes, some participants requested extending the PennISI over a longer period of time to enable greater processing of information (e.g., 2 weeks to allow synthesis and reflection). Second, participants offered suggestions to modify the small group discussions. Most commonly, respondents recommended splitting the groups based on level of experience or knowledge with implementation science. In addition, some participants proposed creating more structure in the sessions through the use of case studies or opportunities to apply the didactic topic. Third, recommendations addressed the didactic content. Some participants recommended providing a stronger foundation in implementation science at the beginning of the institute, for example through the provision of a reference terminology dictionary. Relatedly, some respondents articulated a desire for less-complex implementation science project examples to illustrate a feasible place to start for novice implementation researchers. In addition, some participants with a higher level of experience requested specific lecture topics (e.g., adaptation).

Discussion

There is a critical need to build capacity in implementation science to ensure the future growth of the field. This study described and assessed the sixth annual PennISI, an innovative 4-day virtual implementation science institute. Findings indicate high value gained and perspectives on successful programmatic components, which can serve as a model for other brief training programs. Further, the evaluation provides valuable insight for improving future implementation science training programs, such as modifying course content and structure to align with participants’ baseline implementation science knowledge (see Table 5 for an overview of recommendations for future training programs).

Table 5 Recommendations for brief implementation science training programs

Overall, participants reported that the PennISI provided a successful implementation science training program. The high scores on all post-evaluation competencies highlight substantial perceived knowledge gained from the institute and the potential to yield change in all thematic domains. Participants’ highest ratings for competencies related to definition, background, and rationale suggest that brief training programs may have a particular opportunity to impact introductory skills. Further, the positive perception of the PennISI from the majority of participants demonstrates the acceptability of the institute structure. Specifically, the virtual format increases accessibility for participants. Although virtual programs cannot replace the dynamic fostered by in-person programs, the format creates a more inclusive institute that helps extend reach to a diverse set of learners. Additional strengths of the PennISI included interactive elements, such as applied examples in lectures, small group discussions, office hours, and attention to questions in the chat. These preferences highlight the value of facilitating activities to foster engagement and create a positive learning environment. These findings are generalizable beyond the PennISI, and this institute can serve as a model for the development of other brief, virtual implementation science training programs, as well as for training in other rapidly emerging areas of interest (e.g., climate change and artificial intelligence research).

This evaluation contributes to efforts to systematically assess implementation science capacity building initiatives, which is critical for ensuring the rigor of implementation science education [4, 5, 10]. A strength of this study was the use of established competencies for evaluating implementation science training programs [15]. This approach enables comparison of training programs on standard measures, both within the PennISI and outside of the institution. Perhaps most importantly, fostering cross-institutional discussion related to building capacity will advance the field of implementation science. Implementation science exists to close research-to-practice gaps. Educational initiatives ought to apply the same logic: rather than developing new implementation science training programs de novo, they should learn from existing effective courses. Standardized evaluation provides insight into what does and does not work for programs. The field has begun engaging in collaborative efforts through, for example, the development of a repository for implementation science-related grants [17] and the sharing of translated frameworks. Training programs ought to consider making course materials open source as well; in addition to the materials provided in the Additional Files, PennISI materials are available upon reasonable request to the Course Directors. However, greater investment in capacity building is required to make this approach feasible and appealing to instructors often operating within the constraints of academia. Collaboration will shift the effort to help ensure that training programs achieve desired implementation science competencies.

Further, this study highlighted the types of learners seeking out brief implementation science training. The PennISI included a cohort of individuals with a diverse range of implementation science experience and knowledge, ranging from novice to expert at baseline. These findings indicate that programs are attracting a mixed group of learners, which can result in both benefits (e.g., peer-to-peer learning) and limitations. For example, the institute experienced some challenges in meeting the needs of all participants, as discussed in this study and consistent with other capacity building initiatives [11]. This issue warrants thoughtful consideration for tailoring programs. Further, this study did not evaluate change in overall implementation science experience level in the post-survey, as a 4-day institute would not be expected to yield meaningful change in overall expertise. Change in individual competency items is expected, but a change in the overall level of knowledge (i.e., from beginner-level implementation scientist to intermediate-level implementation scientist) often requires time and application of concepts beyond 20 h of learning. Therefore, brief training programs may cater most appropriately to the first two phenotypes of learners, who desire increased awareness and a basic understanding of the field for collaboration but require additional mentorship for independent implementation research [9]. In addition, the majority of participants were individuals in academia, specifically faculty. Consistent with other capacity building efforts [4, 5], this trend helps guide the inclusion of topics for training. For example, given the nature of the audience, programs might include a session on writing grants in implementation science, both as a collaborator and as a project lead. The dominance of academic researchers indicates that capacity building needs to improve recruitment of other individuals crucial to success in implementation science, such as practitioners, who are often the minority in training programs [5, 6]. Training programs have begun to address this need (e.g., The Center for Implementation certificate program). One strategy could include advertisement of the training institute with researchers’ community partners. In addition to practitioners, this study emphasized the need to recruit global health colleagues. The PennISI included a cohort of individuals from four countries (United States, Ghana, South Africa, and Tanzania), which increases the diversity of the implementation science community. However, given that a great deal of implementation science capacity building occurs in high-income countries, training programs should increase access for global health colleagues in low- and middle-income countries through continued financial support and sponsorship with global health centers, as was done for the PennISI. For sustainability, efforts should pivot to long-term partnerships with global institutions to facilitate locally led training programs [18, 19]. As programs continue to develop, understanding the characteristics of individuals seeking out training will help tailor programs and recruitment accordingly.

In addition to informing the tailoring of programs to participants, findings from this evaluation provide insights for improving the structure of implementation science capacity building efforts. The evaluation revealed the potential impact of a mismatch between a training program’s course materials and participants’ level of implementation science knowledge (i.e., too advanced or too basic, depending on the participant). To address the demand for implementation science training, programs have emerged that try to meet the needs of all learners. The PennISI was advertised as providing both foundational and advanced topics in implementation science. However, survey-style programs that incorporate a lecture on each of the key topics in the field can limit the ability to provide in-depth information on each topic, which could result in confusion for novices or repetition for advanced scholars. To address this, course directors should clearly communicate the intended audience (e.g., early-career researchers) and level of information covered (e.g., introduction to implementation science) through written goals and competencies so potential participants have a better understanding of the anticipated material. If feasible, instructors ought to consider sharing the course syllabus at the time of registration for increased transparency. In the absence of resource constraints, the creation of multiple implementation science institutes that cater to different audiences would better address the various needs of learners. For example, dividing programs into two institutes, one for introductory material and one for advanced topics, could help standardize the audience and foster better engagement, knowledge retention, and satisfaction among participants. This modification could help address the lower mean scores for complicated competencies (e.g., design and analysis) and participants’ desire for specific lecture topics. Given the limited bandwidth and resources to facilitate multiple institutes, one strategy to modify existing structures could involve separating participants into thematic groups for small group discussion (e.g., by level of implementation science expertise or career level). To facilitate this process, instructors should collect and analyze self-reported information related to participants’ experience before the start of the training program. Further, the creation of multiple institutes (e.g., one beginner-level and one advanced-level, or simply two beginner-level institutes) would increase access otherwise limited by enrollment constraints in a single institute. This evaluation suggests that application increases learning. Courses should incorporate more case examples that enable participants to apply concepts learned in didactic sessions (e.g., presenting the implementation science logic model at the beginning of the institute and revisiting the model with the introduction of each new concept; participants could complete the model concurrently for their own project ideas). Participants could engage in these activities at their level of expertise and interest. Finally, to effectively implement all of the aforementioned recommendations for improving future training programs, greater institutional support, such as dedicated administrative support (0.5 FTE or greater, depending on the volume of registrants), is necessary.

This evaluation has some limitations. First, although consistent with non-incentivized surveys, the 50% response rate for the post-survey may have introduced non-response bias; participants who did not complete the survey may not have perceived as much value gained from the PennISI, which could have positively skewed the change in competencies. Second, the PennISI was facilitated via a virtual platform. Therefore, the level of engagement with the course may have varied among participants (e.g., participants not attending sessions or splitting their attention), which could have influenced knowledge retention and the reported post-survey outcomes; determining participants’ full engagement at each session was not feasible. Third, the post-survey measures were obtained immediately after the conclusion of the PennISI, which limited the ability to assess change in competencies over time. Future efforts could consider a longitudinal design to assess sustained knowledge retention and application to research. Fourth, the free-response question format limited the depth of qualitative information obtained. Although the data provided helpful insights, future efforts could use in-depth approaches with a subset of participants to elicit more detailed input. Fifth, the D&I competencies included in the evaluation did not inform the development of the PennISI curriculum, which could explain the lack of change in some of the post-survey items. Future efforts could align the curriculum with targeted competencies and assess the significance of pre-post survey differences.

Conclusions

This study provides an example of a training institute effective at advancing implementation science competencies that can serve as a model for brief, virtual training programs. Findings also highlight insights for improving future programs, emphasizing the value of standardized evaluation of educational programs. Efforts should continue to refine and adapt capacity building to meet the needs of the growing, heterogeneous implementation science community.

Availability of data and materials

All data is provided in the Additional Files, and additional materials are available upon reasonable request.

Abbreviations

PISCE@LDI: Penn Implementation Science Center

PennISI: Penn Implementation Science Institute

References

  1. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

  2. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

  3. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

  4. Huebschmann AG, Johnston S, Davis R, Kwan BM, Geng E, Haire-Joshu D, et al. Promoting rigor and sustainment in implementation science capacity building programs: a multi-method study. Implement Res Pract. 2022;3:26334895221146261.

  5. Davis R, D’Lima D. Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives. Implement Sci. 2020;15(1):97.

  6. Chambers DA, Pintello D, Juliano-Bult D. Capacity-building and training opportunities for implementation science in mental health. Psychiatry Res. 2020;283:112511.

  7. Baumann AA, Carothers BJ, Landsverk J, Kryzer E, Aarons GA, Brownson RC, et al. Evaluation of the Implementation Research Institute: Trainees’ Publications and Grant Productivity. Adm Policy Ment Health. 2020;47(2):254–64.

  8. Proctor EK, Landsverk J, Baumann AA, Mittman BS, Aarons GA, Brownson RC, et al. The implementation research institute: training mental health implementation researchers in the United States. Implement Sci. 2013;8:105.

  9. Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17(1):55.

  10. Straus SE, Sales A, Wensing M, Michie S, Kent B, Foy R. Education and training for implementation science: our interest in manuscripts describing education and training materials. Implement Sci. 2015;10:136.

  11. Carlfjord S, Roback K, Nilsen P. Five years’ experience of an annual course on implementation science: an evaluation among course participants. Implement Sci. 2017;12(1):101.

  12. Kirchner JE, Dollar KM, Smith JL, Pitcock JA, Curtis ND, Morris KK, Fletcher TL, Topor DR. Development and preliminary evaluation of an implementation facilitation program. Implement Res Pract. 2022;3.

  13. Sharma A, Minh Duc NT, Luu Lam Thang T, Nam NH, Ng SJ, Abbas KS, et al. A Consensus-Based Checklist for Reporting of Survey Studies (CROSS). J Gen Intern Med. 2021;36(10):3179–87.

  14. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  15. Padek M, Colditz G, Dobbins M, Koscielniak N, Proctor EK, Sales AE, et al. Developing educational competencies for dissemination and implementation research training programs: an exploratory analysis using card sorts. Implement Sci. 2015;10:114.

  16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)–a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81.

  17. National Cancer Institute. Sample grant applications 2021. Available from: https://cancercontrol.cancer.gov/is/funding/sample-grant-applications.

  18. Bartels SM, Haider S, Williams CR, Mazumder Y, Ibisomi L, Alonge O, et al. Diversifying implementation science: a global perspective. Glob Health Sci Pract. 2022;10(4):e2100757.

  19. Yapa HM, Barnighausen T. Implementation science in resource-poor countries and communities. Implement Sci. 2018;13(1):154.

Acknowledgements

We thank all individuals who supported the success of the PennISI, including additional course facilitators not part of the authorship team (Emily Becker-Haimes, PhD; Danielle Cullen, MD, MPH, MSHP; Rebecca Hamm, MD, MSCE; Katelin Hoskins, PhD, MSN, MBE; Sarita Sonalkar, MD, MPH; and Rebecca Stewart, PhD), administrative support (Kathleen Cooper and Izzy Kaminer, MS), and visiting faculty (Srinath Adusumalli, MD, MSHP; Alison Buttenheim, PhD, MBA; Krisda Chaiyachati, MD, MPH, MSHP; Kate Courtright, MD, MSHP; David Mandell, ScD; Yehoda Martei, MD, MSCE; Nathalie Moise, MD, MS; Jennifer Myers, MD; Michael Posencheg, MD; Byron Powell, PhD, LCSW; Jonathan Purtle, DrPH, MsC; Rachel Shelton, ScD, MHP; and Rebecca Trotta, PhD, RN).

Funding

This work was supported by the Penn Implementation Science Center at the Leonard Davis Institute of Health Economics (PISCE@LDI), the Penn CFAR iSPHERE Scientific Working Group, an NIH-funded (P30 045088) program, and the National Cancer Institute (P50 CA244690).

Author information

Authors and Affiliations

Authors

Contributions

A.E.V.P., R.S.B., and M.B.L.F. contributed to the concept of the evaluation. A.E.V.P. led the data collection and analysis. A.E.V.P. drafted the first version of the manuscript. A.E.V.P., R.S.B., M.B.L.F., C.P.B., K.A.R., C.W., J.A.S., and A.B. participated in the interpretation of the findings. All authors provided critical revision of content and have read and approved the final manuscript.

Corresponding author

Correspondence to Amelia E. Van Pelt.

Ethics declarations

Ethics approval and consent to participate

This evaluation was deemed exempt by the University of Pennsylvania Institutional Review Board.

Consent for publication

Not applicable.

Competing interests

Dr. Beidas is principal at Implementation Science & Practice, LLC. She receives royalties from Oxford University Press, consulting fees from United Behavioral Health and OptumLabs, and serves on the advisory boards for Optum Behavioral Health, AIM Youth Mental Health Foundation, and the Klingenstein Third Generation Foundation outside of the submitted work. She is a member of the Editorial Board for the journal. Dr. Lane-Fall is Vice President of the Anesthesia Patient Safety Foundation and sits on the Board of Directors of the Foundation for Anesthesia Education and Research.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Checklist for Reporting of Survey Studies (CROSS).

Additional file 2. CFIR Small Group Activity.

Additional file 3. Greatest Hits Articles Provided to Participants.

Additional file 4. Pre- and Post-survey Instruments.

Additional file 5. Penn Course Evaluation.

Additional file 6. Individual Implementation Science Competencies by Theme.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Van Pelt, A.E., Bonafide, C.P., Rendle, K.A. et al. Evaluation of a brief virtual implementation science training program: the Penn Implementation Science Institute. Implement Sci Commun 4, 131 (2023). https://doi.org/10.1186/s43058-023-00512-5
