
Implementing traumatic brain injury screening in behavioral healthcare: protocol for a prospective mixed methods study

Abstract

Background

Characteristics of both individuals and innovations are foundational determinants of the adoption of evidence-based practices (EBPs). However, our understanding of what drives EBP adoption is limited, as few studies have examined relationships among implementation determinants and implementation outcomes through theory-driven hypothesis testing. Therefore, drawing on the Theory of Planned Behavior and Diffusion of Innovations Theory, this study will disentangle the effects of provider characteristics and innovation factors on the early adoption of the Ohio State University Traumatic Brain Injury Identification Method (OSU TBI-ID) in behavioral health settings.

Methods

This study will utilize an explanatory sequential mixed methods design. In Phase I (quantitative), Time 1, we will investigate behavioral health providers' (N = 200) attitudes, perceived behavioral control, subjective norms, and intentions to screen for TBI upon completion of a video module introducing the OSU TBI-ID. At Time 2, we will examine the number of TBI screens conducted over the previous month, as well as the feasibility, appropriateness, and acceptability of using the OSU TBI-ID in practice. Structural equation modeling will be used to determine whether provider characteristics predict TBI screening intentions, and whether intentions mediate actual TBI screening behaviors. We will then test whether feasibility, appropriateness, and acceptability of the OSU TBI-ID moderate the relationship between intentions and TBI screening behaviors. In Phase II (qualitative), we will develop an interview guide using results from Phase I and will conduct semi-structured interviews with providers (N = 20) to assess contextual determinants of TBI screening adoption. Qualitative data will be thematically analyzed using sensitizing concepts from the Consolidated Framework for Implementation Research and integrated with the quantitative results using a joint display.

Discussion

This mixed methods study capitalizes on two theory-driven hypotheses bridging proximal (e.g., screening intent) to distal (actual behaviors) implementation outcomes and will contextualize these results qualitatively to advance our understanding about why TBI screening adoption has failed to translate to the behavioral healthcare context. Results of this study will offer insights into what is driving TBI screening adoption so that implementation strategies can be selected with greater precision to improve the adoption, sustainment, and scale-up of TBI screening in behavioral healthcare.

Background

Multiple challenges impede the implementation of evidence-based practice (EBP) innovations in behavioral healthcare settings. Implementation research points to the characteristics and behaviors of clinicians acting within the treatment context as primary influences on the adoption of EBPs, particularly during early phases of implementation [1, 2]. Provider characteristics, including attitudes toward an innovation or perceived control over implementing the innovation, are well-known determinants (e.g., barriers and facilitators) affecting EBP adoption [2]. Other known determinants affecting implementation adoption are factors related to the innovation itself, which may include innovation complexity, appropriateness of the innovation to the specific context or treatment practices, feasibility of implementing the innovation within that context, and perceived acceptability of the innovation [1, 3]. Yet, these determinants likely do not operate in isolation during early phases of implementation; rather, they interact with one another to shape EBP adoption within the service context [1, 4]. For instance, although providers may consider the intervention appropriate (i.e., relevant to their client population), and they may report strong intentions to implement the innovation, their actual behaviors may not reflect any practice changes because the intervention was not feasible to conduct in practice. Subsequently, adoption of the innovation may be low, and therefore the desired practice remains unchanged. However, interactions between provider characteristics, innovation factors, and implementation outcomes are not well studied, leaving unclear how these interactions operate and under what contexts they materialize [5].

Although determinants are foundational to the success or failure of EBP adoption, our understanding of the key drivers influencing EBP adoption and service integration in the behavioral health environment can be improved through better precision and specification of relationships among constructs via theory-driven hypothesis testing [3, 6,7,8,9]. Numerous implementation frameworks identify and define multi-level constructs important to innovation adoption [1, 10, 11], but relationships between these constructs are rarely specified [7], leaving researchers and clinicians to wade through muddled processes in their attempt to understand why innovation adoption in behavioral health contexts has failed or succeeded. Differentiating implementation determinants from the mediators and moderators that act as drivers of EBP adoption, using theory-driven hypotheses, can help us target where along the implementation cascade EBP adoption occurs [3, 6, 7]. Unspecified or misspecified relationships among determinants, mediators, and moderators not only restrict our understanding of the implementation process, but also inhibit our ability to select appropriate implementation strategies that target these modifiers [6]. Using theoretically driven hypotheses can offer insight into what constructs to test, how to specify them, and where they should be placed along the implementation cascade (proximally or distally) [3, 9] so that implementation strategies can be appropriately selected [5, 12].

Further compounding this problem is the lack of implementation studies addressing the integration of services for clients with complex physical and mental health comorbidities into behavioral health contexts, leaving gaps in our understanding about why EBP innovations developed for these clients have failed to penetrate this service landscape. Although studies in traumatic brain injury (TBI) screening and assessment have demonstrated improved symptom delineation between mental health and neuropsychological symptoms leading to mental health referrals [13], and that validated TBI screening methods can improve clinical care decisions [14], the processes that increase the adoption of these services in behavioral health settings are unknown. Therefore, integrated care pathways that involve complex interventions with multiple components (e.g., screening, intervention adaptation, referral) [15] for these individuals are inconsistent or absent, and we are left to question why these clients do not experience more successful outcomes.

Research-to-practice gap: implementing TBI screening in behavioral healthcare settings as a first step toward optimizing care

An estimated 50% of clients seeking treatment for substance use or other mental health comorbidities have a lifetime history of TBI [16]. TBI is a complex health condition and a leading cause of disability among adults in the United States [17]. A TBI is a type of acquired brain injury that occurs when an object forcefully hits the head, the head hits an object, or an object pierces the skull and enters brain tissue [18]. A TBI may also result from whiplash effects or blast-induced head trauma. Persistent neurological changes to brain structure and function [19] elevate the acute injury to a chronic, dynamic process affecting physical, psychological, and social domains over the lifespan [20]. Studies demonstrate that TBI can lead to the onset of new or worsened risky substance use and/or mental health conditions, such as anxiety and depression, as well as higher likelihood of mental health service utilization or psychiatric hospitalization in later life [21,22,23]. TBI of any severity (e.g., concussions/mild, moderate, or severe) can also lead to cognitive dysfunction (e.g., learning and memory deficits, poor comprehension) [24, 25], maladaptive social behaviors (e.g., poor social awareness), behavioral dysregulation (e.g., aggression, emotional outbursts) [26], and poor coping skills [18, 27,28,29]. These changes can create challenges to clients’ ability to fully engage in and benefit from behavioral health treatment. Behavioral health treatment approaches should be adapted to accommodate client need (e.g., shortened treatment session or frequent reminders about appointments), but this first requires implementation of systematic screening methods to identify which clients need adapted behavioral health treatment [19, 20].

TBI is often underrecognized by providers in behavioral health treatment settings due to lack of provider or client awareness of TBI, as well as lack of provider skills and self-efficacy to screen for lifetime history of TBI using established screening methods [14, 30]. Lack of TBI identification in adults with risky substance use or mental health comorbidities may result in misattribution of the symptoms of TBI, leading to mislabeling the client as ‘non-compliant’ or poorly motivated [31], and could affect treatment or intervention decisions that do not account for TBI-related sequelae.

The evidence-based practice innovation

The Ohio State University TBI Identification method (OSU TBI-ID) is a comprehensive, evidence-based TBI screening method that behavioral health providers can use to screen for lifetime history of TBI in 3–5 min and was first validated among a cohort of clients seeking substance use disorder treatment in behavioral health settings [22, 23]. The OSU TBI-ID uses optimal recall methods to prompt a client’s recall of injuries to the head and neck, then determines whether each injury was a TBI. Multiple indices of the extent of one’s lifetime history of TBI are derived [32] including age at first injury, worst injury (based on length of loss of consciousness), most recent injury (moderate or severe injuries in recent months or any TBI in recent weeks), multiple injuries, and TBI from repeated impacts like blasts experienced in combat or blows to the head incurred from domestic violence [33, 34]. Ascertaining lifetime exposure to TBI through structured elicitation of self-report is superior to relying only on medical record data, single questions about TBI (i.e., “Have you ever sustained a TBI?”), or assessment procedures like neuroimaging or neuropsychological assessment that are specific but not sensitive to a person’s exposure to TBI in their entire life. Self-report is particularly advantageous for lifetime TBI identification among vulnerable populations who may not have sought treatment for their injuries. Despite the current use of the OSU TBI-ID in other health, community, and rehabilitation settings [9, 25, 26], this TBI screening method has not been widely adopted in behavioral health treatment.

This protocol describes a prospective, mixed-methods study that investigates early adoption of the OSU TBI-ID in behavioral healthcare settings. We seek to understand providers’ attitudes and beliefs about adopting the OSU TBI-ID into routine service delivery, as well as the acceptability, feasibility, and appropriateness of implementing the OSU TBI-ID into the behavioral health service context.

Theoretical foundations

This study is driven by the Theory of Planned Behavior (TPB) [35] and Roger’s Diffusion of Innovations Theory (DOI) [36, 37] due to the salience of provider characteristics and factors related to the innovation in predicting the early adoption of new innovations in behavioral healthcare settings [1, 38]. The TPB specifies provider-level characteristics (i.e., attitudes, perceived behavioral control, and subjective norms) that are known to influence providers’ intentions to adopt innovations in behavioral healthcare settings [35]. Specifically, provider attitudes toward TBI screening, perceived control over screening, and the perceived social pressures to screen for TBI may influence providers’ intentions to adopt the OSU TBI-ID into service delivery. Previous studies have used the TPB to examine providers’ intentions to adopt TBI-specific interventions in other health settings [39]. However, studies have yet to examine how these provider-level characteristics influence TBI screening intention and actual TBI screening behaviors in behavioral healthcare settings where many individuals with comorbid TBI, risky substance use, and mental health conditions seek treatment.

Roger’s Diffusion of Innovations Theory suggests that innovation-level factors (i.e., acceptability, feasibility, and appropriateness) are also critical components to the adoption of innovations in behavioral healthcare [1, 37]. Though acceptability, feasibility, and appropriateness of the innovation are often examined as implementation outcomes [3], these factors could potentially be the moderators that bridge providers’ intent to screen for TBI to actual TBI screening behaviors. Whereas the TPB specifies provider characteristics as determinants of intentions and, ultimately, TBI screening behaviors, DOI suggests that the relationship between intentions (proximal indicator) and the level of TBI screening adoption (distal outcome) may be moderated by factors related to the innovation. Specifically, the extent to which the OSU TBI-ID is perceived as acceptable, feasible, and relevant to the service landscape (i.e., client need, within the scope of practice) could influence whether TBI screening is used and to what extent. For instance, although positive attitudes toward TBI screening may predict stronger intentions to conduct TBI screening and ultimately TBI screening behaviors, providers may find that the innovation does not fit within the current service context, which therefore reduces the extent to which TBI screening is used. Similarly, while providers may report high levels of control over their ability to implement the innovation and therefore stronger intention to perform the innovation, actual implementation of the innovation may be thwarted by feasibility concerns once providers have had the chance to trial the innovation, thereby affecting overall adoption of that innovation in practice.
Testing the relationships between intention (proximal indicator), moderators (acceptability, feasibility, and appropriateness), and TBI screening behaviors (distal outcome) could improve our knowledge and understanding about why TBI screening has failed to translate to the behavioral health service context. Subsequently, implementation strategies that directly target these moderators can be developed, tested, and used in other behavioral healthcare settings where EBP adoption and integration of services for complex health conditions is lacking.

Service context

Assessing the contextual determinants of TBI screening is also vital to identifying implementation strategies aimed at increasing the uptake and sustainment of TBI screening among behavioral health providers [40]. Examining the implementation context and environment in which a new innovation is used is necessary for connecting hypotheses and theorized mechanisms to behaviors [40], and adds depth to our understanding of how and why TBI screening is or is not adopted in behavioral health settings [40, 41]. Based on the context, implementation strategies can then be identified that harness a social environment supportive of the innovation.

Study aims

This study will investigate the adoption of TBI screening in behavioral health settings through three specific aims. Figure 1 provides the conceptual model specifying each aim and relationships among constructs.

Fig. 1 Conceptual model

Aim 1: Examine the relationships between behavioral health providers’ attitudes, perceived behavioral control, and subjective norms as predictors to TBI screening intentions and examine whether intentions to adopt TBI screening mediate actual TBI screening behaviors at a one-month follow-up.

Hypothesis 1: Providers who have more favorable attitudes, greater perceived behavioral control, and greater perceived social pressure within the organization to screen for TBI will demonstrate greater TBI screening intentions and will report higher TBI screening behaviors at the one-month follow-up.

Aim 2: Investigate whether the acceptability, feasibility, and appropriateness of TBI screening using the OSU TBI-ID moderates the relationship between TBI screening intentions and actual TBI screening behaviors.

Hypothesis 2: Greater perceived acceptability, feasibility, and appropriateness of TBI screening using the OSU TBI-ID will strengthen the relationship between TBI screening intent and actual TBI screening behaviors.

Aim 3: Assess the contextual determinants to TBI screening adoption. We will investigate determinants to TBI screening adoption through qualitative, semi-structured interviews with a subset of behavioral health providers who completed the quantitative surveys.

Methods/design

Design overview

We use the Journal Article Reporting Standards for Mixed Methods Research (JARS-MMR) for reporting on all components of this mixed methods study throughout this article (see Additional File 1: Appendix) [42]. This study will utilize an explanatory sequential mixed methods design to prospectively investigate the provider-level characteristics, innovation-level factors, and contextual determinants to early TBI screening adoption in behavioral healthcare settings. The explanatory sequential mixed methods design consists of two distinct, consecutive phases, where emphasis is placed on the quantitative phase and the qualitative phase is used to contextualize the quantitative results (QUANT ➔ qual) [43, 44]. The mixed methods approach in this study will include comprehensive data collection and analytical techniques to combine theory-driven hypothesis-testing with in-depth assessments that explain the implementation of TBI screening in behavioral healthcare. Utilizing comprehensive data collection methods could help us understand the complex interplay of this environment with the innovation, which is particularly important to our understanding about service integration for individuals with physical and mental comorbidities.

Phase I (QUANT) will focus on theory-driven hypotheses to test the relationships among provider-level characteristics and innovation-level factors hypothesized to affect TBI screening adoption in behavioral healthcare settings. We will prospectively investigate behavioral health providers’ attitudes, perceived behavioral control, subjective norms, and intentions to adopt this TBI screening method into service delivery (Aim 1). One month after completing the Time 1 survey, providers will receive a second survey assessing the proportion of TBI screens conducted over the past month, as well as the perceived acceptability, feasibility, and appropriateness of using the OSU TBI-ID in practice (Aim 2). This two-time-point approach allows us to test theory-driven hypotheses that bridge proximal (e.g., screening intent) to distal (actual behaviors) outcomes with greater precision. The expected outcome of Phase I is identification of the mediators and moderators that shape the conditions under which TBI screening adoption occurs.

Phase II (qual) will build upon the quantitative results through qualitative interviews with behavioral health providers to assess any additional determinants within the behavioral healthcare context affecting TBI screening adoption [40, 45]. We will develop a qualitative interview guide using the quantitative results from Phase I. Recognizing that determinants beyond provider characteristics and innovation-related factors influence EBP adoption, and that behavioral health organizations inherently differ by type and resources (e.g., community-based, hospital-based, private practice), we will purposively recruit and interview a subset of providers from Phase I to assess the contextual determinants of the adoption of the OSU TBI-ID into service delivery (Aim 3). The qualitative results will expand on or dispute the results from our hypotheses by contextualizing TBI screening adoption within this behavioral health service landscape.

Participants and setting

Participants will be behavioral health providers (N = 200) employed in behavioral health settings throughout the United States (e.g., private practices, community-based mental health clinics). Providers will be identified through national organizations and directories of behavioral health providers. To be eligible for this study, participants must be 18 years and older, English speaking, and currently employed as a licensed behavioral health provider in the United States (e.g., Licensed Psychologists, Clinical Social Workers, Professional Clinical Counselors, Professional Counselors, Marriage and Family Therapists, and/or Chemical Dependency Counselors).

PHASE I (QUANT)

Aim 1

Examine the relationships between behavioral health providers’ attitudes, perceived behavioral control, and subjective norms as predictors to TBI screening intentions and examine whether intentions to adopt TBI screening mediate actual TBI screening behaviors at a one-month follow-up.

Aim 2

Investigate whether the acceptability, feasibility, and appropriateness of TBI screening using the OSU TBI-ID moderates the relationship between TBI screening intentions and actual TBI screening behaviors.

Procedures

Recruitment and data collection

Phase I data collection will consist of two consecutive time points using a prospective cohort study design. At Time 1, providers nationwide will be emailed a detailed description of the study, the study inclusion criteria, and the Qualtrics link containing the OSU TBI-ID video module and Time 1 survey measures (Aim 1). Consent to participate will be on the first page of the Qualtrics survey, where proceeding to the survey questions signifies informed consent. Providers will first be asked to watch the 30-min OSU TBI-ID video module, which raises awareness about why TBI screening is relevant to behavioral health treatment, introduces the OSU TBI-ID screening form, and demonstrates step-by-step procedures for administering the OSU TBI-ID screening method using case exemplars. Providers will then be asked about their attitudes toward using this screening method, perceived social pressures to use the screening method, perceived control over using the screening method, and their intentions to use this screening method over the next month. The total time to complete the Time 1 survey is approximately 15 min. Providers will receive a Certificate of Completion to submit for 1 free continuing education unit and will be entered into a raffle for the chance to win a $50 gift card from a list of university-approved vendors. A total of 60 winners will be selected using a random number generator in Excel.

At Time 2, one month later, providers will be sent a second survey that assesses their perceptions of the acceptability, feasibility, and appropriateness of using the OSU TBI-ID screening method in practice after they have had the chance to trial the intervention. The survey will also ask them to report the number of times they used this screening method during the month since watching the training video (Aim 2). To increase the response rate for Time 2, the Dillman method will be applied [46]. Providers will be asked to include their email address at the end of both surveys as well as a unique digital identifier (i.e., the last two digits of their phone numbers and their two-digit birth month) to link the Time 1 and Time 2 surveys and to eliminate potential duplicates. Total time to complete the Time 2 survey is approximately 15 min. Participants who complete the Time 2 survey measures will be entered into another raffle for the chance to win a $25 gift card from a list of university-approved vendors. A total of 20 winners will be selected using a random number generator in Excel.
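The linkage step described above can be sketched in a few lines. This is an illustrative sketch only: the function and field names are hypothetical, and it assumes the identifier is formed by concatenating the last two phone digits with the zero-padded birth month.

```python
def link_key(email, phone, birth_month):
    """Build a record-linkage key as described above: normalized email plus a
    digit code from the last two digits of the phone number and the two-digit
    birth month. Matching keys link a provider's Time 1 and Time 2 surveys
    and help flag potential duplicate responses."""
    digits = "".join(ch for ch in phone if ch.isdigit())
    return (email.strip().lower(), digits[-2:] + f"{birth_month:02d}")

# The same provider produces the same key at both time points,
# regardless of phone formatting or email capitalization.
key = link_key("Provider@clinic.org", "(614) 555-0187", 3)
```

In practice the survey platform's response IDs would supplement this key, since self-entered codes are vulnerable to typos.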

Key constructs and measures

The following measures will be used to investigate the constructs proposed by the TPB (i.e., attitudes about screening for TBI, perceived behavioral control over TBI screening, subjective norms, intentions to screen for TBI, and TBI screening behaviors) (Aim 1) and the constructs proposed by Diffusion of Innovations Theory (i.e., acceptability, feasibility, and appropriateness of TBI screening using the OSU TBI-ID) (Aim 2). See Table 1.

Table 1 Measures, Key Constructs, and Definitions of Constructs

Theory of planned behavior constructs

The 28-item TPB Questionnaire for TBI (TPBQ-TBI) will be used to measure provider attitudes, subjective norms, perceived behavioral control, and intentions to adopt the OSU TBI-ID. The TPBQ-TBI was adapted from a previously established TPBQ measure by tailoring the referent in each item (e.g., directing participants to refer to the OSU TBI-ID) [39]. A total of 24 items were retained from the original measure and 4 items were added and adapted from another TPBQ measure to capture items relevant to the present study but that were excluded from the original measure [47].

Attitudes

Thirteen items will be used to assess provider attitudes toward conducting TBI screening using the OSU TBI-ID. Three items assess attitudes regarding compatibility of the intervention; three items assess perceived ease of use of the intervention; four items assess perceived usefulness of the intervention; and three items assess overall attitudes toward the intervention. Each item is measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) and summed for a total score. Higher scores on the attitudes subscale reflect more positive attitudes toward screening for TBI using the OSU TBI-ID. The original TPBQ measure demonstrated high internal consistency reliability for attitudes (α = 0.94) [39].

Perceived behavioral control

Five items will be used to measure perceived behavioral control over using the OSU TBI-ID. Items are measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) and summed for a total score. Higher scores on this subscale equate to greater perceived control over TBI screening behaviors and self-efficacy. The original TPBQ measure demonstrated acceptable internal consistency reliability for perceived behavioral control (α = 0.77) [39].

Subjective norms

Five items will measure perceived social pressure to screen for TBI. Each item is measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) and summed for a total score. Higher scores on this subscale equate to more positive norms associated with TBI screening. The original TPBQ measure demonstrated good internal consistency reliability for subjective norms (α = 0.87) [39].

Intentions

Three items will be used to measure intentions to screen for TBI using the OSU TBI-ID over the following month. Items are measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) and summed for a total score. Higher scores equate to greater intentions to screen for TBI using the OSU TBI-ID. The original TPBQ measure demonstrated high internal consistency reliability for intentions (α = 0.92) [39].
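All four TPBQ-TBI subscales follow the same summation rule. A minimal scoring sketch is below; the subscale lengths are taken from the descriptions above (13 + 5 + 5 + 3 + 2 demographic/filler items would not sum to 28, so only the four scored subscales are shown), and the function name is hypothetical.

```python
# Scored items per TPBQ-TBI subscale, per the descriptions above.
TPBQ_SUBSCALES = {
    "attitudes": 13,
    "perceived_behavioral_control": 5,
    "subjective_norms": 5,
    "intentions": 3,
}

def score_tpbq_subscale(name, responses):
    """Sum 7-point Likert responses (1 = strongly disagree ... 7 = strongly
    agree) for one subscale; higher totals indicate more of the construct."""
    expected = TPBQ_SUBSCALES[name]
    if len(responses) != expected or any(not 1 <= r <= 7 for r in responses):
        raise ValueError(f"{name} expects {expected} responses scored 1-7")
    return sum(responses)
```

For example, a provider answering 7, 6, 7 on the intentions items would score 20 of a possible 21.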

TBI screening behaviors

TBI screening behaviors will be measured as a continuous, individual-level variable using four items that capture the proportion of TBI screens conducted over a 1-month period relative to the number of clients who sought treatment during that period. TBI screening behaviors will be determined by the following questions: “Overall, how many new clients sought services from you over the last month?” “How many times did you screen for TBI using the OSU TBI-ID with new clients over the last month?” “Overall, how many established clients did you see over the last month?” “How many times did you screen for TBI using the OSU TBI-ID with established clients over the last month?” The total number of positive TBI screens will also be assessed by provider self-report of the number of individuals who screened positive for having a lifetime history of TBI.
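The adoption outcome implied by these four items can be computed as a single screening rate. The sketch below is illustrative (the function name and the defensive cap are our assumptions, not the study's scoring rules):

```python
def screening_proportion(new_clients, new_screens,
                         established_clients, established_screens):
    """Proportion of clients screened with the OSU TBI-ID over the month,
    pooling new and established clients. Returns None when a provider
    reported no clients (the rate is undefined)."""
    total_clients = new_clients + established_clients
    if total_clients == 0:
        return None
    total_screens = new_screens + established_screens
    # Self-reported screens cannot exceed clients seen; cap defensively.
    return min(total_screens, total_clients) / total_clients

# Example: 12 new + 30 established clients, 9 + 6 screens -> 15/42 screened.
rate = screening_proportion(12, 9, 30, 6)
```

Keeping new and established clients as separate numerators would also allow the rate to be decomposed by client type if needed.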

Acceptability

The Acceptability of the Intervention Measure (AIM) will be used to measure acceptability of using the OSU TBI-ID in behavioral healthcare settings [48]. Each item will be adapted to replace “intervention” with the intervention of interest for this study (i.e., the OSU TBI-ID). This scale has four items, each measured on a 5-point Likert scale ranging from 1 (completely disagree) to 5 (completely agree). Items are summed for a total score, where higher scores indicate greater acceptability. The AIM has demonstrated high internal consistency (α = 0.85) and test-retest reliability (r = 0.80).

Feasibility

The Feasibility of the Intervention Measure (FIM) will be used to measure feasibility of using the OSU TBI-ID in behavioral healthcare settings [48]. Each item will be adapted to replace “intervention” with the intervention of interest for this study (i.e., the OSU TBI-ID). This scale has four items, each measured on a 5-point Likert scale ranging from 1 (completely disagree) to 5 (completely agree). Items are summed for a total score, where higher scores indicate greater feasibility. The FIM has demonstrated high internal consistency (α = 0.89) and test-retest reliability (r = 0.88).

Appropriateness

The Intervention Appropriateness Measure (IAM) will be used to measure appropriateness of TBI screening using the OSU TBI-ID [48], and each item will be adapted to replace “intervention” with the OSU TBI-ID for this study. This scale has four items, each measured on a 5-point Likert scale ranging from 1 (completely disagree) to 5 (completely agree). Items are summed for a total score, where higher scores indicate greater appropriateness. The IAM has demonstrated high internal consistency (α = 0.91) and test-retest reliability (r = 0.73).
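The AIM, FIM, and IAM share the same structure (four items, 1 to 5), so a single scorer covers all three. A minimal sketch, with a hypothetical function name:

```python
def score_implementation_measure(items):
    """Total score for a 4-item AIM/FIM/IAM scale (1 = completely disagree
    ... 5 = completely agree). Higher totals indicate greater acceptability,
    feasibility, or appropriateness, respectively; range is 4-20."""
    if len(items) != 4 or any(not 1 <= i <= 5 for i in items):
        raise ValueError("expected four responses scored 1-5")
    return sum(items)
```

Because the three measures are scored identically, only the item wording (and the construct the total represents) differs between them.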

Analyses

Descriptive statistics will be used to describe provider and behavioral health organization characteristics in SPSS v27 [49]. For Aim 1, structural equation modeling (SEM) will be conducted in Mplus 8.5 [50]. ‘Attitudes,’ ‘perceived behavioral control,’ and ‘subjective norms’ will be exogenous variables hypothesized to have direct effects on the endogenous variable, ‘intention,’ and an indirect effect through intention on the endogenous variable, ‘TBI screening behavior.’ SEM permits the investigation of the direct and indirect effects of the constructs from the TPB on TBI screening behaviors, removes measurement error from the main constructs, and allows for proper handling of the ordinal nature of the data. Because the TPBQ-TBI items are measured using ordinal response options, the robust Weighted Least Squares Mean and Variance (WLSMV) estimator will be used [51]. Because TBI screening is conducted by individuals and does not necessarily depend on organizational policies or team-level procedures, the data will not be clustered within organizations, treatment teams, or any other entity.

For Aim 2, hierarchical multiple regression will be conducted using SPSS v27 [49]. The three exogenous variables (attitudes, norms, control) will be entered in Block 1, intention in Block 2, and the interaction terms (intention x acceptability, intention x feasibility, and intention x appropriateness) in Block 3 to test whether acceptability, feasibility, and appropriateness moderate the relationship between intention and TBI screening behaviors. Interaction terms will be added to Block 3 one at a time and will be retained in the model only if they are statistically significant. Significant interaction effects will be graphed to facilitate interpretation.

Power calculation

Using the MacCallum et al. (1996) power and Root Mean Square Error of Approximation (RMSEA) specifications for determining sample sizes in SEM and the Preacher and Coffman (2006) sample-size computation in Rweb, a total of N = 53 participants is needed to sufficiently power the model with an alpha level of .05, df = 408, a power level of .80, and an alternative RMSEA of .06 [52, 53]. However, using standard conventions for sample sizes in SEM, the minimum analytic sample will be N = 200 [54].
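As a rough cross-check of the cited computation, the RMSEA-based power can be approximated with only standard-library tools, assuming a test of exact fit (null RMSEA = 0); the Preacher & Coffman tool uses the noncentral chi-square distribution directly, so the figures below are approximations:

```python
# Approximate reproduction of the cited power calculation: Wilson-Hilferty for
# the central chi-square critical value, and a normal approximation to the
# noncentral chi-square under the alternative. Assumes a null RMSEA of 0.
from math import sqrt
from statistics import NormalDist

def rmsea_power(n, df, alpha, rmsea_alt):
    z = NormalDist().inv_cdf(1 - alpha)
    # Wilson-Hilferty approximation to the central chi-square quantile
    crit = df * (1 - 2 / (9 * df) + z * sqrt(2 / (9 * df))) ** 3
    ncp = (n - 1) * df * rmsea_alt ** 2            # noncentrality under H1
    mean, sd = df + ncp, sqrt(2 * (df + 2 * ncp))  # moments of noncentral chi2
    return 1 - NormalDist(mean, sd).cdf(crit)      # P(reject | RMSEA = rmsea_alt)

power = rmsea_power(n=53, df=408, alpha=0.05, rmsea_alt=0.06)
print(round(power, 2))  # close to the .80 reported above
```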

Model fit

Model fit for the SEM will be evaluated using the following fit indices and cutoffs: a non-significant model χ² (p > .05), the Comparative Fit Index (CFI > .95), the Tucker-Lewis Index (TLI > .95), the Standardized Root Mean Square Residual (SRMR < .08), and the point estimate and 90% CI of the Root Mean Square Error of Approximation (RMSEA < .06) [55]. Depending on the normality of the data, maximum likelihood or robust maximum likelihood estimation will be used.
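A small helper makes such threshold checks operational (fit values are hypothetical; the SRMR criterion follows the conventional < .08 cutoff, and the actual evaluation will be read off the Mplus output):

```python
# Hypothetical fit values checked against conventional SEM cutoffs.
CUTOFFS = {
    "chi2_p": lambda v: v > .05,   # non-significant model chi-square
    "CFI":    lambda v: v > .95,
    "TLI":    lambda v: v > .95,
    "SRMR":   lambda v: v < .08,
    "RMSEA":  lambda v: v < .06,
}

def failing_indices(fit):
    """Return the names of fit criteria that the model fails."""
    return [name for name, ok in CUTOFFS.items() if name in fit and not ok(fit[name])]

print(failing_indices({"chi2_p": .20, "CFI": .97, "TLI": .96, "SRMR": .04, "RMSEA": .05}))  # []
```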

Missing data

Missing Values Analysis (MVA) will be conducted to determine the percentage and patterns of missing data [56]. Little’s test of Missing Completely at Random (MCAR) will be used as one method to examine missing-data patterns [57]; a non-significant result (p > .05) is consistent with MCAR. A Missing at Random (MAR) check will also be conducted in SPSS by creating dummy variables of missingness for each variable with missing data and evaluating group differences. If data are found to be MCAR or MAR, hypothesis testing will proceed in Mplus, which uses full information maximum likelihood (FIML) to handle missing data and is appropriate under MCAR or MAR [58].
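The dummy-variable MAR check can be sketched as follows (the data and variable names are invented; in practice SPSS would be used, with a t-test or chi-square on each missingness indicator):

```python
# Invented data illustrating the dummy-variable MAR check: code missingness on
# one variable, then compare another variable across missing vs. observed
# cases (Welch t statistic with a normal approximation for the p-value).
from math import sqrt
from statistics import NormalDist, mean, variance

intention = [4.1, 3.8, None, 4.5, None, 3.2, 4.8, 3.9, None, 4.0]
attitudes = [3.5, 3.9, 2.1, 4.2, 2.4, 3.1, 4.6, 3.7, 2.2, 3.8]

missing = [1 if v is None else 0 for v in intention]      # dummy-coded missingness
grp_miss = [a for a, m in zip(attitudes, missing) if m]
grp_obs  = [a for a, m in zip(attitudes, missing) if not m]

t = (mean(grp_obs) - mean(grp_miss)) / sqrt(
    variance(grp_obs) / len(grp_obs) + variance(grp_miss) / len(grp_miss))
p = 2 * (1 - NormalDist().cdf(abs(t)))  # a t reference distribution is more exact
# A large |t| suggests missingness depends on attitudes (MAR rather than MCAR)
print(sum(missing), round(t, 2))
```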

Phase II (qualitative)

Aim 3

Assess the contextual determinants of TBI screening adoption. We will investigate these determinants through qualitative, semi-structured interviews with a subset of behavioral health providers who participated in the quantitative survey.

Researchers’ position statement

Our previous experience using qualitative phenomenology [59] to assess behavioral health provider practices in treating clients with co-occurring TBI and substance use disorders will guide Phase II of this mixed methods study [30]. Our previous work applying deductive thematic analysis with constructs from the Consolidated Framework for Implementation Research (CFIR) will guide our analytical approach [60].

Procedures

Data collection and sample size

A total of N = 20 providers who completed both Time 1 and Time 2 survey measures will be purposively selected using non-random, maximum variation sampling, with providers selected based on demographic-level differences (e.g., state, practice type) and response convergence with the quantitative results [61]. This method is expected to include a variety of providers employed across behavioral healthcare settings so that variations in determinants to TBI screening can be assessed across contexts. A total of N = 20 providers is needed to reach saturation of the data using a phenomenological research approach [59]. Providers will be contacted directly by email using the emails they provided in the surveys in Phase I. To account for the national sample and subsequent location variations of each provider, all interviews will be conducted through Zoom videoconferencing software and audio-recorded with the participant’s permission. Interviews are anticipated to last between 30 and 45 min. Participants who complete the interview will receive a $30 gift card from a list of OSU-approved vendors.

Measures

A semi-structured interview guide will be developed based on results from Phase I [45]. A semi-structured interview protocol approach creates consistency between interviews with a standardized set of questions while allowing for probing and follow-up questioning [62]. The interview questions will be structured to assess contextual determinants to TBI screening adoption in behavioral healthcare settings guided by the quantitative results and constructs from CFIR.

Analysis

Interview data will be managed and analyzed using NVivo 12.0 [63]. Interviews will be transcribed verbatim upon completion using a professional transcription service. Deductive thematic analysis will be conducted using sensitizing concepts from CFIR [1, 64, 65]. Two coders will independently familiarize themselves with the data by reading each transcript, taking notes, and creating an initial set of codes based on CFIR’s five domains (i.e., characteristics of individuals, intervention characteristics, inner setting, outer setting, and process). Coders will meet to discuss the initial set of codes, re-review the transcript data, and refine codes into main themes. The review/revision process will continue until no new themes emerge. Themes will be developed to ensure internal homogeneity (i.e., codes within a theme share common features) and external heterogeneity (i.e., themes are distinct from one another). Supporting quotes will be used to represent the essence of each theme.

Rigor

Co-coding, a detailed audit trail, and peer debriefing will be used to ensure rigor and reproducibility of the results [66]. A reflexive journal will also be used to bracket the authors’ thoughts and opinions about the study process, including experiences in recruitment, data collection, and analyses [62].

Data integration

To fully embody a mixed methods design, several points of data integration will occur [67, 68]. First, data from Phase I will be connected to Phase II by using the quantitative results to develop the qualitative interview protocol [69]. Second, the results from each phase will be integrated through a joint display table presenting both quantitative and qualitative data at the end of the entire study [70]. A joint display is a visual method for presenting quantitative and qualitative results together and occurs at the reporting phase, after all data have been collected and fully analyzed. Following all data collection and analyses, the quantitative and qualitative data will also be integrated through weaving [67, 70], in which results from both strands are written and presented in the text together on a concept-by-concept basis. A procedural diagram outlining the phases, steps, products, and timeline for this study is presented in Fig. 2 [43].

Fig. 2. Procedural diagram for the explanatory sequential mixed methods design. Note: Red arrows denote points of integration.

Discussion

This project investigates provider-level characteristics, innovation-level factors, and contextual determinants of the early adoption of TBI screening in behavioral healthcare settings. Specifically, we will examine theory-driven relationships between characteristics of providers and the acceptability, feasibility, and appropriateness of the innovation by capitalizing on two hypotheses that bridge proximal (i.e., intention to screen) and distal outcomes (i.e., actual screening behavior). Testing these theory-driven hypotheses is necessary to understand the mediators, moderators, and causal pathways that lead to innovation adoption [8, 71]. This study moves the field of implementation science forward by testing these hypotheses and identifying mediators and moderators that potentially influence the diffusion of a TBI screening innovation in behavioral health. In addition, by examining relationships among feasibility, acceptability, appropriateness, and screening adoption, we also have the potential to address questions about interactions among implementation outcomes [3]. Subsequently, implementation strategies that directly target these moderators and the causal chain of events can be developed, tested, and used in behavioral healthcare settings where TBI screening adoption is lagging.

Furthermore, because the behavioral health service context is broad and varies by setting, we will explore behavioral health providers’ perceptions about how the context in which they work may affect TBI screening adoption. Our mixed methods approach could help us better understand the complex interplay between this specific healthcare environment and the TBI screening innovation, particularly since TBI screening is new to this service context. Based on these results, implementation strategies can be identified that harness a social environment supportive of the innovation [72].

The expansion of the literature over the past two decades has uncovered clear relationships between TBI and psychiatric comorbidities. However, few interventions have been developed specifically for individuals with these co-occurring conditions [73], and even fewer have penetrated the service landscape. This study is the first to take an existing TBI screening innovation used in physical health and community-based social service settings and bring it to the behavioral health treatment context using principles of implementation research and science. Behavioral health providers are a large segment of “untapped” professionals who could bridge gaps in service access by identifying individuals with co-occurring TBI and psychiatric comorbidities who may need additional services or adapted behavioral health treatment [30]. From here, TBI screening coupled with individualized adaptations to behavioral health delivery can be implemented to improve clinical care decisions and treatment options for individuals with co-occurring TBI and psychiatric comorbidities.

The potential benefits of this study are considerable for individuals who have co-occurring TBI, substance use disorders, and mental health comorbidities. Although TBI is one of the leading causes of death and disability in the United States, and TBI increases risk for riskier substance use and mental health conditions, TBIs are often underrecognized and underidentified by the behavioral health providers who frequently treat individuals with these comorbidities [30]. Behavioral health providers who lack knowledge of TBI, or who do not intend to screen for TBI due to contextual factors, will miss a large proportion of individuals in need of individualized treatment approaches and intervention decisions that account for the effects of their TBI. This study is the first to investigate the factors leading to the adoption of TBI screening in behavioral healthcare. Increasing TBI screening in behavioral health settings could have a significant impact on how interventions and treatments are delivered and on where and how referrals are made, and could reduce the risks of future injury, riskier substance use, and worsening mental health conditions associated with TBI.

Availability of data and materials

Not applicable.

Abbreviations

AIM:

Acceptability of the Intervention Measure

CFI:

Comparative Fit Index

CFIR:

Consolidated Framework for Implementation Research

DOI:

Diffusion of Innovations Theory

EBPs:

Evidenced-based practices

FIM:

Feasibility of the Intervention Measure

FIML:

Full information maximum likelihood

IAM:

Intervention Appropriateness Measure

JARS-MMR:

Journal Article Reporting Standards for Mixed Methods Research

MCAR:

Little’s test of Missing Completely at Random

MAR:

Missing at Random

MVA:

Missing Values Analysis

OSU TBI-ID:

Ohio State University Traumatic Brain Injury Identification Method

RMSEA:

Root Mean Square Error of Approximation

SRMR:

Standardized Root Mean Square Residual

SEM:

Structural equation modeling

TBI:

Traumatic Brain Injury

TPB:

Theory of Planned Behavior

TPBQ-TBI:

Theory of Planned Behavior Questionnaire for TBI

TLI:

Tucker Lewis Index

WLSMV:

Weighted Least Squares Mean and Variance

References

  1. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  2. Benjamin Wolk C, Powell BJ, Beidas RS. Contextual influences and strategies for dissemination and implementation in mental health [Internet]. Oxford University Press; 2015 [cited 2021 Nov 23]. Available from: http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199935291.001.0001/oxfordhb-9780199935291-e-12

  3. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.

  4. Fitzgerald L, Ferlie E, Wood M. Interlocking interactions, the diffusion of innovations in health care. Hum Relat. 2002.

  5. Smith JD, Li DH, Rafferty MR. The implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15(1):84.

  6. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.

  7. Lewis CC, Boyd MR, Walsh-Bailey C, Lyon AR, Beidas R, Mittman B, et al. A systematic review of empirical studies examining mechanisms of implementation in health. Implement Sci. 2020;15(1):21.

  8. Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11(10):e053474.

  9. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461.

  10. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

  11. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.

  12. Sales AE, Barnaby DP, Rentes VC. Letter to the editor on “The implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects” (https://doi.org/10.1186/s13012-020-01041-8). Implement Sci. 2021;16(1):97.

  13. Gress Smith JL, Roberts NA, Borowa D, Bushnell M. An interdisciplinary approach to the screening, diagnosis, and treatment of OEF/OIF veterans with mild traumatic brain injury. Appl Neuropsychol Adult. 2020;0(0):1–9.

  14. Dams-O’Connor K, Cantor JB, Brown M, Dijkers MP, Spielman LA, Gordon WA. Screening for traumatic brain injury: findings and public health implications. J Head Trauma Rehabil. 2014;29(6):479–89.

  15. Seys D, Panella M, VanZelm R, Sermeus W, Aeyels D, Bruyneel L, et al. Care pathways are complex interventions in complex systems: new European pathway association framework. Int J Care Coord. 2019;22(1):5–9.

  16. Corrigan JD, Mysiw WJ. Substance abuse among persons with TBI. In: Brain injury medicine: principles and practice, Second Edition. New York, NY: Demos Medical Publishing; 2012. p. 1315–1328.

  17. World Health Organization. Neurological disorders: Public health challenges [Internet]. World Health Organization; 2006 [cited 2021 Jul 26]. Available from: https://apps.who.int/iris/handle/10665/43605

  18. National Institute of Neurological Disorders and Stroke. Traumatic Brain Injury Information Page [Internet]. National Institutes of Health National Institute of Neurological Disorders and Stroke. 2019 [cited 2021 Oct 15]. Available from: https://www.ninds.nih.gov/Disorders/All-Disorders/Traumatic-Brain-Injury-Information-Page

  19. Sophie Su Y, Veeravagu A, Grant G. Neuroplasticity after Traumatic Brain Injury. In: Laskowitz D, Grant G, editors. Translational Research in Traumatic Brain Injury [Internet]. Boca Raton (FL): CRC Press/Taylor and Francis Group; 2016 [cited 2021 Oct 15]. (Frontiers in Neuroscience). Available from: http://www.ncbi.nlm.nih.gov/books/NBK326735/

  20. Masel BE, DeWitt DS. Traumatic brain injury: a disease process, not an event. J Neurotrauma. 2010;27(8):1529–40.

  21. Albicini M, McKinlay A. Anxiety disorders in adults with childhood traumatic brain injury: evidence of difficulties more than 10 years postinjury. J Head Trauma Rehabil. 2018;33(3):191–9.

  22. Corrigan JD, Bogner J, Mellick D, Bushnik T, Dams-O’Connor K, Hammond FM, et al. Prior history of traumatic brain injury among persons in the traumatic brain injury model systems national database. Arch Phys Med Rehabil. 2013;94(10):1940–50.

  23. Sariaslan A, Sharp DJ, D’Onofrio BM, Larsson H, Fazel S. Long-term outcomes associated with traumatic brain injury in childhood and adolescence: a nationwide Swedish cohort study of a wide range of medical and social outcomes. PLoS Med [Internet]. 2016 [cited 2021 Jan 2];13(8). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4995002/

  24. Sun H, Luo C, Chen X, Tao L. Assessment of cognitive dysfunction in traumatic brain injury patients: a review. Forensic Sci Res. 2017;2(4):174–9.

  25. Faul M, Coronado V. Epidemiology of traumatic brain injury. Handb Clin Neurol. 2015;127:3–13.

  26. Levin HS, Robertson CS. Mild traumatic brain injury in translation. J Neurotrauma. 2013;30(8):610–7.

  27. Emery CA, Barlow KM, Brooks BL, Max JE, Villavicencio-Requis A, Gnanakumar V, et al. A systematic review of psychiatric, psychological, and behavioural outcomes following mild traumatic brain injury in children and adolescents. Can J Psychiatry Rev. 2016;61(5):259–69.

  28. Schulz-Heik RJ, Poole JH, Dahdah MN, Sullivan C, Date ES, Salerno RM, et al. Long-term outcomes after moderate-to-severe traumatic brain injury among military veterans: successes and challenges. Brain Inj. 2016;30(3):271–9.

  29. Weil ZM, Corrigan JD, Karelina K. Alcohol abuse after traumatic brain injury: experimental and clinical evidence. Neurosci Biobehav Rev. 2016;62:89–99.

  30. Coxe KA, Pence EK, Kagotho N. Social work care in traumatic brain injury and substance use disorder treatment: a capacity-building model. Health Soc Work. 2021;46(4):277–88.

  31. McHugo GJ, Krassenbaum S, Donley S, Corrigan JD, Bogner J, Drake RE. The prevalence of traumatic brain injury among people with co-occurring mental health and substance use disorders. J Head Trauma Rehabil. 2017;32(3):E65–74.

  32. Warner M, Schenker N, Heinen MA, Fingerhut LA. The effects of recall on reporting injury and poisoning episodes in the National Health Interview Survey. Inj Prev J Int Soc Child Adolesc Inj Prev. 2005;11(5):282–7.

  33. Bogner J, Corrigan JD. Reliability and predictive validity of the Ohio State University TBI identification method with prisoners. J Head Trauma Rehabil. 2009;24(4):279–91.

  34. Corrigan JD, Bogner J. Initial reliability and validity of the Ohio State University TBI identification method. J Head Trauma Rehabil. 2007;22(6):318–29.

  35. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

  36. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

  37. Rogers EM. Diffusion of innovations, 5th edition. 5th ed. New York: Free Press; 2003. p. 576.

  38. National Cancer Institute, US Department of Health and Human Services, National Institutes of Health. Theory at a glance: a guide for health promotion practice. 2nd ed. Bethesda, MD; 2012. 62 p.

  39. Glegg SMN, Holsti L, Velikonja D, Ansley B, Brum C, Sartor D. Factors influencing therapists’ adoption of virtual reality for brain injury rehabilitation. Cyberpsychology Behav Soc Netw. 2013;16(5):385–401.

  40. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Admin Pol Ment Health. 2011;38(1):44–53.

  41. Hamilton AB, Finley EP. Qualitative methods in implementation research: an introduction. Psychiatry Res. 2019;280:112516.

  42. Levitt HM, Bamberg M, Creswell JW, Frost DM, Josselson R, Suárez-Orozco C. Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: the APA publications and communications board task force report. Am Psychol. 2018;73(1):26.

  43. Creswell JW. A concise introduction to mixed methods research. SAGE Publications, Inc; 2015.

  44. Ivankova NV, Creswell JW, Stick SL. Using mixed-methods sequential explanatory design: from theory to practice. Field methods. 2006;18(1):3–20.

  45. Creswell JW, Clark VLP. Designing and conducting mixed methods research. Second edition. Los Angeles: SAGE Publications, Inc; 2010. 488 p.

  46. Dillman DA. Mail and internet surveys: the tailored design method. 2nd ed. New York: Wiley; 1999. p. 480.

  47. Davis AK, Rosenberg H. Acceptance of non-abstinence goals by addiction professionals in the United States. Psychol Addict Behav. 2013;27(4):1102–9.

  48. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

  49. IBM Corp. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp; 2020.

  50. Muthén LK, Muthén BO. Mplus user’s guide. 6th ed. Los Angeles, CA: Muthén & Muthén; 2019.

  51. Bollen KA. Introduction. In: Structural equations with latent variables [Internet]. John Wiley & Sons, Ltd; 1989 [cited 2021 Apr 26]. p. 1–9. Available from: https://doi.org/10.1002/9781118619179.ch1

  52. MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996;1(2):130–49.

  53. Preacher KJ, Coffman DL. Computing power and minimum sample size for RMSEA [Computer software] [Internet]. 2006 [cited 2021 Oct 15]. Available from: http://quantpsy.org/

  54. Wang J, Wang X. Structural equation modeling: applications using Mplus. 1st ed. Chichester, West Sussex; Hoboken, NJ: Wiley; 2012. 478 p.

  55. West SG, Taylor AB, Wu W. Model fit and model selection in structural equation modeling. In: Handbook of structural equation modeling. New York, NY, US: The Guilford Press; 2012. p. 209–31.

  56. Little R, Rubin DB. The analysis of social science data with missing values. Sociol Methods Res. 1989;18(2–3):292–326.

  57. Little RJA. A test of missing completely at random for multivariate data with missing values. J Am Stat Assoc. 1988;83(404):1198–202.

  58. Bowen NK, Guo S. Structural equation modeling. Oxford University Press; 2011. 224 p.

  59. Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 3rd ed. Los Angeles: SAGE Publications; 2013. p. 448.

  60. Van Deinse TB, Bunger A, Burgin S, Wilson AB, Cuddeback GS. Using the consolidated framework for implementation research to examine implementation determinants of specialty mental health probation. Health Justice. 2019;7(1):17.

  61. Coyne IT. Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? J Adv Nurs. 1997;26(3):623–30.

  62. Padgett DK. Qualitative methods in social work research. 2nd ed. Los Angeles, Calif: SAGE Publications, Inc; 2008. 304 p.

  63. QSR International Pty Ltd. NVivo, Version 12; 2020.

  64. Bowen GA. Grounded theory and sensitizing concepts. Int J Qual Methods. 2006;5(3):12–23.

  65. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

  66. Nowell LS, Norris JM, White DE, Moules NJ. Thematic analysis: striving to meet the trustworthiness criteria. Int J Qual Methods. 2017;16(1):1609406917733847.

  67. Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs-principles and practices. Health Serv Res. 2013;48(6pt2):2134–56.

  68. Onwuegbuzie A, Johnson RB. The validity issue in mixed research. Res Sch. 2006;13(1):48–63.

  69. Curry L, Nunez-Smith M. Mixed methods in health sciences research: a practical primer. Thousand Oaks, CA: SAGE Publications, Inc.; 2020. Available from: https://methods.sagepub.com/book/mixed-methods-in-health-sciences-research-a-practical-primer

  70. Guetterman TC, Fetters MD, Creswell JW. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. Ann Fam Med. 2015;13(6):554–61.

  71. Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

  72. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.

  73. Bogner J, Corrigan JD. Interventions for substance misuse following TBI: a systematic review. Brain Impair. 2013;14(1):77–91.

Acknowledgements

We would like to thank Dr. Natasha Bowen for providing input to the analytical strategy for the structural equation model.

Funding

This study is supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under Award Number F31-NS124263 ((Coxe) Hyzak, PI). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work is also supported by the Alumni Grants for Graduate Research and Scholarship through the Ohio State University and the Ph.D. Seed Grant Program through the College of Social Work at the Ohio State University awarded to KAH. K.A. ACB is supported by the National Institute for Drug Abuse (NIDA) through grant number R34DA046913 (Bunger, PI). AKD is supported by private philanthropic funding from Tim Ferriss, Matt Mullenweg, Craig Nerenberg, Blake Mycoskie, and the Steven and Alexandra Cohen Foundation. AKD is also supported by the Center for Psychedelic Drug Research and Education, funded by anonymous private donors. The OSU TBI-ID was developed, and Drs. Corrigan and Bogner’s efforts were supported in part by a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research to Ohio State University (Grant #90DP0040). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this publication do not necessarily represent the policy of NINDS, NIDILRR, NIDA, ACL, HHS, and you should not assume endorsement by the Federal Government.

Author information

Authors and Affiliations

Authors

Contributions

KAH conceptualized the study and drafted the entire manuscript. ACB provided overall substantive edits to each manuscript draft, as well as input to the conceptual model. AKD provided overall substantive edits to the study procedures and the sampling strategy for Phase I of the study and provided input to the TPBQ measure. JB and JDC are the original developers of the OSU TBI-ID and created the educational video used for this study. JB also provided overall edits to the manuscript. JDC also assisted in providing input to the sampling strategy for Phase I and provided overall substantive edits to the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kathryn A. Coxe-Hyzak.

Ethics declarations

Ethics approval and consent to participate

All participants will provide informed consent prior to study participation. This research study was reviewed and approved by the Institutional Review Board at The Ohio State University (Study #2021E0734).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Coxe-Hyzak, K.A., Bunger, A.C., Bogner, J. et al. Implementing traumatic brain injury screening in behavioral healthcare: protocol for a prospective mixed methods study. Implement Sci Commun 3, 17 (2022). https://doi.org/10.1186/s43058-022-00261-x


Keywords

  • Traumatic brain injury
  • OSU TBI-ID
  • TBI screening
  • Behavioral health treatment
  • Mixed methods
  • Theory of planned behavior
  • Diffusion of innovations
  • CFIR
  • Acceptability
  • Feasibility