
Measurement feedback system implementation in public youth mental health treatment services: a mixed methods analysis



Prior studies indicate the effectiveness of measurement-based care (MBC), an evidence-based practice, in improving and accelerating positive outcomes for youth receiving behavioral health services. MBC is the routine collection and use of client-reported progress measures to inform shared decision-making and collaborative treatment adjustments and is a relatively feasible and scalable clinical practice, particularly well-suited for under-resourced community mental health settings. However, uptake of MBC remains low, so information on determinants related to MBC practice patterns is needed.


Quantitative and qualitative data from N = 80 clinicians who implemented MBC using a measurement feedback system (MFS) were merged to understand and describe determinants of practice over three study phases. In phase I, latent class analysis identified clinician groups based on participants’ ratings of MFS acceptability, appropriateness, and feasibility, and similarities and differences between classes in clinician-level characteristics (e.g., age; perceptions of implementation climate; reported MFS use) were described. In phase II, qualitative analyses of clinicians’ responses to open-ended questions about their MFS use and feedback about the MFS and implementation supports were conducted separately to understand multi-level barriers and facilitators to MFS implementation. Mixing occurred during interpretation (phase III), examining clinician experiences and opinions across groups to understand the needs of different classes of clinicians, describe class differences, and inform selection of implementation strategies in future research.


We identified two classes of clinicians: “Higher MFS” and “Lower MFS,” and found similarities and differences in MFS use across groups. Compared to Lower MFS participants, clinicians in the Higher MFS group reported facilitators at a higher rate. Four determinants of practice were associated with the uptake of MBC and MFS in youth-serving community mental health settings for all clinicians: clarity, appropriateness, and feasibility of the MFS and its measures; clinician knowledge and skills; client preferences and behaviors; and incentives and resources (e.g., time; continuing educational support). Findings also highlighted the need for individual-level implementation strategies to target clinician needs, skills, and perceptions for future MBC and MFS implementation efforts.


This study has implications for the adoption of evidence-based practices, such as MBC, in the context of community-based mental health services for youth.



Recent national prevalence studies indicate that almost 50 million—or one in six—children and adolescents (hereafter, youth) under age 18 meet the criteria for at least one diagnosable mental health condition in the USA [1], with longitudinal data indicating a discernible rise in mental health burden among youth in the USA and worldwide [2,3,4]. Despite documentation of this need, poor access to mental health services for youth persists as a major public health concern [1, 5, 6]. Among youth who do access mental health treatment, most do so in local school and community-based settings through the public mental health or education sectors [7]. These systems are drastically underfunded, resulting in scarce resources to support and sustain the consistent use of evidence-based practices (EBPs) by clinicians [8,9,10,11,12]. As a result, there is substantial heterogeneity in the quality of services provided in these “usual care” settings due to inconsistent EBP implementation and a lack of detailed, descriptive information about existing practices and needed supports [11,12,13]. Public child-serving systems are in dire need of feasible, scalable strategies to elevate the quality of the usual care mental health treatment they provide.

The current study examines implementation outcomes of a relatively feasible, scalable clinical practice to elevate the quality of usual care mental health services for youth of varied ages and presenting concerns: the use of a digital measurement feedback system (MFS) to promote measurement-based care (MBC) [14,15,16]. MBC is the routine collection and use of client-reported progress measures to inform shared decision-making between the client and clinician, and collaborative treatment adjustments as needed [16]. MBC has three components: (1) collection of client-reported outcome measures to track progress on goals, (2) sharing progress data with the youth client and their parent or caregiver and discussing their perspectives on progress data, and (3) acting on those data, informed by youth and caregiver perspectives, and other information, to make collaborative decisions about the continued treatment approach [17]. MBC has been associated with greater and faster improvements in client outcomes among youth and adults alike and is consistent with an evidence-based approach to mental health service delivery [18,19,20]. Despite its applicability across client populations, treatment approaches, and presenting concerns, and its potential to improve youth mental health care quality, MBC is currently implemented by only a small fraction of usual care mental health clinicians [21]. Moreover, systematic evaluations of MBC implementation in child-serving settings, with a particular focus on how to support clinicians’ use of MBC with youth, are still needed [20].

MBC implementation determinants

Determinants of MBC implementation, which comprise multi-level barriers and facilitators to this practice, include factors for clients (e.g., time for completing measures, symptoms or differing abilities that make MBC more or less challenging), clinicians (e.g., questions about whether MBC is superior to clinical judgment, MBC knowledge and skill, administrative burden such as paperwork, questions about how the data will be used to evaluate performance), organizations (e.g., degree of guidance on measure selection, resources for training, staff retention or turnover, leadership support, organizational climate), and behavioral health care systems (e.g., incentives for MBC and recognition of MBC benefits) [19]. In reality, multi-level determinants and implementation outcomes are interrelated and interdependent, particularly for MBC, as the client experience, client-clinician interactions, and clinician professional experiences in their organizations are all nested within health care systems [22,23,24,25,26].

Studies examining implementation processes in health care systems underscore the importance of considering the influence of clinician characteristics and perspectives as well as organizational factors such as climate and leadership to understand the adoption and use of EBPs [27,28,29]. Current implementation science theory suggests that implementation strategies should be selected and designed to address specific barriers identified within a health care system for a particular practice [30]. Examinations of the effectiveness of tailored strategies are in progress, and early findings are mixed, with scholars in this area recommending additional research [31, 32]. The current study extends research focused solely on barriers and facilitators of MBC by examining how selected determinants that appear in the literature predict class membership of clinicians by their reported implementation of MBC using an MFS. This provides a foundation for selecting implementation strategies for clinicians or clinician groups based on their personal constellation of determinants and use patterns. Given the multitude of determinants documented to predict MBC use, we focused specifically on clinician years of experience and implementation climate to predict and describe class differences, due to the consistent emphasis on these determinants of EBP adoption and implementation in the literature. Implementation climate has been shown to predict more positive attitudes toward EBPs and clinician use of EBPs [33, 34]. Clinician years of experience has also been linked to EBP knowledge and attitudes [35, 36], although some findings are mixed [11, 37,38,39], suggesting the need for continued examination of this determinant in various samples and implementation study designs. We complemented our focus on clinician years of experience and implementation climate by incorporating qualitative data that reflect a wider range of multi-level determinants related to MBC and MFS implementation.

Measurement feedback systems to support MBC implementation

Use of a measurement feedback system (MFS) is one proposed implementation strategy to support the adoption and implementation of MBC [40]. MFS are health information technologies that capture client-reported outcome data and provide real-time graphical displays or other user-centered features to support clinical decision-making [40, 41]. Some of the highest effect sizes of MBC on client outcomes (e.g., d = 0.49 to 0.70) were found when clinicians and clients viewed measures together and could identify through graphical displays when the client was “off track” compared to what would be expected [18, 42,43,44].

Despite the promise of MFS, clinician adoption and uptake of these technologies are variable and fraught with numerous barriers [26, 45, 46]. Findings from organization- or system-wide implementations of MFS suggest that clinician collection of measures can be more readily facilitated than clinician review and use of these data to inform treatment decisions [46, 47]. A closer look at “users” versus “non-users” of an MFS rollout in Veterans Affairs Canada indicated that “non-users” were more likely to report barriers such as time burden and difficulty using the MFS despite reporting that they had adequate knowledge and skill to use MBC [47]. Given the mounting evidence of challenges to MFS implementation, there is a great need for continued research to uncover factors that will facilitate efforts to address these barriers. For example, a system-wide MFS rollout in Hawai’i indicated that MFS implementation was facilitated by clinicians’ willingness to overcome administrative barriers and perceptions that the measure itself is clinically useful to aid decision-making [48]. Although MFS implementation has been explored in adult-serving settings, there is scant literature on predictors of MFS implementation success in youth mental health service delivery contexts. One notable case study provides a detailed account of barriers and facilitators to MFS implementation based on semi-structured interviews with eighteen clinicians across two sites, comparing and contrasting clinician-reported determinants by site [26]. In the current study, we seek to build upon work such as this to understand variations in determinants based on clinicians’ MFS use.

Implementation outcomes

Implementation outcomes are proximal indicators of implementation success based on specific actions taken to promote a new treatment, practice, or service [49]. Implementation outcomes include acceptability, adoption, appropriateness, costs, feasibility, fidelity, penetration, and sustainability [49]. By explicitly measuring implementation outcomes, one can track the extent to which the implementation process occurs as expected. Neither more distal service outcomes nor client outcomes will be influenced by introduction of a new practice unless the implementation process is successful. Implementation outcomes are affected by the implementation strategies selected and various dimensions of the service context, including barriers and facilitators to implementing a new practice in a particular setting [50, 51].

Current study

Given the need for deeper understanding of implementation determinants associated with the use of MBC and MFS in youth mental health service delivery contexts, this study sought to understand how clinician characteristics and perceptions of implementation climate related to MBC implementation outcomes and determinants of practice. The goals of the current study were to (1) identify groups or “classes” of clinicians based on their self-reported MFS implementation outcomes; (2) determine if those groups were characteristically distinct; and (3) examine barriers and facilitators to MFS implementation by class to inform how to best support future adoption and use of MFS based on clinician characteristics. We measured acceptability, feasibility, and appropriateness as implementation outcomes because they are conceptually distinct yet related and often examined in formative studies as key predictors of implementation success [52].

We used a cross-sectional design to survey clinicians at the end of 1 year of full MFS implementation. Using latent class analysis (LCA), we identified clinician classes based on self-reported implementation outcomes. LCA is a probabilistic, person-centered method through which individuals are assigned to a specific class and is an appropriate method to use for the identification of groups of people based on their similarities (in contrast, variable-centered approaches, such as confirmatory factor analysis, focus on associations between variables [53]). In our study, class membership was estimated based on participant endorsements of implementation outcomes. After conducting the LCA, we examined how clinician characteristics (e.g., years of experience) and agency factors (i.e., implementation climate) were associated with class membership. Finally, we conducted qualitative coding on clinician comments about barriers and facilitators to implementation and used mixed methods analysis to expand our understanding of implementation determinants for clinicians overall, and by class. Study description and result reporting are consistent with Levitt et al.’s (2018) recommendations as described in the Mixed Methods Article Reporting Standards.


Study context

The data for this study were originally collected as part of a quality improvement project in partnership with a large, suburban school district in the Mid-Atlantic region (i.e., 85,000 students in 125 public schools) and their network of community-based mental health clinicians who provide services on school grounds. Services were primarily funded by public insurance reimbursement; 74% of the students served qualified for free or reduced-price lunch at school, an imperfect but conventional indicator of socioeconomic disadvantage in the USA [54]. The project focused on the adoption and implementation of a new MFS to help clinicians implement MBC and more systematically track student psychosocial progress during mental health treatment. The MFS used in this study was a private label of ACORN, developed with funding from the state’s Accountable Care Organization at the time. The MFS included the Client Feedback Form, a 19-item, standardized measure of internalizing concerns, externalizing concerns, and working alliance, with child- and parent-reported versions. The measures demonstrated strong internal consistency (α = 0.87–0.90), and construct validity was established via the Youth Outcome Questionnaire and Child Behavior Checklist [55]. Measures were collected via paper and pencil or electronically in the MFS during session, with an emphasis on in vivo review of responses with the youth and/or parent to discuss progress and adjust treatment as needed. Data collection and submission options (i.e., paper and pencil or electronic) were provided to accommodate variations in technology at school sites.


Eighty clinicians from four community-based mental health agencies participated in the study. Of those, 52.5% (n = 42) were from one agency. All clinicians reported that their highest level of education was a master’s degree. A minority of participants (11.3%, n = 9) were supervising or lead clinicians (who also saw clients), and the majority (68.75%, n = 55) were trained in social work. Participant ages ranged from 21 to 61, with a median age range of 31–40, and years of experience in behavioral health ranged from less than 1 year to over 20 years, with a median of 3–5 years. See Table 1 for participating clinician characteristics.

Table 1 Clinician participant characteristics (N = 80)

Implementation strategies and procedures

The current study was reviewed by the Yale University Institutional Review Board and approved as exempt from continuing review due to anonymous data collection. Implementation of MBC via the MFS occurred over the course of an academic school year. The year prior was a planning year in which the MFS was chosen by school district and mental health agency leadership and pilot tested to inform training, implementation strategies, and recommended frequency of data collection [56]. During implementation, clinicians received a one-hour virtual training from the MFS developer, mental health agency leaders received a one-hour virtual supervisors’ training from the MFS developer, and quarterly meetings were held with school district and agency leaders to discuss implementation progress and strategies for ongoing support. Each agency offered implementation strategies focused on fostering clinician peer-to-peer and supervisory support for using the MFS (e.g., reviewing client measures during supervision, pairing new clinicians with implementing clinicians for peer consultation on how to use the MFS) and regular discussion of successes and challenges during staff meetings. Implementation strategies started at the beginning of the school year with the virtual MFS trainings and continued throughout the school year. Clinicians reported perceptions of the acceptability, feasibility, and appropriateness (i.e., implementation outcomes) of the MFS at the end of the year. All data were collected electronically, with participants completing the quantitative questionnaire first and then—during the same session—responding to qualitative questions.



Demographic data

Clinicians self-reported demographic and professional characteristics including age, years working in behavioral health, field of training, highest degree, job role, and caseload size. To preserve clinician anonymity, race/ethnicity and gender were not collected.

MFS use

At the end of the implementation year, clinician use of the MFS was measured by asking clinicians to report the number of clients with whom they had completed at least one assessment, the number of clients with whom they had completed at least two assessments, the number of clients to whom they provided feedback based on MFS results, and the number of clients for whom they changed treatment approach, informed by MFS results. Although MBC as a practice is facilitated by the MFS as a tool [57], this implementation was focused on the use of the MFS for MBC, so clinician-reported implementation outcome measures such as MFS utilization and acceptability, appropriateness, and feasibility (below) referred to the MFS as the referent “practice” or “innovation.”

Implementation climate

Clinicians’ perceptions of the implementation climates of their agencies were assessed with the Implementation Climate Scale (ICS) [58]. The ICS has demonstrated reliability and validity in measuring staff perceptions of the implementation climate within organizations [58, 59]. The ICS is an 18-item measure with six subscales (three items per subscale) developed to assess organizational support for the implementation of EBPs in general. All items are scored on a 5-point Likert scale from 0 (“not at all”) to 4 (“very great extent”). The focus on EBP subscale comprises items addressing the degree to which an organization prioritizes the use of EBPs (e.g., “one of this agency’s main goals is to use EBPs effectively”). The educational support subscale addresses the degree to which an organization provides educational resources and supports for clinicians’ use of EBPs (e.g., “this agency provides conferences, workshops, or seminars focusing on EBPs”). The recognition subscale assesses the degree to which individuals with expertise in EBPs are recognized within the organization (e.g., “agency staff who use EBPs are held in high esteem in this agency”). The rewards subscale assesses the degree to which the organization rewards the use of EBPs. The selection subscale addresses the degree to which an agency hires staff who have expertise in and value EBPs (e.g., “this agency actively recruits staff who value EBP”). The openness subscale pertains to staff openness to change (e.g., “this agency selects staff who are adaptable”). In the current sample, the overall scale and the focus, recognition, rewards, and openness subscales had acceptable internal consistency (alphas between 0.70 and 0.92). However, internal consistency of the educational support and selection subscales was below the threshold (alphas of 0.52 and 0.62, respectively), although this may be related to the small number of items per scale [60]. Conservatively, findings associated with these two subscales should be interpreted with caution.
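The internal consistency (alpha) values reported above follow the standard Cronbach formula. A minimal sketch, using hypothetical ratings rather than the study’s data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(items[0])
    columns = list(zip(*items))
    # Sum of per-item sample variances vs. variance of the scale totals
    item_variance_sum = sum(variance(col) for col in columns)
    total_variance = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_variance_sum / total_variance)

# Hypothetical 3-item subscale scored 0-4 by five clinicians
ratings = [[3, 4, 3], [2, 2, 1], [4, 4, 4], [1, 2, 2], [3, 3, 4]]
print(round(cronbach_alpha(ratings), 2))  # → 0.92
```

With only three items per subscale, alpha is sensitive to any weakly correlated item, which is consistent with the lower values observed for the educational support and selection subscales.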

Implementation outcomes

Implementation outcomes were assessed using three measures with demonstrated reliability in assessing clinician-reported implementation outcomes: the Acceptability of Intervention Measure (AIM), the Intervention Appropriateness Measure (IAM), and the Feasibility of Intervention Measure (FIM) [52]. Each scale comprises four items, and each item is scored on a 5-point Likert scale ranging from 1 (“completely disagree”) to 5 (“completely agree”). The AIM addresses the extent to which clinicians find an EBP to be satisfactory (e.g., “I like the MFS”); the IAM assesses clinicians’ perceptions about the fit of the EBP (e.g., “the MFS seems like a good match”); and the FIM probes clinician perceptions of the viability of EBPs (e.g., “the MFS seems implementable”).


Qualitative data were drawn from clinicians’ responses to the following four open-ended questions: 1) What do you need right now to support your use of the MFS? 2) What do you think new clinicians will need to support their use of the MFS next year? 3) What do you like about the MFS? 4) What do you not like about the MFS, or what do you recommend could be improved? Finally, clinicians were given the option of providing additional comments, feedback, or recommendations about the implementation of the MFS in an open text box.

Data analytic plan

The current study employed a partially mixed, sequential, equal status design with mixing occurring in the interpretation phase ([61]; Table 2). Using mixed methods allowed us to extend the scope of inquiry by using different methods to address different aspects of the topic at hand (i.e., for the purpose of expansion [62, 63]). Specifically, in phase I, we used quantitative methods to identify classes of participants based on self-reported implementation outcomes and compare clinician characteristics across classes. In phase II, determinants of practice were identified through qualitative data coding, with codes guided by an established list of determinants [64]. Finally, mixing occurred in phase III, when quantitative and qualitative data were combined to understand the needs of different classes of clinicians, describe class differences, and inform the selection of specific implementation strategies for future implementation and research efforts.

Table 2 Mixed methods study phases

Quantitative data analysis

Quantitative data analyses were conducted in SPSS Version 26 and Mplus 7. Using intraclass correlation coefficients (ICCs), we examined shared variance of participants within agency at the item and subscale levels for the Implementation Climate Scale and the three implementation outcome measures [52, 58]. All ICCs were small-to-medium. Next, we conducted a series of LCAs with robust standard errors (a conservative approach to account for the nesting of clinicians in agencies [65, 66]) to model subgroups of participants with different patterns of endorsement of implementation outcomes. Specifically, clinicians’ self-reported quantitative ratings of MFS acceptability, appropriateness, and feasibility were analyzed in a latent class framework to identify classes of clinicians. Although LCA is frequently used with much larger samples, there is support for use of LCA with samples as small as N = 30 when there are relatively few, distinctive classes [65]. We evaluated LCA solutions based on the theoretical interpretability of the classes, class size, Akaike’s information criterion (AIC), Bayesian information criterion (BIC), adjusted BIC (aBIC), entropy, the Vuong-Lo-Mendell-Rubin likelihood ratio test (VLMR LRT), and the Lo-Mendell-Rubin adjusted likelihood ratio test (LMR LRT). AIC, BIC, and aBIC are relative fit statistics, with smaller values indicating superior fit. Entropy measures class separation, with values of 0.8 and higher indicating acceptable separation. The VLMR and LMR LRTs provide p values to assess whether adding a class results in statistically significant improvement in fit (Nylund et al., 2007). To identify a final model, we compared entropy and fit statistics across models with two, three, and four classes and selected the two-class model.
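The fit indices named above follow standard definitions and can be sketched in a few lines; the log-likelihood, parameter counts, and posterior class probabilities below are hypothetical, not the study’s data:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike's information criterion: smaller values indicate better fit."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: penalizes parameters by sample size."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

def relative_entropy(posteriors):
    """Relative entropy (as reported by Mplus): 1 = perfect class separation."""
    n, k = len(posteriors), len(posteriors[0])
    uncertainty = sum(-p * math.log(p) for row in posteriors for p in row if p > 0)
    return 1 - uncertainty / (n * math.log(k))

# Hypothetical posteriors for a well-separated two-class solution (n = 80)
posteriors = [[0.98, 0.02]] * 40 + [[0.03, 0.97]] * 40
print(round(relative_entropy(posteriors), 2))  # → 0.83, above the 0.8 threshold
```

Because AIC and BIC are only relative, they are compared across the two-, three-, and four-class candidate models rather than interpreted in isolation.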
We used logistic regressions to examine individual-level predictors of class membership and identify similarities and differences in clinician characteristics, including age, caseload, years of experience, clinician reports of implementation climate, and MFS utilization, across classes. As a conservative method, we adjusted for agency in all regression analyses by including agency as a covariate.
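Adjusting for agency amounts to adding dummy-coded agency indicators alongside the predictor of interest; the adjusted odds ratio is then the exponentiated coefficient. A minimal sketch with a hand-rolled gradient fit on simulated, hypothetical data (the study used SPSS, not this code):

```python
import math
import random

def fit_logistic(X, y, lr=0.01, epochs=2000):
    """Plain stochastic-gradient logistic regression; returns [intercept, *coefficients]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            error = yi - p
            w[0] += lr * error
            for j, xj in enumerate(xi):
                w[j + 1] += lr * error * xj
    return w

# Hypothetical data: class membership predicted by a climate rating (0-4),
# adjusting for agency via dummy codes (agency 0 as the reference level)
random.seed(1)
X, y = [], []
for _ in range(200):
    climate = random.uniform(0, 4)
    agency = random.randrange(3)
    dummies = [1.0 if agency == k else 0.0 for k in (1, 2)]
    true_logit = -2.0 + 1.0 * climate + 0.3 * dummies[0]
    X.append([climate] + dummies)
    y.append(1 if random.random() < 1 / (1 + math.exp(-true_logit)) else 0)

weights = fit_logistic(X, y)
aor_climate = math.exp(weights[1])  # agency-adjusted odds ratio for climate
```

An adjusted odds ratio above 1 (as for implementation climate in Table 4) means each one-unit increase in the predictor multiplies the odds of Higher MFS membership by that factor, holding agency constant.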

Qualitative data analysis

Consistent with a thematic coding approach, we established a coding scheme based on Flottorp and colleagues’ [64] domains of determinants of practice. These domains include guideline factors, individual health professional factors, client factors, professional interactions, incentives and resources, capacity for organizational change, and social, political, and legal factors. Qualitative data were organized by question (rather than respondent) for analysis. Although our a priori coding scheme consisted only of determinants from Flottorp and colleagues, emergent themes were identified within each domain through a consensus process to contextualize our understanding and operationalization of these domains for the current practice and setting. Both authors served as coders and met weekly during the early coding process to review codes and establish interrater consensus. After all qualitative material was coded in Excel, we met and reviewed each individual code to ensure agreement and consensus.

After identifying determinants, clinician responses were coded as either “barriers” or “facilitators” for the use of the MFS. Both authors reviewed all qualitative data, and final codes were established through a consensus process that included iterative rounds of coding and discussion to ensure shared understanding of the data. When a clinician’s response indicated multiple determinants, the response was coded multiple times (e.g., if a response indicated guideline factors and individual health professional factors, both codes were applied). We were blind to clinician class membership during coding; class membership was added as a variable after coding was completed. Both authors received doctoral-level training and supervision in qualitative methods, with experience conducting and publishing qualitative and mixed methods research [67,68,69,70,71,72,73,74].


Following the completion of all analyses, quantitative and qualitative results were mixed. Clinician class was added to the qualitative dataset at the clinician level, and we re-examined coded determinants to identify patterns within and between classes. Specifically, we examined the type of determinants as well as the relative frequency of barriers and facilitators. After mixing, we reviewed findings separately and then met to discuss and achieve consensus concerning patterns in the data. The goal of mixing quantitative and qualitative analyses was to understand possible differences and similarities in the barriers and facilitators experienced by different groups of clinicians.


Latent class analysis: a two-class solution

Entropy was acceptable for all models. Relative fit statistics (i.e., AIC, BIC, aBIC) suggested that there might be support for a more complex model; however, the non-significant p values for the Vuong-Lo-Mendell-Rubin likelihood ratio test and the Lo-Mendell-Rubin adjusted likelihood ratio test indicated support for the two-class model [75]. See Table 3 for model fit statistics. Therefore, we examined patterns of endorsement of the AIM, IAM, and FIM measures across classes and chose to move forward with the two-class model for parsimony (Fig. 1).

Table 3 Model-fit statistics comparisons
Fig. 1 Endorsement of AIM, IAM, and FIM items for two-class solution

In the two-class model, there were 40 clinicians in each class. The first class, the “Higher MFS Self-Reported Implementation Outcomes” group (referred to as “Higher MFS” going forward), viewed the MFS more positively and comprised clinicians who gave it higher ratings of acceptability (AIM range 2 to 4, M = 2.89, SD = 0.59), appropriateness (IAM range 1.5 to 4, M = 2.91, SD = 0.57), and feasibility (FIM range 2.50 to 3.75, M = 3.05, SD = 0.44) than clinicians in the second class, the “Lower MFS Self-Reported Implementation Outcomes” group (referred to as “Lower MFS” going forward; Fig. 1). Among clinicians in the Lower MFS class, ratings of acceptability on the AIM ranged from 0 to 3 (M = 1.11, SD = 0.84); ratings of appropriateness on the IAM ranged from 0 to 3 (M = 1.38, SD = 0.85); and ratings of feasibility on the FIM ranged from 0.5 to 3.75 (M = 1.78, SD = 0.73). In general, the two classes followed similar patterns of responses on the AIM, IAM, and FIM, with Higher MFS clinicians agreeing more than Lower MFS clinicians. We inspected the distribution of classes across agencies using chi-square tests and found no significant differences, with two exceptions: clinicians in agency 2 were significantly less likely to be in class 1 compared to clinicians in agency 3, χ2(1, n = 59) = 7.13, p = 0.01, and clinicians in agency 4, χ2(1, n = 28) = 4.50, p = 0.03. Adjusting for agency as a covariate, there were no differences across classes in age, years in behavioral health, or caseload (Table 4).
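The agency-by-class comparisons above are 2×2 chi-square tests, which can be computed directly from the cell counts. A sketch using hypothetical counts (the study’s actual cross-tabulation is not reported here):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical Higher/Lower MFS counts for two agencies (n = 59 total):
# rows = agency, columns = class
stat = chi_square_2x2(10, 21, 18, 10)
print(round(stat, 2))  # → 6.05, above the 3.84 critical value (p < .05, 1 df)
```

Values above 3.84 reject equal class proportions across the two agencies at the .05 level with one degree of freedom.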

Table 4 Class comparisons on clinician characteristics, climate perceptions, and MFS use

Perceptions of implementation climate

Clinicians’ implementation climate ratings of their agencies were significantly different across classes (Table 4), such that Higher MFS participants rated their agency’s climate as more supportive of EBPs (M = 2.48, SD = 0.65) than Lower MFS participants (M = 1.95, SD = 0.80; adjusted odds ratio (aOR) = 2.80 [1.37, 5.72], p = 0.01). There were also significant differences on five of the six ICS subscales. Specifically, Higher MFS participants reported significantly more organizational focus on EBP (M = 3.03, SD = 0.69) than Lower MFS participants (M = 2.51, SD = 0.96; aOR = 2.36 [1.26, 4.43], p = 0.01), and significantly more educational support for EBP (M = 2.23, SD = 0.76) compared to Lower MFS participants (M = 1.74, SD = 0.89; aOR = 2.36 [1.26, 4.42], p = 0.01). Clinicians in the Higher MFS group reported significantly higher levels of organizational recognition for EBP (M = 2.31, SD = 0.90) compared to Lower MFS participants (M = 1.50, SD = 1.02; aOR = 2.40 [1.37, 4.19], p < 0.01) and significantly more rewards for EBP (M = 2.81, SD = 0.72) than Lower MFS participants (M = 2.29, SD = 0.94; aOR = 2.18 [1.18, 4.02], p = 0.01). Higher MFS clinicians also had significantly higher ratings on the selection for EBP subscale, indicating that Higher MFS participants were more likely to perceive their agencies as selecting staff with strong backgrounds in EBP (M = 2.98, SD = 0.66), compared to Lower MFS participants (M = 2.54, SD = 0.83; aOR = 2.27 [1.13, 4.55], p = 0.01). Subscales addressing educational support for EBP and selection for EBP had low internal consistency, and findings should therefore be interpreted with caution. There were no significant differences between classes in clinicians’ endorsement of the openness subscale, indicating no difference across classes in clinician perception of agency likelihood to select staff who were open to EBP.

MFS use

Clinicians reported one difference in MFS use between classes: adjusting for agency, Higher MFS participants provided feedback based on MFS data at a significantly higher rate (M = 0.59, SD = 0.49) than Lower MFS participants (M = 0.23, SD = 0.32; aOR = 10.32, p < 0.01; Table 4). Adjusting for agency, there were no significant differences between classes in the rates of completing at least one or at least two assessments, or in clinicians’ rates of changing treatment approach based on MFS results.

Qualitative results

Four domains of determinants emerged most frequently in clinicians’ responses to open-ended questions about MFS implementation: guideline factors, individual professional factors, client factors, and incentives and resources (see Table 5). Thus, we focused our qualitative analyses on these areas. Whereas the quantitative results focus on factors (i.e., possible determinants) at the clinician level to distinguish classes (particularly as implementation climate varied more at the clinician than the agency level in this sample), our qualitative results reflect clinician perspectives on determinants at multiple levels, which are indicated in the subheadings below.

Table 5 Percentages and counts of implementation determinants identified in qualitative data

Guideline factors (EBP level)

Guideline factors are various aspects of how clinicians were recommended to use MBC via the MFS, which we operationally defined as the clarity, strength, appropriateness, and feasibility of the Outcome Rating Scale in the MFS as well as the MFS itself [46]. Therefore, qualitative results in this section reflect clinicians’ feedback about both the progress measure and the feedback system. All agencies had the same expectation for their school-based clinicians: to collect and discuss the Outcome Rating Scale as often as possible, up to every session, with all students and families served. Forty-six percent of determinants identified in the data were guideline factors. Clinicians identified various guideline factors as influential in their use of the MFS, including compatibility with existing clinical practices, effort required to change or adhere to the recommendation, feasibility, clarity, and the evidence supporting the recommendation. For example, clinicians expressed a desire for problem area-specific measures for progress monitoring instead of a global outcome measure, and the preference to choose unique measures for each client based on clinical judgment. One clinician wrote: “I would rather be using pre/post testing scales or client rating scales that focused on the specific issues our clients are facing such as [a] specific depression inventory, trauma rating scales, etc.” Frequency of recommended assessment arose as an important consideration in the guideline domain, with some clinicians sharing that they would prefer to administer the measure with less regularity.
As one clinician explained, “I would be much more comfortable with using it once every 2-3 months… Monthly administration means that we spend about 20% of our time every month completing this.” Other aspects of the guideline that clinicians described liking included the ease of navigating the tool, the enhanced ability to monitor progress over time, the opportunity to get feedback from clients, and how the MFS enabled them to focus on evidence-based outcomes.

Individual professional factors (clinician level)

Individual professional factors are clinician characteristics such as knowledge, skills, self-efficacy, and attitudes about MBC and the MFS [46]. These accounted for 20% of identified determinants, with knowledge and skills, attitudes, and professional behavior [65] emerging as particularly important. For example, clinicians emphasized familiarity with, and knowledge of, the MFS as key for implementation, and multiple clinicians reported that the initial MFS training they received was sufficient to equip them with the necessary understanding. Professional behavior also emerged as influential. Clinicians emphasized the importance of being organized, creating systems for themselves so that they would remember to implement the MFS, and carving out time to review findings first by themselves, and then with clients, to guide treatment. Clinician attitudes and perceptions were also highlighted: multiple clinicians described a desire for more autonomy and disliked guidelines that required them to implement certain protocols for all clients. Relatedly, some clinicians expressed doubts about their clients’ ability to answer reliably. For example, one clinician noted, “I have a couple of girls who are very extreme in their perception of things. This makes it difficult because they will answer with the highest number, even if that is not necessarily the case.”

Client factors (client level)

Client needs, knowledge and beliefs, preferences, and motivations about the MFS and its outcome measure accounted for 18% of identified determinants. Multiple clinicians working with younger clients reported that the lack of adjusted, age-appropriate measures posed a significant challenge to implementation of the MFS. Additionally, some clinicians reported that their clients did not like the MFS. For example, a clinician noted: “I have tried to make [the MFS] fun for my kids by turning it into a game with a ball, but they still get annoyed that they have to do it.” Others reported that the MFS allowed clients to report problems that might not otherwise have arisen during treatment. For example, one clinician noted that the MFS helped to improve rapport and engagement, and another wrote, “it's helpful to quickly assess where clients are with their symptoms. At times clients will not identify any stressors or challenges, but this will be identified in the survey [and]… would otherwise be missed.” Similarly, a different clinician noted that the MFS helped to inform the direction of treatment, to better meet the needs of youth: “I have had many sessions change because of a discussion based on an answer they had to on [the MFS], and I don't think I would have found that information if we had just done a normal session.”

Incentives and resources (agency level)

Incentives and resources included those necessary for clinicians to use the MFS, as well as incentives, disincentives, and supports for implementation (e.g., assistance for clinicians, continuing educational support) [47]. Thirteen percent of the identified determinants were incentives and resources. Specifically, clinicians identified the availability of necessary resources, assistance for clinicians, and continuing education as particularly important. Time was the resource clinicians mentioned most frequently. For example, one clinician reported, “I have so [many] other things to do for my job that I'm unable to use this tool to my or the client’s advantage. Unfortunately, it’s more of a requirement to meet… I haven’t had the time to go into the system [to] review the data for trends, on any of my 21 cases.” Clinicians also described a desire for someone else to help with data entry/management, and for scheduling systems that would automatically create reminders for monthly implementation. Finally, some clinicians expressed a desire for continued educational support, such as refresher trainings and examples of how to more seamlessly incorporate MBC and the MFS into treatment sessions.


Mixed methods results

By mixing our quantitative and qualitative data, we found that 55% of the determinants identified by Higher MFS participants were framed as facilitators that helped them implement MBC and use the MFS. In contrast, 37% of the determinants identified by Lower MFS participants were framed as facilitators. Specifically, when we coded qualitative data blind to clinician class, we identified 120 instances of facilitators and 100 instances of barriers in data from Higher MFS participants, compared to 72 instances of facilitators and 125 instances of barriers in data from Lower MFS participants. Table 5 provides an overview of the types of determinants identified by class, with the percentage of instances in which each determinant was identified out of the total determinants identified.
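The percentages in this paragraph follow directly from the coded instance counts; a quick arithmetic check (counts taken from the text above):

```python
# Facilitator/barrier instance counts coded blind to clinician class
# (values from the text above).
higher_fac, higher_bar = 120, 100
lower_fac, lower_bar = 72, 125

higher_pct = higher_fac / (higher_fac + higher_bar)  # 120/220 ≈ 0.545
lower_pct = lower_fac / (lower_fac + lower_bar)      # 72/197 ≈ 0.365

print(f"Higher MFS facilitators: {higher_pct:.0%}")  # → 55%
print(f"Lower MFS facilitators: {lower_pct:.0%}")    # → 37%
```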

The most common four determinants of practice identified in the qualitative data were identified across both classes. Although the Higher MFS participants reported more determinants than the Lower MFS participants, clinicians across both classes endorsed the same types of determinants. However, the fact that, descriptively, a greater proportion of the determinants identified by participants in the Higher MFS group were facilitators, compared to those identified by Lower MFS participants, suggests additional support for the two-class model: across both the qualitative and quantitative data clinicians in the Higher MFS group reported more support for MBC and the MFS.


Discussion

In the context of an MFS implementation project within publicly funded, community-based youth mental health treatment services, we found support for two classes of clinicians. We tentatively refer to them as “Higher MFS” and “Lower MFS” participants, based on differences in clinician-reported implementation outcomes (i.e., perceptions of appropriateness, acceptability, and feasibility of the MFS and its outcome measure) and patterns of MFS use. Identifying these classes proved useful in several ways, deepening our understanding of the different types of implementation experiences and perceptions that clinicians reported at the end of a full year of implementation. First, adjusting for agency, Higher MFS participants tended to view their organizational settings as more supportive of EBP in general, including higher levels of agreement that their agencies focused on EBP, recognized and rewarded clinicians for using EBP, provided educational support for EBP, and selected clinicians with experience, skills, and training in EBP (of note, the latter two subscales had below-threshold reliability, and findings should therefore be interpreted with caution). Both groups reported similar data collection rates using the MFS, and higher rates of measure collection compared to rates of providing feedback or using MFS data to inform changes to treatment course. Previous MBC studies show similar trends: using data and adjusting treatment based on data is often the most difficult practice to change in implementation [47, 76,77,78]. Yet, even adjusting for agency, Higher MFS participants reported significantly higher rates of providing feedback based on the MFS, compared to the Lower MFS participants.

Findings from our qualitative analyses provided additional support for the two classes, indicating that participants in the Higher MFS group reported a higher proportion of facilitators than Lower MFS participants. However, the types of determinants most commonly noted were the same across classes. Of note, this is not to say that implementation differences between the classes are solely due to intrapersonal clinician factors; indeed, clinicians in both classes identified similar multi-level determinants influencing their practices, though those in the Higher MFS group experienced more facilitators and fewer barriers. Prior research also indicates that clinician- and agency-level determinants likely interact in reality to influence implementation outcomes [79]. Our goal with this study was to center the clinicians’ perspective to understand their experiences and multi-level determinants, but future work might mix different stakeholder perspectives to triangulate those of clients, supervisors, leaders, and clinicians. Interestingly, qualitative results did not surface elements of the school context as much as we anticipated. This may be because the clinicians in this sample are community-partnered, meaning their organizational context for implementation is a blend of their agency as the employer and school site(s) as the practice setting, so they may not face as many school-context-specific barriers to implementation as their school-employed counterparts [68, 71]. Contextual factors influencing implementation of measurement-based care and other evidence-based practices in schools, by various types of school-based clinicians, are an important area of future inquiry.

Our results indicate that clinician perceptions of implementation climate are clearly linked to clinician-reported implementation outcomes. Prior studies [80, 81] indicate the importance of clinician attitudes toward standardized assessments. Although such attitudes were not explicitly measured in the present study, our findings reflect the complex interplay between clinician perceptions and behaviors. Clinicians in the Higher and Lower MFS classes were distinguished not only by their rates of providing feedback and perspectives about the MFS, but also by their viewpoints on implementation climate. As demonstrated by small-to-medium ICCs, clinicians’ perspectives about implementation climate varied in part by clinician. This suggests that implementation strategies targeting individual clinician perspectives and behaviors, rather than, or in addition to, agency climate, may be important in influencing the uptake of MBC and use of MFSs in the future. In fact, there is a history of strategy tailoring at the organization, site, or agency level [32, 82], but our results indicate that strategy tailoring may also need to occur at the clinician level. Prior work from our team underscores the significant amount of implementation variation at the individual clinician level that is not well accounted for by traditional measures of clinician knowledge, perceptions, and professional characteristics [77]. Tailoring at the clinician level is admittedly more costly than tailoring at the organizational level [83], so a phased approach, whereby strategies are first selected for the organization and then further customized at the clinician level for lower MFS implementers, could prove more resource-effective.
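For readers less familiar with ICCs, the sketch below computes a one-way random-effects ICC(1) — the share of rating variance attributable to group (here, agency) membership — on toy data. The function and data are illustrative assumptions; the study's exact estimator may differ.

```python
import numpy as np

def icc1(groups):
    """One-way random-effects ICC(1): share of total variance that lies
    between groups (e.g., clinicians' climate ratings grouped by agency).
    Assumes equal group sizes; illustrative sketch only."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups[0])  # clinicians per agency
    msb = k * np.var([g.mean() for g in groups], ddof=1)       # between-agency mean square
    msw = float(np.mean([np.var(g, ddof=1) for g in groups]))  # within-agency mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy ratings for three agencies of three clinicians each; the agency
# means differ, so part of the variance is attributable to agency.
print(round(icc1([[1, 2, 3], [2, 3, 4], [3, 4, 5]]), 3))  # → 0.4
```

A small ICC, as observed in this study, means most climate-rating variance sits between clinicians within agencies rather than between agencies — the empirical basis for tailoring strategies at the clinician level.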

Implications for MBC in youth service settings

In all settings, including those that serve youth, implementation outcomes are influenced by multilevel and interrelated determinants regarding the recommended practice, setting factors, and characteristics of the individuals implementing. Systematic methods to identify the determinants that matter most can inform which implementation supports or strategies are provided. Coding implementation data by determinant using a pre-existing checklist, as we did in this study, can supply actionable, real-time information to help agency leadership, trainers, coaches, and project directors select and provide appropriate, tailored implementation strategies. In fact, the CFIR-ERIC Implementation Strategy Matching Tool [51] is a practical, free tool, accessible online, that identifies implementation strategies matched to determinants. Indeed, the community-based settings in which youth most frequently receive mental health care often have very limited resources, further underscoring the importance of identifying the most important and influential strategies to support implementation.

Guideline factors emerged as particularly critical in our study, indicating the importance of the recommendation or practice itself. This suggests that targeting intervention characteristics, such as those that promote clarity and adaptability of a measure and/or the MFS, identifying and preparing champions to support implementation in their clinical settings, and capturing and sharing local knowledge about the measure and MFS, are strategies likely to increase implementation of these practices in youth-serving, community-based mental health agencies [51]. Similarly, strategies addressing clinicians’ knowledge and beliefs about the intervention, self-efficacy in their capability to achieve implementation goals, and commitment to implementing the MFS in a sustained manner [51] will be important to promote the uptake and sustainment of MBC and use of the MFS.

Limitations and future directions

Several limitations should be noted. First, clinicians in agency 2 were significantly less likely to be in the Higher MFS class than those in agencies 3 and 4. This may indicate that unmeasured agency-level factors were important, perhaps more so in some agencies than others. Given this finding, we used multiple quantitative methods to mitigate the clustering effects of agency on clinician outcomes (i.e., LCA with robust standard errors and adjusting for agency in quantitative between-class analyses). A second limitation is that qualitative data on barriers and facilitators may have been influenced by the wording of the qualitative questions. Indeed, barriers and facilitators, as identified in this study, are essentially “mirror images” of each determinant domain. For example, clinicians identified lack of time as a barrier; ample time would therefore be a facilitator. Future research may consider deemphasizing the distinction between barriers and facilitators, for example by focusing on facilitators. Third, our classes were based on clinician-reported acceptability, feasibility, and appropriateness as leading outcomes for this type of work and may reflect high and low response pattern tendencies. However, the presence of other systematic differences between the classes supports the existence of real underlying between-group differences. Fourth, future research addressing MBC and MFS implementation efforts could examine other, more distal implementation outcomes such as fidelity (which may in fact be predicted or preceded by acceptability, feasibility, and appropriateness [84]) and elicit feedback not only from clinicians but also from clients and agency leadership. In our study, all client factors were clinician reported, and given evidence of the potential importance of agency-level effects, it will be important to understand the perspectives of agency leaders in future work addressing MBC and MFS implementation efforts. Relatedly, agency factors were not comprehensively assessed qualitatively. Fifth, although there are many strengths associated with our person-centered quantitative methodology, it is possible that by grouping clinicians into classes we failed to detect granular differences in associations between implementation outcomes and other study variables. Finally, our study was cross-sectional, relying on clinicians’ retrospective self-reports of MFS use, and did not address causation, incorporate contemporaneous reports of MFS use (which would likely be more accurate), or examine trajectories of change in clinician-reported implementation outcomes over time; these are possible areas for future research.


Conclusions

By investigating clinician factors associated with the implementation of MBC via an MFS, this study has important implications for the adaptation of EBPs in the context of community-based, “usual care” mental health services for youth. For example, our findings suggest that there are meaningful differences between clinicians, such that some are more likely to rate an MFS as acceptable, appropriate, and feasible. Notably, although we identified classes partly on the basis of clinician perceptions, perceptions of implementation climate varied significantly at the clinician level, controlling for agency. This underscores the potential need to target interventions at the individual clinician level, and we are hopeful that this work and continued efforts to understand variation in clinician experiences with MBC implementation and perceived implementation climate can inform targeted, future implementation supports and solutions.

Availability of data and materials

The datasets analyzed during the current study are not publicly available as data were gathered as part of a quality improvement project but are available from the second author on reasonable request.


Notes

  1. See Hoagwood and colleagues’ (2001) review for a history of evidence-based practices in child and adolescent mental health services in the United States, including treatment interventions and settings [10].

  2. Item level ICCs ranged from 0.002 to 0.106 and subscale level ICCs ranged from 0.019 to 0.080.



Abbreviations

MBC: Measurement-based care
MFS: Measurement feedback system
EBP: Evidence-based practice
LCA: Latent class analysis
ICS: Implementation Climate Scale
AIM: Acceptability of Intervention Measure
IAM: Intervention Appropriateness Measure
FIM: Feasibility of Intervention Measure
ICC: Intraclass correlation coefficient


References

  1. Whitney DG, Peterson MD. US national and state-level prevalence of mental health disorders and disparities of mental health care use in children. JAMA Pediatr. 2019;173(4):389–91.

  2. Bor W, Dean AJ, Najman J, Hayatbakhsh R. Are child and adolescent mental health problems increasing in the 21st century? A systematic review. Aust N Z J Psychiatry. 2014;48(7):606–16.

  3. Twenge JM, Cooper AB, Joiner TE, Duffy ME, Binau SG. Age, period, and cohort trends in mood disorder indicators and suicide-related outcomes in a nationally representative dataset, 2005–2017. J Abnorm Psychol. 2019;128(3):185.

  4. Centers for Disease Control and Prevention. Youth risk behavior survey data summary & trends report 2007–2017. Hyattsville: National Prevention Information Network; 2018. Available from:

  5. National Research Council, Institute of Medicine. Preventing mental, emotional, and behavioral disorders among young people: progress and possibilities. Board on Children, Youth, and Families, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press; 2009.

  6. Merikangas KR, He J-p, Burstein M, Swendsen J, Avenevoli S, Case B, et al. Service utilization for lifetime mental disorders in US adolescents: results of the National Comorbidity Survey–Adolescent Supplement (NCS-A). J Am Acad Child Adolesc Psychiatry. 2011;50(1):32–45.

  7. Duong MT, Bruns EJ, Lee K, Cox S, Coifman J, Mayworm A, et al. Rates of mental health service utilization by children and adolescents in schools and other common service settings: a systematic review and meta-analysis. Adm Policy Ment Health Ment Health Serv Res. 2021;48(3):420–39.

  8. Maag JW, Katsiyannis A. School-based mental health services: funding options and issues. J Disabil Policy Stud. 2010;21(3):173–80.

  9. Stewart RE, Mandell DS, Beidas RS. Lessons from Maslow: prioritizing funding to improve the quality of community mental health and substance use services. Psychiatr Serv. 2021;72(10):1219–21.

  10. Hoagwood K, Burns BJ, Kiser L, Ringeisen H, Schoenwald SK. Evidence-based practice in child and adolescent mental health services. Psychiatr Serv. 2001;52(9):1179–89.

  11. Brookman-Frazee L, Haine RA, Baker-Ericzén M, Zoffness R, Garland AF. Factors associated with use of evidence-based practice strategies in usual care youth psychotherapy. Adm Policy Ment Health Ment Health Serv Res. 2010;37(3):254–69.

  12. Weisz JR, Ugueto AM, Cheron DM, Herren J. Evidence-based youth psychotherapy in the mental health ecosystem. J Clin Child Adolescent Psychol. 2013;42(2):274–86.

  13. Bickman L, Rosof-Williams J, Salzer M, Summerfelt W, Noser K, Wilson S, et al. What information do clinicians value for monitoring adolescent client progress and outcomes? Prof Psychol Res Pract. 2000;31:70–4.

  14. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clin Psychol. 2003;10(3):288–301.

  15. Lyon AR, Lewis CC. Feedback systems to support implementation of measurement-based care. Behav Ther. 2017;40(7):241–7.

  16. Scott K, Lewis CC. Using measurement-based care to enhance any treatment. Cogn Behav Pract. 2015;22(1):49–59.

  17. Dollar KM, Kirchner JE, DePhilippis D, Ritchie MJ, McGee-Vincent P, Burden JL, et al. Steps for implementing measurement-based care: Implementation planning guide development and use in quality improvement. Psychol Serv. 2020;17(3):247–61.

  18. Fortney JC, Unützer J, Wrenn G, Pyne JM, Smith GR, Schoenbaum M, et al. A tipping point for measurement-based care. Psychiatr Serv. 2017;68:179–88.

  19. Lewis CC, Boyd M, Puspitasari A, Navarro E, Howard J, Kassab H, et al. Implementing measurement-based care in behavioral health: A review. JAMA Psychiatry. 2019;76(3):324–35.

  20. Parikh A, Fristad MA, Axelson D, Krishna R. Evidence base for measurement-based care in child and adolescent psychiatry. Child Adolescent Psychiatric Clin. 2020;29(4):587–99.

  21. Jensen-Doss A, Haimes EMB, Smith AM, Lyon AR, Lewis CC, Stanick CF, et al. Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Adm Policy Ment Health Ment Health Serv Res. 2018;45(1):48–61.

  22. Connors EH, Douglas S, Jensen-Doss A, Landes SJ, Lewis CC, McLeod BD, et al. What gets measured gets done: How mental health agencies can leverage measurement-based care for better patient care, clinician supports, and organizational goals. Adm Policy Ment Health Ment Health Serv Res. 2021;48(2):250–65.

  23. Douglas SR, Jonghyuk B, de Andrade ARV, Tomlinson MM, Hargraves RP, Bickman L. Feedback mechanisms of change: how problem alerts reported by youth clients and their caregivers impact clinician-reported session content. Psychother Res. 2015;25(6):678–93.

  24. Douglas S, Button S, Casey SE. Implementing for sustainability: Promoting use of a measurement feedback system for innovation and quality improvement. Adm Policy Ment Health Ment Health Serv Res. 2016;43(3):286–91.

  25. Marty D, Rapp C, McHugo G, Whitley R. Factors influencing consumer outcome monitoring in implementation of evidence-based practices: results from the National EBP Implementation Project. Adm Policy Ment Health Ment Health Serv Res. 2008;35(3):204–11.

  26. Gleacher AA, Olin SS, Nadeem E, Pollock M, Ringle V, Bickman L, et al. Implementing a measurement feedback system in community mental health clinics: a case study of multilevel barriers and facilitators. Adm Policy Ment Health Ment Health Serv Res. 2016;43(3):426–40.

  27. Ogden LP, Vinjamuri M, Kahn JM. A model for implementing an evidence-based practice in student fieldwork placements: barriers and facilitators to the use of “SBIRT”. J Soc Serv Res. 2016;42(4):425–41.

  28. Ploeg J, Davies B, Edwards N, Gifford W, Miller PE. Factors influencing best-practice guideline implementation: lessons learned from administrators, nursing staff, and project leaders. Worldviews Evid-Based Nurs. 2007;4(4):210–9.

  29. Stadnick NA, Lau AS, Barnett M, Regan J, Aarons GA, Brookman-Frazee L. Comparing agency leader and therapist perspectives on evidence-based practices: associations with individual and organizational factors in a mental health system-driven implementation effort. Admin Pol Ment Health. 2018;45(3):447–61.

  30. Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.

  31. Baker R, Camosso‐Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;(4):CD005470

  32. Lewis CC, Marti CN, Scott K, Walker MR, Boyd M, Puspitasari A, et al. Standardized versus tailored implementation of measurement-based care for depression in community mental health clinics. Psychiatr Serv. 2022;73(10):1094–101.

  33. Powell BJ, Mandell DS, Hadley TR, Rubin RM, Evans AC, Hurford MO, et al. Are general and strategic measures of organizational context and leadership associated with knowledge and attitudes toward evidence-based practices in public behavioral health settings? A cross-sectional observational study. Implement Sci. 2017;12(1):1–13.

  34. Williams NJ, Ehrhart MG, Aarons GA, Marcus SC, Beidas RS. Linking molar organizational climate and strategic implementation climate to clinicians’ use of evidence-based psychotherapy techniques: cross-sectional and lagged analyses from a 2-year observational study. Implement Sci. 2018;13(1):1–13.

  35. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Ment Health Serv Res. 2004;6(2):61–74.

  36. Nakamura BJ, Higa-McMillan CK, Okamura KH, Shimabukuro S. Knowledge of and attitudes towards evidence-based practices in community child mental health practitioners. Adm Policy Ment Health Ment Health Serv Res. 2011;38(4):287–300.

  37. Stewart RE, Chambless DL, Baron J. Theoretical and practical barriers to practitioners’ willingness to seek training in empirically supported treatments. J Clin Psychol. 2012;68(1):8–23.

  38. Nelson TD, Steele RG. Predictors of practitioner self-reported use of evidence-based practices: practitioner training, clinical setting, and attitudes toward research. Adm Policy Ment Health Ment Health Serv Res. 2007;34(4):319–30.

  39. Bearman SK, Weisz JR, Chorpita BF, Hoagwood K, Ward A, Ugueto AM, et al. More practice, less preach? The role of supervision processes and therapist characteristics in EBP implementation. Adm Policy Ment Health Ment Health Serv Res. 2013;40(6):518–29.

  40. Lyon AR, Lewis CC, Boyd MR, Hendrix E, Liu F. Capabilities and characteristics of digital measurement feedback systems: results from a comprehensive review. Adm Policy Ment Health Ment Health Serv Res. 2016;43(3):441–66.

  41. Bickman L. A Measurement Feedback System (MFS) is necessary to improve mental health outcomes. J Am Acad Child Adolesc Psychiatry. 2008;47(10):1114.

  42. Krageloh CU, Czuba K, Billington R, Kersten P, Siegert R. Using feedback from patient-reported outcome measures in mental health services: a scoping study and typology. Psychiatr Serv. 2015;66(3):563–70.

  43. Lambert MJ, Whipple JL, Kleinstäuber M. Collecting and delivering progress feedback: a meta-analysis of routine outcome monitoring. Psychotherapy. 2018;55(4):520.

  44. Shimokawa K, Lambert MJ, Smart DW. Enhancing treatment outcome of patients at risk of treatment failure: meta-analytic and mega-analytic review of a psychotherapy quality assurance system. J Consult Clin Psychol. 2010;78(3):298–311.

  45. Lyon AR, Cook CR, Locke J, Davis C, Powell BJ, Waltz TJ. Importance and feasibility of an adapted set of implementation strategies in schools. J Sch Psychol. 2019;76:66–77.

  46. Bickman L, Kelley SD, Breda C, de Andrade AR, Riemer M. Effects of routine feedback to clinicians on mental health outcomes of youths: results of a randomized trial. Psychiatr Serv. 2011;62(12):1423–9.

  47. Ross DF, Ionita G, Stirman SW. System-wide implementation of routine outcome monitoring and measurement feedback system in a national network of operational stress injury clinics. Adm Policy Ment Health Ment Health Serv Res. 2016;43(6):927–44.

  48. Kotte A, Hill KA, Mah AC, Korathu-Larson PA, Au JR, Izmirian S, et al. Facilitators and barriers of implementing a measurement feedback system in public youth mental health. Adm Policy Ment Health Ment Health Serv Res. 2016;43(6):861–78.

  49. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.

  50. Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.

  51. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

  52. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

  53. Bergman LR, Magnusson D. A person-oriented approach in research on developmental psychopathology. Dev Psychopathol. 1997;9(2):291–319.

  54. Domina T, Pharris-Ciurej N, Penner AM, Penner EK, Brummet Q, Porter SR, et al. Is free and reduced-price lunch a valid measure of educational disadvantage? Educ Res. 2018;47(9):539–55.

  55. Brown J. Client Feedback Form Manual. Beacon Health Options; 2014.

  56. Montgomery K, Connors E. Implementation science in schools. In: Franklin C, Harris MB, Allen-Meares P, editors. School services sourcebook. 3rd ed. Oxford University Press; In press.

  57. Barber J, Resnick SG. Collect, Share, Act: a transtheoretical clinical model for doing measurement-based care in mental health treatment. Psychol Serv. 2022. Advance online publication.

  58. Lyon AR, Cook CR, Brown EC, Locke J, Davis C, Ehrhart M, et al. Assessing organizational implementation context in the education sector: confirmatory factor analysis of measures of implementation leadership, climate, and citizenship. Implement Sci. 2018;13(1):5.

  59. Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS). Implement Sci. 2014;9(1):157.

  60. Tang W, Cui Y, Babenko O. Internal consistency: do we really know what it is and how to assess it? J Psychol Behav Sci. 2014;2(2):205–20.

  61. Leech NL, Onwuegbuzie AJ. A typology of mixed methods research designs. Qual Quant. 2009;43(2):265–75.

  62. Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educ Eval Policy Anal. 1989;11(3):255–74.

  63. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Adm Policy Ment Health Ment Health Serv Res. 2011;38(1):44–53.

  64. Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:35.

  65. Nylund-Gibson K, Choi AY. Ten frequently asked questions about latent class analysis. Transl Issues Psychol Sci. 2018;4(4):440–61.

  66. Muthén L, Muthén B. Mplus user’s guide (1998–2017). Los Angeles: Muthén & Muthén; 2017.

  67. Connors EH, Lyon AR, Garcia K, Sichel CE, Hoover S, Weist MD, et al. Implementation strategies to promote measurement-based care in schools: evidence from mental health experts across the USA. Implement Sci Commun. 2022;3(1):1–7.

  68. Connors E, Prout J, Vivrette R, Padden J, Lever N. Trauma-focused cognitive behavioral therapy in 13 urban public schools: mixed methods results of barriers, facilitators, and implementation outcomes. Sch Ment Heal. 2021;13(4):772–90.

  69. Connors EH, Arora P, Blizzard AM, Bower K, Coble K, Harrison J, et al. When behavioral health concerns present in pediatric primary care: factors influencing provider decision-making. J Behav Health Serv Res. 2018;45(3):340–55.

  70. Sichel CE, Javdani S, Gordon N, Huynh PPT. Examining the functions of women’s violence: accommodation, resistance, and enforcement of gender inequality. J Prev Intervention Commun. 2020;48(4):293–311.

  71. Connors EH, Schiffman J, Stein K, LeDoux S, Landsverk J, Hoover S. Factors associated with community-partnered school behavioral health clinicians’ adoption and implementation of evidence-based practices. Adm Policy Ment Health Ment Health Serv Res. 2019;46(1):91–104.

  72. Sichel CE, Winetsky D, Campos S, O'Grady MA, Tross S, Kim J, et al. Patterns and contexts of polysubstance use among young and older adults who are involved in the criminal legal system and use opioids: A mixed methods study. J Subst Abuse Treat. 2022;143:108864.

  73. Sichel CE, Javdani S, Shaw S, Liggett R. A role for social media? A community-based response to guns, gangs, and violence online. J Commun Psychol. 2021;49(3):822–37.

  74. Stein KF, Connors EH, Chambers KL, Thomas CL, Stephan SH. Youth, caregiver, and staff perspectives on an initiative to promote success of emerging adults with emotional and behavioral disabilities. J Behav Health Serv Res. 2016;43(4):582–96.

  75. Nylund KL, Asparouhov T, Muthén BO. Deciding on the number of classes in latent class analysis and growth mixture modeling: a Monte Carlo simulation study. Struct Equ Model. 2007;14(4):535–69.

  76. Bickman L, Douglas SR, De Andrade ARV, Tomlinson M, Gleacher A, Olin S, et al. Implementing a measurement feedback system: a tale of two sites. Adm Policy Ment Health Ment Health Serv Res. 2016;43(3):410–25.

  77. Connors E, Lawson G, Wheatley-Rowe D, Hoover S. Exploration, preparation, and implementation of standardized assessment in a multi-agency school behavioral health network. Adm Policy Ment Health Ment Health Serv Res. 2021;48(3):464–81.

  78. Lyon AR, Liu FF, Connors EH, King KM, Coifman JI, Cook H, et al. How low can you go? Examining the effects of brief online training and post-training consultation dose on implementation mechanisms and outcomes for measurement-based care. Implement Sci Commun. 2022;3(1):1–15.

  79. Glisson C, James LR. The cross-level effects of culture and climate in human service teams. J Organ Behav. 2002;23(6):767–94.

  80. Bjaastad JF, Jensen-Doss A, Moltu C, Jakobsen P, Hagenberg H, Joa I. Attitudes toward standardized assessment tools and their use among clinicians in a public mental health service. Nordic J Psychiatry. 2019;73(7):387–96.

  81. Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. J Clin Child Adolesc Psychol. 2010;39(6):885–96.

  82. Lewis CC, Scott K, Marriott BR. A methodology for generating a tailored implementation blueprint: An exemplar from a youth residential setting. Implement Sci. 2018;13(1):1–13.

  83. Wensing M. The Tailored Implementation in Chronic Diseases (TICD) project: Introduction and main findings. Implement Sci. 2017;12(1):1–4.

  84. Damschroder LJ, Reardon CM, Opra Widerquist MA, Lowery J. Conceptualizing outcomes for use with the Consolidated Framework for Implementation Research (CFIR): the CFIR Outcomes Addendum. Implement Sci. 2022;17(1):1–10.


Acknowledgements

We are deeply grateful for the time and talents of the participating clinicians, community-based agencies, and the school district who partnered with us on this project. Specifically, we acknowledge the contributions of The Children’s Guild, Thrive Behavioral Health, Innovative Therapeutic Services, Ms. Kathy Lane, and Mr. Ryan Voegtlin as local community champions of this effort. We also appreciate Ms. Nicolina Fusco’s and Ms. Sophia Selino’s support with references.


Funding

This work was supported by grant K08 MH116-119 (PI: Connors).

Author information

Authors and Affiliations



Contributions

CS conducted all quantitative and qualitative analyses. EC designed the quantitative and qualitative survey tools, reviewed all quantitative analyses, and contributed to the qualitative analyses. Both authors contributed to the writing of all sections and approved the final manuscript.

Corresponding author

Correspondence to Corianna E. Sichel.

Ethics declarations

Ethics approval and consent to participate

The current study was reviewed by the Institutional Review Board and approved as exempt from continuing review due to anonymous data collection. Data were collected as part of a quality improvement project and fully de-identified prior to the current secondary analyses; therefore, no additional consent processes were required.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Sichel, C.E., Connors, E.H. Measurement feedback system implementation in public youth mental health treatment services: a mixed methods analysis. Implement Sci Commun 3, 119 (2022).
