
Understanding the value of adhering to or adapting evidence-based interventions: a study protocol of a discrete choice experiment

Abstract

Background

Whereas the value of an evidence-based intervention (EBI) is often determined by its effect on clinical outcomes, the value of implementing and using EBIs in practice is broader, reflecting qualities such as appropriateness, equity, costs, and impact. Reconciling these value conflicts involves a complicated decision process that has received very limited scholarly attention. Inspired by studies on decision-making, the objective of this project is to explore how practitioners appraise the values of different outcomes and to test how this appraisal influences their decisions surrounding the so-called fidelity–adaptation dilemma. This dilemma is related to the balance between using an EBI as it was designed (to ensure its effectiveness) and making appropriate adaptations (to ensure alignment with constraints and possibilities in the local context).

Methods

This project consists of three sub-studies. The participants will be professionals leading evidence-based parental programs in Sweden and, in Sub-study 1, parents and decision-makers. Sub-study 1 will use sequential focus groups and individual interviews to explore parameters that influence fidelity and adaptation decisions—the dilemmas encountered, available options, how outcomes are valued by practitioners as well as other stakeholders, and value trade-offs. Sub-study 2 is a discrete choice experiment that will test how value appraisals influence decision-making using data from Sub-study 1 as input. Sub-study 3 uses a mixed-method design, with findings from the two preceding sub-studies as input in focus group interviews to investigate how practitioners make sense of findings from optimal decision situations (experiment) and constrained, real-world decision situations.

Discussion

The project will offer unique insights into decision-making processes that influence how EBIs are used in practice. Such knowledge is needed for a more granular understanding of how practitioners manage the fidelity–adaptation dilemma and thus, ultimately, how the value of EBI implementation can be optimized. This study contributes to our knowledge of what happens once EBIs are adopted—that is, the gap between the way in which EBIs are intended to be used and the way in which they are used in practice.

Background

The implementation of behavioral evidence-based interventions (EBIs) inevitably involves decisions that lead either to high fidelity or to adaptations [1, 2]. On the one hand, there is a need to ensure that EBIs can be delivered as originally designed (i.e., with high fidelity). On the other hand, when EBIs are used in practice, constraints and opportunities in the local context often prompt practitioners to make adaptations (i.e., deliberate actions to change the content or delivery so that it fits the context [2]). This implies that a choice often needs to be made between actions that promote either high fidelity or adaptation of the EBI. This decision, sometimes referred to as the “fidelity–adaptation dilemma” [3], is a critical but underexplored concern for implementation research [4]. The way in which people appraise the value of different decision outcomes is of particular importance to gain a better understanding of how fidelity–adaptation decisions are made.

Appraising the value of EBI outcomes

Many studies have examined how fidelity and different types of adaptations [5, 6] affect efficacy and effectiveness [7,8,9]. Mixed findings suggest that both high fidelity and significant adaptations may improve the effects of an EBI. However, efficacy is only one of many values that an EBI can optimally produce, making value a multicomponent, multilevel construct that represents the combination of benefits [2]. Other potentially relevant values are recognized in service outcomes, such as safety, timeliness, efficiency, cost-effectiveness, equity, and user (patient)-centeredness, as well as implementation outcomes, such as appropriateness, acceptability, and feasibility [10]. In line with this, empirical research supports the appropriateness of considering the value of fidelity and adaptations in relation to outcomes such as sustainability [11,12,13], reach [14], feasibility [6], equity [15], person-centeredness [16], and cost-effectiveness [17]. Thus, the value of an EBI can be appraised against several different outcomes. The perceived value of achieving a certain outcome can vary, which means that the reconciliation of the fidelity–adaptation dilemma depends on an appraisal of the outcome and the perceived likelihood of achieving it. For instance, it has been suggested that for some stakeholders, such as policy makers, cost-effectiveness may be a more important driver of decisions to adopt and use EBIs than efficacy [17].

The appraisal process is complicated by the fact that each decision concerning fidelity and adaptations may affect different outcomes in different ways. For example, a practitioner may omit certain content of an EBI that is perceived as culturally inappropriate to increase acceptability but may, at the same time, decrease the efficacy of the EBI [18, 19]. The preferred solutions to the fidelity–adaptation dilemma vary depending on how important it is to achieve a certain outcome. Thus, the value of an EBI reflects an appraisal of the configuration of all potential outcomes across service users, providers, organizations, and systems [2].

This means that decisions about fidelity and adaptations need to be made on the basis of a holistic judgment of how the decision will affect the configuration of outcomes. Thus, individuals take several potential outcomes into account simultaneously [20]. For example, it has been shown that practitioners take patient-related as well as financial and system-related factors into account in clinical decisions [21]. Such holistic judgments can be difficult since the multiple outcomes may not align [22]. This represents a value conflict since a certain choice may improve one outcome at the expense of another. Despite the ubiquity of these dilemmas, there is a surprising lack of discussion in the implementation literature about how different outcomes are valued and how this appraisal drives decisions about fidelity and adaptation.

Intervention and implementation studies have generally focused on the impact of EBIs or their implementation on a small set of outcomes. There is a shortage of theoretically based empirical investigation into how multicomponent value appraisals of outcomes are made and how they influence decisions related to fidelity and adaptation. Studies have shown that practitioners make adaptations for various reasons, such as to satisfy patient preferences [18], to make the EBI culturally appropriate [23], and to retain patients in the program [24]. However, these studies have not explicitly focused on how practitioners value and negotiate different outcomes in their decision-making. Moreover, although some empirical studies have suggested that decision-makers and practitioners (as well as researchers) may favor different outcomes [17, 25], there is a lack of studies directly contrasting different stakeholder perspectives on valued outcomes.

Thus, although some studies have indicated the type of outcomes that practitioners value (e.g., meeting patient needs [18], appropriateness [23], and patient retention [24]), they have not addressed the relationship between options and outcomes, the way in which outcomes are combined to make an option attractive, or the way in which trade-offs between conflicting outcomes are negotiated. Thus, whereas the literature suggests that fidelity and adaptation decisions can be justified based on how different outcomes are valued, there is a knowledge gap concerning how this appraisal process plays out in decision-making. This is particularly important, considering that decisions related to fidelity and adaptation represent choices between options that may have different configurations of values—some that make them attractive and some that do not. Knowledge of how outcomes are appraised is therefore essential for understanding how decisions related to fidelity and adaptations are made.

Making decisions about fidelity and adaptation

Although decisions affecting fidelity and adaptation can be made not only at the individual but also at the organizational and system levels, frontline practitioners (e.g., service providers in health or social care organizations) sit at the nexus of the fidelity–adaptation dilemma. They (1) tend to have the most in-depth understanding of the local service context and the fit of an EBI, (2) are the direct targets of EBI training and professional development strategies, and (3) are the default (and frequently unsupported) fidelity–adaptation decision-makers. Indeed, for EBI practitioners, the fidelity–adaptation dilemma is not a philosophical or theoretical question. It entails a complicated decision process wherein practitioners need to weigh their options for action based on several—sometimes conflicting—outcomes. The way in which these decisions are made remains largely unexplored in the implementation literature and practice.

A decision is formally defined as “a commitment to a course of action that is intended to yield results that are satisfying for specified individuals” [26]. Two theories, expected utility and ecological rationality, provide somewhat different perspectives on decision-making. The expected utility theory is a “rational theory” proposing that people act to maximize utility—that is, they act based on a holistic estimate of the total value, or satisfaction, that a choice will offer [20]. Thus, utility is frequently used to operationalize value as a multicomponent, multilevel construct [20]. The theory of expected utility describes how people make decisions according to structured processes wherein rules of logic or probability are applied to optimize the utility of a decision [27]. This includes a series of steps, starting with identifying a problem that calls for a decision and the available options for action, followed by an assessment of the possibilities and risks of each option and the anticipated impact on outcomes [28]. In contrast, ecological rationality theory is a bounded rationality theory that describes how decisions are influenced by the task at hand, individual factors, and the environment in which they are made [29, 30].

The literature on EBI fidelity and adaptations suggests that decisions are optimized when made through a structured process, proactively and with careful consideration of how they will affect core components of an EBI and, ultimately, its effectiveness [19, 31,32,33,34]. This reflects a rational approach to decision-making [35] that mimics the decision-making process described by expected utility theory. However, practitioners often make decisions about fidelity or adaptations under bounded conditions that are more in line with ecological rationality—conditions under which a lack of time and think-space leads to ad hoc and implicit decisions with limited consideration of consequences [24, 29, 36]. Thus, theoretically, the appraisal of which outcomes are most valuable is central in decisions related to the fidelity–adaptation dilemma [27]. This is particularly true when the outcomes are in conflict, which means that the decision involves a trade-off between outcomes. Nevertheless, the validity of these theoretical perspectives in relation to the fidelity–adaptation dilemma has yet to be investigated.

The lack of knowledge of how outcomes are valued and how this affects fidelity–adaptation decisions limits our understanding of which outcomes EBIs should be valued against to optimize their benefit. This calls for studies on the relationship between value appraisals and decisions to adhere or adapt.

Aim and research questions

Inspired by decision-making theory, the objective of this project is to explore how practitioners appraise the values of the outcomes of EBI implementation and to test how this appraisal influences decisions related to fidelity–adaptation dilemmas.

The project consists of three sub-studies. Sub-study 1 is exploratory and aims to uncover the dilemmas, options, valued outcomes, and value trade-offs in fidelity and adaptation decisions (research questions [RQ] 1–4). This will inform Sub-study 2, a discrete choice experiment that will test the impact of value configurations on decisions related to fidelity and adaptation (RQ5). The explorative and experimental findings of Sub-studies 1 and 2 will form the basis of Sub-study 3, a mixed-method study aiming to shed light on the differences between optimal and constrained decision situations (RQ6).

Sub-study 1: exploring fidelity–adaptation parameters

RQ1. What are the typical fidelity–adaptation dilemmas that practitioners encounter?

RQ2. What options for action are there for practitioners facing these typical dilemmas?

RQ3. What outcomes are valued by the various stakeholders in decisions related to fidelity–adaptation dilemmas?

RQ3a. What are the value appraisals associated with different options for practitioners?

RQ3b. How do value appraisals differ between stakeholders (i.e., service users, practitioners, and decision-makers)?

RQ4. What value conflicts do practitioners encounter, and how are trade-offs made?

Sub-study 2: experimentally testing rational fidelity–adaptation decision-making

RQ5. How do experimentally manipulated value configurations explain choices between fidelity and adaptation?

RQ5a. Which option do practitioners prefer?

RQ5b. How do value appraisals influence decisions?

RQ5c. What trade-offs are practitioners willing to make to improve a certain value? What risks are they willing to take?

Sub-study 3: contrasting experimental and real-world value trade-offs

RQ6. How do practitioners make sense of decisions made in optimal decision situations (experiment) and constrained, real-world decision situations?

Theoretical approach

This project uses decision-making theory to understand how value appraisals influence decisions related to the fidelity–adaptation dilemma. It combines the theories of expected utility and ecological rationality, which will allow us to understand how decisions about fidelity and adaptation are made rationally, in line with current recommendations in implementation research [19, 31,32,33,34], as well as how decisions are often made in practice, under bounded conditions.

Expected utility theory is specifically used to outline the parameters explored (Sub-study 1) and manipulated in the experiment (Sub-study 2). To understand decisions, the first step is to identify the options (RQ2) available in typical fidelity–adaptation dilemmas (RQ1). The next step is to understand how the potential outcomes of these options are valued—that is, how the different options are judged based on the configuration of outcomes (RQ3). Theoretically, each outcome, called an attribute, can be conceptualized on different levels, such as being more or less efficient or user-centered. After identifying options, attributes, and levels, we will explore the decision situation as a whole, including value conflicts and trade-offs, such as when one option maximizes effectiveness but reduces reach and another increases reach but sacrifices effectiveness (RQ4).
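
One standard way to formalize this appraisal (our illustration; the protocol itself does not commit to a specific functional form) is an additive multi-attribute utility, in which the overall value of an option is a weighted sum of its attribute levels:

U_i = \sum_{k=1}^{K} w_k \, v_k(x_{ik}),

where x_{ik} is the level of attribute (outcome) k offered by option i, v_k(·) maps that level to a value, and w_k captures how strongly the decision-maker values attribute k. A value conflict arises when no single option maximizes every v_k(x_{ik}), so the weights determine which trade-off is preferred.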

The theory of expected utility is a normative theory in that it describes how people should make decisions in light of uncertainties, such as when practitioners face the fidelity–adaptation dilemma and must choose between options that each involve some kind of risky prospect [27]. However, in many decision situations, there are constraints that limit time and information, and the “more-is-more” approach suggested by expected utility theory may not always be feasible [27]. Bounded rationality is a broad theoretical framework used to describe how people, making the most of their cognitive ability in a complex environment that restricts the use of comprehensive decision-making processes, tend to focus on finding good enough solutions. For example, they stop searching for options as soon as they identify one that exceeds a certain aspiration level or meets certain criteria [30]. The shortcomings and risks associated with applying heuristics have received significant interest (e.g., from Nobel laureate Daniel Kahneman), including with regard to clinical decision-making [37]. However, it has also been pointed out that the use of heuristics may not mean that people are irrational but rather ecologically rational—that is, adaptive in that they use a “less-is-more” approach to making decisions in contexts that are known to them [29]. Ecological rationality theory and its perspective on bounded rationality will inform the contrasts explored in RQ6. In this, we will follow recommendations to compare the experimental findings with real-world data [38].
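
To make the contrast between the two decision rules concrete, the sketch below is a minimal illustration (all option names, outcome values, weights, and the aspiration level are invented for the example and do not come from the protocol). It compares a "more-is-more" expected-utility maximizer, which scores every option on every valued outcome, with a "less-is-more" satisficing heuristic, which accepts the first option that clears an aspiration level:

```python
# Purely illustrative: contrasting expected-utility maximization with satisficing.
# All options, outcome values, weights, and thresholds below are hypothetical.

options = {
    "deliver with full fidelity":          {"efficacy": 0.9, "acceptability": 0.5, "reach": 0.6},
    "omit culturally sensitive module":    {"efficacy": 0.7, "acceptability": 0.9, "reach": 0.8},
    "shorten sessions to retain families": {"efficacy": 0.6, "acceptability": 0.8, "reach": 0.9},
}
weights = {"efficacy": 0.5, "acceptability": 0.3, "reach": 0.2}  # hypothetical value appraisal


def utility(outcomes):
    """Weighted sum over all valued outcomes (the 'more-is-more' appraisal)."""
    return sum(weights[k] * v for k, v in outcomes.items())


def maximize(options):
    """Expected-utility style: evaluate every option, pick the highest utility."""
    return max(options, key=lambda name: utility(options[name]))


def satisfice(options, aspiration=0.70):
    """Bounded-rationality style: stop at the first option that is 'good enough'."""
    for name, outcomes in options.items():
        if utility(outcomes) >= aspiration:
            return name
    return maximize(options)  # fall back to exhaustive search if nothing clears the bar


print(maximize(options))   # -> "omit culturally sensitive module" (utility 0.78)
print(satisfice(options))  # -> "deliver with full fidelity" (utility 0.72, first to clear 0.70)
```

The two rules can land on different choices even with identical value appraisals, which is precisely the contrast that RQ6 will probe between the structured experimental decision situation and bounded, real-world decision situations.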

Methods

Design

This is a multi-method study consisting of three sub-studies: a qualitative, exploratory study (Sub-study 1), a discrete choice experiment conducted through a survey (Sub-study 2), and a mixed-method study (Sub-study 3) (Fig. 1). The findings will be reported using the Good Reporting of A Mixed Methods Study guidelines [39].

Fig. 1 Outline of study design and research questions

Setting

This is a nationwide study set in Sweden that will focus on parental programs as the target EBIs. Parental programs are psychosocial interventions that aim to improve parenting practices and behaviors, covering the prevention–intervention spectrum (i.e., from universal to indicated prevention programs) and often delivered in community contexts. Parental programs are well suited for studying fidelity–adaptation decision-making because there are multiple programs available that (1) have strong empirical support [40], (2) contain established fidelity assessment rubrics, (3) have broad uptake in practice settings, and (4) involve several categories of professionals. Applying these criteria to the Swedish context, we identified six programs for likely inclusion: All Children in Focus [41], Comet [42], Cope [43], Incredible Years [44], Triple-P [45], and Connect [46]. Parental programs are also on the policy agenda in Sweden, being promoted by the national government, and thus have broad policy and practical implications.

Participants and recruitment

In Sweden, parental programs are primarily the responsibility of local and regional government agencies. Consequently, there are 290 municipalities and 21 regions that can potentially participate. Within these organizations, the primary participant group will be practitioners implementing evidence-based parental programs, including several categories of professionals (e.g., social workers, preschool teachers, psychologists, and nurses). Additionally, service recipients and decision-makers (e.g., managers and policy makers) will be recruited to address RQ3.

The participants will be independently invited to each sub-study, although it is anticipated that some may participate in more than one (e.g., focus groups and experiment). For Sub-studies 1 and 3, we will use stratified purposeful sampling [47] to ensure the representation of municipalities of all sizes and both rural and urban areas. Organizations will be invited to participate based on the information provided on their websites. There is also an established collaboration with the Family Law and Parental Support Authority, which supports outreach and plays a national coordinating role in the implementation of parental programs in Sweden.

If the initial contact with an agency is positive, all professionals working on parental programs in that organization who have formal training in a parental program that meets the abovementioned criteria and have practical experience in leading parental groups will be invited to participate in the study. RQ3b also targets service users (parents) and decision-makers (managers and policy makers). These will be recruited through the participating practitioners using a snowball sampling approach. In Sub-study 2, we will invite all potential participants identified in the previous steps.

Sub-study 1: data collection and analysis

A combination of qualitative methods will be used to address RQ1–4 and to ensure the external validity of the experiment (Sub-study 2) by basing it on empirical data [20]. First, fidelity–adaptation dilemmas (RQ1) will be explored through about 10 individual interviews with parental program practitioners, aimed at understanding how practitioners describe typical fidelity–adaptation dilemmas. Throughout the study, the number of interviews will depend on the richness and complexity of the data [48]. This means that the number of respondents cannot be determined beforehand.

Second, sequential focus group interviews [49] with 4–8 participants in each group will be conducted to explore the alternatives between which practitioners choose (i.e., options [RQ2]) and the outcomes that influence their choices (i.e., attributes [RQ3a]). Inspired by the procedure of Coast and Horrocks for exploring attributes [50], we will use an iterative approach whereby data collection and analysis proceed concurrently. Once the analysis indicates that options have been thoroughly explored, we will gradually shift the emphasis from primarily investigating options in the initial focus groups to the outcomes that are valued with those options (i.e., the attributes). This shift also entails moving from a fully explorative approach in the first focus group, in which no option has yet been identified, to a sequential interview format, in which the exploration of options is followed by the presentation of the options identified in previous focus groups. This approach allows the exploration of the anticipated outcomes of all options and the way in which they are valued. With this iterative approach, there is no fixed set of focus groups planned in advance. Based on previous research, we anticipate three to four practitioner focus groups [51]. A further two to three focus groups each will be held with parents and with decision-makers to explore how these stakeholders appraise the value of parental groups (RQ3b).

Third, another round of individual interviews with practitioners (RQ4) will be conducted. Previously collected data will be used as input to elicit information about the value conflicts and the associated trade-offs that practitioners experience. The interviews will include two phases, inspired by Grundstein-Amado’s study on ethical decision-making [52]. First, the participants will be asked to describe critical incidents in which they had to make a decision related to the fidelity–adaptation dilemma and the value trade-offs involved. Second, the dilemmas and associated options and valued outcomes identified in the previous steps will be presented, and value conflicts will be explored. In this, we will also collect data on how each outcome varies, which will subsequently inform the selection of response categories (i.e., levels) for the attributes when constructing the experiment (RQ5). Data collection, transcription, and analysis will be conducted iteratively. We estimate that 15–20 interviews will be required.

Data analysis

Data from all qualitative methods in Sub-study 1 will be analyzed using reflexive thematic analysis [53, 54]. The interviews will be recorded and transcribed verbatim. Two persons will conduct the analysis and reflectively compare each other’s interpretations of the data. To evaluate the study’s trustworthiness, Guba and Lincoln’s [55] criteria and recommendations for verification strategies will be used. The remaining researchers will act as informed outsiders.

Sub-study 2: data collection and analysis

After exploring the parameters underlying decisions about fidelity and adaptation, to address RQ5, we will conduct a discrete choice experiment using an online survey to test how different combinations of attributes and levels affect choices [20].

The experiment will mimic real-world decisions. Practitioners will make decisions based on a holistic appraisal of two options—adhering or adapting. The options differ in terms of combinations of attribute levels—that is, they offer some outcomes with more value and others with less (i.e., different configurations of values). The value configuration will serve as the independent variable, and the choice between adhering or adapting will be the dependent variable. In the hypothetical example shown in Table 1, the attributes (outcomes) are efficacy, timeliness, and two types of adverse outcomes—one related to those receiving the EBI and one related to those not receiving it; the dependent variable is adherence to or adaptation of the program.

Table 1 A hypothetical example of a survey question in the discrete choice experiment

Experimental survey development

Scenarios (i.e., typical dilemmas; RQ1), options (RQ2), attributes (RQ3), and the response categories that are relevant to an attribute (i.e., levels; RQ4) will be identified through the preceding qualitative steps (Sub-study 1). A central part of designing the experiment is determining the number of scenarios and questions in the survey. The number of possible combinations of attributes and levels increases exponentially (e.g., five attributes with five levels each yields 3125 combinations). Therefore, we will use a fractional factorial experimental design, which statistically reduces the number of combinations that need to be presented [56].
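
As a rough illustration of why this reduction is needed (a sketch under the example’s assumption of five attributes with five levels each; the actual design will select profiles for statistical efficiency as in [56], not by the naive random subsetting shown here):

```python
import itertools
import random

n_attributes, n_levels = 5, 5

# Full factorial: every possible combination of attribute levels.
full_factorial = list(itertools.product(range(n_levels), repeat=n_attributes))
print(len(full_factorial))  # 3125 = 5**5 candidate profiles

# A fractional factorial design keeps only a small, deliberately chosen subset.
# Random sampling is shown here only to convey the size reduction; in practice the
# subset is chosen to maximize statistical efficiency (e.g., a D-efficient design).
fraction = random.sample(full_factorial, k=25)
print(len(fraction))  # 25 profiles instead of 3125
```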

If the pre-work indicates that some value judgments do not vary meaningfully, we may opt for a partial profile design, in which the level of specific attributes is held constant [57]. This will also be done if there are more than five to six outcomes associated with a specific scenario to reduce the cognitive load associated with judging a large number of attributes [57] and to minimize the risk of the survey being perceived as too time-consuming and/or difficult [58].

The experiment will include approximately four scenarios, each with a number of questions where the respondent will be asked to choose between two or more options (i.e., option-choice questions). We will aim for around 96 option-choice questions, 24 for each dilemma. Each respondent will be randomly assigned to one of six versions, meaning that each participant will respond to 16 questions, four per scenario. Up to 16 questions are commonly used as a trade-off between cognitive load and data acquisition [59]. The sequence of scenarios will be randomized. For each question, participants will be asked to indicate how certain they are about their choice, to assess choice certainty [60]. In addition, the stability and rationality of responses will be tested. Choice consistency (if the same choice is made twice) will be tested by repeating one question in each survey, and rationality by including a discrete choice comparison where one alternative is superior to the other (choice monotonicity) [60]. Data collected through the survey will also include demographic information, education, professional experience, and experience in evidence-based parental programs.
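
The survey arithmetic described above can be sketched as follows (an illustration of the blocking logic only; whether the consistency and monotonicity checks are added on top of the 16 questions or counted within them is not specified in the protocol and is assumed here to be the former):

```python
# Blocking the planned option-choice questions into survey versions.
n_scenarios = 4            # typical dilemmas identified in Sub-study 1
questions_per_scenario = 24
n_versions = 6             # each respondent is randomly assigned to one version

total_questions = n_scenarios * questions_per_scenario       # 96 option-choice questions
per_respondent = total_questions // n_versions               # 16 questions per respondent
per_scenario_per_respondent = per_respondent // n_scenarios  # 4 questions per scenario

# Quality checks per version (assumed to be added on top of the 16 questions):
#  - one repeated question  -> choice consistency (is the same choice made twice?)
#  - one dominated comparison (one option better on every attribute) -> monotonicity
quality_checks = 2

print(total_questions, per_respondent, per_scenario_per_respondent)  # 96 16 4
print(per_respondent + quality_checks)                               # 18 items per respondent
```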

Data collection

After careful pilot testing, participants will be sent emails with a link to a secure web-based survey. The required sample size for an experiment depends on the numbers of choice tasks, alternatives, and analysis cells [59]. A rule of thumb is that a sample size of over 100, or 20 respondents per questionnaire version (six in this case), is required [59]. Based on this, we aim for at least 120 respondents, which we deem feasible given that, according to a conservative estimate, at least 1000 practitioners work on parental programs in Sweden.
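
For comparison, a widely cited heuristic discussed in the DCE sample-size literature [59] is the Johnson–Orme rule of thumb,

N > 500c / (t × a),

where t is the number of choice tasks per respondent, a the number of alternatives per task, and c the largest number of attribute levels. With t = 16 and a = 2, and assuming c = 5 levels (an assumption, since the levels will only be determined in Sub-study 1), this gives N > (500 × 5)/(16 × 2) ≈ 78, so the planned minimum of 120 respondents (6 versions × 20) sits comfortably above both thresholds.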

Data analysis

The data will be analyzed using random parameters logit or multinomial logit models, depending on whether the scenario includes a choice between two or more than two options. Nested models may be an alternative if they better reflect how decisions are made (i.e., if decisions follow a decision-tree structure). We will use effect coding rather than dummy coding, as this allows the estimation of effect sizes for each attribute level. Individual utility profiles will be modeled, from which we will estimate parameters reflecting which option practitioners prefer, including the magnitude, sign, and statistical significance of coefficients; the relative importance of different outcomes; how the configurations of values influence choices; how changes in value configurations affect the probability of a certain decision; and willingness to trade—that is, what a person is willing to give up in one outcome to improve another (the marginal rate of substitution) [56].
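
For orientation, the core quantities behind these analyses can be written out in their standard form (a generic formulation consistent with [56], not the project’s final model specification). In a multinomial (conditional) logit model, the systematic utility of option j in choice task s is a linear function of its attribute levels,

V_{sj} = \sum_{k} \beta_k x_{sjk},

and the probability that a respondent chooses option j is

P(j \mid s) = \exp(V_{sj}) / \sum_{j'} \exp(V_{sj'}).

In a random parameters (mixed) logit, the coefficients \beta_k are allowed to vary across respondents, which yields the individual utility profiles mentioned above. Holding utility constant, the marginal rate of substitution between attributes k and m (the amount of outcome m a respondent is willing to give up for a one-unit improvement in outcome k) is

MRS_{k,m} = -\beta_k / \beta_m.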

Differences in decisions between stakeholders and practitioners with different levels of experience will be analyzed through split-sample analysis using log-likelihood Chow tests to examine whether preferences differ between subgroups.
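
The logic behind such a test is the standard log-likelihood ratio (Chow-type) comparison; as a brief sketch rather than the final analysis plan:

LR = -2 [ LL_pooled − (LL_group1 + LL_group2) ],

which is compared against a chi-squared distribution with degrees of freedom equal to the number of extra parameters gained by estimating the model separately in each subgroup; a significant statistic indicates that preferences differ between the subgroups.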

To determine trade-off levels, we will calculate the maximum acceptable risk (from preference weights)—that is, the greatest risk that participants are willing to accept for a given outcome or the marginal rate of substitution. The experiment will thus provide information on which decision, given the known value configurations, represents the optimal balance between the different outcomes.
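
In the same notation (again a generic sketch), the maximum acceptable risk for an improvement Δx_k in outcome k is the increase in the risk attribute that leaves overall utility unchanged:

MAR = -(\beta_k Δx_k) / \beta_{risk},

where \beta_{risk} is the (typically negative) preference weight of the risk attribute; a decision that improves outcome k remains acceptable as long as the associated risk increase stays below this threshold.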

Sub-study 3: data collection and analysis

Sub-study 3 will use a mixed-method design [61]. We will conduct three to four focus group interviews using the findings of Sub-studies 1 and 2 as input, thus inviting the participants to elaborate and expand on the findings from the experiment, which represent a highly structured decision situation, and interviews, which provide information about less ordered, constrained decision situations. We will contrast qualitative (RQ1–4) and quantitative (RQ5) data and then collect additional qualitative data to address a new research question (RQ6) [62]. An interview guide will be developed based on the dilemmas and value conflicts identified. For each dilemma, the respondents will be shown graphical summaries of findings from the interviews and the experiment. The interview guide will be semi-structured around each dilemma and will point practitioners to value conflicts. However, the questions will be open-ended, for example, “What are your thoughts on the results?” and “How does this reflect your own experience?”

Data analysis

The data will be analyzed using the six-step reflexive thematic approach, which aims to identify themes and make sense of patterns of meanings in the data [53, 54].

Discussion

This study is unique in its attempt to approach the fidelity–adaptation dilemma in the context of practitioners’ decision-making processes. Decision theories are used to first explore and then test how practitioners appraise the values of different outcomes of EBI implementations and how this value appraisal influences decisions related to fidelity–adaptation dilemmas. This decision-making perspective will make a significant contribution to the literature, which to date has primarily either provided general recommendations on how decisions should be made or described when, what, how, and by whom (but seldom why) adaptations are made.

Furthermore, the project will provide an example of how discrete choice experiments can be used to understand decisions affecting fidelity and adaptations when practitioners use EBIs. Discrete choice experiments have been used extensively in health economics, for example, to evaluate patient experiences and health outcomes and assess trade-offs in outcomes and clinical decision-making [63]. It has been suggested that discrete choice experiments can be used as a stakeholder engagement strategy to identify attributes of EBIs and implementation strategies that can make the adoption of an EBI more attractive [64,65,66]. For example, it can be used to select and tailor implementation strategies to the needs of providers [67] or to barriers and facilitators [58] and to evaluate the appropriateness of these strategies [68]. Its advantages over traditional surveys include a more granular understanding of people’s decisions based on how they value options [67] and a better reflection of the trade-offs often involved in decisions related to implementation [58]. Despite its suggested benefits [64, 67], the use of discrete choice experiments in implementation research is still in its infancy and has, to our knowledge, not yet been used to explore the fidelity–adaptation dilemma and the influence of value configurations on practitioners’ decisions.

This study is also original in terms of its focus on how different outcomes are valued and drive decisions. Debates about the fidelity–adaptation dilemma in the literature have primarily focused on how it affects effectiveness, whereas practitioners are obligated to consider multiple types of outcomes in their decision-making. Thus, this project will offer a rare insight into how considerations of the values of different outcomes affect practitioners’ decisions related to fidelity and adaptations. Such knowledge is necessary for producing solid recommendations on how fidelity and adaptation decisions should be made. Thus, the project will provide a deeper understanding of how the values of EBI outcomes are appraised and how implementation success is to be determined.

Availability of data and materials

The datasets used will be available from the corresponding author on reasonable request.

Abbreviations

EBI:

Evidence-based intervention

References

  1. Lyon AR, Bruns EJ. User-centered redesign of evidence-based psychosocial interventions to enhance implementation—hospitable soil or better seeds? JAMA Psychiatry. 2019;76(1):3–4. https://doi.org/10.1001/jamapsychiatry.2018.3060.


  2. von Thiele Schwarz U, Aarons GA, Hasson H. The Value Equation: Three complementary propositions for reconciling fidelity and adaptation in evidence-based practice implementation. BMC Health Serv Res. 2019;19(1):868.

  3. Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18(1):23–45. https://doi.org/10.1016/S0272-7358(97)00043-3.


  4. Movsisyan A, Arnold L, Evans R, Hallingberg B, Moore G, O’Cathain A, et al. Adapting evidence-informed complex population health interventions for new contexts: a systematic review of guidance. Implement Sci. 2019;14(1):105. https://doi.org/10.1186/s13012-019-0956-5.


  5. Sundell K, Beelmann A, Hasson H, von Thiele Schwarz U. Novel programs, international adoptions, or contextual adaptations? Meta-analytical results from German and Swedish Intervention Research. J Clin Child Adolesc Psychol. 2015:1–13.

  6. Escoffery C, Lebow-Skelley E, Haardoerfer R, Boing E, Udelson H, Wood R, et al. A systematic review of adaptations of evidence-based public health interventions globally. Implement Sci. 2018;13(1):125. https://doi.org/10.1186/s13012-018-0815-9.


  7. Mihalic S. The importance of implementation fidelity. Emot Behav Disorders Youth. 2004;4(83-86):99–105.


  8. Elliott DS, Mihalic S. Issues in disseminating and replicating effective prevention programs. Prev Sci. 2004;5(1):47–53. https://doi.org/10.1023/B:PREV.0000013981.28071.52.


  9. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3):327–50. https://doi.org/10.1007/s10464-008-9165-0.


  10. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76. https://doi.org/10.1007/s10488-010-0319-7.


  11. Chambers D, Glasgow R, Stange K. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117. https://doi.org/10.1186/1748-5908-8-117.


  12. Klinga C, Hasson H, Sachs MA, Hansson J. Understanding the dynamics of sustainable change: A 20-year case study of integrated health and social care. BMC Health Serv Res. 2018;18(1):400. https://doi.org/10.1186/s12913-018-3061-6.


  13. Strehlernert H. From policy to practice: exploring the implementation of a national policy for improving health and social care. Stockholm: Karolinska Institutet; 2017.


  14. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126–53. https://doi.org/10.1177/0163278705284445.


  15. Bond GR, Becker DR, Drake RE. Measurement of fidelity of implementation of evidence-based practices: case example of the IPS Fidelity Scale. Clin Psychol Sci Pract. 2011;18(2):126–41.


  16. Joyner MJ, Paneth N. Seven questions for personalized medicine. JAMA. 2015;314(10):999–1000. https://doi.org/10.1001/jama.2015.7725.


  17. Jones Rhodes WC, Ritzwoller DP, Glasgow RE. Stakeholder perspectives on costs and resource expenditures: tools for addressing economic issues most relevant to patients, providers, and clinics. Transl Behav Med. 2018;8(5):675–82. https://doi.org/10.1093/tbm/ibx003.


  18. Kakeeto M, Lundmark R, Hasson H, von Thiele Schwarz U. Meeting patient needs trumps adherence. A cross-sectional study of adherence and adaptations when national guidelines are used in practice. J Eval Clin Pract. 2017;23(4):830–8. https://doi.org/10.1111/jep.12726.


  19. Aarons GA, Green AE, Palinkas LA, Self-Brown S, Whitaker DJ, Lutzker JR, et al. Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implement Sci. 2012;7(1):32. https://doi.org/10.1186/1748-5908-7-32.


  20. Mühlbacher A, Johnson FR. Choice experiments to quantify preferences for health and healthcare: state of the practice. Appl Health Econ Health Policy. 2016;14(3):253–66. https://doi.org/10.1007/s40258-016-0232-7.


  21. Korlén S, Amer-Wåhlin I, Lindgren P, von Thiele Schwarz U. Professionals’ perspectives on a market-inspired policy reform: a guiding light to the blind spots of measurement. Health Serv Manag Res. 2017;30(3):148–55. https://doi.org/10.1177/0951484817708941.


  22. Dubois RW, Westrich K. As value assessment frameworks evolve, are they finally ready for prime time? Value Health. 2019;22(9):977–80. https://doi.org/10.1016/j.jval.2019.06.002.


  23. Stirman S, Miller C, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8(1):65. https://doi.org/10.1186/1748-5908-8-65.


  24. Moore J, Bumbarger B, Cooper B. Examining adaptations of evidence-based programs in natural contexts. J Prim Prev. 2013;34(3):147–61. https://doi.org/10.1007/s10935-013-0303-6.


  25. Aarons GA, Fettes DL, Hurlburt MS, Palinkas LA, Gunderson L, Willging CE, et al. Collaboration, negotiation, and coalescence for interagency-collaborative teams to scale-up evidence-based practice. J Clin Child Adolesc Psychol. 2014;43(6):915–28. https://doi.org/10.1080/15374416.2013.876642.


  26. Yates JF, Tschirhart MD. Decision-making expertise. The Cambridge handbook of expertise and expert performance; 2006. p. 421–38. https://doi.org/10.1017/CBO9780511816796.024.


  27. Schoemaker PJ. The expected utility model: Its variants, purposes, evidence and limitations. J Econ Lit. 1982:529–63.

  28. Simon HA. Rational choice and the structure of the environment. Psychol Rev. 1956;63(2):129–38. https://doi.org/10.1037/h0042769.


  29. Todd PM, Brighton H. Building the theory of ecological rationality. Mind Mach. 2016;26(1-2):9–30. https://doi.org/10.1007/s11023-015-9371-0.


  30. Gigerenzer G, Gaissmaier W. Heuristic decision making. Annu Rev Psychol. 2011;62(1):451–82. https://doi.org/10.1146/annurev-psych-120709-145346.


  31. Lee SJ, Altschul I, Mowbray CT. Using planned adaptation to implement evidence-based programs with new populations. Am J Community Psychol. 2008;41(3-4):290–303. https://doi.org/10.1007/s10464-008-9160-5.


  32. Wiltsey Stirman S, Gamarra JM, Bartlett BA, Calloway A, Gutner CA. Empirical examinations of modifications and adaptations to evidence-based psychotherapies: methodologies, impact, and future directions. Clin Psychol Sci Pract. 2017;24(4):396–420.


  33. Castro FG, Barrera M Jr, Martinez CR Jr. The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prev Sci. 2004;5(1):41–5. https://doi.org/10.1023/B:PREV.0000013980.12412.cd.


  34. Escoffery C, Lebow-Skelley E, Udelson H, Böing EA, Wood R, Fernandez ME, et al. A scoping study of frameworks for adapting public health evidence-based interventions. Transl Behav Med. 2019;9(1):1–10. https://doi.org/10.1093/tbm/ibx067.


  35. Fischhoff B. Cognitive processes in stated preference methods. In: Mäler K-G, Vincent JR, editors. Handbook of Environmental Economics, vol. 2. Elsevier; 2005. p. 937–68.

  36. Mosson R, Hasson H, Wallin L, von Thiele Schwarz U. Exploring the role of line managers in implementing evidence-based practice in social services and older people care. Br J Soc Work. 2017;47(2):542–60.


  37. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9(11):1184–204. https://doi.org/10.1197/aemj.9.11.1184.


  38. Campitelli G, Gobet F. Herbert Simon's decision-making approach: Investigation of cognitive processes in experts. Rev Gen Psychol. 2010;14(4):354–64. https://doi.org/10.1037/a0021256.


  39. O'Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy. 2008;13(2):92–8. https://doi.org/10.1258/jhsrp.2007.007074.


  40. van Aar J, Leijten P, de Castro BO, Overbeek G. Sustained, fade-out or sleeper effects? A systematic review and meta-analysis of parenting interventions for disruptive child behavior. Clin Psychol Rev. 2017;51:153–63. https://doi.org/10.1016/j.cpr.2016.11.006.


  41. Ulfsdotter M, Enebrink P, Lindberg L. Effectiveness of a universal health-promoting parenting program: a randomized waitlist-controlled trial of All Children in Focus. BMC Public Health. 2014;14(1):1083. https://doi.org/10.1186/1471-2458-14-1083.


  42. Kling Å, Forster M, Sundell K, Melin L. A randomized controlled effectiveness trial of parent management training with varying degrees of therapist support. Behav Ther. 2010;41(4):530–42. https://doi.org/10.1016/j.beth.2010.02.004.


  43. Cunningham C. Large group, community based, family-centered parent training. In: Barkley RA, Murphy KR, editors. Attention deficit hyperactivity disorder: A clinical workbook. New York, NY: Guilford Press; 2005. p. 480–98.


  44. Webster-Stratton C, Reid MJ, Hammond M. Treating children with early-onset conduct problems: Intervention outcomes for parent, child, and teacher training. J Clin Child Adolesc Psychol. 2004;33(1):105–24. https://doi.org/10.1207/S15374424JCCP3301_11.


  45. Sanders MR. Development, evaluation, and multinational dissemination of the Triple P-Positive Parenting Program. Annu Rev Clin Psychol. 2012;8(1):345–79. https://doi.org/10.1146/annurev-clinpsy-032511-143104.


  46. Moretti M, Obsuth I. Effectiveness of an attachment-focused manualized intervention for parents of teens at risk for aggressive behaviour: The Connect Program. J Adolesc. 2009;32(6):1347–57. https://doi.org/10.1016/j.adolescence.2009.07.013.


  47. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Admin Pol Ment Health. 2015;42(5):533–44. https://doi.org/10.1007/s10488-013-0528-y.


  48. Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2021;13(2):201–16. https://doi.org/10.1080/2159676X.2019.1704846.


  49. Morgan DL. Basic and advanced focus groups: Sage Publications; 2018.

  50. Coast J, Horrocks S. Developing attributes and levels for discrete choice experiments using qualitative methods. J Health Serv Res Policy. 2007;12(1):25–30. https://doi.org/10.1258/135581907779497602.


  51. Roux L, Ubach C, Donaldson C, Ryan M. Valuing the benefits of weight loss programs: an application of the discrete choice experiment. Obes Res. 2004;12(8):1342–51. https://doi.org/10.1038/oby.2004.169.


  52. Grundstein-Amado R. Ethical decision-making processes used by health care providers. J Adv Nurs. 1993;18(11):1701–9. https://doi.org/10.1046/j.1365-2648.1993.18111701.x.


  53. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.


  54. Braun V, Clarke V. One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qual Res Psychol. 2020:1–25. https://doi.org/10.1080/14780887.2020.1769238.

  55. Guba EG, Lincoln YS. Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco: Jossey-Bass; 1981. p. 423-xxv.


  56. Phillips KA, Maddala T, Johnson FR. Measuring preferences for health care interventions using conjoint analysis: an application to HIV testing. Health Serv Res. 2002;37(6):1681–705. https://doi.org/10.1111/1475-6773.01115.


  57. Kessels R, Jones B, Goos P. Bayesian optimal designs for discrete choice experiments with partial profiles. J Choice Model. 2011;4(3):52–74. https://doi.org/10.1016/S1755-5345(13)70042-3.


  58. van Helvoort-Postulart D, Van Der Weijden T, Dellaert BG, De Kok M, Von Meyenfeldt MF, Dirksen CD. Investigating the complementary value of discrete choice experiments for the evaluation of barriers and facilitators in implementation research: a questionnaire survey. Implement Sci. 2009;4(1):1–12.


  59. de Bekker-Grob EW, Donkers B, Jonker MF, Stolk EA. Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide. Patient. 2015;8(5):373–84. https://doi.org/10.1007/s40271-015-0118-z.


  60. Mattmann M, Logar I, Brouwer R. Choice certainty, consistency, and monotonicity in discrete choice experiments. J Env Econ Policy. 2019;8(2):109–27. https://doi.org/10.1080/21606544.2018.1515118.


  61. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Admin Pol Ment Health. 2011;38(1):44–53. https://doi.org/10.1007/s10488-010-0314-z.


  62. Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educ Res. 2004;33(7):14–26. https://doi.org/10.3102/0013189X033007014.


  63. de Bekker-Grob EW, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ. 2012;21(2):145–72. https://doi.org/10.1002/hec.1697.


  64. Salloum RG, Shenkman EA, Louviere JJ, Chambers DA. Application of discrete choice experiments to enhance stakeholder engagement as a strategy for advancing implementation: a systematic review. Implement Sci. 2017;12(1):140. https://doi.org/10.1186/s13012-017-0675-8.


  65. Cunningham CE, Barwick M, Rimas H, Mielko S, Barac R. Modeling the decision of mental health providers to implement evidence-based children’s mental health services: A discrete choice conjoint experiment. Admin Pol Ment Health. 2018;45(2):302–17. https://doi.org/10.1007/s10488-017-0824-z.


  66. Cunningham CE, Barwick M, Short K, Chen Y, Rimas H, Ratcliffe J, et al. Modeling the mental health practice change preferences of educators: A discrete-choice conjoint experiment. Sch Ment Heal. 2014;6(1):1–14. https://doi.org/10.1007/s12310-013-9110-8.


  67. Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94. https://doi.org/10.1007/s11414-015-9475-6.


  68. Beidas RS, Volpp KG, Buttenheim AN, Marcus SC, Olfson M, Pellecchia M, et al. Transforming mental health delivery through behavioral economics and implementation science: protocol for three exploratory projects. JMIR Res Prot. 2019;8(2):e12121. https://doi.org/10.2196/12121.



Funding

This study received research funding from the Swedish Research Council for Health, Working Life and Welfare (Forte) (reference no. 2020-01223) after a competitive peer review process. The Council is one of the main national research funders in the field of health and welfare in Sweden. The decision on funding is made by the board after thorough review by national and international researchers. The acceptance rate for project funding is about 8%. The funder has no role in the design and conduct of the study, including the collection, analysis, and interpretation of the data and the reporting of findings. The content is solely the responsibility of the authors and does not necessarily represent the official views of Forte. Open Access funding provided by Mälardalen University.

Author information


Contributions

UvTS, HH, AL, and FG designed the project. UvTS secured funding for the project and was responsible for the application for ethics approval, with assistance from FG and KP. The authors (UvTS, AL, KP, FG, PL, and HH) jointly drafted the first version of the study protocol based on the application. All authors discussed and revised the draft and approved the final manuscript.

Corresponding author

Correspondence to Ulrica von Thiele Schwarz.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for this project, including all data collection, was obtained from the Swedish Ethical Review Authority (reference no. 2021-00832). Informed consent will be obtained from all study participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

von Thiele Schwarz, U., Lyon, A.R., Pettersson, K. et al. Understanding the value of adhering to or adapting evidence-based interventions: a study protocol of a discrete choice experiment. Implement Sci Commun 2, 88 (2021). https://doi.org/10.1186/s43058-021-00187-w

