Table 1 Three examples of decision analysis processes for implementation planning

From: Using decision analysis to support implementation planning in research and practice

| Step in the PROACTIVE framework | Cruden et al. [37] | Zimmerman et al. [38] | Hassmiller Lich et al. [39] |
| --- | --- | --- | --- |
| Phase 1 |  |  |  |
| P: Define the problem | Engaged stakeholders to identify a specific problem related to child maltreatment that they felt needed to be solved | Paper focused on the limited reach of EBPs for PTSD | Paper focused on low rates of colorectal cancer screening and major disparities |
| R: Reframe from other perspectives and understand objectives | Diverse stakeholders included in entire process (e.g., local health departments, state legislators, state employees, and non-profit administrators) | Engaged stakeholders from a variety of roles, including both clinic leadership and frontline staff members | Study was explicitly focused on estimating cost-effectiveness from the perspective of Medicaid |
| O: Focus on the unifying objectives | Stakeholders prioritized child neglect and focused on prevention, and the team and stakeholders worked together to clarify the criteria that would be used to evaluate EBPs | Increase uptake of EBPs for PTSD and depression within the VA while acknowledging existing system constraints (e.g., staffing) | Stated objectives of the analysis were to compare the impact and cost-effectiveness of colorectal cancer screening interventions in North Carolina and their potential to reduce disparities |
| Phase 2 |  |  |  |
| A: Collate alternatives | Research team compiled a list of seven EBPs for comparison in an MCDA; three were prioritized for final evaluation | Local contexts were interested in how to revise clinic procedures, workflows, or roles to increase the reach of EBPs. Stakeholder engagement indicated many possible ways to do this, and priority ranking was used to identify the top two for further evaluation (Table 3b in text) | Interventions defined via literature review and a series of interviews with decision-makers and local stakeholders |
| C: Model consequences | Scored each EBP on criteria using information from published literature | Used system dynamics modeling to simulate how different implementation plans would affect EBP uptake within a specific context, using equations that represented the flows and accumulations of patients based on system structure (a minimal stock-and-flow sketch appears after the table) | Quantitative microsimulation model was used to estimate the potential effects and costs of each screening intervention |
| How costs were incorporated | Two criteria explicitly related to costs were included: benefits minus costs per participant per year, and the chance that benefits will exceed costs. Other criteria also captured the resources required to implement, including the total timeframe to deliver the EBP, requirements for education/training of the EBP deliverer, and whether ongoing support was available to facilitate implementation. Information on all criteria was collected from the existing literature | The cost per implementation plan was not specifically included; rather, each implementation plan was subject to the existing resource and staffing constraints of local contexts | Costs to implement each intervention were explicitly enumerated from the state’s perspective, with components clearly listed and sourced (Table 2 in text) |
| T: Identify trade-offs | Stakeholders engaged to develop weights describing the relative importance of each criterion | No explicit preferences or values work was incorporated | No explicit preferences or values work was incorporated (e.g., to identify how Medicaid decision-makers might weigh costs versus effects of each policy) |
| Phase 3 |  |  |  |
| I: Integrate evidence to calculate expected value | A summary score was calculated for each EBP by combining criteria scores and weights (steps C and T); a minimal weighted-score sketch appears after the table | Visualizing trends in EBP uptake over time allowed stakeholders to see how (and when) different implementation plans would impact outcomes | Incremental cost-effectiveness ratios were calculated, and trade-offs in cost and life years up to date were shown visually using a cost-effectiveness efficiency frontier; a minimal ICER sketch appears after the table |
| V: Optimize expected value | Process not focused on identifying a single globally optimal choice, but rather on presenting scored options that stakeholders could use to identify their own optimal path, accounting for personal values and preferences (for example, by modifying weights) | Process focused on building the model and simulating different implementation plans | No decision was made, as the paper’s primary focus was to generate evidence for decision-makers (shown via cost-effectiveness ratios and the cost-effectiveness efficiency frontier). The most cost-effective options were mentioned |
| E: Explore assumptions and uncertainty | Sensitivity analyses used average weights (rather than individualized weights for each decision-maker) to calculate summary scores | Model was calibrated using historical data; sensitivity analyses and stakeholder review were used to validate the model, and sensitivity analyses were performed using different implementation decisions | Model was extremely large and probabilistic analyses were not reported; the model was tested using code review, extreme-value testing, behavior-reproduction testing, and review with content experts |

  1. Notes: Alignment with PROACTIVE framework performed by the authorship team based on the information reported in published work. “–” indicates that the step of the PROACTIVE framework was not an obvious focus of the published paper, illustrating how not all steps need be employed for any given problem
  2. EBP evidence-based program, MCDA multi-criteria decision analysis, PTSD post-traumatic stress disorder, VA Veterans Health Administration
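
The system dynamics approach Zimmerman et al. [38] describe in step C (equations representing the flows and accumulations of patients) can be illustrated with a toy stock-and-flow model. The sketch below is a minimal illustration of the general technique, not the published model; the stocks, rates, function names, and capacity values are hypothetical placeholders.

```python
# Minimal stock-and-flow sketch of EBP uptake (step C, Zimmerman et al. [38]).
# Structure and all numbers are hypothetical placeholders, not the published model:
# referred patients accumulate in a waiting "stock" and flow into EBP treatment
# at a rate limited by available clinician capacity.

def simulate(weeks: int, referrals_per_week: float, start_rate_capacity: float,
             dropout_fraction: float = 0.02, dt: float = 1.0):
    waiting, started = 0.0, 0.0
    history = []
    for _ in range(int(weeks / dt)):
        starts = min(waiting, start_rate_capacity)   # capacity-constrained flow into treatment
        dropouts = dropout_fraction * waiting        # outflow from the waitlist
        waiting += (referrals_per_week - starts - dropouts) * dt
        started += starts * dt                       # cumulative patients who began the EBP
        history.append((waiting, started))
    return history

# Compare two hypothetical implementation plans that differ only in start capacity.
for label, capacity in [("current workflow", 5.0), ("revised workflow", 8.0)]:
    waiting, started = simulate(weeks=52, referrals_per_week=10.0,
                                start_rate_capacity=capacity)[-1]
    print(f"{label}: {started:.0f} patients started the EBP, {waiting:.0f} still waiting")
```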
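The summary score Cruden et al. [37] describe in step I combines criterion scores (step C) with stakeholder weights (step T), which is the standard weighted-sum calculation used in MCDA. The sketch below illustrates only that calculation; the criterion names, scores, and weights are hypothetical and are not taken from the study.

```python
# Minimal weighted-sum MCDA sketch (step I, Cruden et al. [37]).
# Criterion names, scores (0-100), and stakeholder weights are hypothetical
# placeholders, not values from the published study.

criteria_weights = {                 # elicited from stakeholders (step T); sum to 1
    "benefit_minus_cost": 0.30,
    "chance_benefits_exceed_costs": 0.20,
    "training_requirements": 0.25,
    "ongoing_support_available": 0.25,
}

ebp_scores = {                       # criterion scores from the literature (step C)
    "EBP A": {"benefit_minus_cost": 70, "chance_benefits_exceed_costs": 80,
              "training_requirements": 50, "ongoing_support_available": 90},
    "EBP B": {"benefit_minus_cost": 85, "chance_benefits_exceed_costs": 60,
              "training_requirements": 70, "ongoing_support_available": 40},
}

def summary_score(scores: dict, weights: dict) -> float:
    """Weighted sum of criterion scores: S = sum_i w_i * x_i."""
    return sum(weights[c] * scores[c] for c in weights)

for ebp, scores in ebp_scores.items():
    print(ebp, round(summary_score(scores, criteria_weights), 1))
```

Modifying `criteria_weights` and recomputing the scores corresponds to the step V activity the table describes, where stakeholders identify their own preferred option by adjusting weights.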
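The incremental cost-effectiveness ratios and efficiency frontier reported by Hassmiller Lich et al. [39] in step I follow the standard construction: strategies are ordered by cost, dominated strategies are removed, and each ICER is the difference in cost divided by the difference in effect between successive frontier strategies. The sketch below illustrates that construction with made-up numbers; the strategy names, costs, and effects are hypothetical.

```python
# Hypothetical illustration of incremental cost-effectiveness ratios (ICERs)
# and an efficiency frontier (step I, Hassmiller Lich et al. [39]).
# Costs and effects are made-up numbers, not results from the study.

strategies = [
    # (name, total cost, total effect such as life-years)
    ("No intervention", 0.0, 0.0),
    ("Mailed reminders", 120_000.0, 45.0),
    ("Patient navigation", 400_000.0, 110.0),
    ("Mass media campaign", 450_000.0, 60.0),   # costs more, achieves less: dominated
]

def efficiency_frontier(strategies):
    """Return non-dominated strategies and ICERs vs. the previous frontier point."""
    ordered = sorted(strategies, key=lambda s: s[1])           # sort by cost
    frontier = [ordered[0]]
    for name, cost, effect in ordered[1:]:
        if effect <= frontier[-1][2]:                          # strongly dominated
            continue
        frontier.append((name, cost, effect))
    # Note: a full analysis would also remove extended-dominated strategies.
    icers = []
    for (_, c0, e0), (name, c1, e1) in zip(frontier, frontier[1:]):
        icers.append((name, (c1 - c0) / (e1 - e0)))            # delta cost / delta effect
    return frontier, icers

frontier, icers = efficiency_frontier(strategies)
for name, icer in icers:
    print(f"{name}: ${icer:,.0f} per life-year")
```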