
Development of the ASSESS tool: a comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes

Abstract

Background

Several tools to improve reporting of implementation studies for evidence-based decision making have been created; however, no tool for critical appraisal of implementation outcomes exists. Researchers, practitioners, and policy makers lack tools to support the concurrent synthesis and critical assessment of outcomes for implementation research. Our objectives were to develop a comprehensive tool to (1) describe studies focused on implementation that use qualitative, quantitative, and/or mixed methodologies and (2) assess risk of bias of implementation outcomes.

Methods

A hybrid consensus-building approach combining Delphi Group and Nominal Group techniques (NGT) was modeled after comparable methodologies for developing health research reporting guidelines and critical appraisal tools. First, an online modified NGT occurred among a small expert panel (n = 5), consisting of literature review, item generation, round robin with clarification, application of the tool to various study types, voting, and discussion. This was followed by a larger e-consensus meeting and modified Delphi process with implementers and implementation scientists (n = 32). New elements and elements of various existing tools, frameworks, and taxonomies were combined to produce the ASSESS tool.

Results

The 24-item tool is applicable to a broad range of study designs employed in implementation science, including qualitative studies, randomized controlled trials, non-randomized quantitative studies, and mixed methods studies. Two key features are a section for assessing bias of the implementation outcomes and sections for describing the implementation strategy and intervention implemented. An accompanying explanation and elaboration document that identifies and describes each item, explains its rationale, and provides examples of good reporting and appraisal practice has been prepared, along with templates for synthesizing extracted data across studies and an instructional video.

Conclusions

A comprehensive, adaptable tool has been developed to support both reporting and critical appraisal of implementation science studies, including quantitative, qualitative, and mixed methods assessment of intervention and implementation outcomes. This tool can be applied to a methodologically diverse and growing body of implementation science literature to support reviews or meta-analyses that inform evidence-based decision-making regarding processes and strategies for implementation.


Background

Implementation research applies a diverse range of study designs to increase translation of research evidence into policies and practice [1,2,3,4,5,6,7]. It allows us to conceptualize and evaluate successful implementation of interventions, particularly via assessment of implementation outcomes, which are the effects of implementation strategies, or deliberate and purposive actions to implement a new treatment, practice, or service [8]. As a poorly implemented program or policy will not have the intended interventional impact [8], robust implementation outcomes are also crucial to achieve the desired population health impact [8,9,10]. Implementation science studies may use quantitative, qualitative, and/or mixed-methodologies to assess these implementation outcomes (i.e., acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, or sustainability) or intervention outcomes (i.e., effectiveness, efficiency, equity, patient-centeredness, safety, or timeliness), particularly within hybrid effectiveness-implementation designs [1]. However, researchers, practitioners, and policy makers lack tools to support the concurrent synthesis and critical assessment of implementation outcomes. Tools are needed that can support systematic reviews or meta-analyses comparing multiple types of implementation outcomes across diverse study designs.

No tool to support critical assessment of implementation outcomes exists. Critical assessment produces knowledge, usually based on appraisal of study methods, that establishes a level of confidence in study findings. This is an important part of evidence-based decision making: understanding the magnitude of the success of an intervention and its implementation without understanding one’s confidence in the study findings limits the capacity for knowledge translation. Ultimately, comprehensive identification, synthesis, and appraisal of implementation outcomes will improve understanding of implementation processes and allow comparison of the effectiveness of different implementation strategies. Indeed, previous research has shown the need for pragmatic measures in implementation practice (including assessment of implementation context, processes, and outcomes) [11] that should be useful, compatible, acceptable, and easy [12]. Researchers have established that there remains a dearth of psychometrically valid survey assessment tools for implementation outcomes, and this area of investigation is ongoing [13, 14]. Some efforts have been made to generate valid, brief assessment surveys for feasibility, acceptability, and appropriateness [15].

A tool is needed to support systematic reviews and meta-analyses of studies using qualitative, quantitative, and/or mixed methods assessment to inform evidence-based decision making on implementation. We have developed ASSESS, a comprehensive 24-item tool that (1) can describe studies evaluating implementation outcomes using qualitative, quantitative, and/or mixed methodologies and (2) can provide a rubric to grade the risk of bias of implementation outcomes.

Methods

The development of the ASSESS tool was modeled after recommended methodologies for developing health research reporting guidelines and critical appraisal tools [16,17,18,19]. A completed checklist of the recommended steps for developing a health research reporting guideline is available as an additional file [16]. We utilized a hybrid consensus-building approach combining e-Delphi Group and Nominal Group techniques (NGT). This approach builds on the strengths of these different techniques, namely the opportunity for discussion and efficient information exchange among a smaller group of experts that is characteristic of the NGT and the process of structured documentation and consensus meeting with a larger group that is characteristic of the Delphi method [18, 19].

This hybrid process is mapped by phase in Fig. 1, with each phase described in detail below: an online modified NGT among a small expert panel (phase 1), an e-Consensus meeting among a larger panel (phase 2), and post-meeting activities (phase 3).

Fig. 1

Phases of modified nominal group technique and e-consensus meeting. Adapted from McMillan SS, King M, Tully MP. How to use the nominal group and Delphi techniques. Int J Clin Pharm. 2016;38(3):655-662. doi:10.1007/s11096-016-0257-x and Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLOS Medicine. 2010;7(2):e1000217

Phase 1: Nominal group technique modified for online interaction

From February to October 2020, when social distancing guidelines for the COVID-19 pandemic prohibited in-person meetings, a panel of five public health professionals and implementation researchers held bimonthly online meetings using a NGT to conceptualize, reflect upon, develop, discuss, and refine the tool. The group’s experience and expertise were in epidemiology (n = 4); implementation science (n = 4); quantitative (n = 5), qualitative (n = 3), and mixed methodology (n = 3); and library science (n = 1). Public health specialty areas included non-communicable disease, epigenetics, maternal health, and global health. Members were predominantly female (n = 4) and were working as faculty (n = 2), post-doctoral fellows (n = 2), or a public health doctoral candidate (n = 1). Phase 1 entailed reviewing the literature and brainstorming to generate items, followed by multiple rounds of independent assessment of items through structured data collection among this panel. Independent ratings were compiled, summarized, distributed, and discussed. This process continued until convergence of ratings was achieved.

Literature review and idea generation

Panel members conducted a thorough literature search of several databases (i.e., PubMed, PsycInfo, CINAHL, EMBASE, Web of Science, and Google Scholar) to inform the rationale for and conceptualization of the ASSESS tool. This review was carried out in February 2020 to begin the NGT process and was revisited eight months later to ensure no recent, relevant publications had been missed. Review findings included material on the development of tools for reporting interventions [20, 21], reporting implementation strategies [20], the adaptation of interventions and/or their delivery [22], and identifying potential sources of bias in relevant studies using quantitative, qualitative, or mixed methods assessment [23]. A search of the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network’s library for health research reporting (http://www.equator-network.org) confirmed that no tools for critical appraisal of implementation outcomes existed, underscoring the need for this tool. To develop our tool, we combined new items with items from existing tools, including elements from the TIDieR checklist [21], StaRI checklist [20], MMAT tool [23], implementation outcomes taxonomy [8], and FRAME framework [22]. These tools are described briefly in Table 1. Novel elements of the tool included a section for critical appraisal of implementation outcomes and a space to indicate implementation phase (i.e., whether assessment was carried out pre-, during, or post-implementation).

Table 1 Summary of tools integrated into the ASSESS tool

Round robin and clarification

After integrating existing reporting and appraisal tools with novel elements, we developed an initial shared draft of the tool in Excel 2016. During multiple online meetings, panel members were given the opportunity to provide structured feedback on each item, its content and presentation, and the overall structure of the tool and its instructions. All panel members were encouraged to clarify their feedback, including the rationale for their rankings, while one panel member took notes on a shared document (replacing the white board that would have been used in an in-person meeting).

Voting and discussion

All panel members voted on the items to be included in the ASSESS tool and on their presentation. The panel suggested four domains capturing implementation methods, intervention methods, implementation results, and intervention results: (i) intervention and implementation description: methods, (ii) intervention and implementation description: results, (iii) intervention and implementation evaluation: methods, and (iv) intervention and implementation evaluation: results. Panel members discussed the rationale for these domains: they would allow users to fully describe the methods and results of the study relevant to the intervention and implementation strategy, as well as to critically appraise the outcomes relevant to the implementation strategy and the intervention being implemented. The team deliberated and agreed upon content and structure and on the addition of instructions and further explanation for using the tool.

Once an initial version was developed, each panel member applied the tool to articles representing various study types (i.e., randomized controlled trials, non-randomized quantitative studies, qualitative studies, and mixed methods studies) and various phases of implementation. Between meetings, all panel members applied the tool to the same articles and took notes on the experience. During meetings, the NGT process was then repeated, with periods of generating suggested modifications, round robin, clarification, voting, and discussion. Modifications made to enhance the tool based on this process included adding further explanation of items and re-ordering the presentation of items for clarity.

Additional expert feedback

Additional expert feedback was invited on draft versions of the ASSESS tool. A draft version of the tool was shared via e-mail, along with a suggested article for application of the tool, with two experts for feedback. These experts reviewed the tool and provided substantial feedback before further input was sought via a larger e-Consensus meeting. They suggested adding further explanation to the critical appraisal section and re-formatting the instructions for clarity.

Phase 2: e-Consensus meeting

After the iterative process incorporating feedback from panel experts and additional experts, we sought feedback from prospective users of the tool. Implementation researchers and implementers (N = 32) were recruited via email and invited to one of two online meetings in October 2020, during which they were introduced to the tool and the rationale for its development and then asked for feedback. Initial feedback on usability and utility was provided by two smaller groups of novice implementers and implementation science researchers (i.e., less than 1 year’s experience or training in implementation science or implementation; n = 12 each) and one experienced group (i.e., more than 1 year’s experience or training; n = 8). Participants in these meetings represented experience and expertise across multiple relevant areas. As per recommendations [16], the proportion of content experts was greater than 25%.

Meetings began with a presentation on relevant background topics, including a summary of the evidence on existing tools and a summary of the progress in consensus building among the expert panel to develop the current items in the tool. Meetings were moderated by one expert panel member while 1–2 team members took notes. The discussions were recorded. Analysis of discussion notes was conducted by NR, and findings were shared with the expert panel for interpretation. In addition to verbal feedback, participants were invited to complete questionnaires (n = 9). Data management and analysis were carried out in Excel 2016. An audit trail was generated to capture the progression of the tool’s development and the decisions made regarding additions or edits to its components and structure. At the end of each meeting, the expert panel sought feedback on a knowledge translation strategy.

Phase 3: Post-meeting activities

After the meetings, the expert panel reconvened online to debrief on the larger consensus-generating meetings, including voting on suggested modifications for usability and automation. The panel began implementing a knowledge translation strategy, including preparing the tool and an explanation and elaboration document for publication and developing a website to host the tool (https://publichealth.nyu.edu/research-scholarship/centers-labs-initiatives/isee-laboratory).

Results

The tool’s domains are identified below: the description of the intervention and implementation strategy methods, the description of the intervention and implementation strategy results, the evaluation of the intervention and implementation strategy methods, and the evaluation of the intervention and implementation strategy results. The instructions for its use are shared in Table 2. The 24 items are applicable to a broad range of study designs employed in implementation science, including qualitative studies, randomized controlled trials, non-randomized quantitative studies, and mixed methods studies. A key feature of the tool is the dual columns for implementation strategy and intervention, within which the methods and results are described and the intervention and implementation outcomes are assessed for bias. Accompanying instructions, an elaboration document that identifies and describes each item, explains its rationale, and models examples of good reporting and appraisal practice, and an instructional video were prepared.

Table 2 ASSESS tool item descriptions

Intervention and implementation description: methods

This is the first domain (items 1–19), which tasks the user with describing the implementation strategy and the intervention implemented, including the following items: (1) overall review or meta-analysis question, (2) study author and publication year, (3) study title, (4) rationale, (5) aim(s), objective(s), or research question(s), (6) description of the intervention and/or implementation strategy, (7) description of any adaptation of the intervention or its delivery, (8) study design, (9) participant type(s), (10) comparison group, (11) context, (12) study sites, (13) subgroups (optional), (14) implementation phase, (15) process evaluation, (16) sample size, (17) analysis, (18) sub-group analyses (optional), and (19) outcomes (assessment). The user enters how data were collected for the assessment of both implementation outcomes (i.e., acceptability, appropriateness, adoption, feasibility, fidelity, penetration, cost, and sustainability) and intervention outcomes (i.e., effectiveness, efficiency, equity, patient-centeredness, safety, and timeliness), as recommended by Proctor et al. [8], where relevant.
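For readers who prefer a structured representation, a minimal sketch of one extraction record for this domain follows; the keys mirror the item numbering above, and the field wording is illustrative rather than taken from the tool itself.

```python
# Hypothetical sketch of one domain-1 extraction record; keys follow
# the item numbering above, and the values shown are placeholders.
extraction_methods = {
    1: "overall review or meta-analysis question",
    2: "study author and publication year",
    6: "description of the intervention and/or implementation strategy",
    8: "study design (e.g., mixed methods)",
    14: "implementation phase (pre-, during, or post-implementation)",
    19: "outcomes assessed (implementation and intervention outcomes)",
    # items 3-5, 7, 9-13, and 15-18 follow the same pattern
}
```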

Intervention and implementation description: results

The next domain (items 20–22) is where the user describes the results of both the implementation strategy and the intervention implemented. As appropriate, the user enters (20) outcomes (implementation and intervention outcomes), (21) barriers to implementation, and (22) facilitators of implementation.

Intervention and implementation evaluation: methods

The third domain (item 23) is where the user evaluates the methods reported within the paper to assess the implementation strategy and the intervention implemented. The tool guides the user through this process in three steps. First, the user selects the study design (i.e., qualitative, randomized controlled trial, non-randomized quantitative study, or mixed methods). Next, the user is prompted to respond to five questions regarding the study design reported in the paper. Criteria are organized by study design, so that criteria for qualitative studies correspond to criteria 1.1–1.5, those for quantitative RCTs to 2.1–2.5, those for quantitative non-randomized studies to 3.1–3.5, and those for mixed methods studies to 4.1–4.5. Each question represents a quality criterion for evaluating the study design. For qualitative studies, for example, the criteria are as follows: 1.1. Is the qualitative approach appropriate to answer the research question?; 1.2. Are the qualitative data collection methods adequate to address the research question?; 1.3. Are the findings adequately derived from the data?; 1.4. Is the interpretation of results sufficiently substantiated by data?; 1.5. Is there coherence between qualitative data sources, collection, analysis, and interpretation? In comparison, for quantitative RCTs, the criteria are as follows: 2.1. Is randomization appropriately performed?; 2.2. Are the groups comparable at baseline?; 2.3. Are there complete outcome data?; 2.4. Are outcome assessors blinded to the intervention provided?; 2.5. Did the participants adhere to the assigned intervention? Finally, the user provides a score (0 or 1) for each question, indicating whether the criterion was (1) or was not (0) met.
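To make this step concrete, the sketch below encodes the design-specific criteria and the binary scoring in Python; the criteria text is paraphrased from this section, and all identifiers are illustrative rather than part of the tool.

```python
# Minimal sketch of item 23: each study design maps to five quality
# criteria, each scored 1 (met) or 0 (not met). Criteria are
# paraphrased from the text; identifiers are illustrative.
CRITERIA = {
    "qualitative": [
        "1.1 approach appropriate to the research question",
        "1.2 data collection methods adequate",
        "1.3 findings adequately derived from the data",
        "1.4 interpretation substantiated by the data",
        "1.5 coherence between sources, collection, analysis, interpretation",
    ],
    "rct": [
        "2.1 randomization appropriately performed",
        "2.2 groups comparable at baseline",
        "2.3 complete outcome data",
        "2.4 outcome assessors blinded",
        "2.5 participants adhered to the assigned intervention",
    ],
    # "non_randomized" (3.1-3.5) and "mixed_methods" (4.1-4.5)
    # follow the same five-criterion pattern.
}

def summary_score(ratings):
    """Sum five binary ratings (0 = not met, 1 = met) for one study."""
    if len(ratings) != 5 or any(r not in (0, 1) for r in ratings):
        raise ValueError("expected five binary (0/1) ratings")
    return sum(ratings)
```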

Intervention and implementation evaluation: results

The last domain (item 24) is where the user inputs their evaluation of the results of the implementation strategy and the intervention implemented. The user sums the scores from the last step of the third domain and applies this summary score to the intervention and implementation outcomes assessed. Based on this appraisal section, the risk of bias will be higher (i.e., score of 1–2), lower (i.e., score of 3–5), or unclear (i.e., not able to be assessed). A summary score can be applied to each implementation or intervention outcome assessed in the paper if these outcomes were assessed in different manners. For example, a study may have poorly evaluated the intervention outcomes (i.e., summary score for effectiveness = 2 and for patient-centeredness = 1) but appropriately evaluated the implementation outcomes (i.e., summary score for adoption = 4 and for acceptability = 5). Additionally, summary scores may be compared across studies within a review, providing an overall understanding of the risk of bias within the literature for each outcome of an intervention and implementation strategy. This synthesis and appraisal can be guided by the templates included as supplementary documents. As with standards for systematic reviews, it is advised that at least two reviewers independently carry out the appraisal process and compare extractions until reaching consensus, with a third reviewer resolving any discordant outputs.
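Expressed as code, the score-to-category mapping and the worked example above could look as follows; this is a minimal sketch that follows only the score ranges given in the text, with illustrative names throughout.

```python
# Minimal sketch of item 24: translate each outcome's item-23 summary
# score into the risk-of-bias category described above (1-2 higher,
# 3-5 lower, otherwise unclear). A score of 0 is not categorized in
# the text, so it falls through to "unclear" here.
def risk_of_bias(score):
    if score is None:
        return "unclear"  # criteria could not be assessed
    if 1 <= score <= 2:
        return "higher"
    if 3 <= score <= 5:
        return "lower"
    return "unclear"

# Worked example from the text: intervention outcomes poorly evaluated,
# implementation outcomes appropriately evaluated.
scores = {"effectiveness": 2, "patient-centeredness": 1,
          "adoption": 4, "acceptability": 5}
for outcome, s in scores.items():
    print(f"{outcome}: score {s} -> {risk_of_bias(s)} risk of bias")
```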

Usability and utility findings

Once the tool was developed, feedback was sought on its usability and utility. Our sample of 32 meeting participants was majority female, had a mix of educational attainment across various healthcare and public health disciplines, and ranged from novice to expert implementers and researchers (Table 3). Users reported that they liked the layout of the tool, its detailed instructions, and its ease of use (Table 4). Many reported it was comprehensive and saw utility in being able to extract both qualitative and quantitative results, with one participant sharing, “I am very excited about this tool because I am working on a literature review and have been having trouble thinking about how to organize the evidence to inform implementation science.” Participants also recognized that this made for a lengthy extraction process. Another participant shared: “I think this is really useful. It would be great to employ this in several different disciplines to see how it works in real practice.” They found the criteria scoring for critical appraisal straightforward. Participants asked for examples of completed entries and wanted space to identify the individual entering information in the form, so that inter-rater reliability could be assessed. Many had suggestions for how to improve automation.

Table 3 Sample characteristics of consensus meeting participants regarding usability testing (N = 32)
Table 4 Participant feedback on utility and usability (N = 9)

Automation

The team is investigating existing platforms that will facilitate automation. Because reviewers will have access to different software, Excel will serve as the primary platform. This will allow the tool to interface with statistical packages to enable the generation of summary statistics when comparing multiple extracted papers within a systematic review or meta-analysis. Editing capabilities specific to the process will be incorporated into future versions.
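As an illustration of this intended workflow, the sketch below summarizes completed extractions stored in an Excel workbook using standard data-analysis tooling; the file name, sheet name, and column labels are assumptions for illustration, not part of the ASSESS tool.

```python
# Hypothetical sketch: summarize item-23 scores across papers extracted
# into an Excel workbook. File name, sheet name, and column labels are
# assumptions, not part of the ASSESS tool itself.
import pandas as pd

# One row per extracted paper; columns hold per-outcome summary scores (0-5).
df = pd.read_excel("assess_extractions.xlsx", sheet_name="extractions")

score_cols = ["effectiveness_score", "adoption_score", "acceptability_score"]
print(df[score_cols].describe())     # distribution of summary scores per outcome
print((df[score_cols] >= 3).mean())  # share of papers at lower risk of bias
```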

Discussion

The comprehensive, adaptable 24-item ASSESS tool allows for both (1) reporting of implementation strategies and the intervention being implemented and (2) critical appraisal of intervention and implementation outcomes resulting from quantitative, qualitative, or mixed methods assessment. The tool shares with the StaRI checklist [20] the aim of enhancing adoption and sustainability of effective interventions by structuring the reporting of implementation studies, as well as the presentation of dual strands describing the implementation strategy and the intervention being implemented. The ASSESS tool is novel in its inclusion of implementation phases, which allows comparison of studies across pre-implementation, during-implementation, and post-implementation stages. These stages could then be mapped onto implementation science theoretical frameworks, such as the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework, to generate findings on the applicability of implementation strategies and the assessment of implementation outcomes at different implementation phases. The ASSESS tool is also innovative in that other reporting tools generally do not assess risk of bias among implementation outcomes, and other critical appraisal tools do not provide guidance on how to separately assess quantitative and qualitative data for risk of bias. Shaped by Proctor’s taxonomy, the ASSESS tool moves from simply reporting implementation outcomes to evaluating the quality of data on those outcomes and thus the risk of bias. The ASSESS tool will need to be refined in light of practical experience with its use. Further research is needed to examine how to integrate quantitative (risk of bias) and qualitative (trustworthiness) appraisals when critical appraisal findings are discordant.

This tool has various strengths and limitations. As a strength, it does not promote one study design over another, for example, randomized controlled trials over qualitative studies. It provides a way to appraise qualitative findings, which are reported less often than quantitative appraisals. It further incorporates implementation phases. Importantly, this work presents the development of the tool and an initial qualitative assessment; its broader utility can only be assessed once it is available for use. Future research should examine the validity and reliability of the ASSESS tool, as has been done using a stakeholder-driven approach for pragmatic measurement of implementation outcomes, strategies, and context [11, 12, 24]. Future research should also examine tool iterations that integrate aspects of additional novel and relevant tools, such as the FRAME-IS tool for documenting modifications to implementation strategies in healthcare [25], which was published after our work was carried out and therefore did not inform the consensus-building process. This tool is not designed for use with non-empirical papers (i.e., review papers, theoretical papers, or gray literature where the methods are not fully described), economic studies, or diagnostic accuracy studies. Future research may examine iterations of this tool that allow application to these types of studies, as well as examine the variability of qualitative designs for critical appraisal. Although a Delphi method may provide more reliable findings, there are certain advantages to using nominal groups, including greater consensus and understanding of reasons for disagreement; therefore, elements of a modified Delphi method and a nominal group technique were combined through a hybrid method that has been previously suggested [26]. These structured methods attempt to combat cognitive biases in judgment [27], which are particularly influential in complex tasks [19], as both require an independent initial rating to anchor opinions in an individual’s own knowledge. This hybrid method maintained a focused discussion on specific topics pertinent to the underlying validity of each item in the tool and gave all panelists access to the same information about the tool before evaluating it.

Conclusion

The comprehensive, adaptable 24-item ASSESS tool allows for both (1) reporting of the implementation strategy and the intervention being implemented and (2) critical appraisal of intervention and implementation outcomes resulting from quantitative, qualitative, or mixed methods assessment. It addresses the challenge of critical assessment of a methodologically diverse and growing body of implementation science literature. This tool could prove particularly helpful for designing and carrying out systematic reviews and meta-analyses of empirical studies of implementation, examining how process and context may lead to heterogeneity of results. The ASSESS tool will be disseminated via posting on the researchers’ website (https://publichealth.nyu.edu/research-scholarship/centers-labs-initiatives/isee-laboratory) and via submission to the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network. Its use could improve the synthesis of implementation strategies, which will facilitate translation of effective public health interventions into routine practice within clinical or community settings.

Availability of data and materials

The tool will be available on a website (https://publichealth.nyu.edu/research-scholarship/centers-labs-initiatives/isee-laboratory). Templates in various forms will be made available.

Abbreviations

ASSESS:

A comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes

References

  1. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.


  2. Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. 2014;9(1):132.


  3. Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15(1):84.


  4. Sarkies MN, Skinner EH, Bowles K-A, Morris ME, Williams C, O’Brien L, et al. A novel counterbalanced implementation study design: methodological description and application to implementation research. Implement Sci. 2019;14(1):45.


  5. Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ. 2015;350:h391.


  6. Child S, Goodwin V, Garside R, Jones-Hughes T, Boddy K, Stein K. Factors influencing the implementation of fall-prevention programmes: a systematic review and synthesis of qualitative studies. Implement Sci. 2012;7(1):91.


  7. van Dongen JM, Tompa E, Clune L, Sarnocinska-Hart A, Bongers PM, van Tulder MW, et al. Bridging the gap between the economic evaluation literature and daily practice in occupational health: a qualitative study among decision-makers in the healthcare sector. Implement Sci. 2013;8(1):57.


  8. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.


  9. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Admin Pol Ment Health. 2009;36(1):24–34.


  10. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F, Burns B, et al. Implementation research: a synthesis of the literature. 2005.


  11. Powell BJ, Stanick CF, Halko HM, Dorsey CN, Weiner BJ, Barwick MA, et al. Toward criteria for pragmatic measurement in implementation research and practice: a stakeholder-driven approach using concept mapping. Implement Sci. 2017;12(1):118.


  12. Stanick CF, Halko HM, Dorsey CN, Weiner BJ, Powell BJ, Palinkas LA, et al. Operationalizing the ‘pragmatic’ measures construct using a stakeholder feedback and a multi-method approach. BMC Health Serv Res. 2018;18(1):882.


  13. Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, Martinez RG. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci. 2015;10(1):155.


  14. Khadjesari Z, Boufkhed S, Vitoratou S, Schatte L, Ziemann A, Daskalopoulou C, et al. Implementation outcome instruments for use in physical healthcare settings: a systematic review. Implement Sci. 2020;15(1):66.


  15. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.


  16. Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLoS Med. 2010;7(2):e1000217.


  17. Black N, Murphy M, Lamping D, McKee M, Sanderson C, Askham J, et al. Consensus development methods: a review of best practice in creating clinical guidelines. J Health Serv Res Policy. 1999;4(4):236–48.


  18. McMillan SS, King M, Tully MP. How to use the nominal group and Delphi techniques. Int J Clin Pharm. 2016;38(3):655–62.


  19. Davies S, Romano PS, Schmidt EM, Schultz E, Geppert JJ, McDonald KM. Assessment of a novel hybrid Delphi and Nominal Groups technique to evaluate quality indicators. Health Serv Res. 2011;46(6pt1):2005–18.


  20. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.


  21. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.


  22. Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.


  23. Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Improving the content validity of the mixed methods appraisal tool: a modified e-Delphi study. J Clin Epidemiol. 2019;111:49–59.e1.


  24. Stanick CF, Halko HM, Nolen EA, Powell BJ, Dorsey CN, Mettert KD, et al. Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Transl Behav Med. 2021;11(1):11–20.


  25. Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.


  26. Hutchings A, Raine R, Sanderson C, Black N. A comparison of formal consensus methods used for developing clinical guidelines. J Health Serv Res Policy. 2006;11(4):218–24.


  27. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.



Acknowledgements

We wish to thank our panel participants, expert reviewers, and the ISEE (Implementing Sustainable Evidence-based interventions through Engagement) lab students at New York University for their time and feedback.

Funding

This work was supported in part by the NYU CTSA grants UL1 TR0001445 and TL1 TR001447 from the National Center for Advancing Translational Sciences, National Institutes of Health.

Author information

Authors and Affiliations

Authors

Contributions

DV, NR, and EP contributed to the conceptualization of this work. NR drafted the tool. NR, DV, JG, TO, and EP provided feedback on iterations and applied the tool to various types of articles. NR, DV, JG, TO, and EP led meetings on the utility and usefulness of the tool. NR and DV developed templates for tool automation. NR drafted the manuscript, to which DV, JG, TO, DS, OO, JI, and EP contributed. All authors have reviewed and approved the submitted version.

Corresponding author

Correspondence to Nessa Ryan.

Ethics declarations

Ethics approval and consent to participate

Human subjects approval was not necessary for the purpose of tool development. Data collected from users of the tool concerned the tool itself, and any demographic data were de-identified by a team member not part of the tool development and evaluation process.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ryan, N., Vieira, D., Gyamfi, J. et al. Development of the ASSESS tool: a comprehenSive tool to Support rEporting and critical appraiSal of qualitative, quantitative, and mixed methods implementation reSearch outcomes. Implement Sci Commun 3, 34 (2022). https://doi.org/10.1186/s43058-021-00236-4

