This study demonstrated that the majority of health researchers are aware of IS, with more than two-thirds of respondents stating they would be able to describe IS to a colleague; however, comprehensive understanding of IS may not be universal. Despite the high level of self-reported awareness of IS, there may be a general misunderstanding of the scope of IS. An overwhelming majority of health researchers reported at least sometimes using elements of IS; however, when asked directly about the types of methods they used, only one-tenth of researchers self-identified as using IS. It is not expected that all researchers would or should identify as IS researchers; however, the gap between those identifying as IS researchers and those reporting IS use is larger than would be ideal. This disparity indicates that many researchers engaging in IS may be unaware that their methods fit under the umbrella of IS research, may consider the IS methods they use as belonging to another field of research, or may not consistently use a sufficient number of IS methods to consider their work IS. This use of IS elements without identifying them as methods of the field may jeopardize the rigor of the resulting implementation research.
As a field, IS not only seeks to bring attention to the need for real-world relevance in research [14], but, through its frameworks and methods, also seeks to improve the rigor and transparency of the methods used to examine implementation [1, 14,15,16,17,18]. Many implementation studies in the published literature still have weak study designs and lack the rigor necessary to successfully answer important implementation research questions [19, 20]. The potential for the perpetuation of poor practices in implementation research is particularly concerning as many non-IS health researchers are now expected to incorporate components of implementation into their research [7]. A lack of sufficient awareness of IS methods and training among health researchers could explain some of the shortcomings seen in implementation research. Increasing awareness of IS methods among non-IS researchers who engage in implementation research may lead to more impactful implementation research.
Over the past two decades, considerable progress has been made in conceptualizing what constitutes IS [1], and many resources to define and explain IS have been developed [2, 19]. Our study results, however, confirm previous observations that considerable confusion persists about the terminology and scope of IS [18, 21, 22]. The discordance between researchers using elements of IS and those acknowledging the use of IS methods may be partly explained by confusion regarding what separates IS from other research methodologies. The scope of IS is broad and incorporates many methods and measures familiar to researchers in a variety of other disciplines [1]. Therefore, some health researchers may have been exposed to, and may be using, elements of IS as part of research in other fields (e.g., quality improvement).
As many IS resources have become available only recently, the observed low levels of self-identification as using IS methods may reflect a lag between the development of IS resources and their dissemination to health researchers. Given the disconnect between IS element use and the acknowledgement of IS engagement, further efforts are likely needed to disseminate IS to researchers across disciplines. To support these efforts, additional research is needed to determine whether health researchers are aware of and utilizing the currently available IS resources, as well as whether those resources provide adequate and sufficiently clear information to be useful for potential IS researchers.
The high prevalence of reported IS element use is at odds with the presentation of these elements in the published literature [23], where reporting of even basic IS outcomes is sparse [24,25,26,27,28]. The discordance between the use of IS methods and what is published may in part be a result of inconsistency in the IS terminology used. Implementation studies are conducted across a broad range of disciplines and topical areas, and the terminology used to describe similar constructs often varies significantly (e.g., “fidelity” is also reported as “delivered as intended,” “adherence,” “integrity,” and “quality of program delivery”) [23, 29]. Therefore, measurements of IS use in the literature may underestimate the true use of these measures. The absence of IS elements in the published literature may also be due to a lack of incentive to publish IS measures, which are often viewed as secondary outcomes by researchers and publishers alike [30]. Increasing researcher awareness of IS, its methods, and its terminology may serve to unify implementation research and increase its impact.
The results of this study support calls for the improvement of researcher training in IS [31,32,33,34]. While there are numerous IS resources available [2], it has been acknowledged there is a need for innovative solutions for disseminating such knowledge to researchers [33]. Effective training in IS is essential for the success of IS research [31, 32], and the dissemination of IS knowledge may reduce unrecognized IS engagement and consequently improve the effectiveness and impact of implementation research.
Limitations
Our study had several limitations. First, the generalizability of our study may be limited due to selection bias arising from the sampling frame used. NIH RePORTER is limited to researchers who have had a successful grant submission. Therefore, the survey data may not be generalizable to researchers using other, non-public sources of funding, more junior researchers, or those who have been unsuccessful in obtaining funding. Similarly, NIH RePORTER predominantly contains USA-based researchers, and the study results may therefore not be generalizable to researchers outside of the USA. Second, this study was likely affected by response bias due to the nature of the survey topic. The survey invitation purposefully did not include terms associated with IS; as a result, approximately one-quarter of researchers who started the survey did not complete it, with a number of researchers expressing (through personal correspondence with the author) frustration and disinterest in completing the survey because it was not relevant to them or their research. Survey completion was therefore likely greater among researchers who were already aware of and engaging in IS. Similarly, the overall response rate was relatively low, so the estimates reported may not be representative of the sampling frame as a whole. However, the distribution and variety of reported methods indicate that the group that completed the survey still represents a diverse set of health researchers likely to be generally representative of the target population [13]. Finally, while pilot tested, the survey measures of IS engagement have not been validated. Our results are also based on self-reported use of elements of IS rather than actual practice or understanding of IS, which is likely to lead to an overestimation of the number of researchers engaging in implementation research.
Additionally, the survey did not measure the quality of the research being performed by those with unrecognized IS engagement, and more research is needed to assess actual IS practices in this population.