
Evaluating research co-production: protocol for the Research Quality Plus for Co-Production (RQ+ 4 Co-Pro) framework

Abstract

Background

Research co-production is an umbrella term used to describe research users and researchers working together to generate knowledge. Research co-production is used to create knowledge that is relevant to current challenges and to increase uptake of that knowledge into practice, programs, products, and/or policy. Yet, rigorous theories and methods to assess the quality of co-production are limited. Here we describe a framework for assessing the quality of research co-production—Research Quality Plus for Co-Production (RQ+ 4 Co-Pro)—and outline our field test of this approach.

Methods

Using a co-production approach, we aim to field test the relevance and utility of the RQ+ 4 Co-Pro framework. To do so, we will recruit participants who have led research co-production projects from the international Integrated Knowledge Translation Research Network. We aim to sample 16 to 20 co-production project leads, assign these participants to dyadic groups (8 to 10 dyads), train each participant in the RQ+ 4 Co-Pro framework using deliberative workshops, and oversee a simulation assessment exercise using RQ+ 4 Co-Pro within the dyadic groups. To study this experience, we will use a qualitative design to collect participant and project demographic information, and we will use in-depth semi-structured interviews to collect data on each participant’s experience of using the RQ+ 4 Co-Pro framework.

Discussion

This study will yield knowledge about a new way to assess research co-production. Specifically, it will address the relevance and utility of using RQ+ 4 Co-Pro, a framework that includes context as an inseparable component of research, identifies dimensions of quality matched to the aims of co-production, and applies a systematic and transferable evaluative method for reaching conclusions. This is a needed area of innovation for research co-production to reach its full potential. The findings may benefit co-producers interested in understanding the quality of their work, but also other stewards of research co-production. Accordingly, we undertake this study as a co-production team representing multiple perspectives from across the research enterprise, such as funders, journal editors, university administrators, and government and health organization leaders.

Background

Research co-production shows great promise for connecting science to societal problems. Research co-production can be rigorous and ethical [1,2,3,4,5,6,7] and serve as a vehicle for generating and translating scientific findings into action [8]. Research on implementation science and scaling science [9] demonstrates that the use of rigorous research designs is only one consideration when implementing and scaling innovations—context, user/beneficiary perspectives, and systems matter just as much. The active involvement of users (those who may move research findings into action) and beneficiaries (those who may be affected) can be a crucial predictor of success [8,9,10,11].

Research co-production comes in many forms and under many different names. These include, among others, community–academic partnership [11], community-based participatory research [12], co-creation [13], and integrated knowledge translation [14,15,16]. A research study involving experts from five research co-production traditions [17] found that the definitions and motivations of each type of co-production research were very similar. While there are many different names, engaging the users of research in the research process is a common goal. Therefore, we anticipate that the results of this study will hold potential beyond the immediate sample. See Table 1 for general definitions of selected co-production traditions.

Table 1 Research co-production traditions

The need for better evaluation approaches for co-production

There is growing dissatisfaction with the approaches available for assessing the quality of research co-production. Traditional approaches to research quality assessment do not take into account engagement with knowledge users and, as such, do not address a key factor in the hypothesis behind research co-production: that meaningful researcher–knowledge user partnerships make a difference to the quality of the evidence that research produces [8, 10].

Table 2 outlines the predominant forms of research evaluation—as classified and further discussed in Chapter 4.3 of Research Coproduction in Healthcare [25]—and describes how these approaches can undervalue research co-production.

Table 2 Mainstream evaluation stacked against co-production [25]

Objectives of the RQ+ 4 Co-Pro field test

The purpose of this study is to field test the relevance and utility of an adapted research quality evaluation approach first developed and validated by the International Development Research Centre. This approach, called Research Quality Plus (RQ+) [29, 30], has previously been used to assess applied and use-oriented research. For a full explication, see McLean et al. [29]. With this study, we will test whether RQ+ can be adapted for assessing the quality of co-production research (see Note 1). This prototype adaptation is called the Research Quality Plus for Co-Production, or RQ+ 4 Co-Pro, framework [25]. See Fig. 1 below for key definitions of RQ+ and RQ+ 4 Co-Pro.

Fig. 1

Key definitions

Two research questions guide the field test:

  1. Is the RQ+ 4 Co-Pro framework relevant for the evaluation of research co-production?

  2. Is the RQ+ 4 Co-Pro framework useful for the evaluation of research co-production?

The Research Quality Plus for Co-Production (RQ+ 4 Co-Pro) framework

The adaptation of RQ+ into the RQ+ 4 Co-Pro framework is illustrated in Fig. 2. RQ+ 4 Co-Pro was first proposed by authors of this paper following their experience designing and using the initial RQ+ framework at IDRC, doing research evaluations internationally, and doing research co-production [25]. This is a prototype rendition. The study described in this manuscript aims to field test the prototype. In Additional file 1: Appendix I, the fully detailed RQ+ 4 Co-Pro framework template is provided for the interested reader; it includes the definitions of each framework component and the associated evaluative rubrics. Additional file 2: Appendix II provides a crosswalk of the components of the RQ+ framework with the components of the RQ+ 4 Co-Pro framework.

Fig. 2

The RQ+ 4 Co-Pro framework (adapted from an infographic originally published by the authors (RKDM, IDG, FC) and subsequently in [25])

The RQ+ 4 Co-Pro framework embraces the three tenets of the RQ+ Approach. These are as follows: (1) context matters, (2) quality is multi-dimensional, and (3) assessments should be empirical and systematic. These are modified from the RQ+ framework to reflect the particularities of co-production research. Here we provide a description of how each tenet was tailored, and then introduce an all-of-framework infographic (Fig. 2) to show how the three tenets fit together.

Contextual factors

Research always occurs in a context. Research is affected by and affects the socio-economic, historical, cultural, and political contexts, as well as the geographic and institutional setting (see Note 2). Attention to context is particularly important in evaluating the quality of research co-production [31]. We identify three contextual factors that can be monitored and categorized in a co-production evaluation. The goal of examining contextual factors is to gather information that helps to understand and navigate the enabling environment for co-production research. Understanding context is important to research design, management, and funding decisions: it helps clarify potential risks and opportunities and may also support the development of strategies to capitalize on these and to monitor progress. The contextual factors are not intended to affect the ratings of the research quality dimensions or sub-dimensions, nor is any rating of a contextual factor necessarily “better” than another. Rather, they provide a deeper understanding of the enabling environment.

The three RQ+ 4 Co-Pro contextual factors are as follows: (1) Knowledge Use Environment, (2) Research Environment, and (3) Capacities for Co-Production. In the International Development Research Centre’s current RQ+ framework, there are five contextual factors. Three are closely aligned to those here, given some tailoring to match co-production specifically. The additional two contextual factors, Data Environment and Maturity of the Research Field, are not included in RQ+ 4 Co-Pro as they present less immediate alignment with the aims of co-production. The decision to reduce the number and tailor the contextual factors for RQ+ 4 Co-Pro was the result of consultations between authors of this paper, and their shared experiences with co-production and co-production evaluations [25]. With this field test, we will further examine the relevance of these three contextual factors and determine the need to modify, exclude, or include new elements on grounds of relevance and/or utility (see research questions above). Additional file 2: Appendix II provides a crosswalk of the RQ+ contextual factors vis-à-vis the RQ+ 4 Co-Pro contextual factors.

Quality dimensions and sub-dimensions

To assess co-production quality, we identify three dimensions and eight sub-dimensions. These are summarized in Fig. 2 and presented in detail in Additional file 1: Appendix I. Additional file 2: Appendix II crosswalks these dimensions and sub-dimensions with those of the RQ+ framework.

As with all research, Scientific Rigour is central to co-production research and therefore comprises the first dimension. Two sub-dimensions are identified under Scientific Rigour: 1.1. Protocol, which addresses issues of study design, and 1.2. Methodological Integrity, which assesses the rigour and integrity of the application of the study design. Research Legitimacy is the second dimension of RQ+ 4 Co-Pro. Its four sub-dimensions assess the fidelity of the research to the operating environment: 2.1. Inclusion of Local Knowledge and Ways of Knowing, 2.2. Trust, Power and Mutually Beneficial Partnerships, 2.3. Intersectionality, and 2.4. Attention to Potentially Negative Consequences. The third and final dimension is Positioning for Use. It assesses the utility of the co-production research by examining 3.1. Relevance, or how well the work is aligned to a current problem, and 3.2. Openness and Actionability, which addresses the accessibility and usefulness of the research findings.

In the RQ+ 4 Co-Pro framework, all dimensions are interrelated and should not be considered as variables that are independent of each other. They are disaggregated to promote a deeper understanding of the multiple dimensions of research quality—ultimately, they must be considered as a set. We assign equal weight to the dimensions and sub-dimensions; others may choose to prioritize or highlight some sub-dimensions over others in any assessment they design.

Empirical and systematic appraisal

Column 3 in Fig. 2 outlines the measurement scale. RQ+ 4 Co-Pro users apply a rubric, which ensures transparency in the results and promotes a systematic approach across all the research being assessed. A combination of qualitative explanations and quantitative measures of the sub-dimensions should be used to reach conclusions about the quality of the co-production research. In the following sections, we outline how empirical evidence will be gathered in our field test.

Table 3 below outlines how RQ+ 4 Co-Pro addresses some key recommendations from studies on research co-production approaches.

Table 3 What RQ+ 4 Co-Pro can learn, benefit from, and build on from existing frameworks, experiences, and systematic reviews [25]

Study design

This field test will use a multiple method qualitative design. It will include training of participants, standardized data collection using desk-based templates, and follow-up qualitative interviews with both the assessors and those whose projects have been assessed. As well, it will include a consultative process with the project team for revising RQ+ 4 Co-Pro based on the outcomes of the field test [35, 36].

The study will take a research co-production approach. To do so, the study is being undertaken as a partnership between researchers and knowledge users. All activities and responsibilities will be shared; however, five team members (authors: RKDM, FC, IDG, AK, CM) are primarily responsible for study design and execution. Thirteen team members (authors: ABA, RA, JB, CEC, OD, EDR, LAF, MG, AMH, RK, SK, JR, GS) hold primary responsibility for identifying knowledge uptake and use opportunities. These “knowledge user” team members represent critical stewardship roles for research co-production broadly, including funders, university administrators/leaders, research evaluation specialists, journal editors, co-production trainees, research managers, and co-production scientists. By working together to field test RQ+ 4 Co-Pro, we hope to spark reasoned and appropriate uptake of the framework into settings where current co-production evaluation techniques demand revision and innovation.

The field test will be implemented in four phases, which comprise eight steps. Figure 3 presents an illustration of the complete research life cycle.

Fig. 3

Outline of the research life cycle

Phase 1—Study preparation

Sample recruitment and participant consent

Researchers in the Integrated Knowledge Translation Research Network (IKTRN) will be invited to submit projects for assessment and to volunteer to assess another project. The IKTRN is a network of researchers with an interest in both using and carrying out research on integrated knowledge translation; it is funded by a multi-year grant from the Canadian Institutes of Health Research [19]. We will use a convenience sample drawn from IKTRN members with IKT research experience who have recently completed an IKT project (i.e., have an IKT case published in the IKTRN casebook series). The invitation will be delivered by email from the study PI to eligible members of the IKTRN until the desired sample size of 16 projects is reached, to a maximum of 20 projects. Enrolled participants will be arranged in dyads based on research topic familiarity for the assessment.

The sample range (16–20 projects) is based on two factors: first, feasibility, given the resource requirements of past applications of the RQ+ approach and our own study timelines and resources; and second, the anticipated saturation point of the qualitative data collected in the field test [37].

Eligibility criteria

We will consider eligibility at two levels: (1) the IKT research project and (2) the individuals participating in our study, as described below in Tables 4 and 5 respectively. The research team will gather informed written consent from all participants.

Table 4 Project eligibility criteria
Table 5 Participant eligibility criteria

RQ+ 4 Co-Pro framework training

Participants will receive training in RQ+ 4 Co-Pro. Training will be provided by the core research team, with the aim of introducing the RQ+ 4 Co-Pro framework and the definitions and meaning of its components (contextual factors, quality dimensions and sub-dimensions, evaluative rubrics), and of systematizing participants’ use of the approach. The 2-h training will be completed before any data collection begins.

Phase 2—Data collection

Data collection will involve four steps: (1) completion of a participant socio-demographic form, (2) completion of a project information form, (3) dyadic RQ+ 4 Co-Pro assessments, and (4) participation in an interview with the research team on the strengths and limitations of the RQ+ 4 Co-Pro framework.

Step 1—Participant socio-demographic form

All participants will be sent a link to an online socio-demographic form. This form will collect information on participant demographics and their experience/background developing and/or delivering IKT research projects. We will ask that participants complete this form prior to taking part in the training session (5 min).

Step 2—Project information form

All participants will be sent a link to a project information form. This form will be pre-populated by the study team as much as possible to profile the project included in the field test. The study participant will verify and complete any missing information on the project profile prepared by the study team (10 min).

Step 3—Dyadic project assessment exercise

Each participant in the dyad will provide the other, who will serve as assessor, with publicly available documentation on the project to be assessed (e.g., publications, manuscripts, reports, briefs, blogs). The assessors (study participants in dyads [38]) will review this material to gain an understanding of the project context, as well as its strengths and weaknesses (1–2 h). Next, the assessors within each dyad will conduct an assessment interview about their projects, drawing on the RQ+ 4 Co-Pro training and the field test template (see Annex 1) provided by the research team. Assessment interviews may be completed in one virtual call or split in two, as the dyad prefers; they are estimated to last 60 min per project. The assessors will use the field test template to record the results of the assessment during the interview.

Step 4—Research interview with RQ+ 4 Co-Pro study team

On completion of the dyadic assessments, members of the research team (RKDM, FC) will interview the assessors (study participants) individually using a semi-structured interview guide to elicit their views as both assessor and assessed on the utility and relevance of using RQ+ 4 Co-Pro to assess the quality of IKT research [36]. The interviews will be completed by phone or video conference, depending on participant preference. Interviews will take approximately 60 min.

Phase 3—Analysis and revision

Data analysis

Data analysis will be conducted for each data source independently (participant demographic forms, project information forms, interviews with study participants), and the independent lines of evidence will then be triangulated to identify congruence as well as instances of discordance.

Step 1—Participant demographic form analysis

Frequencies will be generated for all closed-ended questions. Responses to open-ended questions will be analyzed for common and disparate themes using content analysis. Analysis will provide an overview of participants’ backgrounds and experiences brought to the field test.
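As a purely illustrative sketch (not part of the study instruments), the frequency tabulation for closed-ended items could look like the following; the response categories are hypothetical:

```python
from collections import Counter

# Hypothetical closed-ended responses to one socio-demographic item;
# the categories below are invented for illustration only.
responses = [
    "researcher", "knowledge user", "researcher",
    "trainee", "researcher", "knowledge user",
]

# Tabulate frequencies for the closed-ended question
frequencies = Counter(responses)
print(frequencies.most_common())
# → [('researcher', 3), ('knowledge user', 2), ('trainee', 1)]
```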

Step 2—Project information form analysis

The project profile forms will be analyzed using content analysis to provide an overview of the nature of the projects included in the field test.

Step 3—Interview analysis

Qualitative interview data collection and analysis will occur simultaneously so that identified themes can be incorporated into subsequent interviews. Interviews will be audio recorded with the permission of the interviewee. Where permission for recording or transcription is not granted, the interview notes will be sent to the interviewee for review.

We will use thematic analysis [39] to identify patterns in the interview data, taking an inductive, data-driven approach without a pre-existing coding frame. The coding will be modified as new findings emerge and in collaboration among interviewers. The first two interviews will be coded by two researchers independently and the results compared; differences will be discussed to agree on a common approach for the remaining interviews. If the two researchers cannot reach agreement, a third researcher will arbitrate and provide a deciding opinion.
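For illustration only, the independent double-coding check could be sketched as follows; the excerpt identifiers and code labels are hypothetical, and the real comparison will be discussion-based rather than purely computational:

```python
# Hypothetical code assignments by two independent coders for the
# same interview excerpts (all labels invented for illustration).
coder_a = {"ex1": "relevance", "ex2": "utility", "ex3": "context"}
coder_b = {"ex1": "relevance", "ex2": "context", "ex3": "context"}

# Flag excerpts where the coders assigned different codes, for discussion
disagreements = {ex for ex in coder_a if coder_a[ex] != coder_b.get(ex)}

# Simple proportion agreement across excerpts
agreement = 1 - len(disagreements) / len(coder_a)
print(sorted(disagreements), round(agreement, 2))
# → ['ex2'] 0.67
```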

Step 4—Triangulation and analysis

As a final step in data analysis, we will look for notable similarities and differences in the study data by comparing findings across the lines of evidence. We will conduct triangulation by data source and by data collection method, using the identified codes and themes to compare data. For example, we might cross-tabulate all projects with a timeline of more than 4 years (as identified in the project information form) against perspectives on the importance, or lack thereof, of the contextual factors in the RQ+ 4 Co-Pro framework. This is a hypothetical example; triangulation will be driven by the themes identified in the data.
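The hypothetical cross-tabulation described above could be sketched as follows; the field names and values are invented for illustration and do not come from the study forms:

```python
from collections import Counter

# Hypothetical merged records: project duration (from the project
# information form) paired with a coded interview theme.
records = [
    {"duration_gt_4y": True,  "context_factors": "important"},
    {"duration_gt_4y": True,  "context_factors": "important"},
    {"duration_gt_4y": False, "context_factors": "not important"},
    {"duration_gt_4y": False, "context_factors": "important"},
]

# Cross-tabulate duration against the coded theme
crosstab = Counter(
    (r["duration_gt_4y"], r["context_factors"]) for r in records
)
print(crosstab[(True, "important")])  # → 2
```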

Revision of the RQ+ 4 Co-Pro framework

Based on the findings of the research, we will revise the prototype version of the RQ+ 4 Co-Pro framework. To facilitate this revision, the research team will host a meeting of all team members (including our knowledge users) to review and discuss the preliminary results and whether they suggest changes to the framework or its components. The rationale for changing any framework component will relate to the two research questions driving the study: the relevance of the framework components and the utility of the framework components and their application. Following team iteration, we will prepare any required revisions and re-present the revised framework to the study participants for review and member checking.

Phase 4—Results sharing

The final version of the RQ+ 4 Co-Pro framework will be published in a study findings report, which will be submitted to an open access peer-reviewed journal for external assessment by co-production specialists. Uptake and use strategies will be developed with the knowledge users represented on our co-production team.

Discussion

This study will yield knowledge about a new way to assess research co-production. Specifically, it will address the relevance and utility of using RQ+ 4 Co-Pro, a framework that includes context as an inseparable component of research, identifies dimensions of quality matched to the aims of co-production, and applies a systematic and transferable evaluative method for reaching conclusions. As we have argued in this paper (see Table 2), evaluation is a needed area of innovation for research co-production to reach its full potential. As we have presented (see Table 3), we are not alone in raising this call.

Limitations

There are limitations to our study design and methods. First, while we propose that RQ+ 4 Co-Pro should apply to all research co-production approaches, we have limited our sample to IKT projects, a specific sub-domain of co-production, which may limit generalization to other partnered research approaches. Second, our sample of IKTRN projects will prioritize experiences of the global North, as IKTRN membership largely comprises members from Canada, Australia, and the UK. Third, we further limited our sample to completed projects, so the study will not test the potential use of RQ+ 4 Co-Pro at the design and implementation stages of co-production projects. Fourth, given that the evaluands in our field test are research projects, generalizability to other evaluands, such as organizations, project portfolios, or grant applications, should be tempered. Fifth, the approach to applying the framework will focus on discussion with principal investigators and documentary analysis. In future uses of the framework, users may wish to be more holistic and include more data sources, for example, interviews with end users (however, if the framework is not considered useful by researchers, it will likely be problematic for knowledge users). At the same time, other applications may go into less depth than this field test, for example, using a checklist for project design or application review. Given our design, we cannot be certain that our field test experience will generalize to these other potential uses of the framework. Sixth, this field test is limited to health research projects, although we recognize that co-production is an approach used in multiple domains of science. Finally, all study participants are part of the same network; this may risk a more positive assessment of each other’s projects due to social bias. However, the goal of the research is to assess the relevance and utility of the framework, not to draw final conclusions about the quality of the co-production research endeavors sampled.

Looking ahead

We expect that our findings will add to the existing options for assessing co-production research, which may benefit researchers but also other stewards of research co-production. Accordingly, we undertake this study as a co-production team representing varied experiences and constituencies. Potential uses include funders interested in new ways to select, encourage, and/or evaluate co-production, including at different phases of the research life cycle. The framework may also give journal editors greater confidence in the quality of the research co-production they publish. Research institutions, such as universities or think tanks, may benefit from assessing the quality of their co-production using a framework tailored to their values, objectives, and context. The final research report of this RQ+ 4 Co-Pro field test will include a section discussing users and uses of RQ+ 4 Co-Pro. We will also tailor outputs and use strategies to the identified needs of the knowledge users within our team; these efforts may appear not in peer-reviewed journals or other scholarly publication formats but as use-oriented outputs and activities.

Availability of data and materials

No study data have been collected yet. Upon study completion, please contact the corresponding author for more information.

Notes

  1. Applied and use-oriented research refers to any research, however conducted, that focuses on a particular societal issue. Research co-production is also concerned with societal issues but includes the active engagement of the users in the research process itself, from design to completion to implementation of the findings.

  2. In our view this should hold for all four pillars of health research: biomedical, clinical, health systems/services, and population and public health.

References

  1. Lavery JV. Building an evidence base for stakeholder engagement. Science. 2018;6361(6402):554–5.


  2. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1(1):1. https://doi.org/10.1186/1748-5908-1-1.


  3. Barwick M, Dubrowski R, Petricca K. Knowledge Translation: The Rise of Implementation.; 2020. https://ktdrr.org/products/kt-implementation/KT-Implementation-508.pdf


  4. Davis R, D’Lima D. Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives. Implement Sci. 2020;15(1). https://doi.org/10.1186/s13012-020-01051-6.

  5. Harries S. Bridging policy and delivery with knowledge: the case for intervention. In: Records Management and Knowledge Mobilisation. Oxford: Elsevier; 2012. p. 145–68. https://doi.org/10.1016/b978-1-84334-653-1.50007-x.

  6. Ivankova NV, Creswell JW, Stick SL. Using mixed-methods sequential explanatory design: from theory to practice. Field Methods. 2006;18(1):3–20. https://doi.org/10.1177/1525822X05282260.


  7. Duncan S, Oliver S. Editorial: motivations for engagement. Sci All. 2017;1(2). https://doi.org/10.18546/RFA.01.2.01.

  8. Canadian Institutes of Health Research. Evaluation of CIHR’s knowledge translation funding program. Ottawa: CIHR; 2013. [online]. Available at: https://cihr-irsc.gc.ca/e/47332.html [Accessed 30 Aug 2021]


  9. McLean R, Gargani J. Scaling impact: innovation for the public good. London, New York and Ottawa: Routledge and International Development Research Centre; 2019. Available at: https://www.idrc.ca/en/book/scaling-impact-innovation-public-good [Accessed 30 Aug. 2021]


  10. Mancuso A, Malm SA, Sharkey A, Shahabuddin ASM, Shroff ZC. Cross-cutting lessons from the decision-maker led implementation research initiative. Health Res Policy Sys. 2021;10(Suppl 2):83. https://doi.org/10.1186/s12961-021-00706-0.


  11. Drahota A, Meza RD, Brikho B, Naaf M, Estabillo JA, Gomez ED, et al. Community-academic partnerships: a systematic review of the state of the literature and recommendations for future research. Milbank Q. 2016. https://doi.org/10.1111/1468-0009.12184.

  12. Jull J, Giles A, Graham ID. Community-based participatory research and integrated knowledge translation: advancing the co-creation of knowledge. Implement Sci. 2017;12:150. https://doi.org/10.1186/s13012-017-0696-3.


  13. Greenhalgh T, Hinton L, Finlay T, Macfarlane A, Fahy N, Clyde B, et al. Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 2019;22(4):785–801.


  14. Gagliardi AR, Berta W, Kothari A, et al. Integrated knowledge translation (IKT) in health care: a scoping review. Implement Sci. 2015;11:38. https://doi.org/10.1186/s13012-016-0399-1.


  15. Kothari A, McCutcheon C, Graham ID. Defining integrated knowledge translation and moving forward: a response to recent commentaries. Int J Health Policy Manag. 2017;6(5):299–300. https://doi.org/10.15171/ijhpm.2017.15.


  16. Graham I, Tetroe J, Pearson A. Turning knowledge into action: practical guidance on how to do integrated knowledge translation, vol. 21. Adelaide: Lippincott-Joanna Briggs Institute Synthesis Science in Healthcare Series; 2014.

  17. Nguyen T, Graham ID, Mrklas KJ, Bowen S, Cargo M, Estabrooks CA, et al. How does integrated knowledge translation (IKT) compare to other collaborative research approaches to generating and translating knowledge? Learning from experts in the field. Health Res Policy Syst. 2020;18:35. https://doi.org/10.1186/s12961-020-0539-6.


  18. Jason LA, Keys CB, Suarez-Balcazar Y, Taylor RR, Davis MI. Participatory community research: theories and methods in action. Washington D.C.: American Psychological Association; 2004. https://doi.org/10.1037/10726-000.

  19. Integrated Knowledge Translation Research Network site. https://iktrn.ohri.ca/aboutus/what-is-ikt/. Accessed 30 Aug 2021.

  20. Beaulieu M, Breton M, Brousselle A. Conceptualizing 20 years of engaged scholarship: a scoping review. PLoS One. 2018;13(2). https://doi.org/10.1371/journal.pone.0193201.

  21. Gibbons M, Limoges C, Nowotny H, Schwartzman S, Scott P, Trow M. The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage Publications; 1994.


  22. Mitchell AS. Mode-2 knowledge production within community-based sustainability projects: applying textual and thematic analytics to action research conversations. Adm Sci. 2020. https://doi.org/10.3390/admsci10040090.

  23. Noel L, Phillips F, Tossas-Milligan K, Spear K, Vanderford NL, Winn RA, et al. Community-academic partnerships: approaches to engagement. Am Soc Clin Oncol. 2019. https://doi.org/10.1200/EDBK_246229.

  24. Graham ID, Rycroft-Malone J, Kothari A, McCutcheon C. Research Coproduction in Healthcare. 2022. Hoboken: Wiley.

  25. McLean RKD, Graham ID, Carden F. Evaluating Research Coproduction. In: Graham ID, Rycroft-Malone J, Kothari A, McCutcheon C, editors. Research Coproduction in Healthcare. Hoboken: Wiley; 2022.

  26. McLean RKD, Graham ID, Bosompra K, Choudhry Y, Coen SE, MacLeod M, et al. Understanding the performance and impact of public knowledge translation funding interventions: Protocol for an evaluation of Canadian Institutes of Health Research knowledge translation funding programs. Implement Sci. 2012;7:57. https://doi.org/10.1186/1748-5908-7-57.

  27. Russell J, Fudge N, Greenhalgh T. The impact of public involvement in health research: what are we measuring? Why are we measuring it? Should we stop measuring it? Res Involv Engagem. 2020;6(1).

  28. Greenhalgh T, Fahy N. Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework. BMC Med. 2015;13:232. https://doi.org/10.1186/s12916-015-0467-4.

  29. McLean R, Ofir Z, Etherington E, Acevedo M, Feinstein O. Research Quality Plus (RQ+): evaluating research differently. Ottawa: International Development Research Centre; 2022. Available at https://www.idrc.ca/en/rqplus.

  30. Lebel J, McLean RKD. A better measure of research from the Global South. Nature. 2018;559(7712):23–6. https://doi.org/10.1038/d41586-018-05581-4.

  31. Kreindler SA. Advancing the evaluation of integrated knowledge translation. Health Res Policy Syst. 2018;16(1).

  32. Ward M, Schulz AJ, Israel BA, Rice K, Martenies SE, Markarian E. A conceptual framework for evaluating health equity promotion within community-based participatory research partnerships. Eval Program Plann. 2018;70:25–34.

  33. Beckett K, Farr M, Kothari A, Wye L, Le May A. Embracing complexity and uncertainty to create impact: exploring the processes and transformative potential of co-produced research through development of a social impact model. Health Res Policy Syst. 2018;16(1).

  34. Boivin A, L’Espérance A, Gauvin F-P, Dumez V, Macaulay AC, Lehoux P, et al. Patient and public engagement in research and health system decision making: a systematic review of evaluation tools. Health Expect. 2018;21(6).

  35. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  36. Phillippi J, Lauderdale J. A guide to field notes for qualitative research: context and conversation. Qual Health Res. 2017. https://doi.org/10.1177/1049732317697182.

  37. Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59–82. https://doi.org/10.1177/1525822X05279903.

  38. Morgan DL, Ataie J, Carder P, Hoffman K. Introducing dyadic interviews as a method for collecting qualitative data. Qual Health Res. 2013;23(9):1276–84. https://doi.org/10.1177/1049732313501889.

  39. Nowell LS, Norris JM, White DE, Moules NJ. Thematic analysis: striving to meet the trustworthiness criteria. Int J Qual Methods. 2017;16(1):1609406917733847. https://doi.org/10.1177/1609406917733847.

Acknowledgements

We acknowledge the support of the Integrated Knowledge Translation Research Network and its members, who have encouraged this study.

Funding

Funding for this study is provided by the Canadian Institutes of Health Research Foundation Grant (FDN #143237) entitled, Moving knowledge into action for more effective practice, programs and policy: A research program focusing on integrated knowledge translation.

Author information

Contributions

RKDM, FC, and IDG conceptualized the research and study protocol. RKDM and FC drafted the manuscript and facilitated all reviews and revisions following co-author feedback. All authors (RKDM, FC, IDG, ABA, RA, JB, CEC, OD, ED, LAF, MG, AMH, RK, AK, SK, CM, JR, GS) made contributions to the conception and design of the work and reviewed, contributed to, and approved the submitted manuscript.

Corresponding author

Correspondence to Robert K. D. McLean.

Ethics declarations

Ethics approval and consent to participate

Approval has been obtained from the Research Ethics Board of the Ottawa Health Science Network (OHSN-REB 20210642-01H). All participants will sign consent to participate forms and are free to leave the study at any point, for any reason. Confidentiality is assured for study participants.

Consent for publication

Not applicable.

Competing interests

IDG is the scientific director of the IKTRN; AK is the deputy director of the IKTRN; CM is the manager of the IKTRN; JR is the research coordinator of the IKTRN. AMH is protocols editor at Implementation Science.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

McLean, R.K.D., Carden, F., Graham, I.D. et al. Evaluating research co-production: protocol for the Research Quality Plus for Co-Production (RQ+ 4 Co-Pro) framework. Implement Sci Commun 3, 28 (2022). https://doi.org/10.1186/s43058-022-00265-7
