Table 2 Mainstream evaluation stacked against co-production [25]

From: Evaluating research co-production: protocol for the Research Quality Plus for Co-Production (RQ+ 4 Co-Pro) framework

Evaluation approach: Peer review

What form does it take?
Peer-review at proposal, ethics, publication, and sharing stages of research.

Challenges for co-production:
Peer-review relies on researchers, not users or beneficiaries, to judge a proposal or a project in terms of scientific criteria. With few exceptions, co-production proposals are assessed by scientific peers, not knowledge users (who are not considered peers). (See, for example, the work of PCORI or the former Knowledge Translation Funding Program at CIHR [8, 26] for examples of ‘Merit Review’ in practice.) Further, these reviews use scientific criteria and scientist perspectives to determine 1) whether a study is ethical for participants, on behalf of participants (through REB procedures), and 2) whether a study contains publishable results, not actionable results. In our view, scientists’ expertise can identify the knowledge gaps the work aims to fill and critique the strength of the methods that will be used to produce it. Yet without including knowledge users and beneficiaries, significant evaluation gaps persist, as knowledge users are best placed to assess the relevance, significance, utility, and potential impact of the research.
Evaluation approach: Metrics

What form does it take?
Metrics and quantitative indices, for example bibliometrics, altmetrics, university rankings, and journal rankings.

Challenges for co-production:
Metrics are biased toward fields of research where productivity in creating output is paramount, chiefly the scholarly paper published in a peer-reviewed, indexed journal. They are also biased toward the quantification of outputs. Metrics and their aggregations tell us little, if anything, about the quality of user engagement in a project. Nor do they speak to the policy or practice relevance of a research topic, or the actual implications of the work for intended beneficiaries. Moreover, they are largely blind to research results that fall outside the indices of mainstream, English-language, academic journal publishing. Similarly, real-world impact resulting from co-production typically goes uncounted within this analytic paradigm.
Evaluation approach: Research impact assessment (RIA)

What form does it take?
Retrospective reviews, often case studies with social and economic measures.

Challenges for co-production:
For co-producers whose aim is knowledge uptake and use, the RIA approach seems welcome at first glance. In some cases, RIA may even privilege research co-production, which can be well positioned to accelerate the uptake and impact of research by knowledge users. However, RIA is not a complete solution for evaluating the quality of research co-production. RIA may provide a meaningful measure for funders and organizations whose primary concern is demonstrating and communicating the magnitude of their impact; however, it does not systematically recognize and study the process of user engagement and how that process can set a course for, and even create, social change during study design and implementation [27, 28]. Furthermore, the mismatch between research funding trajectories (typically 1-5 years) and research impact trajectories (typically 10-20 years) leaves a significant gap in our knowledge of how to do co-production better.