Scientific progress demonstrates that collaboration goes far beyond direct human interaction. Knowledge in many scientific disciplines evolves through a collaborative endeavour that constantly draws on existing evidence (broadly understood), theoretical creativity, and renewed observation or testing to push its boundaries further. Collaboration in that sense transcends not only countries and continents, but also time and generations. Newton’s metaphor of “giants” on whose shoulders one can see further is currently being explored in the field of evaluation. For evaluation, a young and methodologically multifaceted discipline in a context of rapid digitisation, broader and more systematic use of existing evidence offers specific benefits and poses specific challenges. One characteristic of such developments is that they themselves are often collaborative in nature, because collecting, mapping, synthesizing, and analysing existing evidence requires a range of capabilities not always available to a single individual or organisation.
The way these collaborations and their resulting products are structured will be critical for the future of the evaluation profession. Some key questions are:
1. What can we learn from existing evidence?
- Given that evaluations aim to generate context-specific results, what can we learn from findings in other contexts?
2. What counts as evidence?
- Which types of evaluations are included in maps and syntheses? Which are not? How can products effectively access the full breadth of evaluative evidence across languages, countries, and organisations? How can they account for different ways of knowing?
3. Who can use the evidence?
- Are the resulting products proprietary or is there a broader joint effort to create a public good of evaluative evidence? What other hurdles exist for their use?
4. How can collaborative efforts be conducted in a cost-effective way?
- What kind of coordination is needed to avoid duplication of effort? How can products remain relevant over time despite the rapid growth of new evidence? What can new tools such as AI contribute to such efforts?
The goal of this panel is to generate insights on these questions by discussing tangible examples of inter-organisational collaborations that have engaged in efforts to map and synthesize existing evidence for use in evaluations and practice. As such, the panel aims to facilitate a discussion on both good collaboration practice and reflexive use of the evidence the “giants” before us have generated. Each of the papers included in this panel constitutes an example of collaboration, coordination, and co-creation between different actors involved in evaluations and their perspectives on evidence. The types of collaborations differ across the individual presentations.
The first presentation discusses a collaboration between 3ie and DEval to provide an overview of the existing experimental and quasi-experimental evidence available for advancing the 2030 Agenda. It uses a more specific definition of evidence but provides a timely overview of the full breadth of evidence readily available to improve development practice across the SDGs.
A second example is the Global SDG Synthesis Coalition, presented by UNDP. This initiative brought together dozens of UN agencies, UN Member States, and CSOs to investigate what evidence exists on the SDG pillars.
A third example of using evidence collaboratively is the West Africa Capacity-Building and Impact Evaluation (WACIE) program, a 3ie regional initiative bringing together high-level policymakers from the eight countries of the West African Economic and Monetary Union to support and promote evidence-informed decision-making.
Finally, 3ie and DEval have taken a sectoral approach, collaborating on mapping the evidence in the area of sexual and reproductive health and rights (SRHR). Subsamples of the mapped evidence are synthesised and the results compared with findings from the German SRHR development cooperation portfolio, as part of a DEval evaluation.