Posted on 08/08/2024
Dear colleagues,
It is exciting to see such an insightful and passionate discussion. I represent CGIAR's evaluation function, where we developed and have implemented evaluability assessments (EAs) with Amy, under the advisory of Rick Davies. The rationale for conducting EAs in CGIAR is summed up in the blog. Complementing and responding to some of the items raised here, it is important to note that not all contexts are created equal, and a level of unquestionable preparedness for an evaluation cannot be assumed for any type or size of intervention in any context. In evolving contexts, some aspects may not be prioritized in time, or in a way that meets the needs of everyone involved. EAs have thus already brought us one step closer to facilitating learning and ensuring that accountability mechanisms (one could even call this an evaluative baseline) are in place before an intervention is launched or progresses too far. EAs have helped build confidence among stakeholders, including MEL professionals (who are often disempowered) and funders, that the aspirational goals and objectives are not only feasible to implement and measure, but also that MEL colleagues are key stakeholders for us in reconciling accountability and transparency with those who fund. EAs have helped enhance the sense of ownership and the credibility of processes and results, and can thus be crucial for securing funding and support. By advising funders on timing and evaluation scope, expectations can be set properly, including the level at which the evaluation can be funded, the depth of inquiry, and the evaluative learning to be expected.
Italy
Svetlana I Negroustoueva
Lead, Evaluation Function
CGIAR
Posted on 26/05/2025
Dear colleagues,
Thank you for bringing up such an important topic, and thanks for all the insights from the COP members. I am offering insights from CGIAR, a consortium of 13 agricultural research centers primarily based in the global South. Through and with partners, including national agricultural research systems, CGIAR centers work collaboratively, and evaluations of CGIAR's portfolio therefore prioritize approaches and methods that capture elements of SSTC, albeit not necessarily within a formal SSTC framework.
CGIAR's evaluation policy (link) includes the Quality of Science (QoS) evaluation criterion, with 'legitimacy' through ethical research practices at its core; a designated evaluation guidance was developed to support it (link, also available in Spanish). To operationalize QoS, process/performance evaluations often involve analyzing collaborative research, capacity building, and knowledge exchange among researchers across the global North and South, as well as research conducted in the global South (see the brief on partnerships). Among the specific tools tailored to SSTC, Social Network Analysis (SNA) has proven useful (see the SNA guide from CGIAR and an example of a report).
Common challenges include scarcity of monitoring data; lack of documentation, such as MOUs; attribution complexities in contexts with multiple interventions by CGIAR centers and by external partners; and limited ability to comprehensively capture the depth and diversity of stakeholder perspectives. On the other hand, designated partnership frameworks and strategies greatly facilitate the evaluability of such efforts.
Watch out for a designated gLOCAL session from CGIAR on the use of SNA to evaluate partnerships: https://zoom.us/webinar/register/WN_k-2ak4d9Rjef_KLo_yS6XQ
Svetlana Negroustoueva - Lead, Evaluation Function, The Independent Advisory and Evaluation Service, CGIAR.