Evaluation, statistics, research design, scaling, impact, teaching
Posted on 27/05/2023
How can CGIAR support the roll-out of the Guidelines with the evaluation community and like-minded organizations?
I believe that CGIAR can help like-minded organizations use the guidelines by emphasizing their best feature: flexibility.
Flexibility is necessary. The guidelines were informed by the work of CGIAR, which is tremendously varied. A common evaluation design would not be appropriate for CGIAR. Neither would it be appropriate for most like-minded organizations.
Flexibility is a middle ground. Instead of using a common evaluation design, each project might be evaluated with a one-off bespoke design. Often this is not practical: the cost and effort of individualization limit the number, scope, and timeliness of evaluations. A flexible structure is a practical middle ground. It suggests what like-minded organizations and their stakeholders value and provides a starting place when designing an evaluation.
Flexibility serves other organizations. The very thing that makes the guidelines useful for CGIAR also makes them useful to other organizations. Organizations can adopt what is useful, then add and adapt whatever else meets their purposes and contexts.
Perhaps CGIAR could offer workshops and online resources (including examples and case studies) that suggest how to select from, adapt, and add to its criteria. It would not only be a service to the larger community, but a learning opportunity for CGIAR and its evaluation efforts.
United States of America
John Gargani
Posted on 01/01/2025
As 2024 comes to an end, I thought I would share more FREE scaling resources (see below). And I want to announce that in 2025 IDRC is planning to release a free MOOC (massive open online course) on scaling impact. The release date is TBD. In the meantime...
The Scaling Impact Book, Playbook, and More Resources at IDRC
English https://idrc-crdi.ca/en/scalingscience
French https://idrc-crdi.ca/fr/misealechelle
Spanish https://idrc-crdi.ca/es/scalingscience
Video
Scaling Impact: Five Big Ideas (Gargani, 2024)
https://www.youtube.com/watch?v=_k49xuTZyu0
Articles
Scaling Science (Gargani & McLean, 2017)
Stanford Social Innovation Review
https://ssir.org/articles/entry/scaling_science
Dynamic evaluation of agricultural research for development supports innovation and responsible scaling through high-level inclusion (Gargani, Chaminuka & McLean, 2024)
Agricultural Systems
https://www.sciencedirect.com/science/article/pii/S0308521X24001823
Website
scalingXchange
A Call to Action from the Global South
https://www.scalingxchange.org/
See you all in the new year!
John
United States of America
John Gargani
Posted on 23/12/2024
Thanks to all who contributed so far. The depth and range of experience is impressive. And it sheds light on why scaling is so difficult—it is undertaken in many ways in diverse contexts for different purposes through multiple pathways.
When Rob McLean and I wrote Scaling Impact: Innovation for the Public Good, we wanted to understand what innovators in the Global South considered successful scaling. We learned a lot. We also learned how much we still need to learn.
One area of learning is how to evaluate scaling efforts. This is distinct from evaluating whether an innovation/program/policy/etc. works (in some sense) at different scales. That is important. But as I see it, when evaluating scaling, the question is: How well has scaling been undertaken to achieve the best (in some sense) impact?
Part of my struggle in answering this evaluative question is perspective. I believe we always want to answer it from the perspective of the people who are affected by scaling. In addition, we may want to answer it from the perspective of the innovator who guides an innovation through its life cycle/pipeline/diffusion/etc. Or we might answer it from the perspective of organizations that run programs/sell products/advance policies that promote the use and/or benefits of an innovation or bundle of innovations. Then again, we might consider the many variations of an innovation that may be set loose by a discovery (for example, AI), and how collectively their competition and complementarity create impacts that, for better or worse, are often unanticipated. And then there is the larger systems perspective in which the innovation is one of many factors. This is not a complete list.
What perspectives matter? How feasible is it to consider more than one at a time? How do we take into account that some may benefit from an innovation while others do not? Or is it simpler than this?