
Global Impact Evaluation Forum 2025: Forging evidence partnerships for effective action

Posted on 25/11/2025 by Ola Eltoukhi
Global Impact Evaluation Forum 2025
WFP OEV

Background and rationale

The United Nations marks its 80th anniversary at a time when the world faces a convergence of complex crises, coupled with declining Official Development Assistance. It now has to demonstrate maximum value where it matters most: in the lives of the people it aims to serve.

In March, UN Under-Secretary-General for Humanitarian Affairs and Emergency Relief Coordinator Tom Fletcher called for a Humanitarian Reset, setting the tone for a more agile, unified, and cost-effective UN system. This requires not only robust evidence of what works but a fundamental shift in how we work together.

Impact evaluations complement other monitoring, evaluation and research exercises by helping identify which interventions are most effective. However, rigorous causal evidence from humanitarian contexts remains scarce.[1],[2]

Against this backdrop, the challenge is clear: to meet the demands of this new era, the UN system must decisively bridge the gap between evidence and action. But doing so in a timely and collaborative manner, particularly in fragile settings, remains a persistent challenge.

The forthcoming Global Impact Evaluation Forum, hosted by WFP in partnership with BMZ and Norad, builds on the momentum of the inaugural 2023 WFP Forum, and the 2024 WFP-UNICEF Forum, to pivot from exploration to joint action across the UN system and beyond.

Discussion objectives

The Forum will convene practitioners, policymakers, and partners to explore how impact evaluation can align with these larger objectives, specifically in:

  • Maximizing value to drive cost-effectiveness in the UN system
  • Delivering global support for local action: building evidence for the localization agenda
  • Unifying action: using evidence to support UN reforms
  • Connecting efforts and evidence in the humanitarian, development, and peace nexus

The purpose of this online discussion, hosted on EvalforEarth and led by WFP’s impact evaluation team, is to build momentum around key themes of the Forum; the conversation will continue online during the event itself, 9-11 December 2025.

If you’re interested in joining the Global Impact Evaluation Forum online in December, please register here.

The discussion is open for contributions until 8 December 2025.

Guiding questions

  1. Bridging evidence and action: What are the most effective ways the UN and its partners can strengthen the link between impact evaluation findings and real-time decision-making?
  2. Localizing evidence: How can impact evaluations be designed and used to better serve the localization agenda, ensuring that local priorities, capacities, and contexts inform policy and programmes?
  3. Supporting UN reform: How can the impact evaluation community collectively contribute towards goals of coherence and cost-effectiveness in the UN system?
  4. Connecting evidence: How can various UN agencies and partners, with diverse mandates, align evidence agendas in the humanitarian, development, and peace nexus?
     

[1] A WFP-World Bank literature review highlights the scarcity of rigorous impact evaluations on cash and in-kind transfers.

[2] A Global Prioritisation Exercise by Elrha identified challenges including insufficient feedback loops from research to scaling interventions and a specific need for more cost-effectiveness data.

This discussion is now closed. Please contact info@evalforearth.org for any further information.

Chamisa Innocent

Zimbabwe

EvalforEarth CoP Facilitator/Coordinator

EvalforEarth CoP

Posted on 12/12/2025
Dear colleagues,
Thank you for your valuable contributions to this online discussion in the lead-up to the Global Impact Evaluation Forum 2025. Your reflections and examples enriched the conversation on how the UN and its partners can better connect impact evaluation, decision-making and UN reform.
A few key messages stood out across the contributions:
  • Impact evaluations must be designed for decisions, not just for reports. Many of you stressed the importance of starting from concrete policy and programming questions, embedding learning loops, and ensuring that evidence is generated and communicated at the speed of operations, rather than arriving “too late” to inform real-time choices.
  • Localising evidence requires shifting power to local actors. Contributors highlighted that evaluations are most useful when questions, methods, data and interpretation are co-created with national institutions, local governments, civil society, and communities, and when evidence systems strengthen national capacities and data systems rather than bypass them.
Over the coming days, we will prepare and share a short discussion summary capturing the main insights, as well as a blog that will highlight key lessons and messages.
Thank you once again for your engagement and for advancing this important conversation.
Hailu Negu Bedhane

Ethiopia

cementing engineer

Ethiopian electric power

Posted on 12/12/2025

Advanced Message for the Global Impact Evaluation Forum 2025

Colleagues, partners,

Our goal is to create alliances for successful action. This necessitates a fundamental change: integrating impact evaluation (IE) as a strategic compass for real-time navigation instead of viewing it as a recurring audit of the past.

  1. Linking Evidence and Action: From Reports to Feedback Loops 
    Better feedback systems, not better reports, will strengthen the connection between evaluation and decision-making. The UN and its partners need to institutionalize three practices:
  • Light-touch, embedded IE units: Within programmatic arms, such as humanitarian clusters or national platforms, small, dedicated teams use predictive analytics and rapid mixed methods to test hypotheses during implementation rather than after it.
  • Decision-Based Costing: Requiring a specific, significant budget line for adaptive management and real-time evidence gathering in every major programme proposal. Evidence then becomes an essential part of the programme rather than an afterthought.
  • Leadership Dashboards: Going beyond narrative reports, these dynamic data-visualization tools allow executives to view the "vital signs" of a portfolio and make course corrections by comparing key impact indicators against theory-of-change milestones (a minimal sketch follows below).
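To make the dashboard idea concrete, here is a minimal sketch in Python of the comparison logic such a tool might run. The indicator names, milestone values, and the 90% tolerance threshold are all hypothetical illustrations, not drawn from any actual WFP or UN system.

```python
# Minimal sketch: flag portfolio indicators that lag their
# theory-of-change milestones. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    milestone_target: float  # value the theory of change expects by now
    latest_value: float      # most recent monitoring value

def vital_signs(portfolio: list[Indicator], tolerance: float = 0.9) -> None:
    # An indicator is "on track" if it reaches at least `tolerance`
    # of its milestone target; anything below is flagged.
    for ind in portfolio:
        on_track = ind.latest_value >= tolerance * ind.milestone_target
        status = "on track" if on_track else "NEEDS COURSE CORRECTION"
        print(f"{ind.name}: {ind.latest_value} vs {ind.milestone_target} -> {status}")

vital_signs([
    Indicator("households_reached", milestone_target=10_000, latest_value=9_500),
    Indicator("dietary_diversity_score", milestone_target=5.0, latest_value=3.8),
])
```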
  2. Localizing Evidence: Inverting the Credibility Hierarchy 
    The implicit hierarchy that favors external "rigor" over local relevance must be dismantled in order to reflect local goals and contexts. 
  • Co-Design from Inception: Local stakeholders, including governments, community leaders, and CSOs, must collaborate to create the evaluation questions and define "impact" in their particular context. This is shared ownership, not consultation.
  • Invest in Local Analytical Ecosystems: Funding and collaborating with regional institutions, think tanks, and data science collectives is the most sustainable approach to localizing evidence. This keeps intellectual capital in-country, builds capacity, and guarantees linguistic and cultural nuance.
  • Adopt Pluralistic Approaches: RCTs are important, but systems mapping, participatory action research, and culturally grounded qualitative approaches deserve equal weight. The "gold standard" is whichever method best answers the most urgent local question.
  3. Encouraging UN Reform: A Collective "Evidence Compact" 
    By functioning as a cohesive, system-wide profession, the impact evaluation community can serve as the catalyst for coherence and cost-effectiveness. 
  • Common standards, not standardization: Create a UN system-wide "Evidence Compact"—a concise consensus on shared principles (such as open data, ethics, and quality thresholds) and shared platforms for meta-analysis. This lets us compare what works across sectors and eliminates duplication.
  • Pooled Evaluation Funds: Rather than each agency commissioning small, dispersed studies, we should establish pooled funds at the regional or thematic level. These enable larger, more strategic, cross-mandate evaluations that address intricate, system-wide issues such as social protection or climate adaptation.
  • A "What Works" Knowledge Platform: A single, easily accessible, well-curated digital platform that links findings from UNICEF's education RCTs, UNDP's governance evaluations, UNHCR's protection analysis, and WFP's food security research. In doing so, agency-specific evidence becomes a public good of the UN system.
  4. Linking Evidence Throughout the Nexus: Make the Intersections Mandatory 
    Aligning humanitarian, development, and peace efforts is less about harmonizing core objectives than about requiring careful investigation at their intersections. 
  • Support and Assess "Triple Nexus" Pilots: Agencies must jointly design and fund impact evaluations that expressly target initiatives bridging two or all three pillars. The central question: "Do integrated approaches yield greater sustainability and resilience impact than sequential or parallel efforts?" 
  • Establish Nexus IE Fellowships: Rotate impact evaluation experts across UN agencies (for example, from FAO to OCHA to DPPA). This creates a cadre of experts fluent in several mandate "languages" and capable of designing evaluations that track results along the humanitarian, development, and peace spectrum.
  • Adopt a Resilience Lens: Focus evaluation questions on enhancing system and community resilience. This offers a unifying paradigm relevant to peacebuilders (social cohesion), development actors (chronic vulnerability), and humanitarian responders (shock absorption).

To sum up, forging evidence partnerships for effective action means building a networked learning system. It requires shifting our investments from isolated studies to networked learning infrastructures, from hiring external experts to growing local ecosystems, and from proving attribution for individual projects to steering collective adaptation toward common objectives. 
Instead of calling for additional evidence, let's end this discussion with a pledge to create the channels, platforms, and collaborations necessary to provide the appropriate evidence to decision-makers—from UN country teams to community councils—in a timely manner.
I'm grateful.

 

John Hugh Weatherhogg

Italy

Agricultural Economist

ex-FAO

Posted on 08/12/2025

The onus for ensuring that evaluation findings are taken into account in new project designs has to lie with the financing or development institutions.

Any project preparation should start with a desk review of past experience with similar projects, not only to get a better idea of what works and what doesn't but also to make sure that you are not "re-inventing the wheel", i.e. not putting forward approaches that were tried and failed 30 or more years earlier.

The responsibility of evaluators is to make sure that their reports are duly filed and readily available to those involved in the creation of new projects. 

Anibal Velasquez

Peru

Senior Policy Advisor for Public Policy and Partnerships at the World Food Programme

WFP

Posted on 08/12/2025

Drawing on WFP’s experience in Peru, I would argue that “connecting evidence and action in real time” is not primarily a technical challenge of evaluation. It is, above all, about how the UN positions itself within political cycles, how it serves as a knowledge intermediary, and how it accompanies governments throughout decision-making processes. Based on what we observed in anemia reduction, rice fortification and shock-responsive social protection, at least seven concrete pathways can help the UN and its partners strengthen this evidence–decision link.

1. Design evaluations for decisions, not for publication

Impact evaluations must begin with very specific policy questions tied directly to upcoming decisions:

  • Is the proposed solution scalable under the country’s fiscal and administrative constraints?
  • What effect size would justify changing a regulation, launching a new programme or reallocating public funds?

In Peru, community health worker pilots in Ventanilla, Sechura and Áncash were not designed as academic exercises, but as policy laboratories explicitly linked to decisions: municipal anemia targets, MINSA technical guidelines, the financing of Meta 4 and later Compromiso 1. Once the pilots demonstrated clear results, the government financed community health home visits nationwide for the first time. Subsequent evaluation of the national programme showed that three percentage points of annual reduction in childhood anemia could be attributed to this intervention.

For the UN, this means co-designing pilots and evaluations with ministries of finance, social sectors and subnational governments, agreeing from the outset on:

  • what evidence is needed,
  • who will use it, and
  • what specific decision will be unlocked if results are positive.

2. Embed adaptive learning cycles into the country’s operational cycle

In Peru, WFP integrated evidence generation and use into its operational cycle, creating explicit moments for reflection, contextual analysis and strategic adjustment. This logic can be scaled across the UN system:

  • move beyond endline-only evaluations and use formative assessments, rapid monitoring, process evaluations and adaptive management tools (such as PDIA) to course-correct in real time.

The most effective approach is to institutionalize learning loops: periodic spaces where government and UN teams jointly review data, interpret findings, and adjust norms, protocols or budgets on short, iterative cycles.

3. Position the country office as a knowledge broker (a public-sector think tank)

In Peru, WFP increasingly operated as a Knowledge-Based Policy Influence Organization (KBPIO):

  • developing pilots with public and private co-financing to test innovations and assess their technical, political, social and economic feasibility for national scale-up,
  • producing policy briefs, feasibility analyses and return-on-investment studies,
  • systematizing pilot results and translating evidence into clear messages for ministers, governors, MEF, Congress, the private sector and the public.

For the UN, strengthening the evaluation–decision link requires country offices to synthesize and translate evidence—from their own evaluations, global research and systematic reviews—into short, actionable, timely products, and to act as trusted intermediaries in the actual spaces where decisions are made: cabinet meetings, budget commissions, regional councils, boards of public enterprises and beyond.

4. Turn strategic communication into an extension of evaluation

In the Peruvian experience, evidence on anemia, rice fortification, food assistance for TB patients and shock-responsive social protection became policy only because it was strategically communicated:

  • public campaigns such as Cocina con Causa or La Sangre Llama,
  • crisis communication that protected evidence-based policies,
  • participation in expert committees,
  • consistent presence in media, social networks and business forums such as CADE.

For the UN, the lesson is unequivocal:
every major evaluation needs a communication and advocacy strategy—with defined audiences, tailored messages, designated spokespersons, windows of opportunity and specific channels. Evidence cannot remain in a 200-page report; it must become public narratives that legitimize policy change and sustain decisions over time.

5. Accompany governments across the entire policy cycle

In Peru, WFP did not merely deliver reports; it accompanied government partners throughout the full policy cycle:

  • problem framing,
  • solution design through pilots,
  • evidence generation,
  • translation of results into norms, guidelines and SOPs,
  • support for implementation, monitoring and iterative adjustments.

This accompaniment blended roles—implementer, technical partner and facilitator—depending on the moment and the institutional need. For the UN, one of the most powerful ways to connect evaluation with decision-making is to embed technical teams in key ministries (health, social development, finance, planning) to work side by side with decision-makers, helping interpret results and convert them into concrete regulatory, budgetary and operational instruments.

6. Build and sustain alliances and champions who “move the needle”

In Peru, evidence became public policy because strategic champions existed: vice-ministers, governors, heads of social programmes, the MCLCP, and committed private-sector partners. The UN and its partners can strengthen the evidence–decision nexus by:

  • deliberately identifying, nurturing and supporting champions,
  • providing them with evidence, international experiences (SSTC), visibility and political negotiation tools.

In this sense, evaluations—impact evaluations, formative assessments, cost-effectiveness studies—become assets that empower national actors in their own political environments, rather than external products belonging solely to the UN.

7. Invest in national data systems and evaluative capacities

Real-time decision-making is impossible when data arrive two years late. In Peru, part of WFP’s added value was supporting improvements in information systems, monitoring and analytical capacity at national and regional levels.

For the UN, this translates into:

  • supporting data infrastructure, interoperability and administrative records,
  • strengthening government M&E units,
  • promoting norms and budgeting frameworks that require evidence-based justification for policy and spending decisions.

8. Understand how decisions are made to ensure recommendations are actually used

Understanding real-world decision-making is essential for evaluations to have influence. Decisions are embedded in political incentives, institutional constraints and specific temporal windows. The Cynefin framework is useful for distinguishing whether a decision environment is:

  • simple (requiring clear standards),
  • complicated (requiring deep technical analysis),
  • complex (requiring experimentation and pilots), or
  • chaotic (requiring rapid information for stabilisation).

An evaluation that ignores this landscape may be technically sound but politically unfeasible or operationally irrelevant. Evaluating is not only about producing evidence; it is about interpreting how decisions are made and tailoring recommendations to that reality.

In synthesis

From the Peru experience, the most effective way for the UN and its partners to connect evaluations with real-time decision-making is to stop treating evaluation as an isolated event and instead turn it into a continuous, political–technical process that:

  • is designed jointly with decision-makers,
  • is embedded in adaptive policy cycles,
  • is supported by country offices acting as knowledge brokers,
  • is amplified by strategic communication and alliances,
  • and is anchored in national data and evaluative capacities.

In short: fewer reports that arrive too late, and more living evidence—co-produced, communicated and politically accompanied—at the service of governments and the people we aim to reach.

Anibal Velásquez, MD, MSc, has been Senior Policy Advisor for Public Policy and Partnerships at the World Food Programme in Peru since 2018. A medical epidemiologist with over 20 years of experience, he has led research, social programme evaluations, and the design of health and food security policies. He contributed to Peru’s child malnutrition strategy, health sector reform, and key innovations such as adaptive social protection for emergencies, fortified rice, and community health workers for anemia. Dr. Velásquez previously served as Peru’s Minister of Health (2014–2016), Director of Evaluation and Monitoring at the Ministry of Development and Social Inclusion (MIDIS), and Director of the National Institute of Health, and has held consultancy positions across 16 countries in Latin America and the Caribbean.

Daniel Ticehurst

United Kingdom

Monitoring > Evaluation Specialist

freelance

Posted on 08/12/2025

Thanks for posting this. Here are my - rather terse - answers to two of the above questions

  1. Question 1: Bridging Evidence and Action: Strengthening the Link Between Impact Evaluation and Real-Time Decision-Making
    For impact evaluation to inform real-time decisions, it must be purpose-built for use, not produced as a retrospective record. As Patton would emphasise, the initial step is decision mapping: recognise what decisions lie ahead, who will make them, and what evidence they require. Evaluations must match operational pace—not outlive it. Evaluations must align with operational tempo—not outlast it. This requires:

    a) Embedded evaluators who sit with programme teams and leadership, translating emerging findings into operational options;
    b) Short, iterative learning products—rapid briefs, real-time analysis, adaptive monitoring—that inform decisions long before the final report emerges; and 
    c) Structured feedback loops that carry findings directly into planning, budgeting, and resource allocation.
     

    Arguably, none of this matters unless the system is willing to let evidence disrupt comfort. If findings never unsettle a meeting, challenge a budget line, or puncture institutional vanity, they are not evidence but décor. Evidence influences action only when it is permitted to disturb complacency and contradict consensus.

  2. Localizing Evidence: Making Impact Evaluations Serve Local Priorities and Power Structures
    Localization begins with local ownership of purpose, not merely participation in process. Impact evaluations should be co-created with national governments, civil society, and communities, ensuring that questions reflect local priorities and political realities. They must strengthen national data systems, not bypass them with parallel UN structures. And interpretation of findings must occur in-country, with local actors who understand the lived context behind the indicators.
    Yet there is a harder truth: calling something “localized” while designing it in Geneva and validating it in New York is an exercise in bureaucratic self-deception. True localisation demands surrendering control—over agenda-setting, determining the objectives and evaluation questions, data ownership, and interpretive authority. Anything less is a ritual performance of inclusion rather than a redistribution of power.

     

  3. Conclusion
    To bridge evidence and action, and to localize evaluation meaningfully, the UN must pair the discipline of use with a discipline of honesty. Evidence must be designed for decisions, delivered at the speed of operations, and empowered to unsettle institutional habits. And localization must shift from rhetoric to reality by making national actors—not UN agencies—the primary authors, owners, and users of impact evaluation.
    Otherwise, the system risks perfecting its internal coherence while leaving the world it serves largely unchanged.
Jean de Dieu BIZIMANA

Rwanda

Senior Consultant

EIG RWANDA

Posted on 08/12/2025

Dear Dr Uzodinma Adirieje,

The message you share is clear. However, more action is needed before starting an evaluation: definitions, terms and conditions need to be agreed, and methodological approaches also need to be well defined.
During this time, we need to clarify the use of appropriate technologies such as AI, GIS, etc. Additionally, communication is required as an outcome of the evaluation.
Thank you for your consideration

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

In connecting evidence across the Humanitarian–Development–Peace (HDP) Nexus, aligning evidence agendas across humanitarian, development, and peace pillars requires intentional systems that move beyond sectoral silos toward holistic, context-responsive learning. In my experience at the intersection of PCVE, gender, and governance in county settings, valuable data exists across all three pillars—yet fragmentation prevents it from shaping a unified understanding of risk, resilience, and long-term community wellbeing.

One way to strengthen coherence is through shared learning frameworks built around harmonized indicators, aligned theories of change, and interoperable data systems. Humanitarian actors collecting early warning signals, development teams gathering socio-economic data, and peacebuilding practitioners tracking governance and cohesion trends can feed insights into a common evidence ecosystem. Joint sense-making platforms across UN agencies, county governments, and civil society further ensure interpretation and adaptation occur collectively.
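A minimal sketch of what such interoperability could look like: records from the three pillars joined on a shared location key so they can feed one evidence ecosystem. All field names and values below are invented for illustration, not taken from any real system.

```python
# Minimal sketch: merge humanitarian, development and peace data
# on a shared location key. Field names and values are invented.
humanitarian = {"county_a": {"early_warning_alerts": 4}}
development = {"county_a": {"poverty_rate": 0.31}}
peace = {"county_a": {"cohesion_index": 0.55}}

def locality_profile(key: str) -> dict:
    # Build one combined record per location from all three pillars.
    profile = {"location": key}
    for source in (humanitarian, development, peace):
        profile.update(source.get(key, {}))
    return profile

print(locality_profile("county_a"))
# {'location': 'county_a', 'early_warning_alerts': 4,
#  'poverty_rate': 0.31, 'cohesion_index': 0.55}
```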

Supporting local CSOs to build capacity in Core Humanitarian Standards (CHS) of Quality Assurance is critical. When local actors understand and apply CHS, their data becomes more reliable and compatible with UN and donor systems. Co-creating evaluation tools, monitoring frameworks, and learning agendas with these CSOs strengthens ownership and ensures evidence reflects local realities.

In African contexts, incorporating “Made in Africa Evaluation” (MAE) approaches, published and championed by our very own Africa Evaluation Association (AfEA), can further decolonize practice by integrating local values, culture (such as Ubuntu), and conditions. By combining MAE principles with CHS, UN and donor systems can leverage contextually relevant methodologies, strengthen local capacity, and promote governance and accountability in a culturally grounded manner.

Finally, stronger Donor–CSO networking structures—learning hubs, joint review forums, and communities of practice—deepen understanding of scope, stabilize transitions of project ownership, and support long-term collaboration. Connecting evidence, capacities, and local approaches ensures HDP programs are coherent, context-sensitive, and impactful for the communities they serve.

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

The impact evaluation community can play a critical role in advancing the UN’s reform agenda, particularly the goals of coherence, cost-effectiveness, and system-wide alignment. In my work across multi-partner consortia and county-level government structures, I have seen how fragmentation in evaluation approaches often leads to duplication, inconsistent standards, and heavy reporting burdens for local actors. Supporting UN reform begins with harmonizing evaluation frameworks across agencies so that they draw from shared theories of change, common indicators, and compatible data systems. This reduces transaction costs for implementing partners and allows evidence to be aggregated more systematically across the humanitarian–development–peace (HDP) nexus.

The evaluation community can also contribute by promoting joint or multi-agency evaluations, particularly for cross-cutting thematic areas like PCVE, gender equality, and resilience. Joint evaluations not only save resources but also produce findings that are more holistic and better suited to inter-agency coordination. Additionally, evaluation teams can support reform by emphasizing adaptive, utilization-focused methodologies that produce real-time insights and decision-relevant evidence, rather than lengthy reports that come too late to influence programming.

Cost-effectiveness can be further enhanced by investing in local evaluators, research institutions, and government systems rather than relying exclusively on external consultants. This not only builds long-term capacity but also reduces the financial and operational footprint of evaluations. The evaluation community can strengthen UN reform by championing a culture of shared accountability, collaborative learning, and strategic alignment—ensuring that evidence not only measures results but also enables the UN system to function more cohesively and effectively.

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

On localising evidence: designing and using impact evaluations to advance the localization agenda requires the UN and its partners to shift power toward local actors, both in defining evaluation priorities and in generating the evidence itself. From my experience supporting MEL across county governments, local CSOs, and community structures, localization succeeds when evaluations are not externally imposed but co-created with those closest to the problem. This begins with jointly defining evaluation questions that reflect community priorities and county development realities, rather than donor-driven assumptions. It also involves investing in the capacities of county departments, local researchers, and grassroots organizations to participate meaningfully in evaluation design, data collection, analysis, and interpretation.

A particularly important opportunity is the intentional integration of citizen-generated data (CGD), that I have mentioned in a previous post, and locally collected datasets into evaluation frameworks. Many local CSOs like mine, community networks, and think tanks already generate rich and credible data on governance, resilience, gender dynamics, and PCVE indicators. When validated and aligned with national standards, these data sources can complement official statistics, strengthen SDG reporting, and ensure that evaluation findings reflect lived realities. This approach not only accelerates evidence availability but also embodies the principle of “nothing about us without us.”

Localizing evidence also means ensuring that findings are communicated back to communities in accessible formats and used in county-level decision forums such as CIDP reviews, sector working groups, and community dialogues. Furthermore, evaluations should include iterative sense-making sessions with local actors so they can directly shape programme adaptation. Ultimately, localization is not just about generating evidence locally—it is about shifting ownership, elevating local expertise, and ensuring that impact evaluations meaningfully inform policies and programmes at the levels where change is most felt.

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

On bridging evidence with action: strengthening the link between impact evaluation findings and real-time decision-making requires the UN and its partners to embrace learning-oriented systems rather than compliance-driven evaluation cultures. From my experience leading MEL across multi-county PCVE and governance programmes, evidence becomes actionable only when it is intentionally embedded into programme management—through continuous feedback loops, co-created interpretation sessions, and adaptive planning processes. Structured learning forums where evaluators, implementers, government stakeholders, and community representatives jointly analyse emerging findings are particularly effective for translating insights into operational shifts.

In the PCVE space, real-time evidence use is especially critical due to the fast-evolving nature of threats and community dynamics. A recent example is my organisation’s Submission to the UN Special Rapporteur under the UNOCT call for inputs on definitions of terrorism and violent extremism, where we highlighted how grounding global guidance in locally generated evidence improves both relevance and uptake. This experience reaffirmed that when evaluation findings are aligned with practitioner insights and local contextual knowledge, global frameworks become more actionable on the ground.

Additionally, the UN can strengthen evidence uptake by integrating citizen-generated data (CGD) into SDG indicator ecosystems—particularly where local CSOs and think tanks already generate credible, validated datasets. Leveraging CGD not only accelerates access to real-time insights but also strengthens community ownership and localization.

Ultimately, bridging evidence and action requires mixed-method evaluations, rapid dissemination tools, psychological safety for honest learning, and a UN culture where evidence is viewed as a shared resource for collective decision-making, not merely an accountability requirement.

Richard Tinsley

United States of America

Professor Emeritus

Colorado State University

Posted on 29/11/2025

As requested, my contribution to the discussion is re-posted below:

Before I get into some of my pet concerns on evaluations, please allow me a couple of comments on Binod Chapagain's remarks. His comment that evaluations come too late in a project's life to effectively adjust the approach emphasizes one of my frequent points: the biggest contribution of evaluations is how they shape the design of future projects to better serve the intended beneficiaries. Related to this concern is how difficult it is to adjust ongoing projects. Please note that most major projects, particularly those with external funding and expatriate advisors, have lead times of more than two years and preparation costs exceeding one million dollars before the advisory team can be contracted, fielded, and finally have detailed interaction with the beneficiaries to determine what their real needs might be. With that much time and effort committed, no one wants to learn at that point that the project is not fully welcomed by the community it was intended to serve. Also, by the time the implementing team is on the ground, most of the fundamental decisions about what type of innovation will be undertaken have been made, with staff, particularly expatriate staff, recruited for their commitment to those innovations. This again limits any major adjustments to project programs: perhaps a little tweaking of the approach, but no major changes. Again, evaluations are most effective in guiding future projects.

Also, Binod mentioned that most evaluations are designed to document compliance with the original project documents. These evaluations are by their very nature mostly internal and aimed at appeasing the donors, and thus at making the project appear successful. This is necessary to secure project extensions and future projects for the contractor. The evaluation report therefore needs to be taken with a measure of skepticism; a few simple computations or analyses could quickly expose the flaws. Such reports often rely on their recipients being too tied up with project management or the design of future projects to give the evaluation the scrutiny needed to guide future work, and being simply happy to see what appears to be a positive result.

Also, I noticed that Tamarie Magaisa mentioned targets for evaluation criteria. This I fully endorse as necessary to separate successful from unsuccessful projects. Without well-established and published targets, stated at a project's conception, projects can easily be proclaimed successful when by most criteria they are total failures. More about this later.

Please now allow me to briefly vent some of my concerns where evaluations have failed to assist smallholder farming communities by overlooking critical criteria or covering up innovations that should have been quickly identified as failures. 

  1. The first is the failure to recognize that most of our innovations in farm production are more labor-intensive, while most smallholder farmers are in a severely labor-stressed environment. Thus, while our innovations are a good physical fit to the environment and highly desirable for various reasons, they are not operationally feasible across the entire community. The result is that, with manual operations, it takes some eight weeks for basic crop establishment, rendering most mid-season activities null and void and driving potential yields down until smallholder farmers cannot meet their domestic family food security needs. An evaluation could easily address this concern with some simple field observations, plus some simple questions, rather than blaming limited acceptance on poor extension to farmers with limited formal education. Once the operational limitations are recognized, projects could more effectively concentrate on enhancing the operational capacity of smallholder communities with solutions such as facilitating access to contract mechanization, as occurred in paddy-producing Asia some 30 years ago with the shift from water buffalo to power tillers.

 

  2. A major part of this is that most smallholder farmers have too severe a dietary caloric deficit to undertake a full day of agronomic field work. A full day of agronomic field work requires a diet of more than 4000 kcal/day, while they are lucky to have access to 2500 kcal/day; allowing 2000 kcal/day for basic metabolism, that leaves only 500 kcal/day for work energy (a worked sketch of this arithmetic follows the list below). That would allow only a couple of hours of diligent effort, perhaps paced over a couple more with less diligence. No wonder it takes eight weeks for basic crop establishment. How easy would it have been for an evaluation to identify diet as a major hindrance to crop management? We have known for decades that smallholder farmers were poor, and perhaps hungry, but never related that to crop husbandry. Why? Will we be able to guide smallholders out of poverty without first addressing this issue? Yet a recent FAO webinar discussing the degree of malnutrition and undernutrition assumed only light work for smallholders, factoring in only 1800 kcal/day.

 

  3. My other major issue is the overreliance on producer organizations to assist smallholder communities with their marketing. This is where appeasement reporting in evaluations has failed to identify how few farmers participate in producer organizations, and how even those who do side-sell the bulk of their produce to the often-vilified private traders. The result is that producer organizations attract only about 10% of potential members and a trivial market share of less than 5% of the community's marketed production. By all business standards a scandalous failure, yet they have remained a proclaimed success and the primary means of assisting smallholder communities for over 30 years. How come? What will it take for evaluations to move on to more effective marketing mechanisms? 
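A worked sketch of the dietary-energy arithmetic in point 2 above, using only the round figures quoted in the post (indicative numbers, not field data):

```python
# Worked version of the dietary-energy arithmetic above.
# All figures are the round numbers quoted in the post, not field data.
intake = 2500          # kcal/day actually available
basal = 2000           # kcal/day for basic metabolism
full_work_day = 4000   # kcal/day needed for a full day of field work

work_energy = intake - basal         # 500 kcal/day left for labor
work_needed = full_work_day - basal  # 2000 kcal/day of work energy required
fraction = work_energy / work_needed # 0.25 of a full work day

print(f"Work energy available: {work_energy} kcal/day")
print(f"Fraction of a full field day sustainable: {fraction:.0%}")
# ~25% of a day, i.e. roughly a couple of diligent hours, consistent
# with basic crop establishment stretching to some eight weeks.
```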

 

With that, I will place in the references an article that I prepared for a symposium here at Colorado State University a little while ago, reflecting on my 50+ years of trying to assist smallholder communities. It is written from an emeritus perspective, no longer reliant on the system and thus free to write more frankly. The article is more factually accurate than politically correct and provides factual examples of what has been discussed above, including:

  1. A summary of the project development process, explaining the more than two years of pre-implementation time and effort
  2. The need to address the operational feasibility of innovations
  3. The horrors and tough choices associated with dietary energy deficit
  4. The necessity of facilitating access to mechanization to alleviate poverty in smallholder communities, and
  5. The limits of evaluations in identifying and addressing these concerns.

I hope you can take an hour or so to download and read the 30-page illustrated article, including some of the linked webpages, and that it stimulates you to adjust your programs to better serve your beneficiaries, be they smallholder farmers or other impoverished people. The weblink to the article is: https://agsci.colostate.edu/smallholderagriculture/wp-content/uploads/sites/77/2023/03/Reflections.pdf

Thank you.

Dick Tinsley

Professor Emeritus

Soil & Crop Sciences Department

Colorado State University

Uzodinma Adirieje

Nigeria

National President

Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association

Posted on 07/12/2025

Bridging evidence and action: What are the most effective ways for the UN and partners to strengthen the link between impact evaluation and real-time decision-making?  

Bridging evidence and action is essential for ensuring that civil society interventions deliver measurable results and remain responsive to community needs. To strengthen the link between impact evaluation and real-time decision-making, the UN and partners must prioritize institutionalizing timely data use through integrated monitoring, evaluation, and learning (MEL) systems. Embedding rapid feedback loops—such as adaptive dashboards, mobile-based reporting, and community scorecards—allows frontline workers and decision-makers to track progress, identify gaps, and adjust tactics quickly.

 Co-creation of evidence with civil society organizations (CSOs) and affected populations ensures that evaluations reflect local realities. When communities participate directly in defining indicators, generating data, and interpreting findings, the resulting evidence becomes more relevant, trusted, and actionable.

It is imperative that the UN invest in civil society capacity strengthening, enabling CSOs to conduct rigorous yet agile evaluations, apply digital tools, and communicate insights clearly. This includes training on qualitative and quantitative methods, data visualization, and scenario-based planning.

 Finally, fostering collaborative learning platforms—such as UN-CSO evidence hubs and South-South knowledge exchanges—helps transform evaluation results into shared action. When evidence is democratized and accessible, decision-making becomes faster, more inclusive, and more accountable, ultimately enhancing development impact.

 

Dr. Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA

Ola Eltoukhi

Egypt

Impact Evaluation Consultant

WFP

Posted on 01/12/2025

Thank you all for your valuable insights and contributions. You raised many important points about strengthening the link between impact evaluation findings and real-time decision-making. This week, we would love to hear more from you, particularly on localizing evidence: how can impact evaluations be designed and used to better serve the localization agenda, ensuring that local priorities, capacities, and contexts inform policy and programmes? We look forward to your reflections and continued engagement.

Uzodinma Adirieje

Nigeria

National President

Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association

Posted on 07/12/2025

Localizing evidence is fundamental for ensuring that impact evaluations truly inform development outcomes and empower civil society. To achieve this, evaluations must begin with local priority-setting, where communities, traditional institutions, women’s groups, youth networks, and vulnerable populations jointly define what success means and which outcomes matter most. This grounds evaluations in lived realities rather than externally imposed frameworks.

Consequently, evaluations should be designed around local capacities, blending scientific rigor with context-appropriate methods—such as participatory action research, community scorecards, sentinel monitoring, and rapid feedback mechanisms. Simplified data tools, mobile technologies, and culturally appropriate communication channels help reduce barriers to participation.

 Strengthening civil society’s technical capacity is crucial. Training CSOs in MEL principles, data literacy, and adaptive learning equips them to generate, interpret, and use evidence effectively. Partnerships with universities and research institutes further enhance credibility.

 Eventually, the evaluation findings must be translated into actionable, locally relevant insights, using storytelling, visual dashboards, and feedback forums that resonate with community stakeholders. When evidence reflects local voices, respects cultural contexts, and supports practical problem-solving, it becomes a powerful driver of inclusive, accountable, and sustainable development.

 

Dr. Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA

Uzodinma Adirieje

Nigeria

National President

Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association

Posted on 07/12/2025

Supporting UN Reform – Global Impact Evaluation Forum 2025 (Civil Society Perspective)

Supporting the ongoing UN reform agenda—especially its focus on coordination, efficiency, and country-level impact—is essential for improving the lives of rural and poor urban populations in Africa and other resource-poor regions. The impact evaluation community can play a transformative role by strengthening coherence, accountability, and value for money across the UN system.

 First, evaluators must champion harmonized measurement frameworks that reduce duplication and align UN agencies, governments, and civil society around shared indicators. Common frameworks improve comparability, promote joint planning, and ensure that results reflect community priorities rather than institutional silos.

 Second, the community should support country-led, context-sensitive evaluations that amplify local voices. By embedding participatory approaches, engaging community-based organizations, and acknowledging traditional knowledge systems, evaluations become more relevant and actionable for marginalized populations.

 Third, evaluators can foster cost-effective programming by generating evidence on what works, what does not, and why. This requires strengthening real-time monitoring, adaptive learning, and the use of digital tools (including AI) to track performance and inform timely course corrections. Clear communication of findings—through policy briefs, dashboards, and community dialogues—helps optimize resource allocation.

Fourth, the community should reinforce collaboration across UN agencies, promoting joint evaluations, shared data platforms, and cross-sector learning. These reduce transaction costs and enhance integrated delivery of health, climate, livelihood, and social protection services.

Ultimately, by advancing evidence-driven reforms, the impact evaluation community can help the UN deliver more coherent, inclusive, and cost-effective solutions—ensuring that rural and poor urban populations receive equitable attention and lasting development gains.

Dr. Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA

Elie COULIBALY

Mali

Technical MEAL Advisor

Catholic Relief Services

Posted on 07/12/2025

Dear all,

  1. The most effective way for the UN and partners to strengthen the link between impact evaluation and real-time decision-making is to organize regular meetings with all stakeholders and share the impact evaluation results with them.
  2. Impact evaluations can be better designed and used to reflect local priorities, capacities, and contexts through the participation of all stakeholders.
  3. The impact evaluation community can collectively contribute to coherence and cost-effectiveness across the system through a good policy of dissemination and learning.
  4. Agencies with diverse mandates can align their impact evaluation agendas across the humanitarian, development, and peace nexus through a collaborative approach that emphasizes shared objectives, integrated evaluation methods, and capacity building.
Tamarie Magaisa

South Africa

MUSANGEYACONSULTING

Posted on 28/11/2025

I find the question of 'linking impact evaluation findings with real-time decision making' interesting. What is meant by 'real-time' may need further unpacking. Unless organization-level decisions are made, I find it challenging to use evaluation findings at the project level in many cases, because endline evaluations are conducted at the end of the project. Regardless, when organizations take a position after the evaluation findings are presented, there are examples where they have utilized the findings by taking actions such as:
  1. Evaluation-informed development of new programmes and projects.
  2. Improved communication and dissemination of the evaluation findings at different levels, including to local communities, and informing various partners about the findings and their implications.
  3. Fostering an organizational learning culture, reviewing the evaluation report carefully at the organizational level, and agreeing on organizational actions based on the findings.
A significant factor that weakens the link is when evaluations are conducted primarily to satisfy a compliance requirement rather than to drive genuine organizational learning or immediate operational change. This needs management commitment!

Binod Chapagain

Cambodia

Technical Advisor

UNDP

Posted on 02/12/2025

Connecting evidence: How can various UN agencies and partners, with diverse mandates, align evidence agendas in the humanitarian, development, and peace nexus?

Answer

Aligning evidence agendas across the Humanitarian-Development-Peace (HDP) nexus is one of the UN's most complex challenges. It requires merging three distinct "languages" of evidence: (a) the needs-based data of humanitarians, (b) the systems-based metrics of development actors, and (c) the political/risk-based analysis of peacebuilders.

Before sharing evidence, there should be agreement on what to measure, what to share, and what the data is for. Most important is to have a primary strategic tool: the collective outcome.

What it is: a jointly agreed indicator target (e.g., "Reduce food insecurity in Region X by 30% over 5 years") that transcends any single agency's mandate.

How it aligns evidence: Instead of each agency measuring its own output (e.g., WFP measuring food delivered vs. UNDP measuring seeds distributed), they align their evidence to track the shared result.
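As a minimal sketch of what reporting against such a collective outcome might look like: the agency figures below are invented, and the naive average stands in for whatever weighting and validation a real shared measurement system would use.

```python
# Minimal sketch: several agencies reporting against one jointly
# agreed collective-outcome indicator. All figures are hypothetical.
collective_outcome = {
    "statement": "Reduce food insecurity in Region X by 30% over 5 years",
    "indicator": "share of food-insecure households",
    "baseline": 0.40,
    "target": 0.28,  # a 30% reduction from the baseline
}

# Each agency contributes measurements to the shared result,
# instead of tracking only its own outputs.
agency_reports = {
    "WFP": 0.37,   # e.g. from food-security surveys
    "UNDP": 0.36,  # e.g. from livelihoods monitoring
}

latest = sum(agency_reports.values()) / len(agency_reports)
progress = (collective_outcome["baseline"] - latest) / (
    collective_outcome["baseline"] - collective_outcome["target"]
)
print(f"Progress toward collective outcome: {progress:.0%}")
```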

To give an example from Sudan (a protracted crisis): humanitarian data showed where people were hungry, but not why (the root causes). A joint evidence approach was for UNDP and humanitarian actors to use shared "locality profiles" that mapped service gaps (development) alongside displacement figures (humanitarian) to target investments.

Jointly agreed indicators and targets will also help produce a joint evaluation report, one that nevertheless takes context into consideration.
 

Sibongile Sithole

South Africa

Evaluation Consultant

Posted on 07/12/2025

My experience with the Three Horizons (3H) Initiative at the International Evaluation Academy (IEAc) has reinforced a key lesson: evaluation practice must become more transformational and shift away from conventional approaches that have long been shaped by Global North ideologies.

In trying to find solutions on how evaluations can be fit for purpose, various points on localising evidence were brought up.

Firstly, localising impact evaluations means shifting power towards local actors and local knowledge. Components such as evaluation questions should be shaped by communities, local governments, and indigenous knowledge holders. 


Secondly, particularly in the Global South context, approaches such as storytelling, oral histories, communal dialogue, and participatory narrative methods should sit alongside quantitative and experimental designs. These reflect how many African communities make sense of change and offer culturally grounded insights that traditional methods often miss.


Last but not least, respect for cultural protocols and for indigenous and community consent ensures that the evaluation serves the people it studies, not external agendas.

Using the Three Horizons framework while centring African and indigenous knowledge can help create evaluations that are culturally rooted, locally owned, and better aligned with the futures communities themselves envision.

Ola Eltoukhi

Egypt

Impact Evaluation Consultant

WFP

Posted on 25/11/2025

Thanks everyone for joining this discussion! The discussion note raises several important points about how evidence partnerships can strengthen coherence, improve decision-making, and support the broader UN reform agenda. I’m opening this thread so we can unpack these ideas together.

Please feel free to jump in with your questions, reflections, or practical examples from your own work. This week, we want to hear more from you: what are the most effective ways the UN and its partners can strengthen the link between impact evaluation findings and real-time decision-making? If you have examples, it would be great to hear them!