20 contributions

The Global Impact Evaluation Forum 2025: building evidence partnerships for effective action

Posted on 25/11/2025 by Ola Eltoukhi
Global Impact Evaluation Forum 2025
WFP OEV

Background and rationale

The United Nations celebrates its 80th anniversary at a time when the world faces a convergence of complex crises, compounded by declining official development assistance. It must now demonstrate maximum value where it matters most: in the lives of the people it aims to serve.

In March, the United Nations Under-Secretary-General for Humanitarian Affairs and Emergency Relief Coordinator, Tom Fletcher, called for a humanitarian reset, setting the tone for a more agile, unified and cost-effective UN system. This requires not only robust evidence on what works but also a fundamental change in how we work together.

Impact evaluations complement other monitoring, evaluation and research activities by identifying which interventions are most effective. However, rigorous causal evidence in humanitarian contexts remains scarce[1],[2].

Against this backdrop, the challenge is clear: to meet the demands of this new era, the UN system must decisively close the gap between evidence and action. Doing so in time and collaboratively, particularly in fragile contexts, remains a major challenge.

The upcoming Global Impact Evaluation Forum, organized by WFP in partnership with BMZ and Norad, builds on the momentum of the first WFP Forum in 2023 and the WFP-UNICEF Forum in 2024, moving from exploration to joint action across the UN system and beyond.

Discussion objectives

The Forum will bring together practitioners, policymakers and partners to examine how impact evaluation can align with broader objectives, in particular:

  • maximizing value to promote cost-effectiveness across the UN system;
  • providing global support for local action: generating evidence for the localization agenda;
  • unifying action: using evidence to support UN reforms;
  • connecting efforts and evidence across the humanitarian-development-peace nexus.

The aim of this online discussion, hosted on EvalforEarth and led by WFP's impact evaluation team, is to build momentum around the Forum's main themes, to be continued online during the event itself from 9 to 11 December.

If you are interested in attending the Global Impact Evaluation Forum ONLINE this December, please register here.

The discussion is open for contributions until 8 December 2025.

Guiding questions

  1. Bridging evidence and action: What are the most effective ways for the UN and its partners to strengthen the link between impact evaluation findings and real-time decision-making?
  2. Localizing evidence: How can impact evaluations be designed and used to better serve the localization agenda, ensuring that local priorities, capacities and contexts inform policies and programmes?
  3. Supporting UN reform: How can the impact evaluation community collectively contribute to achieving the goals of coherence and cost-effectiveness across the UN system?
  4. Connecting evidence: How can UN agencies and their partners, whose mandates differ, align their evidence-generation agendas across the humanitarian-development-peace nexus?

This discussion will be followed by a co-published blog in January 2026.

[1] A WFP-World Bank literature review shows the scarcity of rigorous impact evaluations of cash and in-kind transfers.

[2] A global prioritisation exercise led by Elrha (Global Prioritisation Exercise) identified key issues, including insufficient feedback loops from research to scaled-up interventions and a specific need for more data on cost-effectiveness.

This discussion is now closed. Please contact info@evalforearth.org for any further information.

Chamisa Innocent

Zimbabwe

EvalforEarth CoP Facilitator/Coordinator

EvalforEarth CoP

Posted on 12/12/2025
Dear colleagues,
Thank you for your valuable contributions to this online discussion in the lead-up to the Global Impact Evaluation Forum 2025. Your reflections and examples enriched the conversation on how the UN and its partners can better connect impact evaluation, decision-making and UN reform.
A few key messages stood out across the contributions:
  • Impact evaluations must be designed for decisions, not just for reports. Many of you stressed the importance of starting from concrete policy and programming questions, embedding learning loops, and ensuring that evidence is generated and communicated at the speed of operations, rather than arriving “too late” to inform real-time choices.
  • Localising evidence requires shifting power to local actors. Contributors highlighted that evaluations are most useful when questions, methods, data and interpretation are co-created with national institutions, local governments, civil society, and communities, and when evidence systems strengthen national capacities and data systems rather than bypass them.
Over the coming days, we will prepare and share a short discussion summary capturing the main insights, as well as a blog that will highlight key lessons and messages.
Thank you once again for your engagement and for advancing this important conversation.
Hailu Negu Bedhane

Ethiopia

Cementing Engineer

Ethiopian Electric Power

Posted on 12/12/2025

Advanced Message for the Global Impact Evaluation Forum 2025

Colleagues, partners,

Our goal is to create alliances for successful action. This necessitates a fundamental change: integrating impact evaluation (IE) as a strategic compass for real-time navigation instead of viewing it as a recurring audit of the past.

  1. Linking Evidence and Action: From Reports to Feedback Loops 
    Better feedback systems, not better reports, will strengthen the connection between evaluation and decision-making. The UN and its partners need to institutionalize three practices:
  • Light-touch, embedded IE units: Within programmatic arms, such as humanitarian clusters or country platforms, small, committed teams use predictive analytics and quick mixed methods to evaluate hypotheses during implementation rather than after.
  • Decision-Based Costing: requiring a specific, significant budget line for adaptive management and real-time evidence gathering in every significant program proposal. As a result, evidence becomes an essential part of the program rather than an afterthought.
  • Leadership Dashboards: Going beyond narrative reports, these dynamic, data-visualization tools allow executives to view the "vital signs" of a portfolio and make course corrections by comparing key impact indicators versus theory-of-change milestones.
  2. Localizing Evidence: Inverting the Credibility Hierarchy 
    The implicit hierarchy that favors external "rigor" over local relevance must be dismantled in order to represent local goals and contexts. 
  • Co-Design from Inception: Local stakeholders, including governments, community leaders, and CSOs, must collaborate to create the assessment questions and define "impact" in their particular context. This is shared ownership, not consultation.
  • Make Local Analytical Ecosystem Investments: Funding and collaborating with regional institutions, think tanks, and data science collectives is the most sustainable approach to localizing evidence. This preserves intellectual capital domestically, increases capacity, and guarantees language and cultural nuance.
  • Adopt Pluralistic Approaches: RCTs are important, but we also need to give systems mapping, participatory action research, and qualitative approaches with a cultural foundation equal weight. The "gold standard" is the one that provides the most urgent local solution.
  3. Encouraging UN Reform: A Collective "Evidence Compact" 
    By functioning as a cohesive, system-wide profession, the impact evaluation community can serve as the catalyst for coherence and cost-effectiveness. 
  • Common standards, not standardization: Create a UN system-wide "Evidence Compact": a concise consensus on shared principles (such as open data, ethics, and quality thresholds) and shared platforms for meta-analysis. This lets us compare what works across sectors and eliminates duplication.
  • Pooled Evaluation Funds: We should establish pooled funds at the regional or thematic level rather than having each agency commission tiny, dispersed studies. Larger, more strategic, cross-mandate assessments that address intricate, system-wide issues like social protection or climate adaptation are made possible by this.
  • A "What Works" Knowledge Platform: A single, easily available, and well-curated digital platform that links findings from UNICEF's education RCTs, UNDP's governance evaluations, UNHCR's protection analysis, and WFP's food security research. In doing so, agency-specific evidence becomes a public good of the UN.
  4. Linking Evidence Throughout the Nexus: Make the Intersections Mandatory 
    Aligning humanitarian, development, and peace efforts means mandating careful investigation at their intersections, not merely harmonizing objectives at the core. 

  • Support and Assess "Triple Nexus" Pilots: Impact evaluations that expressly target initiatives aiming to bridge two or all three pillars must be jointly designed and funded by agencies. The main inquiry is: "Do integrated approaches yield greater sustainability and resilience impact than sequential or parallel efforts?"
  • Establish Nexus IE Fellowships: Impact evaluation experts should be rotated across UN agencies (for example, from FAO to OCHA to DPPA). This creates a group of experts who are proficient in several mandate "languages" and capable of designing assessments that track results along the peace, development, and humanitarian spectrum.
  • Adopt a Resilience Lens: Focus evaluation questions on enhancing system and community resilience. This offers a unifying paradigm that is pertinent to peacebuilders (social cohesiveness), development actors (chronic vulnerability), and humanitarian responders (shock absorption).

To sum up, building evidence partnerships for successful action means building a networked learning system. It requires shifting our investments from isolated research to networked learning infrastructures, from hiring external experts to expanding local ecosystems, and from proving attribution for individual projects to steering collective adaptation toward common objectives. 
Instead of calling for additional evidence, let's end this discussion with a pledge to create the channels, platforms, and collaborations necessary to provide the appropriate evidence to decision-makers—from UN country teams to community councils—in a timely manner.
I'm grateful.

 

John Hugh Weatherhogg

Italy

Agricultural Economist

ex-FAO

Posted on 08/12/2025

The onus for ensuring that evaluation findings are taken into account in new project designs has to lie with the financing or development institutions.

Any project preparation should start with desk review of past experience with similar projects, not only to get a better idea of what works and what doesn't but also to make sure that you are not "re-inventing the wheel", i.e. not likely to put forward approaches which have been tried and failed 30 or more years earlier.

The responsibility of evaluators is to make sure that their reports are duly filed and readily available to those involved in designing new projects. 

Anibal Velasquez

Peru

Senior Policy Advisor for Public Policy and Partnerships at the World Food Programme

WFP

Posted on 08/12/2025

Drawing on WFP’s experience in Peru, I would argue that “connecting evidence and action in real time” is not primarily a technical challenge of evaluation. It is, above all, about how the UN positions itself within political cycles, how it serves as a knowledge intermediary, and how it accompanies governments throughout decision-making processes. Based on what we observed in anemia reduction, rice fortification and shock-responsive social protection, at least seven concrete pathways can help the UN and its partners strengthen this evidence–decision link.

1. Design evaluations for decisions, not for publication

Impact evaluations must begin with very specific policy questions tied directly to upcoming decisions:

  • Is the proposed solution scalable under the country’s fiscal and administrative constraints?
  • What effect size would justify changing a regulation, launching a new programme or reallocating public funds?

In Peru, community health worker pilots in Ventanilla, Sechura and Áncash were not designed as academic exercises, but as policy laboratories explicitly linked to decisions: municipal anemia targets, MINSA technical guidelines, the financing of Meta 4 and later Compromiso 1. Once the pilots demonstrated clear results, the government financed community health home visits nationwide for the first time. Subsequent evaluation of the national programme showed that three percentage points of annual reduction in childhood anemia could be attributed to this intervention.

For the UN, this means co-designing pilots and evaluations with ministries of finance, social sectors and subnational governments, agreeing from the outset on:

  • what evidence is needed,
  • who will use it, and
  • what specific decision will be unlocked if results are positive.

2. Embed adaptive learning cycles into the country’s operational cycle

In Peru, WFP integrated evidence generation and use into its operational cycle, creating explicit moments for reflection, contextual analysis and strategic adjustment. This logic can be scaled across the UN system:

  • move beyond endline-only evaluations and use formative assessments, rapid monitoring, process evaluations and adaptive management tools (such as PDIA) to course-correct in real time.

The most effective approach is to institutionalize learning loops: periodic spaces where government and UN teams jointly review data, interpret findings, and adjust norms, protocols or budgets on short, iterative cycles.

3. Position the country office as a knowledge broker (a public-sector think tank)

In Peru, WFP increasingly operated as a Knowledge-Based Policy Influence Organization (KBPIO):

  • developing pilots with public and private co-financing to test innovations and assess their technical, political, social and economic feasibility for national scale-up,
  • producing policy briefs, feasibility analyses and return-on-investment studies,
  • systematizing pilot results and translating evidence into clear messages for ministers, governors, MEF, Congress, the private sector and the public.

For the UN, strengthening the evaluation–decision link requires country offices to synthesize and translate evidence—from their own evaluations, global research and systematic reviews—into short, actionable, timely products, and to act as trusted intermediaries in the actual spaces where decisions are made: cabinet meetings, budget commissions, regional councils, boards of public enterprises and beyond.

4. Turn strategic communication into an extension of evaluation

In the Peruvian experience, evidence on anemia, rice fortification, food assistance for TB patients and shock-responsive social protection became policy only because it was strategically communicated:

  • public campaigns such as Cocina con Causa or La Sangre Llama,
  • crisis communication that protected evidence-based policies,
  • participation in expert committees,
  • consistent presence in media, social networks and business forums such as CADE.

For the UN, the lesson is unequivocal:
every major evaluation needs a communication and advocacy strategy—with defined audiences, tailored messages, designated spokespersons, windows of opportunity and specific channels. Evidence cannot remain in a 200-page report; it must become public narratives that legitimize policy change and sustain decisions over time.

5. Accompany governments across the entire policy cycle

In Peru, WFP did not merely deliver reports; it accompanied government partners throughout the full policy cycle:

  • problem framing,
  • solution design through pilots,
  • evidence generation,
  • translation of results into norms, guidelines and SOPs,
  • support for implementation, monitoring and iterative adjustments.

This accompaniment blended roles—implementer, technical partner and facilitator—depending on the moment and the institutional need. For the UN, one of the most powerful ways to connect evaluation with decision-making is to embed technical teams in key ministries (health, social development, finance, planning) to work side by side with decision-makers, helping interpret results and convert them into concrete regulatory, budgetary and operational instruments.

6. Build and sustain alliances and champions who “move the needle”

In Peru, evidence became public policy because strategic champions existed: vice-ministers, governors, heads of social programmes, the MCLCP, and committed private-sector partners. The UN and its partners can strengthen the evidence–decision nexus by:

  • deliberately identifying, nurturing and supporting champions,
  • providing them with evidence, international experiences (SSTC), visibility and political negotiation tools.

In this sense, evaluations—impact evaluations, formative assessments, cost-effectiveness studies—become assets that empower national actors in their own political environments, rather than external products belonging solely to the UN.

7. Invest in national data systems and evaluative capacities

Real-time decision-making is impossible when data arrive two years late. In Peru, part of WFP’s added value was supporting improvements in information systems, monitoring and analytical capacity at national and regional levels.

For the UN, this translates into:

  • supporting data infrastructure, interoperability and administrative records,
  • strengthening government M&E units,
  • promoting norms and budgeting frameworks that require evidence-based justification for policy and spending decisions.

8. Understand how decisions are made to ensure recommendations are actually used

Understanding real-world decision-making is essential for evaluations to have influence. Decisions are embedded in political incentives, institutional constraints and specific temporal windows. The Cynefin framework is useful for distinguishing whether a decision environment is:

  • simple (requiring clear standards),
  • complicated (requiring deep technical analysis),
  • complex (requiring experimentation and pilots), or
  • chaotic (requiring rapid information for stabilisation).

An evaluation that ignores this landscape may be technically sound but politically unfeasible or operationally irrelevant. Evaluating is not only about producing evidence; it is about interpreting how decisions are made and tailoring recommendations to that reality.

In synthesis

From the Peru experience, the most effective way for the UN and its partners to connect evaluations with real-time decision-making is to stop treating evaluation as an isolated event and instead turn it into a continuous, political–technical process that:

  • is designed jointly with decision-makers,
  • is embedded in adaptive policy cycles,
  • is supported by country offices acting as knowledge brokers,
  • is amplified by strategic communication and alliances,
  • and is anchored in national data and evaluative capacities.

In short: fewer reports that arrive too late, and more living evidence—co-produced, communicated and politically accompanied—at the service of governments and the people we aim to reach.

 Anibal Velásquez, MD, MSc, has been Senior Policy Advisor for Public Policy and Partnerships at the World Food Programme in Peru since 2018. A medical epidemiologist with over 20 years of experience, he has led research, social program evaluations, and the design of health and food security policies. He contributed to Peru's child malnutrition strategy, health sector reform, and key innovations such as adaptive social protection for emergencies, fortified rice, and community health workers for anemia. Dr. Velásquez previously served as Peru's Minister of Health (2014–2016), Director of Evaluation and Monitoring at the Ministry of Development and Social Inclusion (MIDIS), and Director of the National Institute of Health, and has held consultancy positions across 16 countries in Latin America and the Caribbean.

Daniel Ticehurst

United Kingdom

Monitoring > Evaluation Specialist

freelance

Posted on 08/12/2025

Thanks for posting this. Here are my - rather terse - answers to two of the above questions

  1. Question 1: Bridging Evidence and Action: Strengthening the Link Between Impact Evaluation and Real-Time Decision-Making
    For impact evaluation to inform real-time decisions, it must be purpose-built for use, not produced as a retrospective record. As Patton would emphasise, the initial step is decision mapping: recognise what decisions lie ahead, who will make them, and what evidence they require. Evaluations must match operational pace, not outlive it. This requires:

    a) Embedded evaluators who sit with programme teams and leadership, translating emerging findings into operational options;
    b) Short, iterative learning products—rapid briefs, real-time analysis, adaptive monitoring—that inform decisions long before the final report emerges; and 
    c) Structured feedback loops that carry findings directly into planning, budgeting, and resource allocation.
     

    Arguably, none of this matters unless the system is willing to let evidence disrupt comfort. If findings never unsettle a meeting, challenge a budget line, or puncture institutional vanity, they are not evidence but décor. Evidence influences action only when it is permitted to disturb complacency and contradict consensus.

  2. Localizing Evidence: Making Impact Evaluations Serve Local Priorities and Power Structures
    Localization begins with local ownership of purpose, not merely participation in process. Impact evaluations should be co-created with national governments, civil society, and communities, ensuring that questions reflect local priorities and political realities. They must strengthen national data systems, not bypass them with parallel UN structures. And interpretation of findings must occur in-country, with local actors who understand the lived context behind the indicators.
    Yet there is a harder truth: calling something “localized” while designing it in Geneva and validating it in New York is an exercise in bureaucratic self-deception. True localisation demands surrendering control—over agenda-setting, determining the objectives and evaluation questions, data ownership, and interpretive authority. Anything less is a ritual performance of inclusion rather than a redistribution of power.

     

  3. Conclusion
    To bridge evidence and action, and to localize evaluation meaningfully, the UN must pair the discipline of use with a discipline of honesty. Evidence must be designed for decisions, delivered at the speed of operations, and empowered to unsettle institutional habits. And localization must shift from rhetoric to reality by making national actors—not UN agencies—the primary authors, owners, and users of impact evaluation.
    Otherwise, the system risks perfecting its internal coherence while leaving the world it serves largely unchanged.
Jean de Dieu BIZIMANA

Rwanda

Senior Consultant

EIG RWANDA

Posted on 08/12/2025

Dear Dr Uzodinma Adirieje,  

The message you share is clear. However, more groundwork is needed before starting an evaluation: definitions and terms and conditions need to be established, and methodological approaches must also be well defined.
At this stage, we also need to clarify the use of appropriate technologies such as AI and GIS. Additionally, communication is required as an outcome of the evaluation.
Thank you for your consideration

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

In connecting evidence across the Humanitarian–Development–Peace (HDP) Nexus, aligning evidence agendas across humanitarian, development, and peace pillars requires intentional systems that move beyond sectoral silos toward holistic, context-responsive learning. In my experience at the intersection of PCVE, gender, and governance in county settings, valuable data exists across all three pillars—yet fragmentation prevents it from shaping a unified understanding of risk, resilience, and long-term community wellbeing.

One way to strengthen coherence is through shared learning frameworks built around harmonized indicators, aligned theories of change, and interoperable data systems. Humanitarian actors collecting early warning signals, development teams gathering socio-economic data, and peacebuilding practitioners tracking governance and cohesion trends can feed insights into a common evidence ecosystem. Joint sense-making platforms across UN agencies, county governments, and civil society further ensure interpretation and adaptation occur collectively.

Supporting local CSOs to build capacity in Core Humanitarian Standards (CHS) of Quality Assurance is critical. When local actors understand and apply CHS, their data becomes more reliable and compatible with UN and donor systems. Co-creating evaluation tools, monitoring frameworks, and learning agendas with these CSOs strengthens ownership and ensures evidence reflects local realities.

In African contexts, incorporating "Made in Africa Evaluation" (MAE) approaches, published and championed by our very own Africa Evaluation Association (AfEA), can further decolonize practice by integrating local values, culture (such as Ubuntu), and conditions. By combining MAE principles with CHS, UN and donor systems can leverage contextually relevant methodologies, strengthen local capacity, and promote governance and accountability in a culturally grounded manner.

Finally, stronger Donor–CSO networking structures—learning hubs, joint review forums, and communities of practice—deepen understanding of scope, stabilize transitions of project ownership, and support long-term collaboration. Connecting evidence, capacities, and local approaches ensures HDP programs are coherent, context-sensitive, and impactful for the communities they serve.

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

The impact evaluation community can play a critical role in advancing the UN’s reform agenda, particularly the goals of coherence, cost-effectiveness, and system-wide alignment. In my work across multi-partner consortia and county-level government structures, I have seen how fragmentation in evaluation approaches often leads to duplication, inconsistent standards, and heavy reporting burdens for local actors. Supporting UN reform begins with harmonizing evaluation frameworks across agencies so that they draw from shared theories of change, common indicators, and compatible data systems. This reduces transaction costs for implementing partners and allows evidence to be aggregated more systematically across the humanitarian–development–peace (HDP) nexus.

The evaluation community can also contribute by promoting joint or multi-agency evaluations, particularly for cross-cutting thematic areas like PCVE, gender equality, and resilience. Joint evaluations not only save resources but also produce findings that are more holistic and better suited to inter-agency coordination. Additionally, evaluation teams can support reform by emphasizing adaptive, utilization-focused methodologies that produce real-time insights and decision-relevant evidence, rather than lengthy reports that come too late to influence programming.

Cost-effectiveness can be further enhanced by investing in local evaluators, research institutions, and government systems rather than relying exclusively on external consultants. This not only builds long-term capacity but also reduces the financial and operational footprint of evaluations. The evaluation community can strengthen UN reform by championing a culture of shared accountability, collaborative learning, and strategic alignment—ensuring that evidence not only measures results but also enables the UN system to function more cohesively and effectively.

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

On localising evidence: designing and using impact evaluations to advance the localization agenda requires the UN and its partners to shift power toward local actors, both in defining evaluation priorities and in generating the evidence itself. From my experience supporting MEL across county governments, local CSOs, and community structures, localization succeeds when evaluations are not externally imposed but co-created with those closest to the problem. This begins with jointly defining evaluation questions that reflect community priorities and county development realities, rather than donor-driven assumptions. It also involves investing in the capacities of county departments, local researchers, and grassroots organizations to participate meaningfully in evaluation design, data collection, analysis, and interpretation.

A particularly important opportunity is the intentional integration of citizen-generated data (CGD), which I have mentioned in a previous post, and locally collected datasets into evaluation frameworks. Many local CSOs like mine, community networks, and think tanks already generate rich and credible data on governance, resilience, gender dynamics, and PCVE indicators. When validated and aligned with national standards, these data sources can complement official statistics, strengthen SDG reporting, and ensure that evaluation findings reflect lived realities. This approach not only accelerates evidence availability but also embodies the principle of "nothing about us without us."

Localizing evidence also means ensuring that findings are communicated back to communities in accessible formats and used in county-level decision forums such as CIDP reviews, sector working groups, and community dialogues. Furthermore, evaluations should include iterative sense-making sessions with local actors so they can directly shape programme adaptation. Ultimately, localization is not just about generating evidence locally—it is about shifting ownership, elevating local expertise, and ensuring that impact evaluations meaningfully inform policies and programmes at the levels where change is most felt.

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 08/12/2025

On bridging evidence with action: strengthening the link between impact evaluation findings and real-time decision-making requires the UN and its partners to embrace learning-oriented systems rather than compliance-driven evaluation cultures. From my experience leading MEL across multi-county PCVE and governance programmes, evidence becomes actionable only when it is intentionally embedded into programme management—through continuous feedback loops, co-created interpretation sessions, and adaptive planning processes. Structured learning forums where evaluators, implementers, government stakeholders, and community representatives jointly analyse emerging findings are particularly effective for translating insights into operational shifts.

In the PCVE space, real-time evidence use is especially critical due to the fast-evolving nature of threats and community dynamics. A recent example is my organisation’s Submission to the UN Special Rapporteur under the UNOCT call for inputs on definitions of terrorism and violent extremism, where we highlighted how grounding global guidance in locally generated evidence improves both relevance and uptake. This experience reaffirmed that when evaluation findings are aligned with practitioner insights and local contextual knowledge, global frameworks become more actionable on the ground.

Additionally, the UN can strengthen evidence uptake by integrating citizen-generated data (CGD) into SDG indicator ecosystems—particularly where local CSOs and think tanks already generate credible, validated datasets. Leveraging CGD not only accelerates access to real-time insights but also strengthens community ownership and localization.

Ultimately, bridging evidence and action requires mixed-method evaluations, rapid dissemination tools, psychological safety for honest learning, and a UN culture where evidence is viewed as a shared resource for collective decision-making, not merely an accountability requirement.

Richard Tinsley

United States of America

Richard Tinsley

Professor Emeritus

Colorado State University

Posted on 29/11/2025

As requested, my contribution to the discussion is reproduced below.

Before turning to some of my concerns about evaluations, let me comment on Binod Chapagain's remarks. His observation that evaluations come too late in the project cycle to allow effective adjustment reflects one of my recurring concerns. The main contribution of evaluations lies in their ability to guide the design of future projects so they better serve beneficiaries. Another challenge is the difficulty of adapting ongoing projects. It is worth remembering that most large projects, particularly those funded by external donors and supported by expatriate advisers, have preparation lead times of more than two years and preparation costs exceeding one million dollars before the expert team is mobilized, deployed, and able to interact with communities long enough to understand their real needs. With so much time and effort invested, no one wants to discover that the project is not fully accepted by the community it is meant to serve. Moreover, by the time the team is operational, most of the choices about which innovations to implement have already been made, with staff, particularly expatriate staff, recruited on that basis. This severely limits the scope for major adjustments. Minor adjustments are sometimes possible, but rarely more. Evaluations are therefore most useful for informing future projects.

Binod also pointed out that many evaluations are designed to verify compliance against the original project documents. These evaluations, often internal and intended to reassure donors, tend to show that projects are successful. This helps secure project extensions and new funding for implementers. Evaluation reports should therefore be read with caution, as a few simple calculations could reveal their weaknesses. They often rely on the fact that the recipients of the reports, absorbed in day-to-day management or the design of new projects, do not have time to examine them critically and are satisfied with apparently positive results.

I also noted that Tamarie Magaisa mentioned the need to set targets for evaluation criteria. I fully agree, as such targets make it possible to distinguish successful projects from failing ones. Without clear objectives published at the design stage, it is easy to declare as successful projects that, by most criteria, are failures.

Let me now briefly raise a few concerns about evaluations in the context of smallholder agriculture, where essential criteria have been ignored or clearly failing innovations have been presented as successes.

The first is the failure to recognize that many of the agricultural innovations we introduce require more labour, while most smallholders operate under severe labour shortages. Innovations may be physically suited to the environment, yet not operationally feasible at community scale. With manual operations, it takes about eight weeks to establish crops, which crowds out mid-season activities and reduces yield potential to the point of compromising household food security. An evaluation could easily identify this problem through simple observations or a few questions. Once these constraints are recognized, projects could focus more on strengthening operational capacity, for example by facilitating access to mechanization, as happened in Asia with the shift from buffalo to two-wheel tractors.

Another major issue is that many smallholders suffer from significant caloric deficits, incompatible with a full day of agricultural work. Farm labour requires more than 4,000 kcal per day, while many have access to only 2,500 kcal, of which 2,000 are needed for basal metabolism. That leaves only 500 kcal for work, the equivalent of a few hours of sustained effort. It is therefore not surprising that crop establishment takes eight weeks. For decades we have known that smallholders are poor, even undernourished, but we have never connected this to crop management. How can we help them escape poverty if this issue is not addressed first?
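The energy arithmetic in this paragraph can be sketched as a quick check. The kcal figures come from the post itself; the ~300 kcal per hour cost of sustained field labour is an illustrative assumption, not a figure from the post:

```python
# Quick check of the smallholder energy arithmetic described above.
# Intake, basal, and requirement figures are from the post; the work
# rate per hour is an illustrative assumption for the sketch.
DAILY_INTAKE_KCAL = 2500       # typical available daily intake
BASAL_KCAL = 2000              # needed for basal metabolism
REQUIRED_KCAL = 4000           # needed for a full day of farm labour
WORK_RATE_KCAL_PER_HOUR = 300  # assumed cost of sustained field work

surplus = DAILY_INTAKE_KCAL - BASAL_KCAL        # kcal left for labour
deficit = REQUIRED_KCAL - DAILY_INTAKE_KCAL     # shortfall vs. a full work day
work_hours = surplus / WORK_RATE_KCAL_PER_HOUR  # feasible hours of sustained effort

print(f"Energy available for work: {surplus} kcal")   # 500 kcal
print(f"Daily shortfall: {deficit} kcal")             # 1500 kcal
print(f"Feasible sustained labour: {work_hours:.1f} hours/day")
```

Under these assumptions the arithmetic yields well under two hours of sustained labour per day, which is consistent with the slow crop establishment the post describes.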

Another concern is the excessive reliance on producer organizations for marketing. Evaluations have failed to recognize that only a small proportion of producers actually join them, and that even those who do sell most of their output to private traders. Producer organizations attract only about ten percent of potential members and handle only about five percent of marketed volumes, a clear failure by commercial standards. Yet they have continued to be presented as a success for more than thirty years. Why? And what will it take for evaluations to propose more effective mechanisms?

I am attaching an article I wrote for a symposium at Colorado State University, in which I look back on my fifty years of experience with smallholder farmers. The article, illustrated and factual, expands on the points raised here, including project preparation lead times, the need to assess operational feasibility, energy constraints, the importance of mechanization, and the limitations of current evaluations.

I invite you to download and read it if you have the time. The link is:
https://agsci.colostate.edu/smallholderagriculture/wp-content/uploads/sites/77/2023/03/Reflections.pdf

Thank you.

Dick Tinsley
Professor Emeritus
Department of Soil and Crop Sciences
Colorado State University

Uzodinma Adirieje

Nigeria

Uzodinma Adirieje

National President

Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association

Posted on 07/12/2025

Linking evidence to action is essential to ensure that civil society interventions produce measurable results and remain responsive to community needs. To strengthen the link between impact evaluation and real-time decision-making, the United Nations and its partners must prioritize institutionalizing rapid data use through integrated monitoring, evaluation, and learning systems. Embedding rapid feedback mechanisms, such as adaptive dashboards, mobile reporting, and community scorecards, enables field actors and decision-makers to track progress, identify gaps, and adjust strategies quickly.

Co-creating evidence with civil society organizations and affected populations ensures that evaluations reflect local realities. When communities participate directly in defining indicators, generating data, and interpreting results, the evidence becomes more relevant, more credible, and easier to use.

It is imperative that the United Nations invest in strengthening the capacities of civil society organizations, enabling them to conduct rigorous yet agile evaluations, use digital tools, and communicate results clearly. This includes training in qualitative and quantitative methods, data visualization, and scenario-based planning.

Finally, promoting collaborative learning platforms, such as UN-CSO evidence hubs and South-South knowledge exchanges, helps turn evaluation findings into shared action. When evidence is accessible and democratized, decision-making becomes faster, more inclusive, and more accountable, which improves development impact.

Dr Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA

Ola Eltoukhi

Egypt

Ola Eltoukhi

Impact Evaluation Consultant

WFP

Posted on 01/12/2025
Thank you all for your contributions and highly relevant insights. You have raised several important points about strengthening the link between impact evaluation findings and real-time decision-making.

This week, we would like to gather more of your views, particularly on localising evidence and on how impact evaluations can be designed and used to better support the localization agenda, ensuring that local priorities, capacities, and contexts guide policies and programmes.

We look forward to your reflections and thank you for your continued engagement.
Uzodinma Adirieje

Nigeria

Uzodinma Adirieje

National President

Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association

Posted on 07/12/2025

Localising evidence is essential to ensure that impact evaluations genuinely inform development outcomes and strengthen the role of civil society. To achieve this, evaluations must begin with priority-setting at the local level, where communities, traditional institutions, women's groups, youth networks, and vulnerable populations jointly define what success means and which outcomes matter most. This grounds evaluations in lived realities rather than externally imposed frameworks.

Evaluations should then be designed around local capacities, combining scientific rigour with context-appropriate methods such as participatory action research, community scorecards, sentinel monitoring, and rapid feedback mechanisms. Simplified data collection tools, mobile technologies, and culturally appropriate communication channels lower barriers to participation.

Strengthening civil society's technical capacity is crucial. Training civil society organizations in monitoring, evaluation, and learning principles, data literacy, and adaptive learning helps them generate, interpret, and use evidence effectively. Partnerships with universities and research institutes further strengthen credibility.

Finally, evaluation findings must be translated into actionable, locally relevant information, using storytelling, visual dashboards, and feedback forums that resonate with community stakeholders. When evidence reflects local voices, respects cultural contexts, and supports practical problem-solving, it becomes a powerful driver of inclusive, accountable, and sustainable development.

Dr Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA

Uzodinma Adirieje

Nigeria

Uzodinma Adirieje

National President

Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association

Posted on 07/12/2025

Supporting UN Reform – Global Impact Evaluation Forum 2025 (Civil Society Perspective)

Supporting the ongoing UN reform agenda—especially its focus on coordination, efficiency, and country-level impact—is essential for improving the lives of rural and poor urban populations in Africa and other resource-poor regions. The impact evaluation community can play a transformative role by strengthening coherence, accountability, and value for money across the UN system.

 First, evaluators must champion harmonized measurement frameworks that reduce duplication and align UN agencies, governments, and civil society around shared indicators. Common frameworks improve comparability, promote joint planning, and ensure that results reflect community priorities rather than institutional silos.

 Second, the community should support country-led, context-sensitive evaluations that amplify local voices. By embedding participatory approaches, engaging community-based organizations, and acknowledging traditional knowledge systems, evaluations become more relevant and actionable for marginalized populations.

 Third, evaluators can foster cost-effective programming by generating evidence on what works, what does not, and why. This requires strengthening real-time monitoring, adaptive learning, and the use of digital tools (including AI) to track performance and inform timely course corrections. Clear communication of findings—through policy briefs, dashboards, and community dialogues—helps optimize resource allocation.

Fourth, the community should reinforce collaboration across UN agencies, promoting joint evaluations, shared data platforms, and cross-sector learning. These reduce transaction costs and enhance integrated delivery of health, climate, livelihood, and social protection services.

Ultimately, by advancing evidence-driven reforms, the impact evaluation community can help the UN deliver more coherent, inclusive, and cost-effective solutions—ensuring that rural and poor urban populations receive equitable attention and lasting development gains.

Dr. Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA

Elie COULIBALY

Mali

Elie COULIBALY

Technical MEAL Advisor

Catholic Relief Services

Posted on 07/12/2025

Dear colleagues,

  1. The most effective ways for the United Nations and its partners to strengthen the link between impact evaluation and real-time decision-making are to hold regular meetings with all stakeholders and to systematically share impact evaluation findings.
  2. Impact evaluations can be better designed and used when they reflect local priorities, capacities, and contexts, through the participation of all stakeholders.
  3. The impact evaluation community can collectively help strengthen coherence and cost-effectiveness across the system through good dissemination and learning practices.
  4. Agencies with diverse mandates can align their impact evaluation agendas across the humanitarian, development, and peace domains through a collaborative approach based on shared objectives, integrated evaluation methods, and capacity strengthening.
Tamarie Magaisa

South Africa

Tamarie Magaisa

MUSANGEYACONSULTING

Posted on 28/11/2025

Hello everyone,

I find the question of "linking impact evaluation findings to real-time decision-making" very interesting. The question may deserve further clarification, particularly on what is meant by "real-time".
Unless these are decisions taken at the organizational level, I find it is often difficult to use evaluation findings at the project level, since final evaluations are carried out at the end of the project.

Nevertheless, when organizations take a position after evaluation findings are presented, there are some examples of findings actually being used, including:

  1. The design of new programmes/projects informed by evaluations.
  2. Improved communication and dissemination of evaluation findings at different levels, including within local communities, by informing different partners of the conclusions and their implications.
  3. Strengthening a culture of organizational learning, by carefully reviewing the evaluation report at the institutional level and agreeing on organizational actions based on the findings.

One important factor that weakens the link between findings and use is when evaluations are conducted mainly to satisfy a compliance requirement rather than to foster genuine organizational learning or immediate operational change.
This requires strong commitment from leadership!

Binod Chapagain

Cambodia

Binod Chapagain

Technical Advisor

UNDP

Posted on 02/12/2025

Connecting evidence: How can various UN agencies and partners, with diverse mandates, align evidence agendas in the humanitarian, development, and peace nexus?

Answer

Aligning evidence agendas across the Humanitarian-Development-Peace (HDP) nexus is one of the UN's most complex challenges. It requires merging three distinct "languages" of evidence: (a) the needs-based data of humanitarians, (b) the systems-based metrics of development actors, and (c) the political/risk-based analysis of peacebuilders.

Before sharing evidence, there should be agreement on what to measure, what to share, and what the data is for. Most important is to have a primary strategic tool: the collective outcome.

What it is: Jointly agreed indicator targets (e.g., "Reduce food insecurity in Region X by 30% over 5 years") that transcend any single agency's mandate.

How it aligns evidence: Instead of each agency measuring its own output (e.g., WFP measuring food delivered vs. UNDP measuring seeds distributed), they align their evidence to track the shared result.

To give an example: in Sudan (a protracted crisis), humanitarian data showed where people were hungry, but not why (the root causes). A joint evidence approach would be for UNDP and humanitarian actors to use shared "locality profiles" that map service gaps (development) alongside displacement figures (humanitarian) to target investments.

Jointly agreed indicators and targets will also make it possible to produce a joint evaluation report that nonetheless takes context into consideration.
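The collective-outcome idea above can be made concrete with a minimal sketch of how a shared indicator target might be tracked across agencies. The class, baseline, and figures below are illustrative assumptions, not real UN data or systems:

```python
# Toy sketch of a "collective outcome" indicator shared across agencies.
# All names and numbers are illustrative, not drawn from real UN data.
from dataclasses import dataclass


@dataclass
class CollectiveOutcome:
    name: str
    baseline: float          # e.g. % of population food-insecure at start
    target_reduction: float  # e.g. 0.30 for a 30% reduction over 5 years

    def target_value(self) -> float:
        """Indicator level implied by the jointly agreed reduction."""
        return self.baseline * (1 - self.target_reduction)

    def progress(self, current: float) -> float:
        """Fraction of the agreed reduction achieved so far (0..1)."""
        total_reduction = self.baseline - self.target_value()
        return (self.baseline - current) / total_reduction


# Agencies report different outputs (food delivered, seeds distributed),
# but all evidence is aligned to track this one shared result.
outcome = CollectiveOutcome("Food insecurity, Region X",
                            baseline=40.0, target_reduction=0.30)
print(f"Target level: {outcome.target_value():.1f}%")      # 28.0%
print(f"Progress: {outcome.progress(current=34.0):.0%}")   # 50%
```

The design point is that each agency's evidence feeds the same `progress` calculation, rather than each agency reporting against its own output metric in isolation.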

Sibongile Sithole

South Africa

Sibongile Sithole

Evaluation Consultant

Posted on 07/12/2025

My experience with the Three Horizons (3H) Initiative at the International Evaluation Academy (IEAc) has reinforced a key lesson: evaluation practice must become more transformational and shift away from conventional approaches that have long been shaped by Global North ideologies.

In trying to find solutions on how evaluations can be fit for purpose, various points on localising evidence were brought up.

Firstly, localising impact evaluations means shifting power towards local actors and local knowledge. Components such as evaluation questions should be shaped by communities, local governments, and indigenous knowledge holders. 


Secondly, particularly in the Global South context, approaches such as storytelling, oral histories, communal dialogue, and participatory narrative methods should sit alongside quantitative and experimental designs. These reflect how many African communities make sense of change and offer culturally grounded insights that traditional methods often miss.


Last but not least, respect for cultural protocols and for indigenous and community consent ensures that evaluations serve the people they study, not external agendas.

Using the Three Horizons framework while centring African and indigenous knowledge can help create evaluations that are culturally rooted, locally owned, and better aligned with the futures communities themselves envision.

Ola Eltoukhi

Egypt

Ola Eltoukhi

Impact Evaluation Consultant

WFP

Posted on 25/11/2025

Thank you all for joining this discussion. The concept note raises several important points about how evidence partnerships can strengthen coherence, improve decision-making, and support the broader UN reform agenda. I am opening this thread so that we can explore these ideas together.

Please feel free to join in with your questions, reflections, or practical examples from your own experience. This week, we would like to hear more of your views on the most effective ways for the UN and its partners to strengthen the link between impact evaluation findings and real-time decision-making. If you have examples, we would be delighted to hear them.