Background and rationale
The United Nations marks its eightieth anniversary at a moment when the world faces several complex crises and a reduction in official development assistance. Now is when it must demonstrate its greatest value where it is most needed: in the lives of the people it aims to help.
In March, Tom Fletcher, Under-Secretary-General for Humanitarian Affairs and Emergency Relief Coordinator, called for a humanitarian reset, charting the course towards a more agile, unified and cost-effective United Nations system. This requires not only solid evidence of what works, but also a fundamental change in how we work together.
Impact evaluations complement other monitoring, evaluation and research activities by helping to identify which interventions are most effective. Yet rigorous causal evidence from humanitarian settings remains scarce.[1],[2]
Against this backdrop, the challenge is clear: to meet the demands of this new era, the United Nations system must close the gap between evidence and action. Doing so in a timely and collaborative way remains difficult, however, especially in fragile settings.
The upcoming Global Impact Evaluation Forum, organized by WFP in collaboration with Germany's Federal Ministry for Economic Cooperation and Development (BMZ) and the Norwegian Agency for Development Cooperation (Norad), builds on the momentum of WFP's inaugural 2023 Forum and the 2024 Forum held jointly by WFP and the United Nations Children's Fund (UNICEF) to move from exploration to joint action across the United Nations system and beyond.
Discussion objectives
The Forum will bring together practitioners, policymakers and partners to explore how impact evaluation can align with the following broader goals:
- Maximize value to drive cost-effectiveness across the United Nations system
- Provide global support for local action: building evidence for the localization agenda
- Unify action: use evidence to support United Nations reforms
- Connect efforts and evidence across the humanitarian–development–peace nexus
The purpose of this online discussion, hosted on EvalforEarth and led by WFP's impact evaluation team, is to advance the Forum's main themes; the discussion will continue online during the event itself, from 9 to 11 December 2025.
If you are interested in joining the Global Impact Evaluation Forum, taking place ONLINE from 9 to 11 December, please register here.
Contributions may be submitted until 8 December 2025.
Guiding questions
- Connecting evidence and action: What are the most effective ways for the United Nations and its partners to strengthen the link between impact evaluation findings and real-time decision-making?
- Localizing evidence: How can impact evaluations be designed and used to better support the localization agenda, ensuring that local priorities, capacities and contexts inform policies and programmes?
- Supporting UN reform: How can the impact evaluation community collectively contribute to the goals of coherence and cost-effectiveness across the United Nations system?
- Connecting evidence: How can different United Nations agencies and partners, with their diverse mandates, align evidence agendas across the humanitarian–development–peace nexus?
This discussion will be followed up in an article to be jointly published in January 2026.
[1] A literature review by the World Food Programme (WFP) and the World Bank highlights the scarcity of rigorous impact evaluations of cash and in-kind transfers.
[2] A global prioritization exercise by Elrha identified challenges such as the scarcity of feedback mechanisms between research and the scale-up of interventions, and a specific need for more data on cost-effectiveness.
This discussion is now closed. Please contact info@evalforearth.org for any further information.
Zimbabwe
Chamisa Innocent
EvalforEarth CoP Facilitator/Coordinator
EvalforEarth CoP
Posted on 12/12/2025
Ethiopia
Hailu Negu Bedhane
cementing engineer
Ethiopian electric power
Posted on 12/12/2025
Advance Message for the Global Impact Evaluation Forum 2025
Colleagues, partners,
Our goal is to create alliances for successful action. This necessitates a fundamental change: integrating impact evaluation (IE) as a strategic compass for real-time navigation instead of viewing it as a recurring audit of the past.
Better feedback systems, not better reports, will increase the connection between evaluation and decision-making. Three procedures need to be institutionalized by the UN and its partners:
- Represent local goals and contexts: the implicit hierarchy that favours external "rigor" over local relevance must be dismantled.
- Act as one profession: by functioning as a cohesive, system-wide community, impact evaluators can serve as the catalyst for coherence and cost-effectiveness.
- Investigate the intersections: alignment across humanitarian, development, and peace efforts means careful investigation where the pillars meet, not forcing their core objectives into harmony.
Support and Assess "Triple Nexus" Pilots: Impact evaluations that expressly target initiatives that aim to bridge two or all three pillars must be jointly designed and funded by agencies. The main inquiry is: "Do integrated approaches yield greater sustainability and resilience impact than sequential or parallel efforts?"
Establish Nexus IE Fellowships: Impact evaluation experts should be rotated throughout UN agencies (for example, from FAO to OCHA to DPPA). This creates a group of experts who are proficient in several mandate "languages" and capable of creating assessments that track results along the peace, development, and humanitarian spectrum.
To sum up, creating evidence partnerships for successful action means building a networked learning system. It requires shifting our investments from isolated research to networked learning infrastructures, from hiring experts to expanding local ecosystems, and from proving attribution for individual projects to steering collective adaptation towards shared objectives.
Instead of calling for additional evidence, let's end this discussion with a pledge to create the channels, platforms, and collaborations necessary to provide the appropriate evidence to decision-makers—from UN country teams to community councils—in a timely manner.
I'm grateful.
Italy
John Hugh Weatherhogg
Agricultural Economist
ex-FAO
Posted on 08/12/2025
The onus for ensuring that evaluation findings are taken into account in new project designs has to lie with the financing or development institutions.
Any project preparation should start with a desk review of past experience with similar projects, not only to get a better idea of what works and what doesn't but also to make sure that you are not "re-inventing the wheel", i.e. not putting forward approaches that were tried and failed 30 or more years earlier.
The responsibility of evaluators is to make sure that their reports are duly filed and readily available to those involved in the creation of new projects.
Peru
Anibal Velasquez
Senior Policy Advisor for Public Policy and Partnerships at the World Food Programme
WFP
Posted on 08/12/2025
Drawing on WFP’s experience in Peru, I would argue that “connecting evidence and action in real time” is not primarily a technical challenge of evaluation. It is, above all, about how the UN positions itself within political cycles, how it serves as a knowledge intermediary, and how it accompanies governments throughout decision-making processes. Based on what we observed in anemia reduction, rice fortification and shock-responsive social protection, at least seven concrete pathways can help the UN and its partners strengthen this evidence–decision link.
1. Design evaluations for decisions, not for publication
Impact evaluations must begin with very specific policy questions tied directly to upcoming decisions:
In Peru, community health worker pilots in Ventanilla, Sechura and Áncash were not designed as academic exercises, but as policy laboratories explicitly linked to decisions: municipal anemia targets, MINSA technical guidelines, the financing of Meta 4 and later Compromiso 1. Once the pilots demonstrated clear results, the government financed community health home visits nationwide for the first time. Subsequent evaluation of the national programme showed that three percentage points of annual reduction in childhood anemia could be attributed to this intervention.
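Attribution claims like the three-percentage-point annual reduction cited above are the product of quasi-experimental designs. A minimal difference-in-differences sketch, using entirely illustrative numbers rather than the Peruvian data:

```python
# Difference-in-differences sketch with hypothetical figures: anemia
# prevalence (%) in districts with and without community health worker
# visits, measured before and after the programme.
treated_before, treated_after = 43.0, 37.0
control_before, control_after = 42.0, 39.0

# Subtracting the control trend isolates the programme's contribution.
effect = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated programme effect: {effect:+.1f} percentage points")
```

In practice such estimates come from regressions with covariates and clustered standard errors; the subtraction above is only the core logic.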
For the UN, this means co-designing pilots and evaluations with ministries of finance, social sectors and subnational governments, agreeing from the outset on:
2. Embed adaptive learning cycles into the country’s operational cycle
In Peru, WFP integrated evidence generation and use into its operational cycle, creating explicit moments for reflection, contextual analysis and strategic adjustment. This logic can be scaled across the UN system:
The most effective approach is to institutionalize learning loops: periodic spaces where government and UN teams jointly review data, interpret findings, and adjust norms, protocols or budgets on short, iterative cycles.
3. Position the country office as a knowledge broker (a public-sector think tank)
In Peru, WFP increasingly operated as a Knowledge-Based Policy Influence Organization (KBPIO):
For the UN, strengthening the evaluation–decision link requires country offices to synthesize and translate evidence—from their own evaluations, global research and systematic reviews—into short, actionable, timely products, and to act as trusted intermediaries in the actual spaces where decisions are made: cabinet meetings, budget commissions, regional councils, boards of public enterprises and beyond.
4. Turn strategic communication into an extension of evaluation
In the Peruvian experience, evidence on anemia, rice fortification, food assistance for TB patients and shock-responsive social protection became policy only because it was strategically communicated:
For the UN, the lesson is unequivocal:
every major evaluation needs a communication and advocacy strategy—with defined audiences, tailored messages, designated spokespersons, windows of opportunity and specific channels. Evidence cannot remain in a 200-page report; it must become public narratives that legitimize policy change and sustain decisions over time.
5. Accompany governments across the entire policy cycle
In Peru, WFP did not merely deliver reports; it accompanied government partners throughout the full policy cycle:
This accompaniment blended roles—implementer, technical partner and facilitator—depending on the moment and the institutional need. For the UN, one of the most powerful ways to connect evaluation with decision-making is to embed technical teams in key ministries (health, social development, finance, planning) to work side by side with decision-makers, helping interpret results and convert them into concrete regulatory, budgetary and operational instruments.
6. Build and sustain alliances and champions who “move the needle”
In Peru, evidence became public policy because strategic champions existed: vice-ministers, governors, heads of social programmes, the MCLCP, and committed private-sector partners. The UN and its partners can strengthen the evidence–decision nexus by:
In this sense, evaluations—impact evaluations, formative assessments, cost-effectiveness studies—become assets that empower national actors in their own political environments, rather than external products belonging solely to the UN.
7. Invest in national data systems and evaluative capacities
Real-time decision-making is impossible when data arrive two years late. In Peru, part of WFP’s added value was supporting improvements in information systems, monitoring and analytical capacity at national and regional levels.
For the UN, this translates into:
8. Understand how decisions are made to ensure recommendations are actually used
Understanding real-world decision-making is essential for evaluations to have influence. Decisions are embedded in political incentives, institutional constraints and specific temporal windows. The Cynefin framework is useful for distinguishing whether a decision environment is:
An evaluation that ignores this landscape may be technically sound but politically unfeasible or operationally irrelevant. Evaluating is not only about producing evidence; it is about interpreting how decisions are made and tailoring recommendations to that reality.
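The Cynefin distinction can be made operational as a simple lookup from decision domain to evaluation posture. The pairings below are one plausible reading added for illustration, not part of the framework itself:

```python
# Hedged sketch: one possible mapping from Cynefin decision domains to an
# appropriate evaluation posture. The pairings are illustrative.
cynefin_to_evaluation = {
    "clear":       "standard indicators, routine monitoring",
    "complicated": "expert-led impact evaluation (RCT or quasi-experiment)",
    "complex":     "safe-to-fail pilots with rapid feedback loops",
    "chaotic":     "act first; retrospective real-time learning",
}
print(cynefin_to_evaluation["complex"])
```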
In synthesis
From the Peru experience, the most effective way for the UN and its partners to connect evaluations with real-time decision-making is to stop treating evaluation as an isolated event and instead turn it into a continuous, political–technical process that:
In short: fewer reports that arrive too late, and more living evidence—co-produced, communicated and politically accompanied—at the service of governments and the people we aim to reach.
Anibal Velásquez, MD, MSc, has been Senior Policy Advisor for Public Policy and Partnerships at the World Food Programme in Peru since 2018. A medical epidemiologist with over 20 years of experience, he has led research, social program evaluations, and the design of health and food security policies. He contributed to Peru’s child malnutrition strategy, health sector reform, and key innovations such as adaptive social protection for emergencies, fortified rice, and community health workers for anemia. Dr. Velásquez previously served as Peru’s Minister of Health (2014–2016), Director of Evaluation and Monitoring at the Ministry of Development and Social Inclusion (MIDIS), and Director of the National Institute of Health, and has held consultancy positions across 16 countries in Latin America and the Caribbean.
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
freelance
Posted on 08/12/2025
Thanks for posting this. Here are my - rather terse - answers to two of the above questions
Question 1: Bridging Evidence and Action: Strengthening the Link Between Impact Evaluation and Real-Time Decision-Making
For impact evaluation to inform real-time decisions, it must be purpose-built for use, not produced as a retrospective record. As Patton would emphasise, the first step is decision mapping: identify what decisions lie ahead, who will make them, and what evidence they require. Evaluations must match operational pace—not outlive it. This requires:
a) Embedded evaluators who sit with programme teams and leadership, translating emerging findings into operational options;
b) Short, iterative learning products—rapid briefs, real-time analysis, adaptive monitoring—that inform decisions long before the final report emerges; and
c) Structured feedback loops that carry findings directly into planning, budgeting, and resource allocation.
Arguably, none of this matters unless the system is willing to let evidence disrupt comfort. If findings never unsettle a meeting, challenge a budget line, or puncture institutional vanity, they are not evidence but décor. Evidence influences action only when it is permitted to disturb complacency and contradict consensus.
Question 2: Localizing Evidence: Making Impact Evaluations Serve Local Priorities and Power Structures
Localization begins with local ownership of purpose, not merely participation in process. Impact evaluations should be co-created with national governments, civil society, and communities, ensuring that questions reflect local priorities and political realities. They must strengthen national data systems, not bypass them with parallel UN structures. And interpretation of findings must occur in-country, with local actors who understand the lived context behind the indicators.
Yet there is a harder truth: calling something “localized” while designing it in Geneva and validating it in New York is an exercise in bureaucratic self-deception. True localisation demands surrendering control—over agenda-setting, determining the objectives and evaluation questions, data ownership, and interpretive authority. Anything less is a ritual performance of inclusion rather than a redistribution of power.
To bridge evidence and action, and to localize evaluation meaningfully, the UN must pair the discipline of use with a discipline of honesty. Evidence must be designed for decisions, delivered at the speed of operations, and empowered to unsettle institutional habits. And localization must shift from rhetoric to reality by making national actors—not UN agencies—the primary authors, owners, and users of impact evaluation.
Otherwise, the system risks perfecting its internal coherence while leaving the world it serves largely unchanged.
Rwanda
Jean de Dieu BIZIMANA
Senior Consultant
EIG RWANDA
Posted on 08/12/2025
Dear Dr Uzodinma Adirieje,
The message you share is clear. However, more action is needed before starting an evaluation: definitions, terms and conditions, and the methodological approaches all need to be well defined.
We also need to clarify the use of appropriate technologies such as AI, GIS, etc. In addition, the outcomes of the evaluation need to be communicated.
Thank you for your consideration
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
In connecting evidence across the Humanitarian–Development–Peace (HDP) Nexus, aligning evidence agendas across humanitarian, development, and peace pillars requires intentional systems that move beyond sectoral silos toward holistic, context-responsive learning. In my experience at the intersection of PCVE, gender, and governance in county settings, valuable data exists across all three pillars—yet fragmentation prevents it from shaping a unified understanding of risk, resilience, and long-term community wellbeing.
One way to strengthen coherence is through shared learning frameworks built around harmonized indicators, aligned theories of change, and interoperable data systems. Humanitarian actors collecting early warning signals, development teams gathering socio-economic data, and peacebuilding practitioners tracking governance and cohesion trends can feed insights into a common evidence ecosystem. Joint sense-making platforms across UN agencies, county governments, and civil society further ensure interpretation and adaptation occur collectively.
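A shared evidence ecosystem of the kind described here ultimately reduces to keying each pillar's indicators to common geographic units. A toy sketch, with invented indicator names and values:

```python
# Sketch of a common evidence ecosystem: three pillars reporting against
# harmonized indicators keyed by (district, indicator). All names and
# values are hypothetical.
humanitarian = {("Garissa", "food_insecurity_pct"): 31}
development = {("Garissa", "poverty_rate_pct"): 58}
peace = {("Garissa", "cohesion_index"): 0.62}

combined = {}
for pillar, data in [("humanitarian", humanitarian),
                     ("development", development),
                     ("peace", peace)]:
    for (district, indicator), value in data.items():
        # Each district accumulates indicators from every pillar,
        # tagged with their source for joint sense-making.
        combined.setdefault(district, {})[indicator] = (value, pillar)

print(combined["Garissa"])
```

Real interoperability of course adds shared codelists, time stamps, and access controls; the point is only that a common key makes cross-pillar evidence queryable in one place.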
Supporting local CSOs to build capacity in Core Humanitarian Standards (CHS) of Quality Assurance is critical. When local actors understand and apply CHS, their data becomes more reliable and compatible with UN and donor systems. Co-creating evaluation tools, monitoring frameworks, and learning agendas with these CSOs strengthens ownership and ensures evidence reflects local realities.
In African contexts, incorporating “Made in Africa Evaluation” (MAE) approaches, published and championed by our very own Africa Evaluation Association (AfEA), can further decolonize practice by integrating local values, culture (such as Ubuntu), and conditions. By combining MAE principles with CHS, UN and donor systems can leverage contextually relevant methodologies, strengthen local capacity, and promote governance and accountability in a culturally grounded manner.
Finally, stronger Donor–CSO networking structures—learning hubs, joint review forums, and communities of practice—deepen understanding of scope, stabilize transitions of project ownership, and support long-term collaboration. Connecting evidence, capacities, and local approaches ensures HDP programs are coherent, context-sensitive, and impactful for the communities they serve.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
The impact evaluation community can play a critical role in advancing the UN’s reform agenda, particularly the goals of coherence, cost-effectiveness, and system-wide alignment. In my work across multi-partner consortia and county-level government structures, I have seen how fragmentation in evaluation approaches often leads to duplication, inconsistent standards, and heavy reporting burdens for local actors. Supporting UN reform begins with harmonizing evaluation frameworks across agencies so that they draw from shared theories of change, common indicators, and compatible data systems. This reduces transaction costs for implementing partners and allows evidence to be aggregated more systematically across the humanitarian–development–peace (HDP) nexus.
The evaluation community can also contribute by promoting joint or multi-agency evaluations, particularly for cross-cutting thematic areas like PCVE, gender equality, and resilience. Joint evaluations not only save resources but also produce findings that are more holistic and better suited to inter-agency coordination. Additionally, evaluation teams can support reform by emphasizing adaptive, utilization-focused methodologies that produce real-time insights and decision-relevant evidence, rather than lengthy reports that come too late to influence programming.
Cost-effectiveness can be further enhanced by investing in local evaluators, research institutions, and government systems rather than relying exclusively on external consultants. This not only builds long-term capacity but also reduces the financial and operational footprint of evaluations. The evaluation community can strengthen UN reform by championing a culture of shared accountability, collaborative learning, and strategic alignment—ensuring that evidence not only measures results but also enables the UN system to function more cohesively and effectively.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
On localising evidence: designing and using impact evaluations to advance the localization agenda requires the UN and its partners to shift power toward local actors, both in defining evaluation priorities and in generating the evidence itself. From my experience supporting MEL across county governments, local CSOs, and community structures, localization succeeds when evaluations are not externally imposed but co-created with those closest to the problem. This begins with jointly defining evaluation questions that reflect community priorities and county development realities, rather than donor-driven assumptions. It also involves investing in the capacities of county departments, local researchers, and grassroots organizations to participate meaningfully in evaluation design, data collection, analysis, and interpretation.
A particularly important opportunity is the intentional integration of citizen-generated data (CGD), that I have mentioned in a previous post, and locally collected datasets into evaluation frameworks. Many local CSOs like mine, community networks, and think tanks already generate rich and credible data on governance, resilience, gender dynamics, and PCVE indicators. When validated and aligned with national standards, these data sources can complement official statistics, strengthen SDG reporting, and ensure that evaluation findings reflect lived realities. This approach not only accelerates evidence availability but also embodies the principle of “nothing about us without us.”
Localizing evidence also means ensuring that findings are communicated back to communities in accessible formats and used in county-level decision forums such as CIDP reviews, sector working groups, and community dialogues. Furthermore, evaluations should include iterative sense-making sessions with local actors so they can directly shape programme adaptation. Ultimately, localization is not just about generating evidence locally—it is about shifting ownership, elevating local expertise, and ensuring that impact evaluations meaningfully inform policies and programmes at the levels where change is most felt.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
On bridging evidence with action: strengthening the link between impact evaluation findings and real-time decision-making requires the UN and its partners to embrace learning-oriented systems rather than compliance-driven evaluation cultures. From my experience leading MEL across multi-county PCVE and governance programmes, evidence becomes actionable only when it is intentionally embedded into programme management—through continuous feedback loops, co-created interpretation sessions, and adaptive planning processes. Structured learning forums where evaluators, implementers, government stakeholders, and community representatives jointly analyse emerging findings are particularly effective for translating insights into operational shifts.
In the PCVE space, real-time evidence use is especially critical due to the fast-evolving nature of threats and community dynamics. A recent example is my organisation’s Submission to the UN Special Rapporteur under the UNOCT call for inputs on definitions of terrorism and violent extremism, where we highlighted how grounding global guidance in locally generated evidence improves both relevance and uptake. This experience reaffirmed that when evaluation findings are aligned with practitioner insights and local contextual knowledge, global frameworks become more actionable on the ground.
Additionally, the UN can strengthen evidence uptake by integrating citizen-generated data (CGD) into SDG indicator ecosystems—particularly where local CSOs and think tanks already generate credible, validated datasets. Leveraging CGD not only accelerates access to real-time insights but also strengthens community ownership and localization.
Ultimately, bridging evidence and action requires mixed-method evaluations, rapid dissemination tools, psychological safety for honest learning, and a UN culture where evidence is viewed as a shared resource for collective decision-making, not merely an accountability requirement.
United States of America
Richard Tinsley
Professor Emeritus
Colorado State University
Posted on 29/11/2025
As requested, my contribution to the debate is reproduced below.
Before commenting on some of my concerns about evaluations, let me respond to Binod Chapagain's remarks. His comment that evaluations come too late in the project cycle to adjust the approach effectively matches one of my frequent concerns. The greatest contribution of evaluations lies in their ability to influence the design of future projects so that they better serve beneficiaries. Moreover, making adjustments to ongoing projects is usually very difficult. It is worth remembering that most large projects, especially externally funded ones with expatriate advisers, require more than two years of preparation and costs above one million dollars before the advisory team can be recruited, deployed and finally interact with beneficiaries to learn their real needs. With so much time and money already invested, nobody wants to discover that the project is not welcomed by the community it was meant to benefit. Furthermore, by the time the implementation team reaches the field, most of the innovations have already been decided and the staff, particularly expatriates, have been hired on that basis. This severely limits any major adjustment. Perhaps some detail can be modified, but no more. For this reason, evaluations are most valuable for guiding future projects.
Binod also noted that many evaluations are designed to document compliance with the initial project documents. These evaluations are usually internal and seek to reassure donors by showing that the project is successful, which is necessary to secure extensions and new projects. The reports must therefore be read with caution, since some simple calculations can expose their weaknesses. They often rely on the fact that the recipients of the reports, busy managing projects or designing new programmes, have no time to examine them critically.
I also noticed that Tamarie Magaisa mentioned the importance of setting targets for the evaluation criteria. I fully agree, since such targets make it possible to distinguish successful projects from unsuccessful ones. Without clear targets published from the outset, it is easy to declare successful projects that would in fact be judged failures by most criteria.
Let me now set out some of my concerns about how evaluations have failed to support smallholder farming communities by overlooking essential criteria or presenting as successful innovations that clearly are not.
The first is the failure to recognize that many proposed innovations require additional labour, when most smallholders are already under severe labour stress. The innovations may suit the physical environment well yet not be operationally feasible across the community. With fully manual operations, crop establishment takes about eight weeks, which precludes mid-season activities and reduces yields to the point of compromising household food security. An evaluation could easily identify this problem through direct observation or a few simple questions. Recognizing these constraints would allow projects to focus on improving operational capacity, for example by facilitating access to mechanization, as happened in Asia with the shift from buffalo to power tillers.
Another major problem is the calorie deficit of smallholder farmers. A full day of farm work requires more than 4,000 kcal, yet many producers have access to only 2,500 kcal, of which 2,000 go to basal metabolism. That leaves just 500 kcal for work, the equivalent of a few hours of effort. It is no surprise that crop establishment takes eight weeks. We have known for decades that smallholders are poor and even go hungry, but we rarely connect this to crop management. How can we help them out of poverty if we do not first address this reality?
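The energy arithmetic in this paragraph can be checked directly. The hours-of-labour figure below assumes roughly 300 kcal per hour for heavy field work, a rate added here for illustration and not taken from the text:

```python
# Back-of-envelope energy budget for a smallholder, using the figures from
# the text plus an assumed work intensity for heavy field labour.
daily_intake_kcal = 2500
basal_metabolism_kcal = 2000
work_kcal_per_hour = 300  # assumption, not from the text

available_for_work = daily_intake_kcal - basal_metabolism_kcal
work_hours = available_for_work / work_kcal_per_hour
print(f"Energy available for work: {available_for_work} kcal "
      f"(~{work_hours:.1f} hours of field labour)")
```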
Another concern is the excessive reliance on producer organizations for marketing. Evaluations have not flagged how limited their reach is: only about ten percent of farmers join, and even members sell most of their production to private traders. The organizations capture only around five percent of marketable production, an evident failure by commercial standards. Yet they have been presented as a success for more than thirty years. Why? What would it take for evaluations to propose more effective marketing mechanisms?
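The simple calculations that expose this weakness really are simple. A sketch using the approximate figures from the text (the members' share sold through the organization is an illustrative assumption consistent with the ~5% capture cited):

```python
# Share of marketable production flowing through producer organizations,
# using the approximate figures from the text. The 50% member share sold
# via the organization is an illustrative assumption.
membership_rate = 0.10      # ~10% of farmers are affiliated
share_sold_via_org = 0.50   # members still sell the rest to private traders

org_capture = membership_rate * share_sold_via_org
print(f"Producer organizations capture ~{org_capture:.0%} of marketable output")
```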
Attached to this reflection is a paper I prepared for a symposium at Colorado State University, in which I look back on more than fifty years of work with smallholder communities. The paper, illustrated and factual, develops the points raised here, including the long project preparation periods, the need to assess operational feasibility, the energy deficits, the importance of mechanization and the limitations of current evaluations.
I invite you to download and read it when you can. The link is:
https://agsci.colostate.edu/smallholderagriculture/wp-content/uploads/sites/77/2023/03/Reflections.pdf
Thank you.
Dick Tinsley
Professor Emeritus
Department of Soil and Crop Sciences
Colorado State University
Nigeria
Uzodinma Adirieje
National President
Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association
Published on 07/12/2025
Linking evidence to action is fundamental to ensuring that civil-society interventions produce measurable results and stay aligned with community needs. To strengthen the relationship between impact evaluation and real-time decision-making, the United Nations and its partners should prioritize institutionalizing the timely use of data through integrated monitoring, evaluation and learning systems. Embedding rapid feedback mechanisms, such as adaptive dashboards, mobile-based reporting and community scorecards, enables frontline staff and decision-makers to track progress, identify gaps and adjust strategies quickly.
Co-creating evidence with civil-society organizations and affected populations ensures that evaluations reflect local realities. When communities participate directly in defining indicators, generating data and interpreting results, the evidence becomes more relevant, more trustworthy and easier to use.
It is essential that the United Nations invest in strengthening the capacities of civil-society organizations so they can conduct rigorous yet agile evaluations, apply digital tools and communicate results clearly. This includes training in qualitative and quantitative methods, data visualization and scenario-based planning.
Finally, promoting collaborative learning platforms, such as UN-CSO evidence hubs and South-South knowledge-exchange spaces, helps turn evaluation results into shared action. When evidence is accessible and democratized, decision-making becomes faster, more inclusive and more accountable, improving development impact.
Dr Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA
Egypt
Ola Eltoukhi
Impact Evaluation Consultant
WFP
Published on 01/12/2025
Nigeria
Uzodinma Adirieje
National President
Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association
Published on 07/12/2025
Localizing evidence is fundamental to ensuring that impact evaluations truly inform development outcomes and strengthen civil society. To achieve this, evaluations should start with locally driven priority-setting, in which communities, traditional institutions, women's groups, youth networks and vulnerable populations jointly define what success means and which outcomes matter most. This grounds evaluations in lived realities rather than externally imposed frameworks.
Accordingly, evaluations should be designed around local capacities, combining scientific rigor with context-appropriate methods such as participatory action research, community scorecards, sentinel monitoring and rapid feedback mechanisms. Using simplified data tools, mobile technologies and culturally appropriate communication channels helps lower barriers to participation.
Strengthening civil society's technical capacity is essential. Training civil-society organizations in monitoring, evaluation and learning principles, data literacy and adaptive learning enables them to generate, interpret and use evidence effectively. Partnerships with universities and research institutes add credibility.
Finally, evaluation results must be translated into practical, locally relevant conclusions, using narratives, visual dashboards and feedback forums that connect with community actors. When evidence reflects local voices, respects cultural contexts and supports practical problem-solving, it becomes a powerful driver of inclusive, accountable and sustainable development.
Dr Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA
Nigeria
Uzodinma Adirieje
National President
Nigerian Association of Evaluators (NAE) and Afrihealth Optonet Association
Published on 07/12/2025
Supporting UN Reform – Global Impact Evaluation Forum 2025 (Civil Society Perspective)
Supporting the ongoing UN reform agenda—especially its focus on coordination, efficiency, and country-level impact—is essential for improving the lives of rural and poor urban populations in Africa and other resource-poor regions. The impact evaluation community can play a transformative role by strengthening coherence, accountability, and value for money across the UN system.
First, evaluators must champion harmonized measurement frameworks that reduce duplication and align UN agencies, governments, and civil society around shared indicators. Common frameworks improve comparability, promote joint planning, and ensure that results reflect community priorities rather than institutional silos.
Second, the community should support country-led, context-sensitive evaluations that amplify local voices. By embedding participatory approaches, engaging community-based organizations, and acknowledging traditional knowledge systems, evaluations become more relevant and actionable for marginalized populations.
Third, evaluators can foster cost-effective programming by generating evidence on what works, what does not, and why. This requires strengthening real-time monitoring, adaptive learning, and the use of digital tools (including AI) to track performance and inform timely course corrections. Clear communication of findings—through policy briefs, dashboards, and community dialogues—helps optimize resource allocation.
Fourth, the community should reinforce collaboration across UN agencies, promoting joint evaluations, shared data platforms, and cross-sector learning. These reduce transaction costs and enhance integrated delivery of health, climate, livelihood, and social protection services.
Ultimately, by advancing evidence-driven reforms, the impact evaluation community can help the UN deliver more coherent, inclusive, and cost-effective solutions—ensuring that rural and poor urban populations receive equitable attention and lasting development gains.
Dr. Uzodinma Adirieje, DDP, CMC, CMTF, FAHOA, FIMC, FIMS, FNAE, FASI, FSEE, FICSA
Mali
Elie COULIBALY
Technical MEAL Advisor
Catholic Relief Services
Published on 07/12/2025
Dear colleagues,
South Africa
Tamarie Magaisa
MUSANGEYACONSULTING
Published on 28/11/2025
I find the question about the link between impact evaluation findings and real-time decision-making very interesting. The question needs greater precision, especially regarding what is meant by real-time decision-making. Unless decisions are taken at the organizational level, it is usually difficult to use evaluation findings at the project level, since final evaluations are generally conducted at the end of the project.
However, when organizations take a position after evaluation findings are presented, there are several examples of concrete use of those findings, among them:
An important factor that weakens the link between evaluation findings and their use is when evaluations are conducted primarily to meet a compliance requirement rather than to promote genuine organizational learning or immediate operational change. Addressing this requires clear commitment from leadership.
Cambodia
Binod Chapagain
Technical Advisor
UNDP
Published on 02/12/2025
Connecting evidence: How can various UN agencies and partners, with diverse mandates, align evidence agendas in the humanitarian, development, and peace nexus?
Answer
Aligning evidence agendas across the Humanitarian-Development-Peace (HDP) nexus is one of the UN's most complex challenges. It requires merging three distinct "languages" of evidence: (a) the needs-based data of humanitarians, (b) the systems-based metrics of development actors, and (c) the political/risk-based analysis of peacebuilders.
Before sharing evidence, there should be agreement on what to measure, what to share, and what the data is for. Most important is to have a primary strategic tool for a collective outcome:
What it is: jointly agreed indicators and targets (e.g., "Reduce food insecurity in Region X by 30% over 5 years") that transcend any single agency's mandate.
How it aligns evidence: instead of each agency measuring its own output (e.g., WFP measuring food delivered vs. UNDP measuring seeds distributed), the agencies align their evidence to track the shared result.
To give an example: in Sudan (a protracted crisis), humanitarian data showed where people were hungry, but not why (the root causes). A joint evidence approach would be for UNDP and humanitarian actors to use shared "locality profiles" that map service gaps (development) alongside displacement figures (humanitarian) to target investments.
Jointly agreed indicators and targets will also help produce a joint evaluation report that nevertheless takes context into consideration.
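As a toy illustration of the idea, agency-specific outputs can be rolled up against one jointly agreed collective-outcome indicator rather than reported in silos. This is only a sketch: the figures, the baseline, and the naive aggregation rule are invented for illustration, not real programme data:

```python
# Toy sketch: three agencies report their own output metrics, which are
# then aggregated against one shared collective-outcome indicator
# (% reduction in food insecurity in a hypothetical "Region X").
# All numbers are invented; a real joint framework would deduplicate
# households and verify outcomes rather than just sum outputs.
from dataclasses import dataclass

@dataclass
class AgencyReport:
    agency: str
    households_reached: int  # agency-specific output measure

FOOD_INSECURE_HOUSEHOLDS_BASELINE = 10_000  # hypothetical baseline
TARGET_REDUCTION_PCT = 30                   # jointly agreed 5-year target

reports = [
    AgencyReport("WFP", 1_200),    # e.g. food assistance
    AgencyReport("UNDP", 800),     # e.g. livelihood support
    AgencyReport("UNICEF", 400),   # e.g. nutrition programmes
]

# Aggregate all outputs against the shared indicator.
reached = sum(r.households_reached for r in reports)
reduction_pct = 100 * reached / FOOD_INSECURE_HOUSEHOLDS_BASELINE

print(f"Collective reduction: {reduction_pct:.0f}% "
      f"(jointly agreed target: {TARGET_REDUCTION_PCT}%)")
```

The point of the sketch is structural: progress is expressed in one shared unit (reduction against a common baseline), so no single agency's output metric stands in for the collective outcome.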
South Africa
Sibongile Sithole
Evaluation Consultant
Published on 07/12/2025
My experience with the Three Horizons (3H) Initiative at the International Evaluation Academy (IEAc) has reinforced a key lesson: evaluation practice must become more transformational and shift away from conventional approaches that have long been shaped by Global North ideologies.
In exploring how evaluations can be made fit for purpose, several points on localising evidence came up.
Firstly, localising impact evaluations means shifting power towards local actors and local knowledge. Components such as evaluation questions should be shaped by communities, local governments, and indigenous knowledge holders.
Secondly, particularly in the Global South context, approaches such as storytelling, oral histories, communal dialogue, and participatory narrative methods should sit alongside quantitative and experimental designs. These reflect how many African communities make sense of change and offer culturally grounded insights that traditional methods often miss.
Last but not least, respect for cultural protocols, indigenous and community consent ensures that the evaluation serves the people it studies, not external agendas.
Using the Three Horizons framework while centring African and indigenous knowledge can help create evaluations that are culturally rooted, locally owned, and better aligned with the futures communities themselves envision.
Egypt
Ola Eltoukhi
Impact Evaluation Consultant
WFP
Published on 25/11/2025
Thank you all for joining this discussion. The discussion note raises several important points about how evidence-based partnerships can strengthen coherence, improve decision-making and support the broader UN reform agenda. I am opening this thread so that we can explore these ideas together.
We invite you to contribute your questions, reflections or practical examples drawn from your own experience. This week, we would like to hear more of your views on the most effective ways for the UN and its partners to strengthen the link between impact evaluation findings and real-time decision-making. If you have examples, we would be delighted to hear them.