- Over 8 years of experience in Monitoring, Evaluation, Research, and Learning (MERL) across East Africa
- Expertise in:
  - Quantitative, qualitative, and mixed-methods research
  - Quasi-experimental and participatory evaluation designs
  - Outcome harvesting and adaptive learning approaches
  - Gender-responsive monitoring and policy analysis
- Thematic areas:
  - Food security and rural development
  - Climate change adaptation and climate justice
  - Gender equality and women’s economic empowerment
  - Preventing and countering violent extremism (PCVE)
- Experience leading MERL frameworks for multi-country, multi-partner programs
- Conducted evaluations for major donors and development partners (World Bank, EU, DFID, GCERF)
- Recent focus areas include:
  - Evaluation of climate-smart agriculture and sustainable livelihoods interventions
  - Mapping local vulnerabilities to environmental and climate-related shocks
  - Strengthening MEL systems in contexts with low technology access and literacy
- Proficient in data collection and analysis tools, including NVivo, KoboToolbox, SPSS, Stata, Python, and OCR-based digitization methods
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
Connecting evidence across the Humanitarian–Development–Peace (HDP) nexus requires intentional systems that move beyond sectoral silos toward holistic, context-responsive learning. In my experience at the intersection of PCVE, gender, and governance in county settings, valuable data exists across all three pillars, yet fragmentation prevents it from shaping a unified understanding of risk, resilience, and long-term community wellbeing.
One way to strengthen coherence is through shared learning frameworks built around harmonized indicators, aligned theories of change, and interoperable data systems. Humanitarian actors collecting early warning signals, development teams gathering socio-economic data, and peacebuilding practitioners tracking governance and cohesion trends can feed insights into a common evidence ecosystem. Joint sense-making platforms across UN agencies, county governments, and civil society further ensure interpretation and adaptation occur collectively.
Supporting local CSOs to build capacity in the Core Humanitarian Standard on Quality and Accountability (CHS) is critical. When local actors understand and apply the CHS, their data becomes more reliable and compatible with UN and donor systems. Co-creating evaluation tools, monitoring frameworks, and learning agendas with these CSOs strengthens ownership and ensures evidence reflects local realities.
In African contexts, incorporating “Made in Africa Evaluation” (MAE) approaches, published and championed by our very own African Evaluation Association (AfrEA), can further decolonize practice by integrating local values, culture (such as Ubuntu), and conditions. By combining MAE principles with the CHS, UN and donor systems can leverage contextually relevant methodologies, strengthen local capacity, and promote governance and accountability in a culturally grounded manner.
Finally, stronger donor–CSO networking structures, such as learning hubs, joint review forums, and communities of practice, deepen shared understanding of scope, stabilize transitions of project ownership, and support long-term collaboration. Connecting evidence, capacities, and local approaches ensures HDP programs are coherent, context-sensitive, and impactful for the communities they serve.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
The impact evaluation community can play a critical role in advancing the UN’s reform agenda, particularly the goals of coherence, cost-effectiveness, and system-wide alignment. In my work across multi-partner consortia and county-level government structures, I have seen how fragmentation in evaluation approaches often leads to duplication, inconsistent standards, and heavy reporting burdens for local actors. Supporting UN reform begins with harmonizing evaluation frameworks across agencies so that they draw from shared theories of change, common indicators, and compatible data systems. This reduces transaction costs for implementing partners and allows evidence to be aggregated more systematically across the humanitarian–development–peace (HDP) nexus.
The evaluation community can also contribute by promoting joint or multi-agency evaluations, particularly for cross-cutting thematic areas like PCVE, gender equality, and resilience. Joint evaluations not only save resources but also produce findings that are more holistic and better suited to inter-agency coordination. Additionally, evaluation teams can support reform by emphasizing adaptive, utilization-focused methodologies that produce real-time insights and decision-relevant evidence, rather than lengthy reports that come too late to influence programming.
Cost-effectiveness can be further enhanced by investing in local evaluators, research institutions, and government systems rather than relying exclusively on external consultants. This not only builds long-term capacity but also reduces the financial and operational footprint of evaluations. The evaluation community can strengthen UN reform by championing a culture of shared accountability, collaborative learning, and strategic alignment—ensuring that evidence not only measures results but also enables the UN system to function more cohesively and effectively.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
On localizing evidence: designing and using impact evaluations to advance the localization agenda requires the UN and its partners to shift power toward local actors, both in defining evaluation priorities and in generating the evidence itself. From my experience supporting MEL across county governments, local CSOs, and community structures, localization succeeds when evaluations are not externally imposed but co-created with those closest to the problem. This begins with jointly defining evaluation questions that reflect community priorities and county development realities, rather than donor-driven assumptions. It also involves investing in the capacities of county departments, local researchers, and grassroots organizations to participate meaningfully in evaluation design, data collection, analysis, and interpretation.
A particularly important opportunity is the intentional integration of citizen-generated data (CGD), which I mentioned in a previous post, and locally collected datasets into evaluation frameworks. Many local CSOs like mine, community networks, and think tanks already generate rich and credible data on governance, resilience, gender dynamics, and PCVE indicators. When validated and aligned with national standards, these data sources can complement official statistics, strengthen SDG reporting, and ensure that evaluation findings reflect lived realities. This approach not only accelerates evidence availability but also embodies the principle of “nothing about us without us.”
Localizing evidence also means ensuring that findings are communicated back to communities in accessible formats and used in county-level decision forums such as County Integrated Development Plan (CIDP) reviews, sector working groups, and community dialogues. Furthermore, evaluations should include iterative sense-making sessions with local actors so they can directly shape programme adaptation. Ultimately, localization is not just about generating evidence locally: it is about shifting ownership, elevating local expertise, and ensuring that impact evaluations meaningfully inform policies and programmes at the levels where change is most felt.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 08/12/2025
On bridging evidence with action: strengthening the link between impact evaluation findings and real-time decision-making requires the UN and its partners to embrace learning-oriented systems rather than compliance-driven evaluation cultures. From my experience leading MEL across multi-county PCVE and governance programmes, evidence becomes actionable only when it is intentionally embedded into programme management through continuous feedback loops, co-created interpretation sessions, and adaptive planning processes. Structured learning forums where evaluators, implementers, government stakeholders, and community representatives jointly analyse emerging findings are particularly effective for translating insights into operational shifts.
In the PCVE space, real-time evidence use is especially critical due to the fast-evolving nature of threats and community dynamics. A recent example is my organisation’s Submission to the UN Special Rapporteur under the UNOCT call for inputs on definitions of terrorism and violent extremism, where we highlighted how grounding global guidance in locally generated evidence improves both relevance and uptake. This experience reaffirmed that when evaluation findings are aligned with practitioner insights and local contextual knowledge, global frameworks become more actionable on the ground.
Additionally, the UN can strengthen evidence uptake by integrating citizen-generated data (CGD) into SDG indicator ecosystems—particularly where local CSOs and think tanks already generate credible, validated datasets. Leveraging CGD not only accelerates access to real-time insights but also strengthens community ownership and localization.
Ultimately, bridging evidence and action requires mixed-method evaluations, rapid dissemination tools, psychological safety for honest learning, and a UN culture where evidence is viewed as a shared resource for collective decision-making, not merely an accountability requirement.