- Over 8 years of experience in Monitoring, Evaluation, Research, and Learning (MERL) across East Africa
- Expertise in:
  - Quantitative, qualitative, and mixed-methods research
  - Quasi-experimental and participatory evaluation designs
  - Outcome harvesting and adaptive learning approaches
  - Gender-responsive monitoring and policy analysis
- Thematic areas:
  - Food security and rural development
  - Climate change adaptation and climate justice
  - Gender equality and women’s economic empowerment
  - Preventing and countering violent extremism (PCVE)
- Experience leading MERL frameworks for multi-country, multi-partner programs
- Conducted evaluations for major donors and development partners (World Bank, EU, DFID, GCERF)
- Recent focus areas include:
  - Evaluation of climate-smart agriculture and sustainable livelihoods interventions
  - Mapping local vulnerabilities to environmental and climate-related shocks
  - Strengthening MEL systems in contexts with low technology access and literacy
- Proficient in data collection and analysis tools, including NVivo, KoboToolbox, SPSS, Stata, Python, and OCR-based digitization methods
Posted on 08/12/2025
Connecting evidence across the Humanitarian–Development–Peace (HDP) Nexus requires intentional systems that move beyond sectoral silos toward holistic, context-responsive learning. In my experience at the intersection of PCVE, gender, and governance in county settings, valuable data exists across all three pillars, yet fragmentation prevents it from shaping a unified understanding of risk, resilience, and long-term community wellbeing.
One way to strengthen coherence is through shared learning frameworks built around harmonized indicators, aligned theories of change, and interoperable data systems. Humanitarian actors collecting early warning signals, development teams gathering socio-economic data, and peacebuilding practitioners tracking governance and cohesion trends can feed insights into a common evidence ecosystem. Joint sense-making platforms across UN agencies, county governments, and civil society further ensure interpretation and adaptation occur collectively.
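To make the interoperability point concrete, here is a minimal Python sketch of what a harmonized indicator record could look like; the schema, field names, and values are all hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

# One shared schema that humanitarian, development, and peace actors
# map their own exports onto, so evidence can be pooled and compared.
# (Hypothetical structure, not any agency's actual standard.)
@dataclass
class IndicatorRecord:
    pillar: str        # "humanitarian" | "development" | "peace"
    indicator_id: str  # code from a harmonized indicator list
    county: str
    period: date
    value: float
    source: str        # originating system, kept for traceability

def pool_by_indicator(records):
    """Group records from all three pillars under shared indicator codes."""
    pooled = defaultdict(list)
    for rec in records:
        pooled[(rec.indicator_id, rec.county)].append(rec)
    return pooled

# Early-warning, socio-economic, and cohesion data meeting in one
# evidence ecosystem, keyed by shared indicator code and county.
records = [
    IndicatorRecord("humanitarian", "EW-01", "Garissa", date(2025, 6, 1), 0.72, "early warning feed"),
    IndicatorRecord("development", "EW-01", "Garissa", date(2025, 6, 1), 0.65, "household survey"),
    IndicatorRecord("peace", "COH-03", "Garissa", date(2025, 6, 1), 0.58, "cohesion tracker"),
]
for key, recs in pool_by_indicator(records).items():
    print(key, [f"{r.pillar}: {r.value}" for r in recs])
```

The design choice that matters is the shared indicator code and the traceable source field: each pillar keeps its own collection systems but maps outputs onto one schema, so joint sense-making starts from comparable records.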
Supporting local CSOs to build capacity in the Core Humanitarian Standard (CHS) on Quality and Accountability is critical. When local actors understand and apply the CHS, their data becomes more reliable and compatible with UN and donor systems. Co-creating evaluation tools, monitoring frameworks, and learning agendas with these CSOs strengthens ownership and ensures evidence reflects local realities.
In African contexts, incorporating “Made in Africa Evaluation” (MAE) approaches, published and championed by our very own African Evaluation Association (AfEA), can further decolonize practice by integrating local values, culture (such as Ubuntu), and conditions. By combining MAE principles with the CHS, UN and donor systems can leverage contextually relevant methodologies, strengthen local capacity, and promote governance and accountability in a culturally grounded manner.
Finally, stronger Donor–CSO networking structures—learning hubs, joint review forums, and communities of practice—deepen understanding of scope, stabilize transitions of project ownership, and support long-term collaboration. Connecting evidence, capacities, and local approaches ensures HDP programs are coherent, context-sensitive, and impactful for the communities they serve.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 26/04/2026
Building on my earlier reflection, I think the case for future-informed evaluation becomes even more compelling when we look at it through an East African lens. Across the region, programmes are not just operating in “complex contexts”; they are operating in structurally shifting systems. Climate variability, mobility (including refugee dynamics), demographic pressure, and decentralised governance are not external risks; they are core features of the system itself. In such contexts, the limitation of retrospective evaluation is not only that it looks backward but also that it often assumes a level of system stability that simply does not exist.
In practice, this means that programme performance becomes highly sensitive to system shifts, making static evaluation benchmarks less meaningful. Taking this further into the Kenyan context, I’ve seen a recurring pattern: programmes are often designed with relatively fixed theories of change, but are implemented within county-level ecosystems that are highly dynamic politically, institutionally, and socially. By the time an evaluation assesses “effectiveness” or “sustainability,” the underlying assumptions on which those criteria are based may no longer hold. This creates a subtle but important risk: we end up evaluating how well a programme performed in a past version of the system, rather than how well it is positioned for the system that is emerging.
To respond to this, I think future-informed evaluation in Kenya (and similar contexts) needs to make a few deliberate shifts:
- Treating baselines not as fixed anchors, but as living references revisited as systems evolve, particularly at county level, where political economy and implementation realities shift rapidly (a small illustrative sketch follows below)
- Recognising that outcomes are increasingly co-produced by multiple interacting system actors
- Embedding foresight and scenario thinking not as separate analyses, but as core to how we interpret findings
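On the first shift, here is a minimal sketch of the “living baseline” idea, assuming a trailing-window mean as the moving reference; the readings and window size are hypothetical:

```python
import statistics

def rolling_baseline(series, window=4):
    """Recompute the baseline as a trailing-window mean rather than a
    fixed anchor, so each assessment compares against the evolving system."""
    baselines = []
    for i in range(len(series)):
        past = series[max(0, i - window):i] or series[:1]  # fall back to first reading
        baselines.append(statistics.mean(past))
    return baselines

# Hypothetical county-level outcome readings over successive periods
outcomes = [40, 42, 41, 47, 52, 55, 61, 60]
for t, (observed, baseline) in enumerate(zip(outcomes, rolling_baseline(outcomes))):
    print(f"period {t}: observed {observed} vs rolling baseline {baseline:.1f}")
```

The point is not the particular statistic but the discipline it encodes: each period’s performance is judged against where the system has recently been, not against where it stood at design.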
Ultimately, in contexts like Kenya, future-informed evaluation is not a methodological upgrade; it is a practical necessity for relevance. It allows evaluation to answer a slightly different but more useful question: not just “Did this work?” but “Will this continue to work, and under what conditions?”
I would be interested to hear from others working in devolved or climate-vulnerable systems: how are you adapting evaluation approaches to account for sub-national variability and rapidly shifting implementation contexts?
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 26/04/2026
One reflection I would add to this discussion is that the “hindsight vs foresight” framing is useful, but perhaps still incomplete. From practice, the deeper issue is not only that evaluation is retrospective, but that it is often temporally rigid in systems that are inherently adaptive. In many of the programmes I’ve worked on, particularly in fragile and climate-affected contexts, you can have an intervention that is highly “effective” at midline, but fundamentally misaligned with the direction the system is moving. By the time endline evaluation happens, the system has shifted, and the findings, while technically valid, have already lost decision-making value. This aligns with what the paper describes as a temporal mismatch between evaluation and reality.

What this suggests is that integrating foresight is not just about adding tools like scenario planning or horizon scanning. It is about reconfiguring when and how evaluative judgement happens.
A few practical shifts that I have found useful:
- Timing evaluative judgement around live decision cycles rather than fixed timelines (e.g., linking MEL systems with real-time decision points, not just reporting milestones)
- Building scenario-based pathways into theories of change (this helps avoid reinforcing linear assumptions in non-linear systems)
In my experience, combining quantitative trend analysis with foresight is underutilized: trend analysis helps anchor plausibility, while foresight expands the space of what we consider possible.
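A minimal sketch of what this pairing could look like, using an ordinary least-squares trend as the plausibility anchor and stylized scenario multipliers to widen the space of futures; the history, horizon, and scenario names are all hypothetical:

```python
import statistics

def linear_trend(series):
    """Ordinary least-squares slope and intercept: the plausibility anchor."""
    n = len(series)
    x_bar, y_bar = statistics.mean(range(n)), statistics.mean(series)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(series)) / \
            sum((x - x_bar) ** 2 for x in range(n))
    return slope, y_bar - slope * x_bar

def scenario_projections(series, horizon, scenarios):
    """Extend the fitted trend under multipliers that damp or amplify it,
    expanding the range of futures considered beyond a single extrapolation."""
    slope, intercept = linear_trend(series)
    last_fitted = intercept + slope * (len(series) - 1)
    return {name: [last_fitted + slope * m * h for h in range(1, horizon + 1)]
            for name, m in scenarios.items()}

# Hypothetical indicator history and three stylized futures
history = [50, 53, 55, 59, 62]
futures = scenario_projections(history, horizon=3,
                               scenarios={"continuity": 1.0, "shock": 0.4, "acceleration": 1.5})
for name, path in futures.items():
    print(name, [round(v, 1) for v in path])
```

Real foresight work would of course use richer scenario narratives than simple multipliers; the sketch only shows how a quantitative anchor and a widened scenario space can sit in one analysis.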
I also want to echo a point raised earlier in the discussion: the constraint is not primarily methodological, but institutional and cultural. As long as evaluation is commissioned primarily for accountability, even the most sophisticated foresight tools risk being absorbed into compliance logic. So perhaps the shift is less about moving from hindsight to foresight, and more about moving from evaluation as judgement → evaluation as navigation under uncertainty.
Perhaps one more question to pose: how do we redesign evaluation commissioning and incentives so that future-informed insights are not just produced, but actually used in decision-making cycles?