Building on my earlier reflection, I think the case for future-informed evaluation becomes even more compelling when we look at it through an East African lens. Across the region, programmes are not just operating in “complex contexts”; they are operating in structurally shifting systems. Climate variability, mobility (including refugee dynamics), demographic pressure, and decentralised governance are not external risks; they are core features of the system itself. In such contexts, the limitation of retrospective evaluation is not only that it looks backward but also that it often assumes a level of system stability that simply does not exist.
For example:
- In arid and semi-arid areas, repeated drought cycles can completely reshape livelihood systems within the lifespan of a programme
- In refugee-hosting regions, policy shifts and funding flows can rapidly alter service delivery structures
- In devolved governance systems, priorities and implementation capacity can vary significantly across counties and over time
What this means in practice is that programme performance becomes highly sensitive to system shifts, making static evaluation benchmarks less meaningful. Taking this further into the Kenyan context, I’ve seen a recurring pattern: programmes are often designed with relatively fixed theories of change, but are implemented within highly dynamic county-level ecosystems that are shifting politically, institutionally, and socially. By the time an evaluation assesses “effectiveness” or “sustainability”, the underlying assumptions on which those criteria are based may no longer hold. This creates a subtle but important risk: we end up evaluating how well a programme performed in a past version of the system, rather than how well it is positioned for the system that is emerging.
To respond to this, I think future-informed evaluation in Kenya (and similar contexts) needs to move toward a few deliberate shifts:
- From static baselines to dynamic reference points: baselines should not be treated as fixed anchors, but revisited as systems evolve
- From endline judgement to continuous sensemaking: particularly at county level, where political economy and implementation realities shift rapidly
- From “attribution under control” to “contribution under uncertainty”: recognising that outcomes are increasingly co-produced by multiple interacting system actors
- Stronger integration of political economy and climate foresight into evaluation design: not as separate analyses, but as core to how we interpret findings
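To make the first shift concrete, here is a minimal, purely illustrative sketch (the function names and the indicator values are hypothetical, not drawn from any real programme). It contrasts judging an indicator against a baseline frozen at design time with judging it against a rolling reference point that is revisited as the system evolves:

```python
def fixed_baseline_gap(series, baseline):
    """Gap between each observation and a baseline frozen at design time."""
    return [round(x - baseline, 2) for x in series]

def dynamic_reference_gap(series, window=3):
    """Gap between each observation and the mean of the preceding `window`
    observations -- a simple stand-in for a periodically revisited
    reference point."""
    gaps = []
    for i, x in enumerate(series):
        # Use the preceding observations as the reference; fall back to the
        # first value when there is no history yet.
        ref_window = series[max(0, i - window):i] or [series[0]]
        ref = sum(ref_window) / len(ref_window)
        gaps.append(round(x - ref, 2))
    return gaps

# Hypothetical quarterly values for a livelihood indicator in a
# drought-affected county: the system itself shifts mid-programme.
indicator = [50.0, 48.0, 35.0, 33.0, 36.0, 38.0]

print(fixed_baseline_gap(indicator, baseline=50.0))
print(dynamic_reference_gap(indicator))
```

Against the fixed baseline, the programme still looks deeply in deficit at endline; against the rolling reference, the later quarters show recovery relative to the shifted system. Neither view is "the truth", but the contrast illustrates why a baseline set before a drought cycle can mislead a retrospective judgement.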
Ultimately, in contexts like Kenya, future-informed evaluation is not a methodological upgrade; it is a practical necessity for relevance. It allows evaluation to answer a slightly different but more useful question: not just “Did this work?” but “Will this continue to work, and under what conditions?”
I would be interested to hear from others working in devolved or climate-vulnerable systems: how are you adapting evaluation approaches to account for sub-national variability and rapidly shifting implementation contexts?
RE: From Hindsight to Foresight: How Evaluation Can Become Future-Informed
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 26/04/2026