
RE: From Hindsight to Foresight: How Evaluation Can Become Future-Informed

Dennis Ngumi Wangombe

Kenya

MEL Specialist

CHRIPS

Posted on 26/04/2026

One reflection I would add to this discussion is that the “hindsight vs foresight” framing is useful, but perhaps still incomplete. From practice, the deeper issue is not only that evaluation is retrospective, but that it is often temporally rigid in systems that are inherently adaptive. In many of the programmes I’ve worked on, particularly in fragile and climate-affected contexts, an intervention can be highly “effective” at midline yet fundamentally misaligned with the direction the system is moving. By the time the endline evaluation happens, the system has shifted, and the findings, while technically valid, have already lost decision-making value. This aligns with what the paper describes as a temporal mismatch between evaluation and reality.

What this suggests is that integrating foresight is not just about adding tools like scenario planning or horizon scanning. It is about reconfiguring when and how evaluative judgement happens.
A few practical shifts that I have found useful:

  • Embedding evaluative thinking into adaptive cycles rather than discrete evaluation moments
    (e.g., linking MEL systems with real-time decision points, not just reporting milestones)
  • Testing theories of change against multiple plausible futures, not just validating them against past evidence
    (this helps avoid reinforcing linear assumptions in non-linear systems)
  • Reframing key criteria
    • Relevance → not just alignment with current needs, but fitness under plausible future conditions
    • Sustainability → not durability of results, but resilience under stress and change
  • Blending predictive analytics with foresight
    In my experience, this combination is underutilized: quantitative trend analysis helps anchor plausibility, while foresight expands the space of what we consider possible.
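To make that last blend concrete, here is a minimal sketch of what it might look like in practice. All of the data, scenario labels, and multipliers below are hypothetical, invented purely for illustration: a simple least-squares trend anchors the projection in observed evidence, while foresight scenarios widen it into a range of plausible futures instead of a single extrapolation.

```python
# Hypothetical midline indicator values over six reporting periods
# (e.g., % of households meeting a food-security threshold).
observed = [42.0, 44.5, 45.1, 47.0, 48.2, 49.5]

def least_squares_trend(ys):
    """Return (intercept, slope) of a least-squares line through ys,
    with the x-axis being the period index 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return mean_y - slope * mean_x, slope

intercept, slope = least_squares_trend(observed)

# Foresight scenarios expressed as multipliers on the historical slope.
# The labels and values are assumptions, stand-ins for what a scenario
# workshop or horizon-scanning exercise would actually produce.
scenarios = {"continuation": 1.0, "shock": -0.5, "acceleration": 1.5}

horizon = len(observed) + 4  # project four periods beyond the data
projections = {
    name: round(intercept + slope * mult * (horizon - 1), 1)
    for name, mult in scenarios.items()
}
print(projections)  # one plausible endpoint per scenario, not one forecast
```

The point of the sketch is the shape of the output: a range of scenario-conditioned endpoints a decision-maker can weigh, rather than a single trend line that quietly assumes the system keeps behaving as it has.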

I also want to echo a point raised earlier in the discussion: the constraint is not primarily methodological, but institutional and cultural. As long as evaluation is commissioned primarily for accountability, even the most sophisticated foresight tools risk being absorbed into a compliance logic. So perhaps the shift is less about moving from hindsight to foresight, and more about moving from evaluation as judgement to evaluation as navigation under uncertainty.
A question I would also pose: how do we redesign evaluation commissioning and incentives so that future-informed insights are not just produced, but actually used in decision-making cycles?