RE: From Hindsight to Foresight: How Evaluation Can Become Future-Informed
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Rhode, thank you for this. The point about complementarity between foresight methods and predictive analytics is an important one that does not always get made explicitly. There is sometimes an implicit assumption that foresight is primarily qualitative and futures-oriented, while predictive modelling is the domain of harder data, but in practice strong evaluation benefits from both, and the logic of combining them is sound. Foresight helps us explore the uncertainty space, while predictive methods help us quantify likely trajectories where data allows.
Your point about data fragmentation is well taken and, I would argue, is itself a systemic issue that evaluation has a role in addressing. If evaluations systematically produced structured, accessible data as a matter of course, rather than siloed project-level reports, the longitudinal datasets that would support the kind of modelling you describe would gradually accumulate. National ownership, as you suggest, is one pathway. But evaluation commissioning practices within international organisations could also change in ways that support this. This seems like a concrete institutional reform worth exploring further in the discussion. I also find the AI and machine learning dimension worth tracking carefully. The capacity for cross-project learning at scale is genuinely new, and its implications for evaluation design are still being worked out.