The online discussion will remain open for contributions until 30 April 2026!
Background and Rationale
Food security, environmental, and agricultural development programmes increasingly operate in volatile, uncertain, and complex contexts. Climate disruption, ecosystem degradation, shifting geopolitical conditions, and cascading crises are no longer background noise. They shape the environment in which these programmes are designed and implemented. Evaluation in these sectors often stays focused on retrospective accountability, measuring past performance against fixed objectives, even as operating conditions keep shifting.
This temporal mismatch has practical consequences. When evaluations judge relevance, effectiveness, and sustainability against the conditions that existed at programme design, they can produce findings that are accurate about the past but less useful for guiding future decisions and navigating change. Theory of Change processes often mirror the same limitation: a static view of context, one that does not account for the plausible futures that will determine whether current investments ultimately succeed or fail.
Strategic foresight brings forward-looking approaches that can strengthen evaluation practice. Methods such as horizon scanning, scenario planning, Futures Triangle, the Three Horizons framework, and Causal Layered Analysis help evaluators look beyond past performance and consider how programmes might perform under different future conditions. These foresight tools and frameworks can enrich evaluation at every stage, from scoping and design through to learning and use. When used alongside evaluation, they support more anticipatory governance, enabling decisions that draw on evidence, while remaining attentive to uncertainty and long-term change.
Momentum for integrating foresight and evaluation is already visible across various sectors. WFP's Anticipatory Action programmes, for example, have introduced foresight-informed approaches into monitoring and evaluation frameworks. At the same time, organisations such as the GEF, CGIAR, and FAO are exploring how evaluation can better assess long-term resilience and systemic impacts in environmental and agricultural investments. These developments are also prompting broader reflection within the evaluation community, including renewed interest in how the OECD-DAC criteria might evolve, shifting from measuring alignment with past priorities toward assessing prospective relevance and robustness across plausible future scenarios. Despite this momentum, practical guidance for evaluators remains limited. Few evaluators have received formal exposure to foresight methods, and foresight practitioners are rarely trained in evaluation. The tools, case examples, and community of practice needed to connect these fields are not yet well established.
Discussion Purpose
This online discussion will examine how foresight methods can be integrated into evaluation practice in food security, environmental, and agricultural contexts. Drawing on practitioners’ experiences, real-world examples, and optional readings, the discussion will highlight practical insights that evaluators can use to make their work responsive to uncertainty and more useful for forward-looking decision-making.
Discussion Objectives
- To introduce key foresight concepts and tools, including horizon scanning, scenario planning, Causal Layered Analysis, and the Three Horizons framework, and to explore how they can be applied within evaluation processes.
- To examine how foresight-informed evaluation can strengthen assessments of relevance, sustainability, and systemic impact in food security, environmental, and agricultural programmes.
- To share concrete examples of foresight and evaluation integration from across the sector, including anticipatory action, climate resilience programming, and theory of change processes.
- To identify practical entry points for evaluators to begin incorporating foresight perspectives into their work, regardless of institutional context or resource constraints.
Guiding Questions
- In contexts of climate uncertainty, rapid environmental change, and shifting geopolitical realities, where have you seen the limits of retrospective evaluation? How has this affected the use of findings?
- What foresight tools or methods have you encountered, or used personally, in your evaluation practice? What made them useful or difficult to apply?
- How might interpreting the DAC criteria (such as relevance and sustainability) through a foresight lens change what we measure, how we measure it, and how we make recommendations?
- Where do you see opportunities for integrating foresight and evaluation in food security, environmental, and agricultural contexts?
- What skills, resources, and institutional changes would be needed to make foresight a regular part of evaluation design and commissioning?
Discussion Readings
Week 1: Introductory discussion on the theme and exploration of the guiding questions.
Week 2: Examine transformative foresight for the transformational imperative, via a forthcoming article in the Journal of MultiDisciplinary Evaluation, edited by Scott Chaplowe.
Week 3: Discuss “Fusing foresight and futures thinking for a new transformative evaluation paradigm” by Rose Thompson Coon, Katri Vataja, and Pinja Parkkonen (in New Directions for Evaluation, Summer 2024, Issue 183, pages 91-101)
Week 4: Explore Quality Criteria for Food Systems Foresight in Africa: A practitioner’s guide for commissioning, facilitating and evaluating foresight, a recent guide written by Katindi Sivi and launched by the Forum for Agricultural Research in Africa, in partnership with Foresight4Food, University of Oxford, and the International Development Research Centre.
This discussion is now closed. Please contact info@evalforearth.org for any further information.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 30/03/2026
Welcome to Week 2
I want to build on last week’s conversation with a short reflection from Scott Chaplowe and Joyce Mukoma’s Evaluation and the Transformational Imperative (see attachment). Their core argument is simple but important. The scale of today’s crises means evaluation cannot remain tied to business-as-usual thinking if it is going to support the wider transformational agenda reflected in the SDGs. They define transformational change not as incremental improvement, but as deep, systemic change in how a system functions.
What I find especially useful is that the article does not present transformation as a single new method. Instead, it asks what is holding evaluation back. It points to four familiar fixations: 1) project fixation, 2) temporal fixation, 3) quantitative fixation, and 4) accountability fixation. In other words, evaluation too often stays trapped inside linear projects, short funding timelines, metric-heavy logics, and compliance-oriented accountability.
Scott and Joyce then suggest several pathways forward…complexity-adaptive methods, principle-focused evaluation, new transformational criteria, data science, and alternative paradigms, including Indigenous and feminist perspectives.
So for this week, I’d like to ask, “If evaluation is to contribute to transformation, what exactly must change first…our methods, our criteria, our institutions, our underlying mindset, or something else?”
Nigeria
Dr. Uzodinma Akujekwe Adirieje
CEO
Afrihealth Optonet Association (AHOA) - CSOs Network
Posted on 30/03/2026
FROM HINDSIGHT TO FORESIGHT: EXPERIENCE AT AFRIHEALTH OPTONET ASSOCIATION
by Dr. Uzodinma Adirieje
From hindsight to foresight, our experience at Afrihealth Optonet Association (AHOA) demonstrates that evaluation is most valuable when it moves beyond retrospective accountability to actively shaping future decisions under uncertainty. Four practical insights stand out.
Embed adaptive learning loops into programme design:
In Afrihealth’s health systems and climate-linked interventions, periodic reviews were not treated as endline exercises but as real-time checkpoints. Evaluators facilitated rapid feedback cycles - combining routine data, beneficiary insights, and contextual signals (e.g., policy shifts such as those emerging from COP29 in Baku, and extreme climate events) - to inform mid-course corrections. This approach ensures programmes remain relevant even as conditions change.
Integrate mixed-methods evidence for anticipatory analysis:
Quantitative indicators alone often lag behind emerging realities. Afrihealth’s evaluations paired service delivery data with qualitative intelligence from communities and frontline workers. For example, shifts in health-seeking behaviour during economic stress were detected early through interviews and focus groups, enabling proactive adjustments in outreach and resource allocation.
Align evaluation questions with decision horizons:
Rather than asking only “what worked,” Afrihealth reframed inquiries toward “what is likely to work next, for whom, and under what conditions.” Scenario-building and contribution analysis were used to explore plausible futures, particularly in programmes intersecting with climate variability and public health risks. This made findings directly usable for strategic planning, not just reporting.
Stakeholder co-creation:
By engaging policymakers, implementers, and communities in defining evaluation priorities, Afrihealth ensured that findings addressed real decision needs. This strengthened ownership and increased the likelihood that recommendations were acted upon.
Optional readings in developmental evaluation and adaptive management reinforce these practices, emphasising flexibility, systems thinking, and learning-oriented accountability.
This way, evaluators can enhance relevance in uncertain contexts by institutionalising real-time learning, triangulating diverse evidence, and orienting evaluations toward future-facing decisions.
Dr. Uzodinma Adirieje is a former National President of the Nigerian Association of Evaluators (NAE). He is a seasoned evaluator, health economist, and civil society leader who was the co-consultant to drafting Nigeria’s National M&E Policy. He led SDG3 evaluation synthesis, participated in national SDG 3 and SDG 4 evaluations, and provided M&E training and mentorship, advancing evidence-based, forward-looking development practice.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 28/03/2026
Summary - Week 1
This first week of discussion has made one thing very clear...the call for future-informed evaluation is not coming from a single camp or methodology. It is emerging from lived frustration across practice. Conny Rietdorf reminded us that the “L” in MEL is often the first casualty when evaluation becomes a compliance exercise rather than a space for reflection and learning. Carlos Tarazona then pushed the conversation further through FAO’s One Health evaluation, showing how retrospective analysis can be solid on the past and still insufficient for the futures now emerging. His reframing of relevance as future fitness, sustainability as resilience under change, and coherence as the ability to work across systems gave us a powerful language for thinking differently.
Other contributors sharpened the picture. Serdar Bayryyev highlighted the institutional conditions needed for this shift (i.e., capacity, practical frameworks, and organisational change). Silva Ferretti challenged us not to treat foresight as a technical fix for a deeper cultural problem, asking the more difficult question "What is evaluation for?" Alexis Adébayo grounded the discussion in climate reality, where external shocks can destabilise attribution and weaken the usefulness of findings. Rhode Early Charles reminded us that predictive analytics and foresight are not rivals but complements, especially if we can overcome fragmented data systems. Emmanuel Erick Igiha and Amy Mara brought us back to purpose. Evaluation, at its best, should help people improve, adapt, and navigate what comes next.
So the core thread emerging from Week 1 is this: the move from hindsight to foresight is methodological, yes, but also institutional and deeply cultural. It asks not only for new tools, but for a different orientation to evidence, uncertainty, learning, and change. That feels like an important place to begin.
Looking ahead: In the coming week, we will turn to the transformational imperative and examine transformative foresight through a forthcoming article in the Journal of MultiDisciplinary Evaluation. If you are not familiar with the transformational imperative within evaluation ecosystems, I have attached a short four-page brief written by Scott Chaplowe and Joyce Mukoma.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Amy, thank you for this. You've laid out the landscape beautifully. What strikes me most about your framing is the word transformation. You're not describing a tweak to evaluation methodology… you're describing a fundamental shift in what evaluation is for. Moving from verdict to navigation...from accountability to anticipation.
The four pillars you identify are each compelling on their own. But I think what makes them powerful is how they reinforce each other. Scenario analysis without stakeholder participation risks becoming a technical exercise disconnected from lived realities. Real-time monitoring without a learning culture just generates data that no one acts on. Together, though, they start to describe something that feels genuinely different: evaluation as an ongoing, living conversation with the future.
One question your post raises for me: Who drives this transformation? Evaluators can advocate for forward-looking approaches, but much depends on whether commissioners and decision-makers are willing to fund and use them. In your experience, where has the appetite for prospective evaluation been strongest, and what has made the difference?
Your contribution also makes a nice segue to our focus next week on the transformational imperative. Really glad you're part of this discussion.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Emmanuel, the framing you offer, moving from judging the past to enabling improvement for the future, captures the spirit of what forward-looking evaluation aspires to be. And the question you end with is exactly the right one to keep asking throughout this discussion.
One example that comes to mind is WFP's Anticipatory Action work, where evaluation has been used not only to assess past performance but to refine the trigger systems and scenario models that activate pre-emptive responses before crises fully unfold. That strikes me as a case where evaluation genuinely shaped future action rather than simply recording past performance. But I think the deeper insight in your contribution is about orientation and intent…a forward-looking evaluation can be conducted with largely conventional methods, if the questions it asks and the way findings are framed consistently point toward adaptation and improvement rather than verdict. That cultural shift may be as important as any methodological innovation. What has enabled that orientation in the contexts where you have seen it work?
I just completed a large foresight-informed evaluation for UNICEF, but it is too early to determine what difference it may make. Ask me in 2028!
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Rhode, thank you for this. The point about complementarity between foresight methods and predictive analytics is an important one that does not always get made explicitly. There is sometimes an implicit assumption that foresight is primarily qualitative and futures-oriented, while predictive modelling is the domain of harder data, but in practice strong evaluation benefits from both, and the logic of combining them is sound. Foresight helps us explore the uncertainty space, while predictive methods help us quantify likely trajectories where data allows.
Your point about data fragmentation is well taken and, I would argue, is itself a systemic issue that evaluation has a role in addressing. If evaluations systematically produced structured, accessible data as a matter of course, rather than siloed project-level reports, the longitudinal datasets that would support the kind of modelling you describe would gradually accumulate. National ownership, as you suggest, is one pathway. But evaluation commissioning practices within international organisations could also change in ways that support this. This seems like a concrete institutional reform worth exploring further in the discussion. I also find the AI and machine learning dimension worth tracking carefully. The capacity for cross-project learning at scale is genuinely new, and its implications for evaluation design are still being worked out.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Thank you, Alexis, for your contribution. The example you raise…costly infrastructure rendered ineffective or destroyed by extreme climate events…puts the attribution problem in very concrete terms. It is a scenario that exposes a fundamental limitation of the logic model at the heart of most retrospective evaluation. If the causal chain is severed by an external shock, the evaluation framework itself struggles to make sense of what happened, let alone offer useful guidance for what should come next.
This connects to a broader issue in evaluation methodology, which is that our standard frameworks often assume a degree of stability in the operating environment that increasingly does not hold in climate-affected contexts. Integrated landscape management is a particularly interesting domain here, because it already operates with long time horizons and complex systems, which arguably makes it one of the areas where foresight-informed evaluation is not a luxury but a necessity. I am curious whether you have seen attempts to build scenario-based thinking into evaluation design in the contexts where you work, even informally, and whether those efforts have helped evaluators and stakeholders navigate the attribution challenges you describe.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Silva, this is a provocation worth examining throughout this discussion, and I think it contributes something that we can continue to unpack in the coming weeks. You are right that foresight tools can simply be recruited into the service of compliance, i.e., anticipating futures to confirm a Theory of Change rather than genuinely interrogating it. That would be a sophisticated version of the same problem.
The prior question you raise, “what is evaluation for?”, is one I believe this community needs to grapple with more directly. My own sense is that the shift from hindsight to foresight is not just technical, as it also requires a different relationship between evaluators, commissioners, and the programmes being evaluated. If evaluation is purely confirmatory, then foresight becomes window dressing. But if there is institutional appetite for evaluation as genuine exploration, then foresight tools, particularly when used in the participatory way you describe, can open up the kind of reflective space that challenges rather than reinforces prevailing assumptions. Keep raising these questions, Silva!
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Serdar, thank you for your contribution. The examples you have drawn from WFP, GEF, CGIAR, and FAO's own foresight work are directly relevant to the discussion. It is encouraging to see these referenced alongside each other, since it reinforces that momentum for integrating foresight and evaluation is genuine, even if practical guidance remains thin. Your three-part framework (capacity building, practical frameworks, and institutional change) reflects a sequence that I think is right. Technical tools alone will not shift practice if the institutional incentives continue to reward retrospective accountability above all else. Evaluation mandates, commissioning processes, and the expectations set by donors are all part of the system that needs to shift. That is why I included the question about what institutional changes would be needed, for it seems to me that this is where the real bottleneck sits, not in the availability of foresight methods per se.
The FAO futures of food and agriculture scenarios report you reference is a valuable resource, and it would be interesting to hear from colleagues whether and how evaluators have drawn on those scenarios in their own work, either for framing evaluations or for contextualising findings. Looking forward to continued exchange.
Italy
Carlos Tarazona
Senior Evaluation Officer
FAO
Posted on 27/03/2026
Steve, thank you for this thoughtful engagement and for bringing in Michael Quinn Patton’s work, which I also find highly relevant to this discussion.
I very much agree with your reading that a foresight lens does not necessarily require a parallel methodology, but can be embedded in how we interpret and apply existing frameworks. In that sense, your point about a “more honest application” of the DAC criteria resonates strongly with my own experience, particularly in contexts like One Health, climate change adaptation, and agrifood system transformation, where systems are evolving even as we evaluate them.
At the same time, Silva’s intervention pushes this one step further in an important way. I share the concern that if evaluation remains anchored in a compliance-oriented logic, even well-integrated foresight risks being instrumentalised: used to anticipate within predefined boundaries rather than to genuinely question them. The distinction she draws between evaluation as verification versus exploration is, I think, exactly right.
In my view, however, the real constraint on integrating foresight is often not at the level of tools or criteria but much earlier, at the stage of evaluation conceptualisation.
In the FAO One Health case, the ability to incorporate a foresight perspective was enabled by an in-depth preliminary analysis and literature review conducted at the design stage. Without that early investment, it would have been significantly harder to introduce a meaningful forward-looking dimension later on. By the time questions, scope, and methods are fixed, the evaluation architecture is already path-dependent—ironically mirroring the very dynamics we are trying to assess.
So perhaps the discussion can be nuanced in three directions:
- If foresight is to be more than an add-on, it needs to be designed in from the outset, not retrofitted.
- This has practical implications for commissioners: if we are serious about developmental or formative approaches, foresight needs to be reflected in how evaluations are commissioned and framed from the start.
- In that respect, the approach we often use at FAO, a question-driven, utilization-focused design (guided but not constrained by the OECD DAC criteria), does offer some flexibility. It allows us, at least in principle, to embed forward-looking dimensions early on, provided that the conceptual groundwork is strong enough.
So perhaps the challenge is not only to rethink criteria or embrace foresight tools, but also to shift attention upstream: to how evaluations are commissioned, framed, and intellectually grounded before they even begin.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 26/03/2026
Carlos, thank you for sharing this example. The FAO One Health evaluation is a genuinely instructive case, and your framing of the "temporal mismatch" between retrospective findings and forward-looking relevance captures something I think many evaluators intuitively recognise but struggle to articulate clearly in evaluation reports.
What I find particularly insightful in your reflection is the reinterpretation of existing DAC criteria through a foresight lens. Framing relevance as "future fitness," sustainability as "resilience under change," and coherence as "the ability to work across systems" is not a radical departure from the criteria; I would argue it is a more honest application of them in contexts where conditions are already shifting during programme implementation. I have been thinking along similar lines, and your example reinforces the case that foresight doesn't necessarily require a separate methodology inserted into evaluation…it can be woven into the interpretive framing we already use (see my response below to Conny).
Your point about path dependencies is also poignant. Institutional strengths become constraints when the future demands different configurations of expertise and partnership. This seems like fertile ground for scenario planning in particular, whereby it can help organisations like FAO stress-test their current operating models against emerging One Health futures.
Your comment also made me think of Michael Quinn Patton’s 2020 article “Evaluation Criteria for Evaluating Transformation: Implications for the Coronavirus Pandemic and the Global Climate Emergency” (see attached). MQP critiques the DAC criteria and offers six new criteria oriented around transformation. From his article abstract:
Fundamental systems transformations are needed to address the global emergency brought on by climate change and related global trends, including the COVID-19 pandemic, which, together, pose existential threats to the future of humanity. Transformation has become the clarion call on the global stage. Evaluating transformation requires criteria. The revised Organization for Economic Cooperation and Development/Development Assistance Committee criteria are adequate for business as usual summative and accountability evaluations but are inadequate for addressing major systems transformations. Six criteria for evaluating transformations are offered, discussed, and illustrated by applying them to the pandemic and the Global Alliance for the Future of Food. The suggested criteria illustrate possibilities. The criteria for judging any intervention should be developed in the context of and aligned with the purpose of a specific evaluation and information needs of primary intended users. This article concludes that the greatest danger for evaluators in times of turbulence is not the turbulence—it is to act with yesterday’s criteria.
I have used MQP’s transformational criteria in two evaluations. I’ll share later on how this worked and did not work in the context I was working in…a foresight lens definitely played a role…or I should say a lack thereof.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 26/03/2026
Thank you, Rama. The role of changing perceptions on sustainability over time is a rich thread worth exploring further. One of the tensions I find most interesting in this space is that sustainability is often assessed at a fixed point in time (whether at programme design or close), against conditions that may look very different five or ten years later. A foresight lens invites us to ask not just whether a programme is sustainable under current conditions, but whether it is resilient to the range of futures that are plausible given climate trajectories, political shifts, or ecosystem dynamics. Would you be willing to share an example from your own experience where changing perceptions of sustainability, perhaps across funders, governments, or communities, shaped how evaluation findings were received or acted upon?
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 26/03/2026
Thank you all for your contributions. I was not seeing any posts on Tuesday, but yesterday many of you added your insight and responses. So thank you! Let me respond to each of you throughout today and tomorrow.
Conny, thank you for starting the discussion. You are spot on. What you are describing is precisely the tension this discussion is trying to surface. The "L" in MEL/MEAL/MERL is often the first casualty when evaluation is treated as a compliance exercise rather than a genuine learning process. Your observation that evaluation findings are frequently "put aside" after delivery is one of the most persistent and frustrating patterns in our field, and it goes to the heart of why foresight integration matters…if learning isn't happening in real time, forward-looking evaluation becomes even more difficult to anchor institutionally.
Your point about outcome harvesting (OH) and outcome mapping (OM) is well taken. I’ve toyed with new evaluation concepts like Anticipatory Outcome Fishing and Foresight-Infused Outcome Mapping…attempts to weave futures thinking into evaluation approaches. As you mentioned, I have found that these approaches do in fact create more active stakeholder engagement across reflection cycles, which can build the kind of evaluative culture that makes forward-looking thinking more natural…but it is not easy for some organisations to engage at this level. I would also add that the participatory dimension you describe, i.e., getting stakeholders to reflect on what worked, what didn't, and what was unexpected, is also a foundation for scenario thinking. Once people are comfortable sitting with uncertainty and identifying assumptions, introducing foresight tools like horizon scanning or the Three Horizons framework becomes a much shorter step. Looking forward to hearing more from you as the discussion evolves.
Senegal
Amy MARA
Economiste et Specialiste en Passation de Marché
Direction de la Dette Publique
Posted on 25/03/2026
Traditionally, evaluation has been viewed as a retrospective exercise aimed at analyzing the results of a project or public policy after its implementation. However, in a context marked by uncertainty and the growing complexity of public interventions, evaluation is gradually shifting toward a more forward-looking approach, focused on anticipation and continuous improvement.
First, forward-looking evaluation relies on the integration of learning mechanisms. It is no longer merely a matter of judging past performance, but also of identifying lessons learned in order to improve the design and implementation of future actions. This approach promotes adaptive management of projects and programs.
Second, forward-looking evaluation relies on the use of foresight tools such as scenario analysis, ex-ante impact assessments, and risk modeling. These tools help inform decision-making in advance and guide public policies toward sustainable outcomes.
Furthermore, future-oriented evaluation encourages real-time monitoring and continuous assessment. Through information systems and performance indicators, it becomes possible to adjust interventions as they are implemented. This dynamic enhances the responsiveness and effectiveness of projects.
Finally, the forward-looking dimension of evaluation involves greater stakeholder participation. Engaging beneficiaries, decision-makers, and experts helps identify future needs, anticipate challenges, and develop tailored solutions.
Ultimately, shifting from retrospective to prospective evaluation involves transforming evaluation into a true decision-making tool. It thus becomes a strategic lever that not only analyzes the past but, above all, prepares for the future and sustainably improves public action.
Ms. Amy MARA
Economist and procurement specialist
Ph.D. candidate in project management
United Republic of Tanzania
Emmanuel Erick Igiha
Principal M&E Specialist
Tanzania National Parks
Posted on 25/03/2026
In my experience, what really makes an evaluation "forward looking" is its focus on future improvement rather than just judging what happened in the past. A forward-looking evaluation digs into what we've learned, offers practical recommendations and helps us adapt to new challenges or opportunities that might come up. It’s not just about checking if we met our old goals, but about asking, “How can we do even better next time?” I find this approach especially valuable because it encourages ongoing learning and helps everyone involved plan more strategically for the future. What do others think—have you seen examples where a forward-looking evaluation really made a difference?
Canada
Rhode Early Charles
Posted on 25/03/2026
I find this discussion particularly relevant. In my work, I often use time series analysis and predictive modeling to estimate future trends based on historical data.
I would add that while foresight approaches that do not rely on past performance are essential, especially in contexts characterized by high uncertainty or limited data, predictive methods grounded in historical data remain among the most robust tools we have when sufficient and reliable data is available. These methods allow us to identify patterns, quantify trends, and generate evidence-based projections that can effectively complement more qualitative foresight approaches.
With the advancement of AI and machine learning, we now have the capacity to go further by integrating large volumes of data across multiple projects, regions, and even donors. This creates important opportunities to build more accurate and context-sensitive predictive models, particularly when working with similar interventions within a country or sector.
However, a major constraint remains data availability and fragmentation. Data is often siloed within individual projects or organizations, making it difficult to build sufficiently large and diverse datasets for robust modeling. In many cases, data from a single project is not sufficient to support reliable predictions.
One potential way forward would be to strengthen national ownership of project data. Governments could play a key role in consolidating data generated across projects into centralized and accessible databases. If well designed, such systems could support research, inform project design, and enable more rigorous ex-ante analysis of potential success or failure.
In that sense, I see strong complementarities between foresight methods and predictive analytics. Foresight helps us explore uncertainty and alternative futures, while predictive models help us quantify likely trends where data allows. Bringing both together could significantly strengthen evaluation practice and decision-making.
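The complementarity described above can be made concrete with a minimal sketch: a data-driven trend projection (the predictive side) whose output is then adjusted by scenario factors of the kind a foresight exercise might supply. All figures, scenario names, and factors below are illustrative assumptions, not real programme data or an established method.

```python
# Minimal sketch: a least-squares trend projection adjusted by
# foresight-derived scenario factors. All numbers are hypothetical.

def linear_trend(years, values):
    """Ordinary least-squares slope and intercept for a simple trend."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical historical indicator (e.g. a crop-yield index by year).
years = [2019, 2020, 2021, 2022, 2023, 2024]
values = [100, 103, 105, 109, 112, 116]

slope, intercept = linear_trend(years, values)

def project(year, scenario_factor=1.0):
    """Extrapolate the trend, scaled by a scenario adjustment factor."""
    return (intercept + slope * year) * scenario_factor

# Placeholder factors that a scenario-planning exercise might propose.
scenarios = {"baseline": 1.00, "climate_stress": 0.90, "tech_uptake": 1.08}
projections = {name: round(project(2030, f), 1)
               for name, f in scenarios.items()}
```

The point of the sketch is the division of labour: the regression quantifies the trend where historical data exists, while the scenario factors carry the qualitative foresight judgement about how that trend might bend under alternative futures.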
Benin
Alexis Adébayo ODOUN-IFA
Expert in MEAL
RAAF/ECOWAS
Posted on 25/03/2026
I would like to thank you for this initiative.
Retrospective evaluation is essential for assessing impact and drawing lessons. However, in contexts shaped by climate change, it presents significant limitations, particularly in integrated landscape management approaches.
For example, costly infrastructure such as dams can be destroyed or become ineffective due to extreme climate events. In such conditions, it becomes difficult to measure the real impact of a project or to attribute observed results to the intervention rather than to external factors.
This uncertainty affects the use of evaluation results, as they may be perceived as unreliable or not representative, thereby limiting their usefulness for decision-making and future planning.
Italy
Silva Ferretti
Freelance consultant
Posted on 25/03/2026
Thank you for this discussion and for the initial ideas shared!
As someone who consistently puts "forward-looking evaluation" at the centre of my proposals, I want to offer a provocation.
The framing here might suggest that what evaluation needs is better foresight tools and more capacity to anticipate the future. I'd like to challenge that, not to dismiss foresight, but to locate the real problem one level up. Because the issue is not technical. It is cultural.
The deeper question is: what is evaluation for? If it exists mainly to confirm compliance (i.e. to verify that a plan was executed as designed, that the Theory of Change held…), then adding foresight methods changes nothing. We will simply be anticipating the future in service of the same backward-looking logic and the same set of horizons. Always in "compliance mode."
Before we ask how evaluation can get better at anticipating the future, we need to ask a prior question: are we willing to set evaluation free from the obligation to confirm the plan?
Can evaluation be exploration, not verification? That means evaluations that do not just answer questions, but discover better ones, that help people think through the future, not spoon-feed it to them.
Foresight tools are valuable. I have used them. And when used in a participatory way they can be liberating, revealing that people already carry vision and insights that the very plans they are working on tend to constrain.
So this is the real issue. It is not "foresight" as a technical fix. It is about the power to adapt, challenge, and explore continuously, rather than situating evaluation in a world where our assumptions, our theories, and our plans are reference points, not starting ideas.
Italy
Serdar Bayryyev
Senior Evaluation Officer
FAO
Posted on 25/03/2026
Thank you for initiating this important discussion. To facilitate this discussion, I would like to share some reflections.
Today’s world faces unprecedented challenges of climate change, food security, environmental sustainability, and increasing fragility due to conflicts and related crises. Agricultural development programs operate amid a backdrop of volatility, uncertainty, complexity, and ambiguity.
Traditionally, the evaluation function has focused predominantly on retrospective accountability, measuring past performance against predetermined plans, objectives, and targets. While valuable, in today’s rapidly changing context this approach often fails to yield valuable insights and clear, impactful messages. Evaluations that assess relevance, effectiveness, and sustainability based on the conditions at the time of design can produce accurate reflections of past actions but offer limited guidance for future decision-making.
When evaluation processes rely solely on historical benchmarks, they risk overlooking emerging trends and future challenges. For example, a program designed to improve crop yields based on a specific climate scenario may become less relevant if climate patterns shift unexpectedly. Similarly, a project assessed as sustainable under current conditions might prove vulnerable under future stressors. This gap underscores the need for evaluation methodologies that are forward-looking and capable of engaging with plausible futures.
Various organizations already embed foresight into their respective practices:
- The World Food Programme (WFP) has integrated foresight-informed approaches into its Anticipatory Action programs, enabling more proactive responses to food crises.
- Organizations such as the GEF and CGIAR are exploring how to better assess long-term resilience and systemic impacts in their environmental and agricultural investments.
- FAO has recently published a report that aims to inspire strategic actions to transform agrifood systems into sustainable, resilient, and inclusive ones. This report (accessible here: https://www.fao.org/global-perspectives-studies/fofa/en/) explores three different scenarios for the future of food and agriculture, based on alternative trends for key drivers, such as income growth and distribution, population growth, technical progress in agriculture, and climate change.
Strategic foresight should be based on a suite of accessible tools and approaches. While various tools and methods have been developed, practical guidance on their applicability remains limited, and many evaluators lack training in foresight methods. Several steps will be essential to realize the full potential of foresight in evaluation.
In an era of unprecedented change, evaluation must evolve from a retrospective mirror to a forward-looking compass. Integrating foresight methods into evaluation processes can enhance relevance, sustainability, and systemic impact assessments, ultimately supporting programs that are resilient and adaptable in the face of uncertainty.
Looking forward to further discussions and shared learning on this important topic.
Best regards,
Serdar Bayryyev, Senior Evaluation Officer
Food and Agriculture Organization
Italy
Carlos Tarazona
Senior Evaluation Officer
FAO
Posted on 25/03/2026
Good morning colleagues, and thank you for launching this very timely discussion.
I’d like to share a recent experience from the FAO Office of Evaluation where we explicitly drew on foresight principles in the design and conduct of an evaluation.
In evaluating FAO’s work on One Health, we began with a familiar retrospective lens: how did the approach evolve, and what did FAO contribute? This analysis showed a strong trajectory—leadership over 20 years, particularly in animal health, zoonotic disease control, biosecurity, and more recently antimicrobial resistance (AMR) and pandemic preparedness.
But we quickly ran into a temporal mismatch.
One Health is not a stable field. It is being reshaped by climate change, biodiversity loss, land-use pressures, AMR, and broader food system transformation. Evaluating performance against past conditions risks producing findings that are valid—but less useful for navigating what comes next.
So the question shifted: not just “did FAO perform well?” but “is its approach fit for the futures now emerging?”
That’s where a foresight lens—informally, thinking in terms of emerging risks, system shifts, and plausible futures—added value.
It helped us reinterpret a central tension. FAO’s strengths—deep expertise in animal health, strong country platforms, and operational experience—are also its path dependencies. While FAO has adopted a broader, more holistic definition of One Health, implementation still often appears animal health-centred, with ecosystem and systems dimensions less consistently integrated.
From a forward-looking perspective, this matters. Future One Health challenges are likely to be more interconnected, not less. They will require deeper integration across sectors (animals, plants, environment, food systems) and stronger cross-sectoral coordination at country level.
One takeaway for me is that foresight can enter evaluation through existing criteria such as relevance and sustainability.
Retrospective evaluation tells us how we got here. A future-informed lens helps us ask whether we’re ready for what’s next.
I’d be very interested to hear how others have approached this—have you found practical ways to bring even light-touch foresight into evaluation design or interpretation?
India
Rama Rao Darapuneni
Former Director in ICAR
ICAR
Posted on 25/03/2026
Role of changing perceptions on sustainability over time
Germany
Cornelia Rietdorf
Scientific Associate
German Environment Agency
Posted on 25/03/2026
Good morning / hello to everyone and thank you for starting this interesting discussion round, Steven!
I'm not an evaluator, but I have worked in M&E / MEL / MEAL / MERL in different contexts over the past 10+ years and saw the challenges of mostly backward-looking evaluation far too many times. I don't have much experience with foresight evaluations, so here are just some general thoughts:
What makes an evaluation forward looking? For me, in a way it's creating awareness of the L in MEL / MEAL / MERL. Why do we do an evaluation? Far too often I saw that for project teams it's just a troublesome box to tick to please donors or meet project requirements. The evaluation is done in whatever way and then put aside. It was often hard work to raise awareness of the importance and potential of evaluation: to uncover what worked well, what led to actual positive change, what didn't work and why, and what might even have brought negative change, and to then use these learnings for better and improved projects / policies / strategies / measures etc.
I'm wondering if outcome harvesting and outcome mapping are one way to increase awareness of the potential of evaluations, as the many stakeholders involved get more actively engaged in several reflection rounds, which then ideally triggers an important reflection process on what might work and why, what has worked, what didn't, and what was unexpectedly positive or negative, and can in turn be used for an improved follow-up process.
So, in a way, don't good tools, arguments, and practices that really focus on the learning aspect of evaluations, and that engage all involved key stakeholders in reflection processes as much as possible, greatly support foresight in evaluation? I hope I'm not completely off track here and am very much looking forward to the discussion threads around this topic and to learning from everyone here.
Cheers,
Conny
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 23/03/2026
Welcome to From Hindsight to Foresight: How Evaluation Can Become Future-Informed Discussion. My name is Steven Lichty and I’ll be hosting this online discussion over the next five weeks. I live in Nairobi and have been working at the nexus of foresight and evaluation for over 20 years. I am looking forward to facilitating our conversations here, sharing resources, and learning from all of you.
Evaluation has long helped us understand what happened, what worked, and what did not. But many of the systems we care about most (food, agriculture, climate, ecosystems, resilience, etc.) are now shaped by accelerating uncertainty, disruption, and long-term change. In that context, looking backward is no longer enough. The question is not only whether an intervention performed well in the past, but whether it is fit for the futures now emerging.
Over the coming weeks, this forum is a place to test ideas, share examples, surface tensions, and learn across disciplines. Through shared experiences, optional readings, and honest reflection, we will explore what it looks like when evaluation starts looking forward. Not abandoning rigour, but expanding it. Not replacing the DAC criteria, but asking what criteria like relevance and sustainability really mean when the future may look nothing like the world in which a programme was initially designed.
This discussion invites evaluators, foresight practitioners, commissioners, researchers, and decision-makers into a shared space of inquiry. How can evaluation become more future-informed, more adaptive, and more useful in times of volatility? What happens when we bring foresight tools like horizon scanning, scenarios, Three Horizons, Futures Triangle, or Causal Layered Analysis into evaluation design, interpretation, and use? How can deeper epistemologies and ontologies driving critical futures thinking inform how we do evaluation?
This community includes some of the most thoughtful evaluators, commissioners, and practitioners working in food security, agriculture, and the environment. You have seen the limits of retrospective evaluation firsthand. You have also probably seen glimpses of something better. You do not need to be an expert in both fields to contribute. Practical experience, critical questions, promising cases, doubts, and provocations are all welcome.
So let’s begin there: Where have you seen the limits of retrospective evaluation in a fast-changing world? And where do you see the most promising entry points for bringing a foresight lens into evaluation practice?
I am glad you are here and I look forward to an engaging and thought-provoking discussion.