Background and Rationale
Food security, environmental, and agricultural development programmes increasingly operate in volatile, uncertain, and complex contexts. Climate disruption, ecosystem degradation, shifting geopolitical conditions, and cascading crises are no longer background noise. They shape the environment in which these programmes are designed and implemented. Evaluation in these sectors often stays focused on retrospective accountability, measuring past performance against fixed objectives, even as operating conditions keep shifting.
This temporal mismatch has practical consequences. When evaluations judge relevance, effectiveness, and sustainability against the conditions that existed at programme design, they can produce findings that are accurate about the past but less useful for guiding future decisions and navigating change. Theory of Change processes often mirror the same limitation, failing to account for the plausible futures that will determine whether current investments ultimately succeed or fail.
Strategic foresight brings forward-looking approaches that can strengthen evaluation practice. Methods such as horizon scanning, scenario planning, Futures Triangle, the Three Horizons framework, and Causal Layered Analysis help evaluators look beyond past performance and consider how programmes might perform under different future conditions. These foresight tools and frameworks can enrich evaluation at every stage, from scoping and design through to learning and use. When used alongside evaluation, they support more anticipatory governance, enabling decisions that draw on evidence, while remaining attentive to uncertainty and long-term change.
Momentum for integrating foresight and evaluation is already visible across sectors. WFP's Anticipatory Action programmes, for example, have introduced foresight-informed approaches into monitoring and evaluation frameworks. At the same time, organisations such as the GEF, CGIAR, and FAO are exploring how evaluation can better assess long-term resilience and systemic impacts in environmental and agricultural investments. These developments are also prompting broader reflection within the evaluation community, including renewed interest in how the OECD-DAC criteria might evolve: shifting from measuring alignment with past priorities toward assessing prospective relevance and robustness across plausible future scenarios. Despite this momentum, practical guidance for evaluators remains limited. Few evaluators have received formal exposure to foresight methods, and foresight practitioners are rarely trained in evaluation. The tools, case examples, and community of practice needed to connect these fields are not yet well established.
Discussion Purpose
This online discussion will examine how foresight methods can be integrated into evaluation practice in food security, environmental, and agricultural contexts. Drawing on practitioners’ experiences, real-world examples, and optional readings, the discussion will highlight practical insights that evaluators can use to make their work responsive to uncertainty and more useful for forward-looking decision-making.
Discussion Objectives
- To introduce key foresight concepts and tools (including horizon scanning, scenario planning, Causal Layered Analysis, and the Three Horizons framework) and explore how they can be applied within evaluation processes.
- To examine how foresight-informed evaluation can strengthen assessments of relevance, sustainability, and systemic impact in food security, environmental, and agricultural programmes.
- To share concrete examples of foresight and evaluation integration from across the sector, including anticipatory action, climate resilience programming, and theory of change processes.
- To identify practical entry points for evaluators to begin incorporating foresight perspectives into their work, regardless of institutional context or resource constraints.
Guiding Questions
- In contexts of climate uncertainty, rapid environmental change, and shifting geopolitical realities, where have you seen the limits of retrospective evaluation? How has this affected the use of findings?
- What foresight tools or methods have you encountered in your evaluation practice? What made them useful or difficult to apply? What foresight tools, if any, have you used personally?
- How might interpreting the DAC criteria (such as relevance and sustainability) through a foresight lens change what we measure, how we measure it, and how we make recommendations?
- Where do you see opportunities for integrating foresight and evaluation in food security, environmental, and agricultural contexts?
- What skills, resources, and institutional changes would be needed to make foresight a regular part of evaluation design and commissioning?
Discussion Readings
Week 1: Introductory discussion on the theme and exploration of the guiding questions.
Week 2: Examine transformative foresight for the transformational imperative, via a forthcoming article in the Journal of MultiDisciplinary Evaluation, edited by Scott Chaplowe.
Week 3: Discuss “Fusing foresight and futures thinking for a new transformative evaluation paradigm” by Rose Thompson Coon, Katri Vataja, and Pinja Parkkonen (in New Directions for Evaluation, Summer 2024, Issue 183, pages 91-101)
Week 4: Explore Quality Criteria for Food Systems Foresight in Africa: A practitioner’s guide for commissioning, facilitating and evaluating foresight, a recent guide written by Katindi Sivi and launched by the Forum for Agricultural Research in Africa, in partnership with Foresight4Food, University of Oxford, and the International Development Research Centre.
The online discussion will remain open for contributions until 27 April 2026!
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
I am back from a spur-of-the-moment short holiday where I did not open my laptop for six days, but it is nice to see the conversation and discussion continuing.
As we move into our final week, I want to introduce the FARA guide Criteria to Assess High-Quality Food Systems Foresight in Africa (the link is in the intro to this discussion board, but also attached below). My colleague, Dr. Katindi Sivi, was a co-author, so I'm excited to showcase her work.
What I find especially useful about this report is that it is not a step-by-step foresight manual. It is a quality framework for thinking about what makes foresight meaningful, credible, inclusive, and actually useful for decision-making in complex food systems contexts. The guide argues that, in a time shaped by climate risk, demographic change, geopolitical uncertainty, and structural inequality, foresight must move beyond scenario production toward anticipatory governance, local ownership, and real policy influence. It also places unusual emphasis on African realities, including indigenous knowledge, informality, power relations, and participatory practice.
That feels highly relevant to the conversation we have been having here. Over the past weeks, several of you have pushed us to think beyond retrospective accountability alone. Silva and Amy asked whether evaluation can be freed from compliance logic. Rick challenged us to move from prediction toward preparedness and plural futures. Uzodinma emphasized mindset, local ownership, and adaptive learning. Rhode reminded us that knowledge must be communicated in usable ways, not just written for evaluators. Those themes are all echoed in this guide.
The guide is organised around nine interlinked criteria, including contextual relevance, inclusivity, ethics, methodological rigor, strategic communication, institutional embedding, and shifts in thought and behaviour. It also argues that evaluation of foresight should not focus on predictive accuracy, but on whether foresight improves learning (another common theme in our discussions) and decision-making, contributes to change, and supports long-term systems transformation.
So for this final week, I would like to ask: what would high-quality future-informed evaluation actually look like in practice? What conditions need to be in place for it to be ethical, participatory, useful, and institutionally embedded rather than just another report on the shelf?
Benin
Alexis Adébayo ODOUN-IFA
Expert in MEAL
RAAF/ECOWAS
Posted on 13/04/2026
In their writings, Rose Thompson Coon et al. highlight the need to rethink evaluation in order to incorporate a forward-looking, even futuristic, dimension. Indeed, whilst evaluation enables lessons to be learnt, it does not always provide immediate avenues for their operational application once the intervention has ended. Without an in-depth literature review and uptake by other researchers or designers of future projects, the lessons learned from intervention evaluations tend to be forgotten once the interventions have ended. Thus, evaluation reports would benefit from incorporating more in-depth analyses, enabling this forward-looking and future-oriented vision of development to be better taken into account.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
This is an important point Alexis...thank you for contributing. I especially appreciate your observation that lessons from evaluation are often documented but not meaningfully carried forward into future design, policy, or practice. Your comment reinforces why a more future-oriented approach matters....evaluation should not only capture what happened, but also help ensure that learning remains usable, transferable, and alive beyond the life of a single intervention. Learning and education have strong futures/foresight elements by default. How can we better integrate forward-looking learning in our evaluations?
Italy
Silva Ferretti
Freelance consultant
Posted on 13/04/2026
It is quite hard to comment on an article that is not fully accessible :-(
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Which article are you referring to? I attached Fusing foresight and futures thinking for a new transformative evaluation paradigm in my earlier post, so you should have access to it.
Ghana
Ishmael Kwame Agbomlaku
Manager
Integrated Institute of professional, LA plage Meta Verse.
Posted on 13/04/2026
Powerful perspective. Moving from hindsight to foresight is exactly where evaluation must evolve: using data not just to report, but to anticipate and improve outcomes. This is critical for effective programme design.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Thanks for the comment, Ishmael. I agree completely. Great foresight always contains elements of hindsight, so it is not a competition, but collaboration and symbiosis. I've read that many ancient traditions thought of humanity walking backwards into the future....knowing where we've been is important, but when we see the path curving or diverging, we need to start tacking in that direction.
Italy
Silva Ferretti
Freelance consultant
Posted on 13/04/2026
We can definitely become better at being "forward-looking": understanding likely patterns, being more intentional in interrogating likely consequences. But we must always escape the temptation to make this "THE plan". Because what matters is having direction and agility, a better capacity to see and feel junctures... not a pre-set future.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Hi Silva, thanks for your comment. It is so easy to become trapped by "THE PLAN". This is why I love working with scenarios. You can have the plan, but when you have three or four scenarios on the horizon, you can always wind-tunnel THE PLAN and see where it needs to adapt. All the more reason for the agility, capacity to see and feel junctures, as you mentioned.
Italy
Silva Ferretti
Freelance consultant
Posted on 13/04/2026
The deeper question is: what is evaluation for? If it exists mainly to confirm compliance (i.e. to verify that a plan was executed as designed, that the Theory of Change held....) then adding foresight methods changes nothing. We will simply be anticipating the future in service of the same backward-looking logic and the same set of horizons. Always in "compliance mode." Before we ask how evaluation can get better at anticipating the future, we need to ask a prior question: are we willing to set evaluation free from the obligation to confirm the plan?
Senegal
Amy MARA
Economist and Procurement Specialist
Direction de la Dette Publique
Posted on 17/04/2026
Hello,
My dear Silva,
Your analysis is highly pertinent and calls for deeper reflection, particularly on the need to rethink the purpose of evaluation beyond a simple compliance logic.
The reflection proposed raises a fundamental and pertinent question: that of the real purpose of evaluation. Indeed, if evaluation is reduced to a compliance function, it is limited to verifying whether the actions carried out match the initial plans, without genuinely interrogating their relevance, their impact, or their capacity to adapt to changing realities.
From this perspective, introducing foresight methods into an evaluation locked within a compliance logic appears insufficient. It even risks reproducing the same patterns, simply projecting already fixed assumptions into the future without questioning the analytical frameworks. Anticipating the future without transforming the purpose of evaluation thus amounts to prolonging a retrospective approach in another form.
The central question then becomes how to transform the role of evaluation. It is no longer solely a matter of confirming a plan, but of questioning the assumptions that underpin it, identifying the gaps between intentions and results, and above all supporting decision-making in uncertain contexts. A future-oriented evaluation must be a tool for learning, adaptation, and innovation.
Freeing evaluation from the obligation to confirm the plan implies several major changes. First, accepting that programmes may evolve in response to realities on the ground. Second, integrating more flexible approaches, such as real-time evaluation or adaptive learning. Finally, recognising that evaluation can produce critical findings, sometimes at odds with the initial objectives.
This transformation is not without challenges, however. It presupposes a change in institutional culture, in which decision-makers accept uncertainty and being questioned. It also requires strengthened technical capacities and greater openness to stakeholder participation.
In conclusion, evaluation will only truly integrate a prospective dimension if it frees itself from its strictly normative function. It must evolve towards a strategic role, oriented towards learning and anticipation, in order to better respond to the complex and dynamic challenges of public policy.
Amy MARA
Economist
Dakar, Senegal
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Thank you very much for this rich and well-formulated contribution. You put your finger on a central point of our discussion...as long as evaluation remains locked within a compliance logic, even the introduction of foresight methods risks producing only a superficial shift rather than a genuine change of posture. I particularly appreciate your insistence on the need to transform the very purpose of evaluation, so that it becomes a space for learning, adaptation, and decision support under uncertainty. Your remark about institutional acceptance of criticism, uncertainty, and programme evolution is essential, because it shows that the challenge is not only methodological, but also cultural and political. It is precisely this tension between normative evaluation and future-oriented evaluation that we must continue to explore together. (I hope my Gemini translation makes sense, Amy!)
Germany
Ines Freier
Senior consultant for NRM and biodiversity, Green economy
consultant
Posted on 13/04/2026
The paper offers one option for evaluating new, speculative ventures that are not under public control. But using foresight methods instead of evidence on past performance can also backfire.
I recently evaluated blended finance funds whose performance was not as expected, due to a set of factors identified in research on management and business development, such as "know your customer". Our subject experts kept trying to develop new scenarios for the future under which the facilities would work better. Foresight tools are applied within the existing evaluation system, built around evaluation departments or institutions and methods that in most cases lack the hard technical and subject-related skills and resources needed to improve that system. Alternatives to the current evaluation system should also be explored, such as participatory processes for policy formulation and implementation in which stakeholder groups in specific policy areas provide feedback on specific policies. In this way, future-oriented, reflexive, learning systems are created through feedback loops, and future programming is shaped by the stakeholders themselves. Examples are the Brazilian policies for family agriculture and nutrition, which are shaped by stakeholder commissions at all levels.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Great comment here, Ines, and a valuable caution. I appreciate your point that using foresight in place of evidence on actual performance can become a way of endlessly rescuing weak results through imagined future scenarios, especially when evaluation teams lack the technical or sector-specific expertise needed to challenge assumptions. Your emphasis on participatory, feedback-rich systems is especially important, because it suggests that future-oriented evaluation should not rely only on evaluators and institutions, but also on structured stakeholder engagement that can shape policy design, implementation, and adaptation in real time. Thus, building futures literacy with communities is also an important element of our conversations.
Benin
Koffi Moïse Bienvenu Sodjinou
Chargé de Programme
CASAD International
Posted on 09/04/2026
Evaluation can no longer be content with being a mere act of measurement or a snapshot frozen in time; its true purpose lies in its ability to bring about lasting transformation. By moving beyond its traditional monitoring function to become a lever for change, it acts as a catalyst for self-reflection, compelling stakeholders to confront their practices with the reality of the results. To evaluate without transforming would be like making a diagnosis with no intention of treating the condition, rendering the exercise fruitless and purely bureaucratic. On the contrary, an evaluation focused on progress enables the identification of areas for improvement and sources of innovation, turning mistakes into learning opportunities and judgement into a tool for support.
From an ethical and strategic perspective, this transformative dimension is essential to avoid inertia and ensure the effectiveness of the actions undertaken. In a constantly changing environment, evaluation must serve as a dynamic compass: it does not merely look back to validate achievements, but propels organisations or individuals towards the future by adjusting their trajectories. In short, evaluation only achieves its full value when it becomes an “empowering” process, capable of changing behaviours and optimising systems to deliver real and tangible impact.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Thanks Koffi. You've made some astute observations and contributed a strong articulation of evaluation as something far more alive and consequential than compliance or retrospective judgement. I especially appreciate the idea that evaluation should function as a dynamic compass, helping people and institutions not only understand where they have been, but also adjust where they are going. Framing evaluation as diagnosis without treatment is particularly powerful, because it captures why transformative intent matters if evaluation is to contribute to real learning, adaptation, and lasting change.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 06/04/2026
Week Three Introduction
For Week 3, I’d like to introduce the article Fusing foresight and futures thinking for a new transformative evaluation paradigm by Rose Thompson Coon, Katri Vataja, and Pinja Parkkonen (attached below).
Their article argues that if evaluation is meant to contribute to transformation in an uncertain and complex world, it cannot remain focused mainly on assessing past performance. Instead, it needs to become more future-focused, more dynamic, and more able to engage multiple possible futures.
What makes this article especially useful for our discussion is that it does not stay at the level of theory. Using a case from Sitra in Finland, the authors show how foresight methods such as Horizon Scanning and a modified Delphi process can be integrated into evaluation to validate current strategic choices, generate future programming options, deepen learning about complexity, and strengthen strategic decision-making. They also argue that this shift is not only methodological. It requires a broader rethinking of evaluation’s purpose, including questions of power, participation, and whose futures are being imagined and prioritized.
This article offers a practical bridge between futures thinking and transformative evaluation. It helps move the conversation from “Why should evaluation become more future-informed?” to “What might this actually look like in practice?”
It also raises an important challenge for all of us. If evaluation is to help shape preferred futures, how should it address questions of power, participation, and whose future is being defined?
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 06/04/2026
Week Two Summary
This past week’s discussion surfaced a rich and timely tension at the heart of future-informed evaluation, i.e., whether evaluation should remain oriented toward prediction and linear change, or whether it must shift toward preparedness, plurality, learning, and adaptation.
Rick Davies pushed this strongly by arguing that, in a world of deep uncertainty, evaluation should engage multiple, sufficiently diverse futures rather than rely on a singular predictive logic. He also raised the important question of what criteria we should use to evaluate futures, suggesting both cognitive criteria about how we think and behavioral criteria about how we respond. He further cautioned against using the language of “transformation” too loosely, reminding us that transformation is not inherently good and that evaluators must remain attentive to the aims and politics of change itself.
Michele Friend offered an important philosophical and methodological reframing. Rather than asking what must change first, she argued that transformation should not be seen as a linear sequence at all. Methods, criteria, institutions, and mindsets evolve together through feedback loops between assessment, dialogue, feasibility, and implementation. Her example showed evaluation as an iterative, reflective process that not only judges performance but also helps people and institutions ask who they are becoming.
Dr. Uzodinma Akujekwe Adirieje grounded the conversation in African and low-resource health systems, emphasizing that the deepest shift must be one of mindset: away from compliance-oriented, donor-facing reporting and toward adaptive, locally owned, problem-solving learning. His contribution was especially valuable in showing that transformative evaluation is not abstract; it can produce concrete results when evidence is embedded in real-time decision-making and community realities.
Rhode Early Charles expanded the discussion by arguing that transformation also depends on how evaluation knowledge is communicated. Reports often remain too technical and evaluator-facing. She called for evaluation findings to become multiple, tailored knowledge products that different audiences can actually use, while also warning that overly lean data approaches may miss emerging issues and strategic learning opportunities.
Taken together, the week’s exchanges suggest that future-informed evaluation may require not one single shift, but several at once…from prediction to preparedness, from singular to plural futures, from linear models to feedback-rich learning, from compliance to local ownership, and from static reports to more usable forms of knowledge.
On a technical note, Silva asked a practical platform question. To my knowledge you cannot pick a thread and contribute to another person’s comments. I’ll pass this on to the EvalForEarth team though.
Canada
Rhode Early Charles
Posted on 01/04/2026
I really appreciate the insights shared, especially the focus on adaptive learning and forward-looking evaluation. I would also argue that one critical limitation lies in how evaluation knowledge is produced and communicated. This time, my comment is less focused on methodology and more on offering my perspective in response to your question Steven about what needs to change for evaluation to truly contribute to transformation.
Evaluation reports are often written by evaluators, using technical and methodological language that mainly speaks to other evaluators or technically trained audiences. As a result, these reports tend to be most appealing and useful to those who already understand that language.
What I find interesting is that most projects have communication plans that clearly identify who needs what information, how it should be shared, and why. However, this logic is rarely applied when writing evaluation reports. Different stakeholders have very different needs depending on how they interact with the project, and a single report cannot effectively serve all of them.
To me, evaluation reports should be seen as a starting point, not the final product. The data and findings should serve as inputs into multiple targeted knowledge products, developed by communication experts or sector specialists (health, education, economic development, etc.), that speak directly to specific audiences. These tailored outputs would translate evaluation insights into formats and messages that are relevant and actionable, supporting implementation, positioning, and donor engagement.
In addition, while I understand the rationale behind lean data approaches, they can sometimes be too restrictive. Focusing only on what is needed for indicators or donor requirements may limit opportunities to explore emerging issues or strategic areas more deeply. Sector specialists could play a role here by collecting additional data for learning, positioning, thought leadership, or future programming, as long as there is clear accountability for why that data is being collected and how it will be used. Every question has a cost, but it can also bring real value if it is intentional.
Overall, if evaluation is meant to support transformation, it is not just about improving methods or tools. It is also about making sure that the knowledge we produce is usable, relevant, and accessible to the people who need it. We need to better understand our audiences and what they need.
Nigeria
Dr. Uzodinma Akujekwe Adirieje
CEO
Afrihealth Optonet Association (AHOA) - CSOs Network
Posted on 01/04/2026
In African and low-resource health systems, evaluation too often serves as a post-hoc accountability exercise rather than a tool for systemic transformation. From decades of practice in health systems strengthening, the first and most critical shift must occur in mindset - how we perceive the purpose and ownership of evidence. Evaluators and decision-makers frequently operate with a compliance mindset, producing reports that satisfy external donors but fail to capture the nuanced realities on the ground. Recently, in Lagos State, Nigeria, routine monitoring in the maternal health program had focused narrowly on facility births. But by adopting a learning-oriented approach - examining quality of care, patient experience, and referral patterns - it was uncovered that 42% of women bypassed local clinics due to perceived low-quality services. Targeted staff training and resource reallocation subsequently increased facility-based deliveries by 17% within a year.
Similarly, Community-Led Monitoring in another Nigerian district revealed a 40% barrier from hidden transport costs, despite reports showing 95% patient “satisfaction.” These insights highlight that methods and criteria, however technically sound, follow effectively only after the mindset evolves to prioritize adaptive, locally-informed learning over extractive reporting.
Evidence from the recent Q1/2026 ‘Life & Health’ dialogues of Afrihealth Optonet Association (AHOA) shows districts using integrated digital platforms and participatory evaluation achieved a 15% rise in immunization coverage - proof that embedding evaluation in real-time problem-solving, not just retrospective reporting, produces tangible health impact.
Sustainability and long-term development hinge on this alignment. Transformative evaluation is not about better spreadsheets or fancier dashboards; it is about decolonizing intent, ensuring data serves local solutions, and fostering a culture of critical inquiry. In constrained African health systems, the mindset shift is the fulcrum upon which all methods, criteria, and institutional reforms pivot toward lasting, systemic change.
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 01/04/2026
If this week's question is "If evaluation is to contribute to transformation, what exactly must change first…our methods, our criteria, our institutions, our underlying mindset, or something else?", I will repeat my point below...there is a potentially serious misalignment between the predictive nature of a theory of change as used by evaluators and the need for preparedness in a very uncertain world: a singular view of the future versus a plural view of futures.
[By the way, the display structure for this kind of dialogue should be branching, not a single line]
And please let's not toss the word "transformational" around too lightly. Putin and Trump and other despots are all keen on transformation of one kind or another. What about incremental improvements, or perhaps even just surviving, as an objective? :-) Inflation takes many forms, including changes in our vocabulary, e.g. in the past we just had "details", but now we have "granularity". Wow....things really are getting better...or are they?
United States of America
Michele Friend
Professor
George Washington University
Posted on 30/03/2026
Hello everyone,
I’m delighted to be taking part in this discussion. Having a background in philosophy, I will no doubt be asking questions of a different nature. Steven Lichty asked: “If evaluation is to contribute to transformation, what needs to change first… our methods, our criteria, our institutions, our underlying mindset, or something else?”
I don’t think we should frame the question in this way. Transformation is not a linear process where we move from a first stage to a second, then to a third, and so on. On the contrary, several things happen simultaneously, and where each of us starts depends on ourselves, on what we are evaluating, and on the perceived purpose of the evaluation. In other words, what matters is the feedback loop between the assessment and the people it concerns.
Our team recently assessed a building. It met the requirements for LEED certification. To our surprise, it exceeded the LEED criteria. We incorporated these additional features into our assessment (our method is quite flexible in this regard). We then used this assessment to formulate recommendations for further improvements. Thus, the assessment method included these recommendations, and these led to a conversation about feasibility, implementation, timing, importance, and so on. This conversation, in turn, feeds back into the assessment and the recommendations. The conversation also leads those involved in the assessment to ask themselves in-depth questions about themselves, who they are and what they wish to become within the context of the institution. Thus, criteria, institutions and mindsets all evolve simultaneously through our assessment and feedback process.
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 30/03/2026
When the future is looking more uncertain than ever, it may be useful to think about preparedness more than prediction. To enhance preparedness we need to be thinking about multiple versions of the future, not just one, and these versions need to be sufficiently diverse. Having generated those futures, how can we then evaluate them? I would be interested to hear from others what they think might be relevant criteria to apply. To kick off, I suggest the criteria may fall into two broad categories: 1. Cognitive: criteria relating to how we are thinking about the future; 2. Behavioral: criteria relating to how we respond to those futures before and after they are realised.
Italy
Silva Ferretti
Freelance consultant
Posted on 30/03/2026
Is it possible to respond to contributions and pick a conversation? I could not find the option to do so. And starting a new message breaks the flow!
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 30/03/2026
Welcome to Week 2
I want to build on last week’s conversation with a short reflection from Scott Chaplowe and Joyce Mukoma’s Evaluation and the Transformational Imperative (see attachment). Their core argument is simple but important. The scale of today’s crises means evaluation cannot remain tied to business-as-usual thinking if it is going to support the wider transformational agenda reflected in the SDGs. They define transformational change not as incremental improvement, but as deep, systemic change in how a system functions.
What I find especially useful is that the article does not present transformation as a single new method. Instead, it asks what is holding evaluation back. It points to four familiar fixations: 1) project fixation, 2) temporal fixation, 3) quantitative fixation, and 4) accountability fixation. In other words, evaluation too often stays trapped inside linear projects, short funding timelines, metric-heavy logics, and compliance-oriented accountability.
Scott and Joyce then suggest several pathways forward…complexity-adaptive methods, principle-focused evaluation, new transformational criteria, data science, and alternative paradigms, including Indigenous and feminist perspectives.
So for this week, I’d like to ask, “If evaluation is to contribute to transformation, what exactly must change first…our methods, our criteria, our institutions, our underlying mindset, or something else?”
Nigeria
Dr. Uzodinma Akujekwe Adirieje
CEO
Afrihealth Optonet Association (AHOA) - CSOs Network
Posted on 30/03/2026
FROM HINDSIGHT TO FORESIGHT: EXPERIENCE AT AFRIHEALTH OPTONET ASSOCIATION
by Dr. Uzodinma Adirieje
From hindsight to foresight, our experience at Afrihealth Optonet Association (AHOA) demonstrates that evaluation is most valuable when it moves beyond retrospective accountability to actively shaping future decisions under uncertainty. Four practical insights stand out.
Embed adaptive learning loops into programme design:
In Afrihealth’s health systems and climate-linked interventions, periodic reviews were not treated as endline exercises but as real-time checkpoints. Evaluators facilitated rapid feedback cycles - combining routine data, beneficiary insights, and contextual signals (e.g., policy shifts, climate events like COP29 Baku) - to inform mid-course corrections. This approach ensures programmes remain relevant even as conditions change.
Integrate mixed-methods evidence for anticipatory analysis:
Quantitative indicators alone often lag behind emerging realities. Afrihealth’s evaluations paired service delivery data with qualitative intelligence from communities and frontline workers. For example, shifts in health-seeking behaviour during economic stress were detected early through interviews and focus groups, enabling proactive adjustments in outreach and resource allocation.
Align evaluation questions with decision horizons:
Rather than asking only “what worked,” Afrihealth reframed inquiries toward “what is likely to work next, for whom, and under what conditions.” Scenario-building and contribution analysis were used to explore plausible futures, particularly in programmes intersecting with climate variability and public health risks. This made findings directly usable for strategic planning, not just reporting.
Stakeholder co-creation:
By engaging policymakers, implementers, and communities in defining evaluation priorities, Afrihealth ensured that findings addressed real decision needs. This strengthened ownership and increased the likelihood that recommendations were acted upon.
Similarly, optional readings in developmental evaluation and adaptive management further reinforce these practices, emphasising flexibility, systems thinking, and learning-oriented accountability.
This way, evaluators can enhance relevance in uncertain contexts by institutionalising real-time learning, triangulating diverse evidence, and orienting evaluations toward future-facing decisions.
Dr. Uzodinma Adirieje is a former National President of the Nigerian Association of Evaluators (NAE). He is a seasoned evaluator, health economist, and civil society leader who was the co-consultant to drafting Nigeria’s National M&E Policy. He led SDG3 evaluation synthesis, participated in national SDG 3 and SDG 4 evaluations, and provided M&E training and mentorship, advancing evidence-based, forward-looking development practice.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 28/03/2026
Summary - Week 1
This first week of discussion has made one thing very clear...the call for future-informed evaluation is not coming from a single camp or methodology. It is emerging from lived frustration across practice. Conny Rietdorf reminded us that the “L” in MEL is often the first casualty when evaluation becomes a compliance exercise rather than a space for reflection and learning. Carlos Tarazona then pushed the conversation further through FAO’s One Health evaluation, showing how retrospective analysis can be solid on the past and still insufficient for the futures now emerging. His reframing of relevance as future fitness, sustainability as resilience under change, and coherence as the ability to work across systems gave us a powerful language for thinking differently.
Other contributors sharpened the picture. Serdar Bayryyev highlighted the institutional conditions needed for this shift (i.e., capacity, practical frameworks, and organisational change). Silva Ferretti challenged us not to treat foresight as a technical fix for a deeper cultural problem, asking the more difficult question "What is evaluation for?" Alexis Adébayo grounded the discussion in climate reality, where external shocks can destabilise attribution and weaken the usefulness of findings. Rhode Early Charles reminded us that predictive analytics and foresight are not rivals but complements, especially if we can overcome fragmented data systems. Emmanuel Erick Igiha and Amy Mara brought us back to purpose. Evaluation, at its best, should help people improve, adapt, and navigate what comes next.
So the core thread emerging from Week 1 is this: the move from hindsight to foresight is methodological, yes, but also institutional and deeply cultural. It asks not only for new tools, but for a different orientation to evidence, uncertainty, learning, and change. That feels like an important place to begin.
Looking ahead: In the coming week, we will turn to the transformational imperative and examine transformative foresight through a forthcoming article in the Journal of MultiDisciplinary Evaluation. If you are not familiar with the transformational imperative within evaluation ecosystems, I have attached a short four-page brief written by Scott Chaplowe and Joyce Mukoma.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Amy, thank you for this. You've laid out the landscape beautifully. What strikes me most about your framing is the word transformation. You're not describing a tweak to evaluation methodology… you're describing a fundamental shift in what evaluation is for. Moving from verdict to navigation...from accountability to anticipation.
The four pillars you identify are each compelling on their own. But I think what makes them powerful is how they reinforce each other. Scenario analysis without stakeholder participation risks becoming a technical exercise disconnected from lived realities. Real-time monitoring without a learning culture just generates data that no one acts on. Together, though, they start to describe something that feels genuinely different: evaluation as an ongoing, living conversation with the future.
One question your post raises for me: Who drives this transformation? Evaluators can advocate for forward-looking approaches, but much depends on whether commissioners and decision-makers are willing to fund and use them. In your experience, where has the appetite for prospective evaluation been strongest, and what has made the difference?
Your contribution also makes a nice segue to our focus next week on the transformational imperative. Really glad you're part of this discussion.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Emmanuel, the framing you offer, moving from judging the past to enabling improvement for the future, captures the spirit of what forward-looking evaluation aspires to be. And the question you end with is exactly the right one to keep asking throughout this discussion.
One example that comes to mind is WFP's Anticipatory Action work, where evaluation has been used not only to assess past performance but to refine the trigger systems and scenario models that activate pre-emptive responses before crises fully unfold. That strikes me as a case where evaluation genuinely shaped future action rather than simply recording past performance. But I think the deeper insight in your contribution is about orientation and intent…a forward-looking evaluation can be conducted with largely conventional methods, if the questions it asks and the way findings are framed consistently point toward adaptation and improvement rather than verdict. That cultural shift may be as important as any methodological innovation. What has enabled that orientation in the contexts where you have seen it work?
I just completed a large foresight-informed evaluation for UNICEF, but it is too early to determine what difference it may make. Ask me in 2028!
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Rhode, thank you for this. The point about complementarity between foresight methods and predictive analytics is an important one that does not always get made explicitly. There is sometimes an implicit assumption that foresight is primarily qualitative and futures-oriented, while predictive modelling is the domain of harder data, but in practice strong evaluation benefits from both, and the logic of combining them is sound. Foresight helps us explore the uncertainty space, while predictive methods help us quantify likely trajectories where data allows.
Your point about data fragmentation is well taken and, I would argue, is itself a systemic issue that evaluation has a role in addressing. If evaluations systematically produced structured, accessible data as a matter of course, rather than siloed project-level reports, the longitudinal datasets that would support the kind of modelling you describe would gradually accumulate. National ownership, as you suggest, is one pathway. But evaluation commissioning practices within international organisations could also change in ways that support this. This seems like a concrete institutional reform worth exploring further in the discussion. I also find the AI and machine learning dimension worth tracking carefully. The capacity for cross-project learning at scale is genuinely new, and its implications for evaluation design are still being worked out.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Thank you, Alexis, for your contribution. The example you raise…costly infrastructure rendered ineffective or destroyed by extreme climate events….puts the attribution problem in very concrete terms. It is a scenario that exposes a fundamental limitation of the logic model at the heart of most retrospective evaluation. If the causal chain is severed by an external shock, the evaluation framework itself struggles to make sense of what happened, let alone offer useful guidance for what should come next.
This connects to a broader issue in evaluation methodology, which is that our standard frameworks often assume a degree of stability in the operating environment that increasingly does not hold in climate-affected contexts. Integrated landscape management is a particularly interesting domain here, because it already operates with long time horizons and complex systems, which arguably makes it one of the areas where foresight-informed evaluation is not a luxury but a necessity. I am curious whether you have seen attempts to build scenario-based thinking into evaluation design in the contexts where you work, even informally, and whether those efforts have helped evaluators and stakeholders navigate the attribution challenges you describe.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Silva, this is a provocation worth examining throughout this discussion, and I think it contributes something that we can continue to unpack in the coming weeks. You are right that foresight tools can simply be recruited into the service of compliance, i.e., anticipating futures to confirm a Theory of Change rather than genuinely interrogating it. That would be a sophisticated version of the same problem.
The prior question you raise “what is evaluation for?” is one I believe this community needs to grapple with more directly. My own sense is that the shift from hindsight to foresight is not just technical, as it also requires a different relationship between evaluators, commissioners, and the programmes being evaluated. If evaluation is purely confirmatory, then foresight becomes window dressing. But if there is institutional appetite for evaluation as genuine exploration, then foresight tools, particularly when used participatorily as you describe, can open up the kind of reflective space that challenges rather than reinforces prevailing assumptions. Keep raising these questions Silva!
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/03/2026
Serdar, thank you for your contribution. The examples you have drawn from WFP, GEF, CGIAR, and FAO's own foresight work are directly relevant to the discussion. It is encouraging to see these referenced alongside each other, since it reinforces that momentum for integrating foresight and evaluation is genuine, even if practical guidance remains thin. Your three-part framework (capacity building, practical frameworks, and institutional change) reflects a sequence that I think is right. Technical tools alone will not shift practice if the institutional incentives continue to reward retrospective accountability above all else. Evaluation mandates, commissioning processes, and the expectations set by donors are all part of the system that needs to shift. That is why I included the question about what institutional changes would be needed, for it seems to me that this is where the real bottleneck sits, not in the availability of foresight methods per se.
The FAO futures of food and agriculture scenarios report you reference is a valuable resource, and it would be interesting to hear from colleagues whether and how evaluators have drawn on those scenarios in their own work, either for framing evaluations or for contextualising findings. Looking forward to continued exchange.
Italy
Carlos Tarazona
Senior Evaluation Officer
FAO
Posted on 27/03/2026
Steve, thank you for this thoughtful engagement and for bringing in Michael Quinn Patton’s work, which I also find highly relevant to this discussion.
I very much agree with your reading that a foresight lens does not necessarily require a parallel methodology, but can be embedded in how we interpret and apply existing frameworks. In that sense, your point about a “more honest application” of the DAC criteria resonates strongly with my own experience particularly in contexts like One Health, climate change adaptation, and agrifood system transformation, where systems are evolving even as we evaluate them.
At the same time, Silva’s intervention pushes this one step further in an important way. I share the concern that if evaluation remains anchored in a compliance-oriented logic, even well-integrated foresight risks being instrumentalised, used to anticipate within predefined boundaries rather than to genuinely question them. The distinction she draws between evaluation as verification versus exploration is, I think, exactly right.
In my view, however, the real constraint on integrating foresight is often not at the level of tools or criteria but much earlier, at the stage of evaluation conceptualisation.
In the FAO One Health case, the ability to incorporate a foresight perspective was enabled by an in-depth preliminary analysis and literature review conducted at the design stage. Without that early investment, it would have been significantly harder to introduce a meaningful forward-looking dimension later on. By the time questions, scope, and methods are fixed, the evaluation architecture is already path-dependent—ironically mirroring the very dynamics we are trying to assess.
So perhaps the discussion can be nuanced in three directions:
If foresight is to be more than an add-on, it needs to be designed in from the outset, not retrofitted.
This has practical implications for commissioners. If we are serious about developmental or formative approaches, foresight needs to be reflected in:
In that respect, the approach we often use at FAO (a question-driven, utilization-focused design, guided but not constrained by the OECD DAC criteria) does offer some flexibility. It allows us, at least in principle, to embed forward-looking dimensions early on, provided that the conceptual groundwork is strong enough.
So perhaps the challenge is not only to rethink criteria or embrace foresight tools, but also to shift attention upstream: to how evaluations are commissioned, framed, and intellectually grounded before they even begin.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 26/03/2026
Carlos, thank you for sharing this example. The FAO One Health evaluation is a genuinely instructive case, and your framing of the "temporal mismatch" between retrospective findings and forward-looking relevance captures something I think many evaluators intuitively recognise but struggle to articulate clearly in evaluation reports.
What I find particularly insightful in your reflection is the reinterpretation of existing DAC criteria through a foresight lens. Framing relevance as "future fitness," sustainability as "resilience under change," and coherence as "the ability to work across systems" is not a radical departure from the criteria. I would argue they are a more honest application of them in contexts where conditions are already shifting during programme implementation. I have been thinking along similar lines, and your example reinforces the case that foresight doesn't necessarily require a separate methodology inserted into evaluation…it can be woven into the interpretive framing we already use (see my response below to Conny).
Your point about path dependencies is also poignant. Institutional strengths become constraints when the future demands different configurations of expertise and partnership. This seems like fertile ground for scenario planning in particular, whereby it can help organisations like FAO stress-test their current operating models against emerging One Health futures.
Your comment also made me think of Michael Quinn Patton’s 2020 article “Evaluation Criteria for Evaluating Transformation: Implications for the Coronavirus Pandemic and the Global Climate Emergency” (see attached). MQP critiques the DAC criteria and offers six new criteria oriented around transformation. From his article abstract:
Fundamental systems transformations are needed to address the global emergency brought on by climate change and related global trends, including the COVID-19 pandemic, which, together, pose existential threats to the future of humanity. Transformation has become the clarion call on the global stage. Evaluating transformation requires criteria. The revised Organization for Economic Cooperation and Development/ Development Assistance Committee criteria are adequate for business as usual summative and accountability evaluations but are inadequate for addressing major systems transformations. Six criteria for evaluating transformations are offered, discussed, and illustrated by applying them to the pandemic and the Global Alliance for the Future of Food. The suggested criteria illustrate possibilities. The criteria for judging any intervention should be developed in the context of and aligned with the purpose of a specific evaluation and information needs of primary intended users. This article concludes that the greatest danger for evaluators in times of turbulence is not the turbulence—it is to act with yesterday’s criteria.
I have used MQP’s transformational criteria in two evaluations. I’ll share later on how this worked and did not work in the context I was working in…a foresight lens definitely played a role…or I should say a lack thereof.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 26/03/2026
Thank you, Rama. The role of changing perceptions on sustainability over time is a rich thread worth exploring further. One of the tensions I find most interesting in this space is that sustainability is often assessed at a fixed point in time (whether at programme design or close), against conditions that may look very different five or ten years later. A foresight lens invites us to ask not just whether a programme is sustainable under current conditions, but whether it is resilient to the range of futures that are plausible given climate trajectories, political shifts, or ecosystem dynamics. Would you be willing to share an example from your own experience where changing perceptions of sustainability, perhaps across funders, governments, or communities, shaped how evaluation findings were received or acted upon?
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 26/03/2026
Thank you all for your contributions. I was not seeing any posts on Tuesday, but yesterday many of you added your insight and responses. So thank you! Let me respond to each of you throughout today and tomorrow.
Conny, thank you for starting the discussion. You are spot on. What you are describing is precisely the tension this discussion is trying to surface. The "L" in MEL/MEAL/MERL is often the first casualty when evaluation is treated as a compliance exercise rather than a genuine learning process. Your observation that evaluation findings are frequently "put aside" after delivery is one of the most persistent and frustrating patterns in our field, and it goes to the heart of why foresight integration matters…if learning isn't happening in real time, forward-looking evaluation becomes even more difficult to anchor institutionally.
Your point about outcome harvesting (OH) and outcome mapping (OM) is well taken. I’ve toyed with new evaluation concepts like Anticipatory Outcome Fishing and Foresight-Infused Outcome Mapping…attempts to weave futures thinking into evaluation approaches. As you mentioned, I have found that these approaches do in fact create more active stakeholder engagement across reflection cycles, which can build the kind of evaluative culture that makes forward-looking thinking more natural…but it is not easy for some organisations to engage at this level. I would also add that the participatory dimension you describe, i.e., getting stakeholders to reflect on what worked, what didn't, and what was unexpected, is also a foundation for scenario thinking. Once people are comfortable sitting with uncertainty and identifying assumptions, introducing foresight tools like horizon scanning or the Three Horizons framework becomes a much shorter step. Looking forward to hearing more from you as the discussion evolves.
Senegal
Amy MARA
Economiste et Specialiste en Passation de Marché
Direction de la Dette Publique
Posted on 25/03/2026
Traditionally, evaluation has been viewed as a retrospective exercise aimed at analyzing the results of a project or public policy after its implementation. However, in a context marked by uncertainty and the growing complexity of public interventions, evaluation is gradually shifting toward a more forward-looking approach, focused on anticipation and continuous improvement.
First, forward-looking evaluation relies on the integration of learning mechanisms. It is no longer merely a matter of judging past performance, but also of identifying lessons learned in order to improve the design and implementation of future actions. This approach promotes adaptive management of projects and programs.
Second, forward-looking evaluation relies on the use of foresight tools such as scenario analysis, ex-ante impact assessments, and risk modeling. These tools help inform decision-making in advance and guide public policies toward sustainable outcomes.
Furthermore, future-oriented evaluation encourages real-time monitoring and continuous assessment. Through information systems and performance indicators, it becomes possible to adjust interventions as they are implemented. This dynamic enhances the responsiveness and effectiveness of projects.
Finally, the forward-looking dimension of evaluation involves greater stakeholder participation. Engaging beneficiaries, decision-makers, and experts helps identify future needs, anticipate challenges, and develop tailored solutions.
Ultimately, shifting from retrospective to prospective evaluation involves transforming evaluation into a true decision-making tool. It thus becomes a strategic lever that not only analyzes the past but, above all, prepares for the future and sustainably improves public action.
Ms. Amy MARA
Economist and procurement specialist
Ph.D. candidate in project management
United Republic of Tanzania
Emmanuel Erick Igiha
Principal M&E Specialist
Tanzania National Parks
Posted on 25/03/2026
In my experience, what really makes an evaluation "forward looking" is its focus on future improvement rather than just judging what happened in the past. A forward-looking evaluation digs into what we've learned, offers practical recommendations and helps us adapt to new challenges or opportunities that might come up. It’s not just about checking if we met our old goals, but about asking, “How can we do even better next time?” I find this approach especially valuable because it encourages ongoing learning and helps everyone involved plan more strategically for the future. What do others think—have you seen examples where a forward-looking evaluation really made a difference?
Canada
Rhode Early Charles
Posted on 25/03/2026
I find this discussion particularly relevant. In my work, I often use time series analysis and predictive modeling to estimate future trends based on historical data.
I would add that while foresight approaches that do not rely on past performance are essential, especially in contexts characterized by high uncertainty or limited data, predictive methods grounded in historical data remain among the most robust tools we have when sufficient and reliable data is available. These methods allow us to identify patterns, quantify trends, and generate evidence-based projections that can effectively complement more qualitative foresight approaches.
With the advancement of AI and machine learning, we now have the capacity to go further by integrating large volumes of data across multiple projects, regions, and even donors. This creates important opportunities to build more accurate and context-sensitive predictive models, particularly when working with similar interventions within a country or sector.
However, a major constraint remains data availability and fragmentation. Data is often siloed within individual projects or organizations, making it difficult to build sufficiently large and diverse datasets for robust modeling. In many cases, data from a single project is not sufficient to support reliable predictions.
One potential way forward would be to strengthen national ownership of project data. Governments could play a key role in consolidating data generated across projects into centralized and accessible databases. If well designed, such systems could support research, inform project design, and enable more rigorous ex-ante analysis of potential success or failure.
In that sense, I see strong complementarities between foresight methods and predictive analytics. Foresight helps us explore uncertainty and alternative futures, while predictive models help us quantify likely trends where data allows. Bringing both together could significantly strengthen evaluation practice and decision-making.
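To make this complementarity concrete, below is a minimal sketch of how a data-driven trend projection might be stress-tested against qualitatively defined scenarios. Everything in it (the yield series, the simple linear trend model, and the scenario multipliers) is hypothetical and purely illustrative, not drawn from any real programme:

```python
# Minimal, illustrative sketch: pair a historical trend projection with
# foresight scenarios expressed as simple adjustments to the baseline.
import numpy as np

# Hypothetical annual crop-yield observations (t/ha) from monitoring data
years = np.arange(2015, 2025)
yields = np.array([2.1, 2.2, 2.4, 2.3, 2.6, 2.5, 2.8, 2.7, 2.9, 3.0])

# Fit a simple linear trend to the historical series
slope, intercept = np.polyfit(years, yields, deg=1)
fitted = slope * years + intercept
residual_sd = np.std(yields - fitted, ddof=2)  # spread of the data around the trend

# Project the trend five years ahead
future_years = np.arange(2025, 2030)
baseline = slope * future_years + intercept

# Illustrative scenario multipliers: these would come from scenario-planning
# workshops or expert judgement, not from the historical data itself.
scenarios = {"business as usual": 1.00, "climate stress": 0.85, "accelerated adoption": 1.10}

for name, factor in scenarios.items():
    projected = baseline * factor
    low, high = projected - 2 * residual_sd, projected + 2 * residual_sd
    print(f"{name}: projected 2029 yield ~ {projected[-1]:.2f} t/ha "
          f"(rough range {low[-1]:.2f} to {high[-1]:.2f})")
```

In a real application the predictive side could be far richer (seasonal time series, machine-learning models pooled across projects and regions), but the basic pattern of pairing a quantitative projection with scenario-based adjustments stays the same.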
Benin
Alexis Adébayo ODOUN-IFA
Expert in MEAL
RAAF/ECOWAS
Posted on 25/03/2026
I would like to thank you for this initiative.
Retrospective evaluation is essential for assessing impact and drawing lessons. However, in contexts shaped by climate change, it presents significant limitations, particularly in integrated landscape management approaches.
For example, costly infrastructure such as dams can be destroyed or become ineffective due to extreme climate events. In such conditions, it becomes difficult to measure the real impact of a project or to attribute observed results to the intervention rather than to external factors.
This uncertainty affects the use of evaluation results, as they may be perceived as unreliable or not representative, thereby limiting their usefulness for decision-making and future planning.
Italy
Silva Ferretti
Freelance consultant
Posted on 25/03/2026
Thank you for this discussion and for the initial ideas shared!
As someone who consistently puts "forward-looking evaluation" at the centre of my proposals, I want to offer a provocation.
The framing here might suggest that what evaluation needs is better foresight tools and more capacity to anticipate the future. I'd like to challenge that, not to dismiss foresight, but to locate the real problem one level up. Because the issue is not technical. It is cultural.
The deeper question is: what is evaluation for? If it exists mainly to confirm compliance (i.e. to verify that a plan was executed as designed, that the Theory of Change held....) then adding foresight methods changes nothing. We will simply be anticipating the future in service of the same backward-looking logic and the same set of horizons. Always in "compliance mode."
Before we ask how evaluation can get better at anticipating the future, we need to ask a prior question: are we willing to set evaluation free from the obligation to confirm the plan?
Can evaluation be exploration, not verification? That means evaluations that do not just answer questions, but discover better ones, that help people think through the future, not spoon-feed it to them.
Foresight tools are valuable. I have used them. And when used in a participatory way they can be liberating, revealing that people already carry vision and insights that the very plans they are working on tend to constrain.
So this is the real issue. It is not "foresight" as a technical fix. It is about the power to adapt, challenge, and explore continuously.... rather than situating evaluation in a world where our assumptions, our theories, our plans are reference points, and not starting ideas.
Italy
Serdar Bayryyev
Senior Evaluation Officer
FAO
Posted on 25/03/2026
Thank you for initiating this important discussion. To facilitate this discussion, I would like to share some reflections.
Today’s world faces unprecedented challenges related to climate change, food security, environmental sustainability, and increasing fragility due to conflicts and related crises. Agricultural development programs operate against a backdrop of volatility, uncertainty, complexity, and ambiguity.
Traditionally, the evaluation function has focused predominantly on retrospective accountability, measuring past performance against predetermined plans, objectives, and targets. While valuable, this approach often fails, in today’s rapidly changing context, to yield actionable insights and clear, impactful messages. Evaluations that assess relevance, effectiveness, and sustainability based on the conditions at the time of design can produce accurate reflections of past actions but offer limited guidance for future decision-making.
When evaluation processes rely solely on historical benchmarks, they risk overlooking emerging trends and future challenges. For example, a program designed to improve crop yields based on a specific climate scenario may become less relevant if climate patterns shift unexpectedly. Similarly, a project assessed as sustainable under current conditions might prove vulnerable under future stressors. This gap underscores the need for evaluation methodologies that are forward-looking and capable of engaging with plausible futures.
Various organizations already embed foresight into their respective practices:
- The World Food Programme (WFP) has integrated foresight-informed approaches into its Anticipatory Action programs, enabling more proactive responses to food crises.
- Organizations such as the GEF and CGIAR are exploring how to better assess long-term resilience and systemic impacts in their environmental and agricultural investments.
- FAO has recently published a report that aims to inspire strategic actions to transform agrifood systems into sustainable, resilient, and inclusive ones. This report (accessible here: https://www.fao.org/global-perspectives-studies/fofa/en/) explores three different scenarios for the future of food and agriculture, based on alternative trends for key drivers, such as income growth and distribution, population growth, technical progress in agriculture, and climate change.
Strategic foresight should be supported by a suite of accessible tools and approaches to address this challenge. While various tools and methods have been developed, practical guidance on their applicability remains limited, and many evaluators lack training in foresight methods. Utilizing the full potential of foresight in evaluation will therefore require deliberate steps to build that guidance and capacity.
In an era of unprecedented change, evaluation must evolve from a retrospective mirror to a forward-looking compass. Integrating foresight methods into evaluation processes can enhance relevance, sustainability, and systemic impact assessments, ultimately supporting programs that are resilient and adaptable in the face of uncertainty.
Looking forward to further discussions and shared learning on this important topic.
Best regards,
Serdar Bayryyev, Senior Evaluation Officer
Food and Agriculture Organization
Italy
Carlos Tarazona
Senior Evaluation Officer
FAO
Posted on 25/03/2026
Good morning colleagues, and thank you for launching this very timely discussion.
I’d like to share a recent experience from the FAO Office of Evaluation where we explicitly drew on foresight principles in the design and conduct of an evaluation.
In evaluating FAO’s work on One Health, we began with a familiar retrospective lens: how did the approach evolve, and what did FAO contribute? This analysis showed a strong trajectory—leadership over 20 years, particularly in animal health, zoonotic disease control, biosecurity, and more recently antimicrobial resistance (AMR) and pandemic preparedness.
But we quickly ran into a temporal mismatch.
One Health is not a stable field. It is being reshaped by climate change, biodiversity loss, land-use pressures, AMR, and broader food system transformation. Evaluating performance against past conditions risks producing findings that are valid—but less useful for navigating what comes next.
So the question shifted: not just "did FAO perform well?" but "is its approach fit for the futures now emerging?"
That’s where a foresight lens—informally, thinking in terms of emerging risks, system shifts, and plausible futures—added value.
It helped us reinterpret a central tension. FAO’s strengths—deep expertise in animal health, strong country platforms, and operational experience—are also its path dependencies. While FAO has adopted a broader, more holistic definition of One Health, implementation still often appears animal health-centred, with ecosystem and systems dimensions less consistently integrated.
From a forward-looking perspective, this matters. Future One Health challenges are likely to be more interconnected, not less. They will require deeper integration across sectors (animals, plants, environment, food systems) and stronger cross-sectoral coordination at country level.
One takeaway for me is that foresight can enter evaluation through existing criteria.
Retrospective evaluation tells us how we got here. A future-informed lens helps us ask whether we’re ready for what’s next.
I’d be very interested to hear how others have approached this—have you found practical ways to bring even light-touch foresight into evaluation design or interpretation?
India
Rama Rao Darapuneni
Former Director in ICAR
ICAR
Posted on 25/03/2026
Role of changing perceptions on sustainability over time
Germany
Cornelia Rietdorf
Scientific Associate
German Environment Agency
Posted on 25/03/2026
Good morning / hello to everyone and thank you for starting this interesting discussion round, Steven!
I'm not an evaluator, but I have worked in M&E / MEL / MEAL / MERL in different contexts over the past 10+ years and have seen the challenges of mostly backward-looking evaluation far too many times. I don't have much experience with foresight-informed evaluation, so here are just some general thoughts:
What makes an evaluation forward looking? For me, in a way it's creating awareness of the L in MEL / MEAL / MERL. Why do we do an evaluation? Far too often I saw that for project teams it's just a troublesome box to tick to please donors or meet project requirements. The evaluation is done in whatever way and then put aside. It was often hard work to raise awareness of the importance and potential of evaluation - to uncover what worked well, what led to actual positive change, what didn't work and why, and what might even have brought negative change - and to then use these learnings for better and improved projects, policies, strategies, measures, etc.
I'm wondering if outcome harvesting and outcome mapping are one way to increase awareness of the potential of evaluations, as the many stakeholders involved become more actively engaged in several reflection rounds around outcome mapping and outcome harvesting - which then ideally triggers an important reflection process on what might work and why, what has worked, what didn't, and what was unexpectedly positive or negative, and can in turn be used for an improved follow-up process.
So, in a way, don't good tools, arguments, and practices that really focus on the learning aspect of evaluations, and that engage all involved key stakeholders in reflection processes as much as possible, greatly support foresight in evaluation? I hope I'm not completely off track here and am very much looking forward to the discussion threads around this topic and to learning from everyone here.
Cheers,
Conny
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 23/03/2026
Welcome to From Hindsight to Foresight: How Evaluation Can Become Future-Informed Discussion. My name is Steven Lichty and I’ll be hosting this online discussion over the next five weeks. I live in Nairobi and have been working at the nexus of foresight and evaluation for over 20 years. I am looking forward to facilitating our conversations here, sharing resources, and learning from all of you.
Evaluation has long helped us understand what happened, what worked, and what did not. But many of the systems we care about most (food, agriculture, climate, ecosystems, resilience, etc.) are now shaped by accelerating uncertainty, disruption, and long-term change. In that context, looking backward is no longer enough. The question is not only whether an intervention performed well in the past, but whether it is fit for the futures now emerging.
Over the coming weeks, this forum is a place to test ideas, share examples, surface tensions, and learn across disciplines. Through shared experiences, optional readings, and honest reflection, we will explore what it looks like when evaluation starts looking forward. Not abandoning rigour, but expanding it. Not replacing the DAC criteria, but asking what criteria like relevance and sustainability really mean when the future may look nothing like the world in which a programme was initially designed.
This discussion invites evaluators, foresight practitioners, commissioners, researchers, and decision-makers into a shared space of inquiry. How can evaluation become more future-informed, more adaptive, and more useful in times of volatility? What happens when we bring foresight tools like horizon scanning, scenarios, Three Horizons, Futures Triangle, or Causal Layered Analysis into evaluation design, interpretation, and use? How can deeper epistemologies and ontologies driving critical futures thinking inform how we do evaluation?
This community includes some of the most thoughtful evaluators, commissioners, and practitioners working in food security, agriculture, and the environment. You have seen the limits of retrospective evaluation firsthand. You have also probably seen glimpses of something better. You do not need to be an expert in both fields to contribute. Practical experience, critical questions, promising cases, doubts, and provocations are all welcome.
So let’s begin there: Where have you seen the limits of retrospective evaluation in a fast-changing world? And where do you see the most promising entry points for bringing a foresight lens into evaluation practice?
I am glad you are here and I look forward to an engaging and thought-provoking discussion.