Background and Rationale
Food security, environmental, and agricultural development programmes increasingly operate in volatile, uncertain, and complex contexts. Climate disruption, ecosystem degradation, shifting geopolitical conditions, and cascading crises are no longer background noise. They shape the environment in which these programmes are designed and implemented. Evaluation in these sectors often stays focused on retrospective accountability, measuring past performance against fixed objectives, even as operating conditions keep shifting.
This temporal mismatch has practical consequences. When evaluations judge relevance, effectiveness, and sustainability against the conditions that existed at programme design, they can produce findings that are accurate about the past but less useful for guiding future decisions and navigating change. Theory of Change processes often mirror the same limitation, resting on a snapshot of the present that does not account for the plausible futures that will determine whether current investments ultimately succeed or fail.
Strategic foresight brings forward-looking approaches that can strengthen evaluation practice. Methods such as horizon scanning, scenario planning, Futures Triangle, the Three Horizons framework, and Causal Layered Analysis help evaluators look beyond past performance and consider how programmes might perform under different future conditions. These foresight tools and frameworks can enrich evaluation at every stage, from scoping and design through to learning and use. When used alongside evaluation, they support more anticipatory governance, enabling decisions that draw on evidence, while remaining attentive to uncertainty and long-term change.
Momentum for integrating foresight and evaluation is already visible across various sectors. WFP's Anticipatory Action programmes, for example, have already introduced foresight-informed approaches into monitoring and evaluation frameworks. At the same time, organisations such as the GEF, CGIAR, and FAO are exploring how evaluation can better assess long-term resilience and systemic impacts in environment and agriculture investments. These developments are also prompting broader reflection within the evaluation community, including renewed interest in how the OECD-DAC criteria might evolve, shifting from measuring alignment with past priorities toward assessing prospective relevance and robustness across plausible future scenarios. Despite this momentum, practical guidance for evaluators remains limited. Few evaluators have received formal exposure to foresight methods, and foresight practitioners are rarely trained in evaluation. The tools, case examples, and community of practice needed to connect these fields are not yet well established.
Discussion Purpose
This online discussion will examine how foresight methods can be integrated into evaluation practice in food security, environmental, and agricultural contexts. Drawing on practitioners’ experiences, real-world examples, and optional readings, the discussion will highlight practical insights that evaluators can use to make their work responsive to uncertainty and more useful for forward-looking decision-making.
Discussion Objectives
- To introduce key foresight concepts and tools, including horizon scanning, scenario planning, Causal Layered Analysis, and the Three Horizons framework, and to explore how they can be applied within evaluation processes.
- To examine how foresight-informed evaluation can strengthen assessments of relevance, sustainability, and systemic impact in food security, environmental, and agricultural programmes.
- To share concrete examples of foresight and evaluation integration from across the sector, including anticipatory action, climate resilience programming, and theory of change processes.
- To identify practical entry points for evaluators to begin incorporating foresight perspectives into their work, regardless of institutional context or resource constraints.
Guiding Questions
- In contexts of climate uncertainty, rapid environmental change, and shifting geopolitical realities, where have you seen the limits of retrospective evaluation? How has this affected the use of findings?
- What foresight tools or methods have you encountered in your evaluation practice? What made them useful or difficult to apply? What foresight tools, if any, have you used personally?
- How might interpreting the DAC criteria (such as relevance and sustainability) through a foresight lens change what we measure, how we measure it, and how we make recommendations?
- Where do you see opportunities for integrating foresight and evaluation in food security, environmental, and agricultural contexts?
- What skills, resources, and institutional changes would be needed to make foresight a regular part of evaluation design and commissioning?
Discussion Readings
Week 1: Introductory discussion on the theme and exploration of the guiding questions.
Week 2: Examine transformative foresight for the transformational imperative, via a forthcoming article in the Journal of MultiDisciplinary Evaluation, edited by Scott Chaplowe.
Week 3: Discuss “Fusing foresight and futures thinking for a new transformative evaluation paradigm” by Rose Thompson Coon, Katri Vataja, and Pinja Parkkonen (in New Directions for Evaluation, Summer 2024, Issue 183, pages 91-101)
Week 4: Explore Quality Criteria for Food Systems Foresight in Africa: A practitioner’s guide for commissioning, facilitating and evaluating foresight, a recent guide written by Katindi Sivi and launched by the Forum for Agricultural Research in Africa, in partnership with Foresight4Food, University of Oxford, and the International Development Research Centre.
This discussion is now closed. Please contact info@evalforearth.org for any further information.
United States of America
Tina Tordjman-Nebe
Evaluation Advisory
UNDP Independent Evaluation Office
Posted on 04/05/2026
Dear Steven,
Dear EvalforEarth Members,
I would like to share a blog I recently authored, initially published on the United Nations Development Programme (UNDP) website:
“Not Just Looking Back: Why Evaluation Needs a Forward View”
Available here: https://www.undp.org/evaluation/blog/not-just-looking-back
The reflections in this piece align closely with the ongoing discussion.
Kenya
Eddah Kanini (Board member: IDEAS, AGDEN & MEPAK; former Board member, AfrEA 2021-2026)
Monitoring, Evaluation and Gender Consultant/Trainer
Posted on 03/05/2026
The topic is very relevant, timely and thought-provoking.
In the context of climate uncertainty and its implications for the use of evaluation findings, it is evident that retrospective evaluation is a weak predictor of future outcomes due to climate variability. Methodologically, static measurements are designed for relatively stable environments and no longer fit the dynamic and volatile ecosystems we are experiencing today.
For example, in food security, the more we measure project success based on yield increases, the more we risk overlooking critical factors such as soil depletion trends, climate shocks, and indigenous community mobility patterns. As a result, findings may quickly become underutilised. It is therefore clear that yesterday’s solutions cannot remain valid for tomorrow’s problems.
Foresight Tools and Methods
Some foresight tools and methods I have encountered include Outcome Harvesting, the 3 Horizons Framework, systems mapping, scenario planning, horizon scanning, and trend analysis. These approaches are valuable because they capture uncertainty and expand thinking beyond linear outcomes, thereby supporting adaptability in programmes.
However, their application comes with challenges because they require time, skills, and facilitation capacity, which many evaluators may not yet possess. Additionally, some stakeholders perceive these approaches as less rigorous. Another constraint is that they are not often embedded in donor terms of reference, which discourages evaluators from applying them.
DAC Criteria through a Foresight Lens
Reinterpreting the DAC criteria through a foresight lens is important. For instance, we need to move from a static relevance to dynamic relevance. Traditionally, we ask whether an intervention was aligned at the design stage; however, we should also be asking whether it will remain relevant under future scenarios.
Similarly, sustainability is often framed as the continuation of benefits after programme completion. In today’s volatile context, this needs to shift toward assessing whether systems can adapt, absorb shocks, and transform. This includes examining resilience and adaptive capacity at both system and community levels.
Opportunities in Food Security, Environmental, and Agricultural Sectors
These sectors are inherently future-facing. In Kenya, since childhood, we were often reminded, “Huu ni uti wa mgongo wa uchumi”, meaning this is the backbone of our economy. Importantly, Indigenous knowledge systems already function as foresight systems.
There are significant opportunities to integrate foresight into areas such as agroecology, climate-resilient agriculture, pastoralist mobility systems, and early warning systems. Indigenous forecasting methods, such as interpreting weather patterns, seasonal cycles, animal behaviour, and land use patterns, offer valuable insights that can strengthen evaluation practice.
What Needs to Change to Integrate Foresight into Evaluation
To make foresight a regular part of evaluation, we need to strengthen skills in systems thinking, futures literacy, facilitation of uncertainty, and multidisciplinary approaches.
At the institutional level, there is a need for flexible terms of reference, adaptive and real-time evaluation designs, and learning-focused commissioning processes that allow for cumulative learning over time.
Overall, we need to shift from a narrow focus on accountability to a balanced focus on learning and anticipation, supported by investments in digital data tools.
My conclusion
I would conclude by urging the adoption of indigenous scenario work, that is, planning for multiple possible futures rather than assuming a single predictable path. This involves asking critical questions such as: What might happen? What could change? What if things go differently?
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 30/04/2026
Thank you to everyone who has contributed to this rich and thoughtful discussion over the past several weeks. I have deeply appreciated the range of perspectives shared—from Kenya, Benin, Southern Africa, South Asia, the Pacific, and beyond—and the way contributors have grounded the foresight–evaluation nexus in real-world questions of climate vulnerability, food systems, environmental governance, agricultural transformation, adaptive management, and community ownership.
A strong thread running through the discussion is that future-informed evaluation is not simply about adding foresight tools to existing evaluation practice. It is about rethinking timing, depth, and intent. How do we design evaluations that do not only ask what worked, but whether an intervention remains relevant, resilient, just, and viable under emerging conditions? How do we move from static baselines to dynamic reference points, from endline judgement to continuous sense-making, and from retrospective accountability to anticipatory decision support? I’ll explore more of these threads in the forthcoming discussion summary.
As the formal discussion closes today, I’d also like to ask: what might be the next step?
One possible follow-up could be a short, practical three-hour masterclass on integrating foresight and futures thinking into evaluation, specifically for food security, environmental, and agricultural development programmes. This could introduce core concepts and tools—such as horizon scanning, Causal Layered Analysis, scenarios, Three Horizons, and wind-tunnelling—while focusing on how they can be embedded into evaluation questions, theories of change, OECD-DAC criteria, adaptive learning, and recommendations.
I’d be very interested to know whether those who participated in this discussion, or colleagues in your organisations and networks, would find such a masterclass useful.
As a small closing contribution, I’m also attaching a pre-publication version of an article I co-authored on transformative foresight and the transformational imperative. In some ways, this brings us full circle back to the Chaplowe and Mukoma article I posted at the beginning of the discussion on 28 March. My article argues that evaluation must move beyond business-as-usual by engaging futures thinking not as a technical add-on, but as part of a deeper reorientation toward anticipatory, justice-oriented, and transformation-focused practice. Please treat it as a pre-publication draft and do not cite, quote, or distribute it without permission from the authors.
Thank you again for the generosity, insight, and practical wisdom you have brought to this discussion.
Canada
Rhode Early Charles
Posted on 30/04/2026
Thank you, Steven, for your thoughtful facilitation and for bringing together such a rich exchange of ideas. The discussion has been insightful, practical, and deeply grounded in real-world challenges. I particularly appreciate how it has opened space for new thinking on foresight, evaluation, and adaptive practice. I look forward to the next steps, including the summary and potential masterclass.
Ghana
Ishmael Kwame Agbomlaku
Manager
Integrated Institute of professional, LA plage Meta Verse.
Posted on 29/04/2026
Evaluation must evolve from a backward-looking exercise into a forward-looking decision-making tool. While hindsight helps us understand what worked and what did not, foresight allows us to anticipate risks, adapt to uncertainty, and design more resilient interventions.
One way to achieve this is by integrating real-time data systems, predictive analytics, and scenario planning into evaluation processes. For example, in climate-sensitive sectors like agriculture or water management, evaluators can use historical data alongside climate projections to guide future programming rather than only assess past outcomes.
Additionally, adopting adaptive and developmental evaluation approaches enables continuous learning and flexibility. This ensures that programs are not only evaluated at the end but are continuously improved based on emerging evidence.
Stakeholder engagement is also critical. Incorporating local knowledge and community perspectives strengthens foresight by grounding future scenarios in real-world contexts.
In essence, evaluation should shift from answering “What happened?” to addressing “What is likely to happen, and how can we prepare?” This transformation makes evaluation more strategic, responsive, and impactful in tackling complex global challenges. As an M&E practitioner, I see this shift as essential for improving development outcomes, especially in vulnerable contexts like Ghana where uncertainty is increasing.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 29/04/2026
Thanks, Ishmael. I strongly agree with your central point that evaluation needs to become more of a forward-looking decision-making tool, not only a mechanism for judging past performance. Hindsight remains essential, but it is not sufficient when programmes are operating in systems shaped by climate volatility, political uncertainty, technological change, and shifting community needs.
Your examples from agriculture and water management are especially relevant. In those sectors, historical performance data can tell us what has happened, but climate projections, scenario planning, and real-time monitoring help us ask a more strategic question like what is likely to remain viable under different future conditions?
I also appreciate your link to adaptive and developmental evaluation. For me, this is where foresight and evaluation become mutually reinforcing. Foresight helps identify emerging risks, assumptions, and alternative pathways, while adaptive evaluation helps programmes learn and adjust as those futures begin to unfold. The challenge is to ensure that predictive analytics and real-time data do not become purely technical exercises, but are combined with participatory sense-making, local knowledge, and professional judgement. That is what turns information into useful decisions.
Benin
Expédit TCHIGO
University of Parakou
Posted on 29/04/2026
In Benin, the policy evaluation system is currently undergoing a significant transformation. In this context, a crucial question that urgently arises is how to effectively obtain real-time data. Indeed, while many evaluation processes can draw on tools commonly used in foresight (risk registers, adaptive management loops, strategic questioning, and scenario planning), these instruments, although valuable, do not fully substitute for rigorous and professional foresight practice. Consequently, it becomes essential to adopt a more integrated approach. Furthermore, the strong emphasis placed on shared ownership throughout the entire process, from design to implementation, emerges as a key strength, reinforcing both the relevance and sustainability of evaluation outcomes.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 29/04/2026
Expédit, thank you for this thoughtful reflection. And I have to say, your mention of Benin brought back a real blast from the past for me...Benin was the first African country I worked in back in 1997. I was later in Parakou in 1998, and I still have very fond memories of that time.
Your point about real-time data feels especially important. In many evaluation systems, we still depend too heavily on delayed evidence, even when the policy environment is moving quickly. But I also appreciate your caution that tools such as risk registers, adaptive loops, strategic questioning, and scenario planning are not the same as professional foresight practice. They can support foresight, but they do not replace the deeper work of interpreting weak signals, surfacing assumptions, exploring alternative futures, and helping decision-makers act under uncertainty.
I also strongly agree with your emphasis on shared ownership. In a transforming policy evaluation system, foresight-informed evaluation will only be useful if it is not imposed as an external technical exercise, but co-owned from design through implementation. That is what gives the process legitimacy, relevance, and the possibility of real uptake.
United States of America
Stephanie Jill Hodge
Posted on 29/04/2026
From Hindsight to Foresight: How Evaluation Becomes Useful Again
Reflections from working inside systems that don’t hold still long enough to be measured
There is a quiet tension at the centre of most evaluation work, and if you’ve spent any time inside complex environmental or climate programmes, you feel it almost immediately. We are trained—carefully, rigorously—to look backward. To assess what was delivered, what worked, what didn’t, and whether it aligned with what was originally promised. And yet the systems we are working in—food systems, climate adaptation, biodiversity governance, circular economies—are not standing still long enough for that backward glance to remain relevant.
In my own work across Global Environment Facility-linked portfolios and parallel systems, I have watched this tension play out repeatedly. The formal questions—relevance, effectiveness, efficiency, sustainability—remain constant. But the world they are meant to interpret keeps shifting beneath them.
Take the United Nations Environment Programme / GEF ISLANDS programme across fourteen Pacific SIDS. On paper, it was doing what it said it would do. Infrastructure was being installed—systems for managing persistent organic pollutants, mercury, e-waste, used oil. Policies were drafted. Coordination mechanisms were established. If you stayed within the traditional frame, the evaluation could confirm delivery.
But when you step back and look at the system as it actually operates, the question changes. It becomes less about whether outputs were delivered, and more about whether the system being built will still function when the conditions change—which they inevitably will. Tourism fluctuates. Fiscal space contracts. Climate shocks disrupt infrastructure and supply chains. In that context, the real question decision-makers had was not “did this work?” but “will this hold?”
That question sits outside retrospective evaluation unless you deliberately bring it in.
The same pattern appeared in the circular economy work with the Asian Development Bank and GEF across Southeast Asia. We were looking at Extended Producer Responsibility-type systems—policy readiness, institutional arrangements, pilot implementation. All the right ingredients. But again, the fragility was not in the design. It was in the future conditions under which that design would have to operate. Commodity price fluctuations, regulatory enforcement cycles, political turnover—these are not edge cases. They are the operating environment. And yet they rarely sit at the centre of evaluation design.
Even in earlier GEF project design and evaluation work, including PIF-level advisory aligned with the Food and Agriculture Organization, the Theory of Change followed a familiar and comforting logic: outputs lead to capacity, capacity leads to improved management, improved management leads to environmental outcomes. It is clean. It is logical. It is also, in most cases, incomplete. Because it assumes a relatively stable enabling environment. It does not ask, in any structured way, under what future conditions that chain holds—and where it breaks.
Across all of this work, the same limitation keeps surfacing. Retrospective evaluation is very good at validating what has been delivered. It is much weaker at assessing what will endure. And it is largely silent on what is about to fail.
This is where foresight comes in—not as an abstract add-on, but as a practical necessity. And in my experience, it becomes most powerful not at the end of a programme, but in the middle of it—during course correction, during mid-term reviews, in those messy, uncomfortable moments where systems are clearly not behaving as expected but have not yet fully failed.
That is where I now do most of this work.
In a mid-term review, the temptation is always to stabilise the narrative. To explain variance. To adjust ratings. To recommend incremental fixes. But if you treat a mid-term review as a static checkpoint, you miss its real value. A mid-term review is the last credible moment to change direction before a programme locks itself into its own logic.
So the way I approach it is different.
I start by mapping the system not as a logframe, but as a pathway. Evidence to decision. Decision to pipeline. Pipeline to finance. Finance to implementation. Implementation to outcomes. And then I ask a simple question at each step: where is this moving, and where is it stuck?
Not in theory. In practice.
Where are decisions not being taken, even though evidence exists? Where are project concepts sitting without moving into investment-ready pipelines? Where is finance not flowing, even though priorities are clear? Where is implementation breaking down because legitimacy—particularly at the community level—is not secured?
This is not traditional evaluation terrain. But it is where programmes actually succeed or fail.
Once you see the system this way, foresight enters naturally. Because the next question is not “what has happened?” but “what happens next if nothing changes?” And then, “what happens under different plausible futures?”
In practical terms, that means stress-testing the system. Not through elaborate models, but through structured questioning. What happens to this financing model if public budgets contract? What happens to this delivery mechanism under extreme weather disruption? What happens to this policy if enforcement weakens after political change? You do not need perfect scenarios. You need plausible ones.
And then you bring that back into the evaluation.
Recommendations stop being generic—“strengthen capacity,” “improve coordination”—and become directional. Shift this part of the pipeline because it will not hold under foreseeable conditions. Rebalance this financing structure because it is too exposed to a single risk. Invest in this relationship or legitimacy mechanism because without it, implementation will stall regardless of technical design.
In other words, evaluation becomes less about judging the past and more about redirecting the future.
This is exactly the space I am working in now in the PNG Country Package context. Here, the issue is not a lack of activity. It is that movement along the system is uneven and often invisible. Decisions do not consistently translate into pipelines. Pipelines do not consistently translate into finance. Finance does not consistently translate into implementation at scale. And underlying all of it is a critical factor that traditional evaluation often underplays: legitimacy, particularly in a context where customary land ownership defines what is possible.
If you look at this through a retrospective lens, you will produce a perfectly reasonable evaluation that does very little to change outcomes. If you look at it through a forward lens—tracking where the system is likely to stall next—you begin to see where intervention actually matters.
This is not about abandoning the OECD-DAC criteria. It is about stretching them. Relevance becomes prospective—will this remain relevant under plausible futures? Sustainability becomes conditional—under what conditions does this hold? Effectiveness becomes dynamic—not just whether outcomes were achieved, but whether the system is capable of continuing to produce them.
And perhaps most importantly, evaluation shifts function. It stops being primarily a reporting mechanism and becomes a decision-support tool.
That sounds like a small shift. It is not. It requires evaluators to be more explicit about uncertainty, more engaged with system dynamics, and more willing to step slightly outside the comfort zone of purely retrospective judgment. It also requires institutions to accept that the most useful evaluation is not always the most certain one.
If I am honest, many of the most valuable insights in my work have come from the moments where we did exactly that—where we stopped asking “what was?” and started asking “what if?” Where we followed the system forward instead of backward. Where we treated uncertainty not as something to be minimised, but as something to be worked with.
That is where evaluation becomes useful again.
Because in a world that is no longer stable—and is not going to be—the question is not whether we can perfectly understand the past.
It is whether we can act, intelligently and in time, in the face of what comes next.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 29/04/2026
Stephanie, thank you for this deep reflection. Your comment that stays with me is that evaluation is often strongest at validating what has been delivered, but much weaker at assessing what will endure, and almost silent on what is about to fail.
I think this names one of the deepest professional discomforts in evaluation. We often treat uncertainty as a threat to rigour, when in complex climate, environmental, and circular economy systems, ignoring uncertainty may be the greater methodological weakness. A beautifully evidenced retrospective judgement can still be strategically useless if it cannot tell decision-makers where the system is becoming brittle.
Your framing of the mid-term review as “the last credible moment to change direction” is especially poignant. Too often, mid-term reviews become soft accountability exercises: adjust the ratings, tidy the logframe, recommend more coordination. But if we took them seriously as foresight moments, they could become strategic inflection points where programmes are stress-tested before failure becomes locked in.
I saw this very clearly in a recent UNICEF strategic foresight evaluation I conducted...where we had to stretch the OECD-DAC criteria beyond their usual retrospective orientation. Relevance became not only “is this aligned now?” but “will this remain relevant under plausible future conditions?” Coherence became about future institutional fit. Effectiveness had to consider adaptive capacity, not just achieved results. Sustainability became explicitly conditional, i.e., under what political, financial, organisational, and social conditions will this model hold?
For me, the provocation is that perhaps the most useful evaluation is not the one that gives the most confident judgement about the past, but the one that most honestly reveals where the future is likely to break the programme’s assumptions. In that sense, future-informed evaluation does not weaken evaluative judgement...it makes it braver (might we even speak of “braver evaluation”?).
Zimbabwe
Wilbert Marimira
MEAL Specialist
CARE International
Posted on 29/04/2026
Reflections from my experience in complex adaptation contexts:
Drawing from my work on community‑ and nature‑based adaptation initiatives in Southern Africa and beyond, I believe evaluators sometimes overstate the absence of foresight in evaluation. The challenge is rarely the lack of tools, but rather timing, depth, and intent. Climate disruption, ecosystem degradation, shifting geopolitical conditions, and cascading crises are no longer background noise; they actively shape development pathways and community decisions in real time. Yet foresight is often introduced late in the evaluation cycle, applied superficially, or treated as a technical add‑on rather than a strategic lens. When evaluation is not explicitly designed to engage with uncertainty, power dynamics, and interacting risks from the outset, it struggles to reflect the true complexity of adaptation systems.
In practice, this becomes clear when evaluating climate and nature‑based adaptation interventions. An evaluation that only looks backwards is like navigating by a map of where you’ve been, in terrain that is constantly reshaping itself. Integrating foresight means adapting tools we already know how to use, such as scenario planning, participatory approaches, and forward‑looking Theories of Change, to ask not only what has worked, but what may work under different future conditions. Embedding adaptive management, with regular feedback loops and real‑time data, allows evaluations to remain relevant as contexts shift. Most critically, community perspectives, through co‑evaluation, anchor foresight in lived realities, surfacing local knowledge about risks, trade‑offs, and opportunities. When intent is truly future‑informed, evaluation moves beyond the rear‑view mirror to act as a compass and horizon scanner, helping decision‑makers identify adaptation pathways that are resilient to cascading shocks and ultimately more just, sustainable, and humane.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 29/04/2026
Thanks, Wilbert. I appreciate the distinction you make between the absence of foresight and the underuse of foresight with sufficient timing, depth, and intent. That feels right, as many evaluations already contain fragments of future thinking, but they are often introduced too late or held too lightly to reshape the evaluative frame itself.
Your adaptation example is powerful because climate and nature-based systems expose the limits of backward-looking judgement. In those contexts, the question is not only whether an intervention delivered results, but whether it strengthened the capacity of communities and ecosystems to navigate futures that are unstable, uneven, and politically contested.
I also like your image of evaluation as both compass and horizon scanner. It raises an important challenge for us as evaluators...are we simply documenting adaptation after the fact, or are we helping communities, implementers, and decision-makers recognise which pathways remain viable as risks cascade? For me, that is where future-informed evaluation becomes not just methodological, but ethical. It asks evaluation to serve resilience, justice, and agency in the face of futures that are already arriving.
India
Deepak Sharma
Director
EQUALITY EMPOWERMENT FOUNDATION
Posted on 27/04/2026
Rather than a journey from hindsight to foresight, it can be a collaboration between the two. Hindsight and foresight need an inclusive list of questions framed to capture the hindsight aspects of evaluations, normally found in relevance, coherence, effectiveness, efficiency, and impact, while foresight is mostly captured around sustainability. In my experience, we have tried to incorporate questions related to both hindsight and foresight linked to all six criteria and to overarching, cross-cutting issues, so that a matrix is developed.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 29/04/2026
Thanks Deepak, your point about using an inclusive matrix is especially useful. It suggests that foresight should not sit at the end of an evaluation as an add-on, but should be woven through relevance, coherence, effectiveness, efficiency, impact, sustainability, and cross-cutting issues. In that sense, each criterion can ask two questions: What have we learned from past and present performance? And what does this imply for future relevance, resilience, adaptation, and strategic positioning?
For me, this is where future-informed evaluation becomes practical. It helps evaluators design better questions, not just use different tools.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 26/04/2026
Building on my earlier reflection, I think the case for future-informed evaluation becomes even more compelling when we look at it through an East African lens. Across the region, programmes are not just operating in “complex contexts”; they are operating in structurally shifting systems. Climate variability, mobility (including refugee dynamics), demographic pressure, and decentralised governance are not external risks; they are core features of the system itself. In such contexts, the limitation of retrospective evaluation is not only that it looks backward but also that it often assumes a level of system stability that simply does not exist.
For example:
What this means in practice is that programme performance becomes highly sensitive to system shifts, making static evaluation benchmarks less meaningful. Taking this further into the Kenyan context, I’ve seen a recurring pattern: programmes are often designed with relatively fixed theories of change, but are implemented within highly dynamic county-level ecosystems, politically, institutionally, and socially. By the time evaluation assesses “effectiveness” or “sustainability,” the underlying assumptions (on which those criteria are based) may no longer hold. This creates a subtle but important risk: we end up evaluating how well a programme performed in a past version of the system, rather than how well it is positioned for the system that is emerging.
To respond to this, I think future-informed evaluation in Kenya (and similar contexts) needs to move toward a few deliberate shifts:
Baselines should not be treated as fixed anchors, but revisited as systems evolve
Particularly at county level, where political economy and implementation realities shift rapidly
Recognising that outcomes are increasingly co-produced by multiple interacting system actors
Not as separate analyses, but as core to how we interpret findings
Ultimately, in contexts like Kenya, future-informed evaluation is not a methodological upgrade; it is a practical necessity for relevance. It allows evaluation to answer a slightly different but more useful question: not just “Did this work?” but “Will this continue to work, and under what conditions?”
I would be interested to hear from others working in devolved or climate-vulnerable systems: how are you adapting evaluation approaches to account for sub-national variability and rapidly shifting implementation contexts?
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 29/04/2026
Thanks, Dennis. I really appreciate how you ground this in the lived realities of Kenya (my home!) and the wider East African region. Your point that drought, mobility, refugee dynamics, demographic pressure, and devolution are not “contextual risks” but core features of the operating system is exactly why future-informed evaluation matters.
What stands out for me is your observation that many programmes are evaluated against a past version of the system....that is a powerful way to name the problem. If the assumptions beneath a theory of change have shifted, then judging effectiveness or sustainability against those original assumptions can produce technically valid but strategically misleading findings.
I also strongly agree with your shift from static baselines to dynamic reference points, and from endline judgement to continuous sense-making. In devolved and climate-vulnerable contexts, evaluation has to become more anticipatory, politically informed, and adaptive. It should help actors understand not only whether something worked, but whether it remains viable as conditions change.
For me, your reflection reinforces that future-informed evaluation is not about adding foresight tools for their own sake. It is about improving the relevance, timing, and usefulness of evaluative judgement in systems that are already moving.
Kenya
Dennis Ngumi Wangombe
MEL Specialist
CHRIPS
Posted on 26/04/2026
One reflection I would add to this discussion is that the “hindsight vs foresight” framing is useful, but perhaps still incomplete. From practice, the deeper issue is not only that evaluation is retrospective, but that it is often temporally rigid in systems that are inherently adaptive. In many of the programmes I’ve worked on, particularly in fragile and climate-affected contexts, you can have an intervention that is highly “effective” at midline, but fundamentally misaligned with the direction the system is moving. By the time endline evaluation happens, the system has shifted, and the findings, while technically valid, have already lost decision-making value. This aligns with what the paper describes as a temporal mismatch between evaluation and reality. What this suggests is that integrating foresight is not just about adding tools like scenario planning or horizon scanning. It is about reconfiguring when and how evaluative judgement happens.
A few practical shifts that I have found useful:
(e.g., linking MEL systems with real-time decision points, not just reporting milestones)
(this helps avoid reinforcing linear assumptions in non-linear systems)
In my experience, this combination is underutilized: quantitative trend analysis helps anchor plausibility, while foresight expands the space of what we consider possible.
I also want to echo a point raised earlier in the discussion: the constraint is not primarily methodological, but institutional and cultural. As long as evaluation is commissioned primarily for accountability, even the most sophisticated foresight tools risk being absorbed into compliance logic. So perhaps the shift is less about moving from hindsight to foresight, and more about moving from: evaluation as judgement → evaluation as navigation under uncertainty.
Maybe also pose this, how do we redesign evaluation commissioning and incentives so that future-informed insights are not just produced, but actually used in decision-making cycles?
Kenya
Gordon Wanzare
MEL/Project Management Specialist
Posted on 25/04/2026
A very thought-provoking discussion!
We may be overstating the absence of foresight in evaluation. The issue is not tools, but timing, depth, and intent.
First, Causal Layered Analysis (CLA). Most evaluations remain at litany and systems levels, rarely interrogating underlying worldviews and deep story. Yet foresight lives precisely there. If we do not challenge foundational assumptions—such as linear planning in volatile systems—evaluation, however sophisticated, simply reinforces them.
Second, risk registers and CLA (Collaborating, Learning, Adapting). These are ubiquitous and often well-executed, but largely within compliance boundaries—managing known risks and enabling incremental adaptation. They seldom question whether the plan itself still holds. Transformative value emerges only when learning loops move beyond adjustment to reframing assumptions and goals.
Third, strategic thinking. The core strategic questions - where have we come from? where are we now? where are we going? how do we get there? how do we know we arrived there? - already embed foresight, but evaluation remains anchored in the past (where have we come from?), present (where are we now? - baseline), and endpoints (how do we know we have arrived there?) while the critical foresight (where are we going?) and the bridge (how do we get there?) remain advisory. Armed with decision-grade data and insights, evaluators should strongly influence future-informed decision making.
Fourth, OECD-DAC evaluation criteria are inherently forward-looking yet applied ex-post - particularly the relevance, impact, and sustainability criteria. If rigorously embedded at design stage—through scenario stress-testing—they shift evaluation from audit to anticipatory governance, from quality control to quality assurance!
The problem is not absence of foresight, but its containment. Until evaluation consistently challenges assumptions early and in real time, we will continue to practice foresight in form, but hindsight in function.
Gordon
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/04/2026
Gordon, I think this is exactly the right provocation. I agree that foresight is not entirely absent from evaluation...the deeper issue is that it is often present only in a limited, procedural, or retrospective form.
The distinction you make around timing, depth, and intent is critical. Many evaluation processes may use tools that look adjacent to foresight (e.g., risk registers, adaptive management loops, strategic questions, even scenario language), but this is not the same as foresight being practised professionally. Foresight is not only a toolkit....it is also a disciplined way of sensing change, reading weak signals, interrogating assumptions, judging when a system is approaching a threshold, and understanding the deeper narratives and power dynamics shaping what futures are considered possible or desirable.
That is where the professional insight of a foresight practitioner matters. Tools can help structure conversation, but they do not automatically generate anticipatory intelligence. Used superficially, they can simply extend existing planning logics. Used with depth, they can reveal when the frame itself is wrong, when adaptation is no longer enough, and when evaluation needs to support strategic reframing rather than incremental improvement.
So I would fully agree...the issue is not the total absence of foresight, but its containment. The challenge is to move from foresight as a set of occasional methods to foresight as a professional evaluative capability via being embedded early, used in real time, and directed toward shaping future-informed decisions rather than merely validating past performance.
Ethiopia
Hailu Negu Bedhane
Cementing Engineer
Ethiopian Electric Power
Posted on 24/04/2026
Background and Rationale (East African Context)
Food security, environmental sustainability, and agricultural development programmes across Ethiopia, Kenya, and Tanzania are increasingly operating within conditions defined by systemic uncertainty. Climate variability—manifested through recurrent droughts and erratic rainfall—alongside land degradation, demographic pressures, and evolving geopolitical dynamics, has moved from being a peripheral concern to a central determinant of programme performance.
Despite this evolving context, evaluation practices in these sectors remain predominantly retrospective. They continue to emphasize accountability against fixed, pre-defined objectives, often established under assumptions that no longer hold. This creates a significant temporal misalignment:
This disconnect has tangible implications. For instance:
As a result, evaluation findings may accurately describe past performance but offer limited value for informing future decisions in dynamic environments.
At the same time, leading organizations such as World Food Programme, Food and Agriculture Organization, and CGIAR are increasingly incorporating foresight-oriented approaches, particularly within resilience-building and anticipatory action frameworks.
However, across East Africa:
This discussion is intended to address these gaps by exploring how foresight can be systematically integrated into evaluation practice.
Week 1: Understanding the Limitations of Retrospective Evaluation
Focus
To establish a foundational understanding of why conventional evaluation approaches are inadequate in volatile and rapidly changing environments.
East African Perspective
Within the region, retrospective evaluations frequently:
Illustrative Examples
Core Insight
Retrospective evaluation effectively answers:
“To what extent were planned objectives achieved?”
However, it fails to address the more critical question:
“Were the original assumptions and plans still valid under changing conditions?”
Discussion Emphasis
Week 2: Transformative Foresight in Agricultural and Food Systems
Focus
To examine how foresight approaches enable transformational change rather than incremental improvements.
East African Perspective
Agricultural systems across the region are undergoing structural transitions characterized by:
Application of Foresight
Foresight methodologies can support:
Illustrative Example
Week 3: Advancing Toward a Transformative Evaluation Paradigm
Focus
To explore how integrating foresight into evaluation can create a more adaptive, future-oriented paradigm.
East African Perspective
Evaluation systems must evolve to address:
Implications for Evaluation Criteria
Applying a foresight perspective reshapes traditional evaluation dimensions:
Moves beyond alignment with past needs toward alignment with anticipated future risks and opportunities
Extends beyond continuity after funding to include resilience under future shocks and uncertainties
Expands from measuring output delivery to assessing adaptability and responsiveness to change
Illustrative Example
In a food processing facility:
Core Insight
Evaluation evolves into:
A mechanism for adaptive management and strategic learning, rather than solely a tool for accountability
Week 4: Operationalizing Foresight within Evaluation Practice
Focus
To translate conceptual frameworks into practical tools and methodologies applicable in real-world contexts.
Key Tools and Their Application in East Africa
1. Horizon Scanning
Systematic monitoring of emerging trends, including climate patterns, market dynamics, and policy changes
2. Scenario Planning
Development of multiple plausible future scenarios, such as:
3. Three Horizons Framework
4. Causal Layered Analysis
Multi-level examination of challenges:
Regional Application Areas
Foresight-informed evaluation can be applied to:
Discussion Objectives (Contextualized)
Guiding Questions (East Africa Focus)
Conclusion
Within East Africa, integrating foresight into evaluation is no longer optional—it is a practical necessity.
In sectors defined by uncertainty:
The future effectiveness of evaluation in Ethiopia and across East Africa will depend on its capacity to guide decisions proactively—anticipating challenges before they materialize, rather than reacting after the fact.
Ethiopia
Hailu Negu Bedhane
Cementing Engineer
Ethiopian Electric Power
Posted on 24/04/2026
From Hindsight to Foresight: Reframing Evaluation as a Future-Informed Strategic Tool
An Ethiopian and East African Perspective
1. Executive Context
Across Ethiopia and the broader East African region, evaluation practices remain predominantly retrospective. Institutions—ranging from public enterprises such as Ethiopian Electric Power to manufacturing industries, food processing companies, and development programs—continue to rely heavily on post-event assessments that diagnose past failures but rarely shape future decisions in a meaningful way.
While such hindsight-driven approaches provide accountability and documentation, they fall short of enabling anticipatory governance. In environments characterized by operational volatility, supply chain uncertainty, and infrastructure constraints, evaluation must evolve from a record-keeping exercise into a forward-looking decision system.
2. Conceptual Shift: From Retrospective Analysis to Predictive Insight
Traditional evaluation frameworks are anchored in:
These approaches, though necessary, are inherently reactive. They identify deviations after they have already imposed financial, operational, or reputational costs.
A future-informed evaluation paradigm, by contrast, emphasizes:
This transition represents a shift from “What happened?” to “What is likely to happen—and how should we respond now?”
3. Strategic Relevance in the Ethiopian Context
3.1 Infrastructure and Energy Development
Large-scale initiatives in Ethiopia—particularly within organizations like Ethiopian Electric Power—are marked by extended timelines, technical complexity, and dependency on external expertise. Recurring challenges such as drilling inefficiencies, procurement delays, and coordination gaps are frequently documented but insufficiently internalized.
A foresight-oriented evaluation model would enable:
3.2 Manufacturing and Industrial Operations
Within manufacturing environments—such as plastic pipe production—quality assurance systems often function as end-point filters rather than proactive control mechanisms.
Retrospective evaluation typically identifies:
However, a future-informed approach would:
This transformation is critical for enhancing operational efficiency, reducing waste, and maintaining consistent product standards.
3.3 Development Programs and Public Sector Initiatives
In countries such as Ethiopia, Kenya, and Tanzania, evaluation systems within donor-funded and public programs are frequently compliance-driven. Reports are produced to satisfy external requirements rather than to inform internal strategic adaptation.
This results in:
3.4 Food Sector and Agro-Processing Systems
The food sector—encompassing agriculture, agro-processing, and distribution—is one of the most critical yet vulnerable systems in Ethiopia and across East Africa. Evaluation practices in this sector are typically reactive, focusing on post-harvest losses, food safety incidents, or market shortages after they occur.
Key challenges include:
A foresight-driven evaluation approach would enable:
For example, instead of reacting to grain spoilage or dairy contamination, processors can implement real-time monitoring of temperature, humidity, and hygiene indicators to prevent losses before they occur.
4. Structural Constraints to Forward-Looking Evaluation
Several systemic barriers hinder the transition toward foresight-driven evaluation:
Institutional Culture
Evaluation is often perceived as punitive rather than developmental, discouraging transparency and critical reflection.
Data Infrastructure Deficiencies
Fragmented, manual, and inconsistent data systems limit the ability to generate timely and actionable insights.
Organizational Silos
Knowledge remains compartmentalized, preventing cross-functional learning and coordinated response.
Short-Term Operational Pressures
Immediate delivery targets frequently override investments in long-term analytical capability.
5. Operational Framework for Future-Informed Evaluation
To institutionalize foresight, organizations should adopt the following integrated approach:
5.1 Reposition Evaluation as a Decision Instrument
Evaluation outputs must be explicitly linked to future planning, resource allocation, and operational adjustments.
5.2 Develop Predictive Performance Indicators
Shift from static metrics to dynamic indicators capable of signaling emerging risks, such as:
5.3 Institutionalize “Forward-Looking Lessons”
Move beyond retrospective “lessons learned” toward actionable “lessons applied,” with defined ownership and implementation timelines.
5.4 Embed Scenario-Based Planning
Systematically evaluate potential disruptions—financial, technical, environmental, or logistical—and predefine response strategies.
5.5 Establish Continuous Feedback Mechanisms
Implement real-time monitoring systems and routine performance reviews to ensure adaptive management.
6. Applied Illustration
Energy Sector (Geothermal Development)
Rather than conducting isolated post-project reviews, a foresight-driven system would:
Manufacturing (HDPE Pipe Production)
Instead of relying on final product inspection, organizations should:
Food Sector (Agro-Processing and Supply Chain)
Instead of reacting to:
Organizations should:
Result:
7. Strategic Imperatives for Ethiopia
To advance toward future-informed evaluation, the following priorities are essential:
Transition to integrated, real-time data platforms across manufacturing, energy, and food systems
Equip professionals with skills in data interpretation, forecasting, and risk modeling
Ensure evaluation findings directly inform strategic and operational decisions
Encourage openness, accountability, and continuous improvement
Facilitate structured knowledge sharing between energy, manufacturing, and food sectors
8. Conclusion
Retrospective evaluation, while necessary, is no longer sufficient in addressing the complexities of Ethiopia’s development trajectory. The ability to anticipate, adapt, and respond proactively will define institutional effectiveness in the years ahead.
Transforming evaluation into a future-informed system is not merely a methodological enhancement—it is a strategic imperative.
Sustainable progress will depend not on how effectively institutions document the past, but on how intelligently they prepare for the future.
Canada
Rhode Early Charles
Posted on 24/04/2026
To try to answer the question, and to build on what the document already proposes, one key condition is ensuring that the process is co-owned by the people who will use and live with the findings. This condition helps make the work ethical, participatory, useful, and institutionally embedded at the same time. This is part of a broader shift from hindsight to foresight, actively shaping future-informed decisions.
When stakeholders help define the questions, shape the methods, interpret the results, and commit to follow-up, the evaluation is less likely to be extractive, more grounded in real needs, and more likely to inform actual decisions.
This also aligns with many First Nations approaches in Canada, where evaluation and project review are often community-driven and closely tied to local priorities, consent, and accountability. While the exact process can vary depending on the Nation and on funding or governance arrangements, the underlying principle remains that decisions should not be imposed from outside.
In short, the key condition is shared ownership from design to use. Proposed solutions for future risks must respond to future needs and help shape the future people want. When this happens, ownership becomes a reality, not just an aspiration.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/04/2026
Thanks Rhode, for your contribution. I agree that future-oriented evaluation cannot simply add scenarios or horizon scanning to an extractive process. It has to change who defines value, who interprets evidence, and who has authority over how findings are used.
The emphasis on shared ownership from design to use is also powerful. When communities and stakeholders co-shape the questions, methods, interpretation, and follow-up, evaluation becomes more than accountability to funders. It becomes a process of collective sensemaking and future-making.
The link to First Nations approaches reinforces that future-informed evaluation must be grounded in consent, relational accountability, local priorities, and self-determination. In that sense, ownership is not a procedural add-on; it is the condition that makes evaluation ethical, useful, and transformative.
Nepal
Gana Pati Ojha
Community of Evaluators
Posted on 24/04/2026
The #EvalforEarth discussion comes at exactly the right time. Many evaluations still tell us how projects performed yesterday, while leaders increasingly need evidence on how systems can survive tomorrow.
Across food security, agriculture, climate resilience, and governance, one lesson repeatedly emerges: outcomes are shaped less by individual projects than by the systems in which they operate—institutions, incentives, partnerships, learning cultures, and political ownership. Strong projects often fail inside weak systems; modest interventions can succeed when embedded in adaptive and trusted institutions.
This is why retrospective evaluation alone is no longer enough. It may accurately assess past outputs and efficiency, yet miss the critical forward-looking questions:
• Will this programme remain relevant under climate shocks or market volatility?
• Can institutions adapt when assumptions change?
• Are partnerships resilient under stress?
• Will gains endure after funding ends?
Strategic foresight offers practical tools to strengthen evaluation: horizon scanning, scenario planning, Three Horizons, and causal layered analysis. These methods can help evaluators move from static judgement to dynamic learning.
Three practical entry points:
Perhaps we also need to reinterpret OECD-DAC criteria through a future lens:
Relevance = future fit
Sustainability = resilience under shocks
Impact = contribution to long-term system transformation
The future of evaluation is not abandoning hindsight. It is combining hindsight, insight, and foresight so evidence can guide action in an uncertain world.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 27/04/2026
Thanks Gana, this is a strong framing of why foresight-informed evaluation is becoming essential rather than optional. I especially appreciate the emphasis on systems...evaluations often over-focus on project performance while under-examining the institutional, political, ecological, and relational conditions that determine whether results can endure.
The proposed reframing of OECD-DAC criteria is particularly useful. Thinking of relevance as “future fit,” sustainability as “resilience under shocks,” and impact as “contribution to long-term system transformation” helps shift evaluation from compliance and accountability toward strategic learning and preparedness.
For food systems, climate resilience, agriculture, and governance, this feels especially urgent. The key question is no longer only “Did the intervention work?” but “Under what future conditions could it continue to work, adapt, or scale?” That is where foresight can significantly deepen the evaluative function.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
I am back from a spur-of-the-moment short holiday where I did not open my laptop for six days, but it is nice to see the conversation and discussion continuing.
As we move into our final week, I want to introduce the FARA guide Criteria to Assess High-Quality Food Systems Foresight in Africa (the link is in the intro to this discussion board, but also attached below). My colleague, Dr. Katindi Sivi, was a co-author, so I'm excited to showcase her work.
What I find especially useful about this report is that it is not a step-by-step foresight manual. It is a quality framework for thinking about what makes foresight meaningful, credible, inclusive, and actually useful for decision-making in complex food systems contexts. The guide argues that, in a time shaped by climate risk, demographic change, geopolitical uncertainty, and structural inequality, foresight must move beyond scenario production toward anticipatory governance, local ownership, and real policy influence. It also places unusual emphasis on African realities, including indigenous knowledge, informality, power relations, and participatory practice.
That feels highly relevant to the conversation we have been having here. Over the past weeks, several of you have pushed us to think beyond retrospective accountability alone. Silva and Amy asked whether evaluation can be freed from compliance logic. Rick challenged us to move from prediction toward preparedness and plural futures. Uzodinma emphasized mindset, local ownership, and adaptive learning. Rhode reminded us that knowledge must be communicated in usable ways, not just written for evaluators. Those themes are all echoed in this guide.
The guide is organised around nine interlinked criteria, including contextual relevance, inclusivity, ethics, methodological rigor, strategic communication, institutional embedding, and shifts in thought and behaviour. It also argues that evaluation of foresight should not focus on predictive accuracy, but on whether foresight improves learning (another common theme in our discussions), decision-making, contribution to change, and long-term systems transformation.
So for this final week, I would like to ask: what would high-quality future-informed evaluation actually look like in practice? What conditions need to be in place for it to be ethical, participatory, useful, and institutionally embedded rather than just another report on the shelf?
Benin
Alexis Adébayo ODOUN-IFA
Expert in MEAL
RAAF/ECOWAS
Posted on 13/04/2026
In their writings, Rose Thompson Coon et al. highlight the need to rethink evaluation in order to incorporate a forward-looking, even futuristic, dimension. Indeed, whilst evaluation enables lessons to be learnt, it does not always provide immediate avenues for their operational application once the intervention has ended. Without an in-depth literature review and uptake by other researchers or designers of future projects, the lessons learned from intervention evaluations tend to be forgotten once the interventions have ended. Thus, evaluation reports would benefit from incorporating more in-depth analyses, enabling this forward-looking and future-oriented vision of development to be better taken into account.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
This is an important point Alexis...thank you for contributing. I especially appreciate your observation that lessons from evaluation are often documented but not meaningfully carried forward into future design, policy, or practice. Your comment reinforces why a more future-oriented approach matters....evaluation should not only capture what happened, but also help ensure that learning remains usable, transferable, and alive beyond the life of a single intervention. Learning and education have strong futures/foresight elements by default. How can we better integrate forward-looking learning in our evaluations?
Italy
Silva Ferretti
Freelance consultant
Posted on 13/04/2026
It is quite hard to comment on an article that is not fully accessible :-(
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Which article are you referring to? I attached Fusing foresight and futures thinking for a new transformative evaluation paradigm in my earlier post, so you should have access to it.
Ghana
Ishmael Kwame Agbomlaku
Manager
Integrated Institute of professional, LA plage Meta Verse.
Posted on 13/04/2026
Powerful perspective. Moving from hindsight to foresight is exactly where evaluation must evolve: using data not just to report, but to anticipate and improve outcomes. This is critical for effective programme design.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Thanks for the comment, Ishmael. I agree completely. Great foresight always contains elements of hindsight, so it is not a competition, but collaboration and symbiosis. I've read that many ancient traditions thought of humanity walking backwards into the future....knowing where we've been is important, but when we see the path curving or diverging, we need to start tacking in that direction.
Italy
Silva Ferretti
Freelance consultant
Posted on 13/04/2026
We can definitely become better at being "forward-looking": understanding likely patterns, more intentional in interrogating likely consequences. But always escaping the temptation to make this "THE plan". Because what matters is having direction and agility, better capacity to see and feel junctures... not a pre-set future.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Hi Silva, thanks for your comment. It is so easy to become trapped by "THE PLAN". This is why I love working with scenarios. You can have the plan, but when you have three or four scenarios on the horizon, you can always wind-tunnel THE PLAN and see where it needs to adapt. All the more reason for the agility, capacity to see and feel junctures, as you mentioned.
Italy
Silva Ferretti
Freelance consultant
Posted on 13/04/2026
The deeper question is: what is evaluation for? If it exists mainly to confirm compliance (i.e. to verify that a plan was executed as designed, that the Theory of Change held....) then adding foresight methods changes nothing. We will simply be anticipating the future in service of the same backward-looking logic and the same set of horizons. Always in "compliance mode." Before we ask how evaluation can get better at anticipating the future, we need to ask a prior question: are we willing to set evaluation free from the obligation to confirm the plan?
Senegal
Amy MARA
Economiste et Specialiste en Passation de Marché
Direction de la Dette Publique
Posted on 17/04/2026
Hello,
My dear Silva,
Your analysis is highly insightful and warrants further consideration, particularly regarding the need to rethink the purpose of evaluation beyond a mere focus on compliance.
The proposed reflection raises a fundamental and pertinent question: that of the true purpose of evaluation. Indeed, if evaluation is reduced to a compliance function, it is limited to verifying whether the actions taken correspond to the initial forecasts, without truly questioning their relevance, their impact or their ability to adapt to changing realities.
From this perspective, the introduction of forward-looking methods into an evaluation confined to a compliance-based approach appears insufficient. It even risks reproducing the same patterns, by simply projecting already fixed assumptions into the future, without questioning the analytical frameworks. Thus, anticipating the future without transforming the purpose of evaluation amounts to prolonging a retrospective approach in another form.
Consequently, the central issue becomes the transformation of the role of evaluation. It is no longer merely a matter of confirming a plan, but of questioning the assumptions underpinning it, identifying the gaps between intentions and results, and above all, supporting decision-making in uncertain contexts. A forward-looking evaluation must be a tool for learning, adaptation and innovation.
Freeing evaluation from the obligation to confirm the plan entails several major changes. Firstly, accepting that programmes may evolve in line with realities on the ground. Secondly, incorporating more flexible approaches, such as real-time evaluation or adaptive learning. Finally, recognising that evaluation can produce critical findings, sometimes at odds with the initial objectives.
However, this transformation is not without its challenges. It requires a shift in institutional culture, where decision-makers accept uncertainty and questioning. It also necessitates enhanced technical capabilities and greater openness to stakeholder participation.
In conclusion, evaluation can only truly incorporate a forward-looking dimension if it breaks free from its strictly normative function. It must evolve towards a strategic role, focused on learning and anticipation, in order to better address the complex and dynamic challenges of public policy.
Amy MARA
Economist
Dakar, Senegal
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Thank you very much for this insightful and well-written contribution. You have hit upon a key point in our discussion... as long as assessment remains confined to a logic of compliance, even the introduction of forward-looking methods risks producing only a superficial shift rather than a genuine change in approach. I particularly appreciate your emphasis on the need to transform the very purpose of evaluation, so that it becomes a space for learning, adaptation and decision-making in the face of uncertainty. Your point about institutional acceptance of criticism, uncertainty and programme evolution is crucial, as it clearly shows that the challenge is not merely methodological, but also cultural and political. It is precisely this tension between normative evaluation and forward-looking evaluation that we must continue to explore together.
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
freelance
Posted on 29/04/2026
Silva, great point about how some treat and direct evaluation to justify past decisions. On foresight, I would resist trying to get evaluation to do better at predicting the future. A fool's errand. Rather, build systems that can see, respond, and adjust faster. Good "foresight" comes from:
rapid iteration; pattern recognition across experiments; and continuous updating of assumptions. This helps us better navigate the future as it unfolds.
Germany
Ines Freier
Senior consultant for NRM and biodiversity, Green economy
consultant
Posted on 13/04/2026
The paper offers one option for evaluating new speculative ventures which are not under public control. Using foresight methods instead of past performance can also backfire.
I recently evaluated blended finance funds. The performance of the funds was not as expected due to a set of factors drawn from research on management and business development, like "know your customer". Our subject experts kept trying to develop new scenarios for the future under which the facilities would work better. Foresight tools are applied within the existing evaluation system, based on evaluation departments or institutions and methods which in most cases lack the hard technical / subject-related skills and resources needed to improve the evaluation system. Alternatives to the current evaluation system should be explored, such as participatory processes for policy formulation and implementation, with stakeholder groups in specific policy areas providing feedback on specific policies. In this way, future-oriented, reflexive learning systems are created using feedback loops, and future programming is shaped by the stakeholders themselves. Examples are the Brazilian policies for family agriculture and nutrition, which are shaped by stakeholder commissions at all levels.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Great comment here, Ines, and a valuable caution. I appreciate your point that using foresight in place of evidence on actual performance can become a way of endlessly rescuing weak results through imagined future scenarios, especially when evaluation teams lack the technical or sector-specific expertise needed to challenge assumptions. Your emphasis on participatory, feedback-rich systems is especially important, because it suggests that future-oriented evaluation should not rely only on evaluators and institutions, but also on structured stakeholder engagement that can shape policy design, implementation, and adaptation in real time. Thus, building futures literacy with communities is also an important element of our conversations.
Benin
Koffi Moïse Bienvenu Sodjinou
Chargé de Programme
CASAD International
Posted on 09/04/2026
Evaluation can no longer be content with being a mere act of measurement or a snapshot frozen in time; its true purpose lies in its ability to bring about lasting transformation. By moving beyond its traditional monitoring function to become a lever for change, it acts as a catalyst for self-reflection, compelling stakeholders to confront their practices with the reality of the results. To evaluate without transforming would be like making a diagnosis with no intention of treating the condition, rendering the exercise fruitless and purely bureaucratic. On the contrary, an evaluation focused on progress enables the identification of areas for improvement and sources of innovation, turning mistakes into learning opportunities and judgement into a tool for support.
From an ethical and strategic perspective, this transformative dimension is essential to avoid inertia and ensure the effectiveness of the actions undertaken. In a constantly changing environment, evaluation must serve as a dynamic compass: it does not merely look back to validate achievements, but propels organisations or individuals towards the future by adjusting their trajectories. In short, evaluation only achieves its full value when it becomes an “empowering” process, capable of changing behaviours and optimising systems to deliver real and tangible impact.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 18/04/2026
Thanks Koffi. You've made some astute observations and contributed a strong articulation of evaluation as something far more alive and consequential than compliance or retrospective judgement. I especially appreciate the idea that evaluation should function as a dynamic compass, helping people and institutions not only understand where they have been, but also adjust where they are going. Framing evaluation as diagnosis without treatment is particularly powerful, because it captures why transformative intent matters if evaluation is to contribute to real learning, adaptation, and lasting change.
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 06/04/2026
Week Three Introduction
For Week 3, I’d like to introduce the article Fusing foresight and futures thinking for a new transformative evaluation paradigm by Rose Thompson Coon, Katri Vataja, and Pinja Parkkonen (attached below).
Their article argues that if evaluation is meant to contribute to transformation in an uncertain and complex world, it cannot remain focused mainly on assessing past performance. Instead, it needs to become more future-focused, more dynamic, and more able to engage multiple possible futures.
What makes this article especially useful for our discussion is that it does not stay at the level of theory. Using a case from Sitra in Finland, the authors show how foresight methods such as Horizon Scanning and a modified Delphi process can be integrated into evaluation to validate current strategic choices, generate future programming options, deepen learning about complexity, and strengthen strategic decision-making. They also argue that this shift is not only methodological. It requires a broader rethinking of evaluation’s purpose, including questions of power, participation, and whose futures are being imagined and prioritized.
This article offers a practical bridge between futures thinking and transformative evaluation. It helps move the conversation from “Why should evaluation become more future-informed?” to “What might this actually look like in practice?”
It also raises an important challenge for all of us. If evaluation is to help shape preferred futures, how should it address questions of power, participation, and whose future is being defined?
Kenya
Steven Lynn Lichty
Managing Partner
REAL Consulting Group
Posted on 06/04/2026
Week Two Summary
This past week’s discussion surfaced a rich and timely tension at the heart of future-informed evaluation: whether evaluation should remain oriented toward prediction and linear change, or whether it must shift toward preparedness, plurality, learning, and adaptation.
Rick Davies pushed this strongly by arguing that, in a world of deep uncertainty, evaluation should engage multiple, sufficiently diverse futures rather than rely on a singular predictive logic. He also raised the important question of what criteria we should use to evaluate futures, suggesting both cognitive criteria about how we think and behavioral criteria about how we respond. He further cautioned against using the language of “transformation” too loosely, reminding us that transformation is not inherently good and that evaluators must remain attentive to the aims and politics of change itself.
Michele Friend offered an important philosophical and methodological reframing. Rather than asking what must change first, she argued that transformation should not be seen as a linear sequence at all. Methods, criteria, institutions, and mindsets evolve together through feedback loops between assessment, dialogue, feasibility, and implementation. Her example showed evaluation as an iterative, reflective process that not only judges performance but also helps people and institutions ask who they are becoming.
Dr. Uzodinma Akujekwe Adirieje grounded the conversation in African and low-resource health systems, emphasizing that the deepest shift must be one of mindset: away from compliance-oriented, donor-facing reporting and toward adaptive, locally owned, problem-solving learning. His contribution was especially valuable in showing that transformative evaluation is not abstract; it can produce concrete results when evidence is embedded in real-time decision-making and community realities.
Rhode Early Charles expanded the discussion by arguing that transformation also depends on how evaluation knowledge is communicated. Reports often remain too technical and evaluator-facing. She called for evaluation findings to become multiple, tailored knowledge products that different audiences can actually use, while also warning that overly lean data approaches may miss emerging issues and strategic learning opportunities.
Taken together, the week’s exchanges suggest that future-informed evaluation may require not one single shift, but several at once…from prediction to preparedness, from singular to plural futures, from linear models to feedback-rich learning, from compliance to local ownership, and from static reports to more usable forms of knowledge.
On a technical note, Silva asked a practical platform question. To my knowledge you cannot pick a thread and contribute to another person’s comments. I’ll pass this onto the EvalForEarth team though.
Canada
Rhode Early Charles
Posted on 01/04/2026
I really appreciate the insights shared, especially the focus on adaptive learning and forward-looking evaluation. I would also argue that one critical limitation lies in how evaluation knowledge is produced and communicated. This time, my comment is less focused on methodology and more on offering my perspective in response to your question Steven about what needs to change for evaluation to truly contribute to transformation.
Evaluation reports are often written by evaluators, using technical and methodological language that mainly speaks to other evaluators or technically trained audiences. As a result, these reports tend to be most appealing and useful to those who already understand that language.
What I find interesting is that most projects have communication plans that clearly identify who needs what information, how it should be shared, and why. However, this logic is rarely applied when writing evaluation reports. Different stakeholders have very different needs depending on how they interact with the project, and a single report cannot effectively serve all of them.
To me, evaluation reports should be seen as a starting point, not the final product. The data and findings should serve as inputs into multiple targeted knowledge products, developed by communication experts or sector specialists (health, education, economic development, etc.), that speak directly to specific audiences. These tailored outputs would translate evaluation insights into formats and messages that are relevant and actionable, supporting implementation, positioning, and donor engagement.
In addition, while I understand the rationale behind lean data approaches, they can sometimes be too restrictive. Focusing only on what is needed for indicators or donor requirements may limit opportunities to explore emerging issues or strategic areas more deeply. Sector specialists could play a role here by collecting additional data for learning, positioning, thought leadership, or future programming, as long as there is clear accountability for why that data is being collected and how it will be used. Every question has a cost, but it can also bring real value if it is intentional.
Overall, if evaluation is meant to support transformation, it is not just about improving methods or tools. It is also about making sure that the knowledge we produce is usable, relevant, and accessible to the people who need it. We need to better understand our audiences and what they need.
Nigeria
Dr. Uzodinma Akujekwe Adirieje
CEO
Afrihealth Optonet Association (AHOA) - CSOs Network
Posted on 01/04/2026
In African and low-resource health systems, evaluation too often serves as a post-hoc accountability exercise rather than a tool for systemic transformation. From decades of practice in health systems strengthening, the first and most critical shift must occur in mindset - how we perceive the purpose and ownership of evidence. Evaluators and decision-makers frequently operate with a compliance mindset, producing reports that satisfy external donors but fail to capture the nuanced realities on the ground. Recently, in Lagos State, Nigeria, routine monitoring in the maternal health program had focused narrowly on facility births. But by adopting a learning-oriented approach - examining quality of care, patient experience, and referral patterns - it was uncovered that 42% of women bypassed local clinics due to perceived low-quality services. Targeted staff training and resource reallocation subsequently increased facility-based deliveries by 17% within a year.
Similarly, Community-Led Monitoring in another Nigerian district revealed a 40% barrier from hidden transport costs, despite reports showing 95% patient “satisfaction.” These insights highlight that methods and criteria, however technically sound, become effective only after the mindset evolves to prioritize adaptive, locally informed learning over extractive reporting.
Evidence from the recent Q1/2026 ‘Life & Health’ dialogues of Afrihealth Optonet Association (AHOA) shows districts using integrated digital platforms and participatory evaluation achieved a 15% rise in immunization coverage - proof that embedding evaluation in real-time problem-solving, not just retrospective reporting, produces tangible health impact.
Sustainability and long-term development hinge on this alignment. Transformative evaluation is not about better spreadsheets or fancier dashboards; it is about decolonizing intent, ensuring data serves local solutions, and fostering a culture of critical inquiry. In constrained African health systems, the mindset shift is the fulcrum upon which all methods, criteria, and institutional reforms pivot toward lasting, systemic change.
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 01/04/2026
If this week's question is "If evaluation is to contribute to transformation, what exactly must change first…our methods, our criteria, our institutions, our underlying mindset, or something else?" I will repeat my point below...there is a potentially serious misalignment between the predictive nature of a theory of change as used by evaluators and the need for preparedness in a very uncertain world: a singular view of the future versus a plural view of futures.
[By the way, the display structure for this kind of dialogue should be branching, not a single line]
And please let's not toss the word "transformational" around too lightly. Putin and Trump and other despots are all keen on transformation of one kind or another. What about incremental improvements, or perhaps even just surviving, as an objective? :-) Inflation takes many forms, including changes in our vocabulary. E.g. in the past we just had "details", but now we have "granularity". Wow....things really are getting better...or are they?
United States of America
Michele Friend
Professor
George Washington University
Posted on 30/03/2026
Hello everyone,
I’m delighted to be taking part in this discussion. Having a background in philosophy, I will no doubt be asking questions of a different nature. Steven Lichty asked: “If evaluation is to contribute to transformation, what needs to change first… our methods, our criteria, our institutions, our underlying mindset, or something else?”
I don’t think we should frame the question in this way. Transformation is not a linear process where we move from a first stage to a second, then to a third, and so on. On the contrary, several things happen simultaneously, and where each of us starts depends on ourselves, on what we are evaluating, and on the perceived purpose of the evaluation. In other words, what matters is the feedback loop between the assessment and the people it concerns.
Our team recently assessed a building. It met the requirements for LEED certification. To our surprise, it exceeded the LEED criteria. We incorporated these additional features into our assessment (our method is quite flexible in this regard). We then used this assessment to formulate recommendations for further improvements. Thus, the assessment method included these recommendations, and these led to a conversation about feasibility, implementation, timing, importance, and so on. This conversation, in turn, feeds back into the assessment and the recommendations. The conversation also leads those involved in the assessment to ask themselves in-depth questions about themselves, who they are and what they wish to become within the context of the institution. Thus, criteria, institutions and mindsets all evolve simultaneously through our assessment and feedback process.
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 30/03/2026
When the future is looking more uncertain than ever, it may be useful to think about preparedness more than prediction. To enhance preparedness we need to be thinking about multiple versions of the future, not just one, and these versions need to be sufficiently diverse. Having generated those futures, how can we then evaluate them? I would be interested to hear from others what they think might be relevant criteria to apply. To kick off, I suggest the criteria may fall into two broad categories: 1. Cognitive: criteria relating to how we are thinking about the future; 2. Behavioral: criteria relating to how we respond to those futures before and after they are realised.
Italy
Silva Ferretti
Freelance consultant
Posted on 30/03/2026
Is it possible to respond to contributions and pick a conversation? I could not find the option to do so. And starting a new message breaks the flow!