Steven Lynn Lichty

Kenya

Steven Lynn Lichty Member since 09/03/2026

REAL Consulting Group

Managing Partner
Website

Expertise:

  1. Evaluation, Learning, Meta-analysis & Synthesis
  2. Transformative Foresight & Futures Thinking
  3. Strategic, Systems, Design & Complexity Thinking
  4. Capacity Building & Organisational Development
  5. Facilitation & Visualisation (Miro, Mural, and Kumu.io)
  6. Training & Teaching
  7. Management, Leadership & Administration
  8. Consulting, Advising & Coaching

Sectors: 

  1. Food & Human Security
  2. Youth Empowerment & Leadership
  3. Democracy, Governance, & Economic Development
  4. Resilience, Localisation & Shifting the Power
  5. Peacebuilding & Conflict Transformation
  6. Civil Society, Religious Groups & NGOs
  7. Mental Health & Psychosocial Support
  8. Human Rights, Advocacy & LGBTQIA+

Experience:

  • Director & Founder: Frontline Futures, Nairobi, Kenya       
  • Director & Co-Founder: REAL Studio, Nairobi, Kenya       
  • Co-Founder & Managing Partner: REAL Consulting Group, Nairobi, Kenya/Maputo, Mozambique   
  • Founder & Social Architect: The Frontline Group, Nairobi, Kenya    
  • Global Impact Board Member: Association of Professional Futurists (Global)                                    
  • Board Member & Co-Founder: Participatory Futures Global Swarm, Melbourne, Australia                    
  • Consultant: Grey Swan Guild, Toronto, Canada                                                                               
  • Foresight Consultant: Resilience Frontiers/UNFCCC, Bonn, Germany            
  • Consultant: Institute for Development Impact, 4DI, Washington, DC                                            
  • Design Thinking Fellowship Advisor: C4DLab, University of Nairobi, Kenya                                     
  • Research & Academic Capacity Building: Tangaza University College, Nairobi, Kenya               
  • Research Advisor: East Africa Institute, Nairobi, Kenya                                                           
  • Lecturer & Research Advisor: Tangaza University College, Nairobi, Kenya                                  
  • Research Consultant: World Bank Group, Washington, DC         
  • Evaluation Officer: IRI, Washington, DC    
  • MEL Manager: Medair, Nairobi, Kenya                                                                              

My contributions

    • Steven Lynn Lichty

      Kenya

      Managing Partner

      REAL Consulting Group

      Posted on 18/04/2026

      I am back from a spur-of-the-moment short holiday where I did not open my laptop for six days, but it is nice to see the conversation and discussion continuing.

      As we move into our final week, I want to introduce the FARA guide Criteria to Assess High-Quality Food Systems Foresight in Africa (the link is in the intro to this discussion board, but also attached below). My colleague, Dr. Katindi Sivi, was a co-author, so I'm excited to showcase her work. 

      What I find especially useful about this report is that it is not a step-by-step foresight manual. It is a quality framework for thinking about what makes foresight meaningful, credible, inclusive, and actually useful for decision-making in complex food systems contexts. The guide argues that, in a time shaped by climate risk, demographic change, geopolitical uncertainty, and structural inequality, foresight must move beyond scenario production toward anticipatory governance, local ownership, and real policy influence. It also places unusual emphasis on African realities, including indigenous knowledge, informality, power relations, and participatory practice.

      That feels highly relevant to the conversation we have been having here. Over the past weeks, several of you have pushed us to think beyond retrospective accountability alone. Silva and Amy asked whether evaluation can be freed from compliance logic. Rick challenged us to move from prediction toward preparedness and plural futures. Uzodinma emphasized mindset, local ownership, and adaptive learning. Rhode reminded us that knowledge must be communicated in usable ways, not just written for evaluators. Those themes are all echoed in this guide.

      The guide is organised around nine interlinked criteria, including contextual relevance, inclusivity, ethics, methodological rigor, strategic communication, institutional embedding, and shifts in thought and behaviour. It also argues that evaluation of foresight should not focus on predictive accuracy, but on whether foresight improves learning (another common theme in our discussions), decision-making, contribution to change, and long-term systems transformation.

      So for this final week, I would like to ask: what would high-quality future-informed evaluation actually look like in practice? What conditions need to be in place for it to be ethical, participatory, useful, and institutionally embedded rather than just another report on the shelf?

    • Posted on 18/04/2026

      This is an important point, Alexis...thank you for contributing. I especially appreciate your observation that lessons from evaluation are often documented but not meaningfully carried forward into future design, policy, or practice. Your comment reinforces why a more future-oriented approach matters...evaluation should not only capture what happened, but also help ensure that learning remains usable, transferable, and alive beyond the life of a single intervention. Learning and education have strong futures/foresight elements by default. How can we better integrate forward-looking learning into our evaluations?

    • Posted on 18/04/2026

      Thank you very much for this very rich and well-formulated contribution. You put your finger on a central point of our discussion...as long as evaluation remains locked into a compliance logic, even the introduction of foresight methods risks producing only a superficial shift rather than a genuine change of posture. I particularly appreciate your insistence on the need to transform the very purpose of evaluation, so that it becomes a space for learning, adaptation, and decision support under uncertainty. Your remark about institutional acceptance of critique, uncertainty, and the evolution of programmes is essential, because it shows that the challenge is not only methodological but also cultural and political. It is precisely this tension between normative evaluation and future-oriented evaluation that we must continue to explore together. (I hope my Gemini translation makes sense, Amy!)

    • Posted on 18/04/2026

      Hi Silva, thanks for your comment. It is so easy to become trapped by "THE PLAN". This is why I love working with scenarios. You can have the plan, but when you have three or four scenarios on the horizon, you can always wind-tunnel THE PLAN and see where it needs to adapt. All the more reason for agility and the capacity to see and feel junctures, as you mentioned.

    • Posted on 18/04/2026

      Great comment here, Ines, and a valuable caution. I appreciate your point that using foresight in place of evidence on actual performance can become a way of endlessly rescuing weak results through imagined future scenarios, especially when evaluation teams lack the technical or sector-specific expertise needed to challenge assumptions. Your emphasis on participatory, feedback-rich systems is especially important, because it suggests that future-oriented evaluation should not rely only on evaluators and institutions, but also on structured stakeholder engagement that can shape policy design, implementation, and adaptation in real time. Thus, building futures literacy with communities is also an important element of our conversations.

    • Posted on 18/04/2026

      Thanks Koffi. You've made some astute observations and contributed a strong articulation of evaluation as something far more alive and consequential than compliance or retrospective judgement. I especially appreciate the idea that evaluation should function as a dynamic compass, helping people and institutions not only understand where they have been, but also adjust where they are going. Framing evaluation as diagnosis without treatment is particularly powerful, because it captures why transformative intent matters if evaluation is to contribute to real learning, adaptation, and lasting change.

    • Posted on 18/04/2026

      Thanks for the comment, Ishmael. I agree completely. Great foresight always contains elements of hindsight, so it is not a competition but a collaboration and symbiosis. I've read that many ancient traditions thought of humanity walking backwards into the future...knowing where we've been is important, but when we see the path curving or diverging, we need to start tacking in that direction.

    • Posted on 18/04/2026

      Which article are you referring to? I attached Fusing foresight and futures thinking for a new transformative evaluation paradigm in my earlier post, so you should have access to it.

    • Posted on 06/04/2026

      Week Three Introduction

      For Week 3, I’d like to introduce the article Fusing foresight and futures thinking for a new transformative evaluation paradigm by Rose Thompson Coon, Katri Vataja, and Pinja Parkkonen (attached below).

       Their article argues that if evaluation is meant to contribute to transformation in an uncertain and complex world, it cannot remain focused mainly on assessing past performance. Instead, it needs to become more future-focused, more dynamic, and more able to engage multiple possible futures.

      What makes this article especially useful for our discussion is that it does not stay at the level of theory. Using a case from Sitra in Finland, the authors show how foresight methods such as Horizon Scanning and a modified Delphi process can be integrated into evaluation to validate current strategic choices, generate future programming options, deepen learning about complexity, and strengthen strategic decision-making. They also argue that this shift is not only methodological. It requires a broader rethinking of evaluation’s purpose, including questions of power, participation, and whose futures are being imagined and prioritized.

      This article offers a practical bridge between futures thinking and transformative evaluation. It helps move the conversation from “Why should evaluation become more future-informed?” to “What might this actually look like in practice?”

       It also raises an important challenge for all of us. If evaluation is to help shape preferred futures, how should it address questions of power, participation, and whose future is being defined?

    • Posted on 06/04/2026

      Week Two Summary

      This past week’s discussion surfaced a rich and timely tension at the heart of future-informed evaluation, i.e., whether evaluation should remain oriented toward prediction and linear change, or whether it must shift toward preparedness, plurality, learning, and adaptation. 

      Rick Davies pushed this strongly by arguing that, in a world of deep uncertainty, evaluation should engage multiple, sufficiently diverse futures rather than rely on a singular predictive logic. He also raised the important question of what criteria we should use to evaluate futures, suggesting both cognitive criteria about how we think and behavioral criteria about how we respond. He further cautioned against using the language of “transformation” too loosely, reminding us that transformation is not inherently good and that evaluators must remain attentive to the aims and politics of change itself. 

      Michele Friend offered an important philosophical and methodological reframing. Rather than asking what must change first, she argued that transformation should not be seen as a linear sequence at all. Methods, criteria, institutions, and mindsets evolve together through feedback loops between assessment, dialogue, feasibility, and implementation. Her example showed evaluation as an iterative, reflective process that not only judges performance but also helps people and institutions ask who they are becoming.

      Dr. Uzodinma Akujekwe Adirieje grounded the conversation in African and low-resource health systems, emphasizing that the deepest shift must be one of mindset: away from compliance-oriented, donor-facing reporting and toward adaptive, locally owned, problem-solving learning. His contribution was especially valuable in showing that transformative evaluation is not abstract; it can produce concrete results when evidence is embedded in real-time decision-making and community realities. 

      Rhode Early Charles expanded the discussion by arguing that transformation also depends on how evaluation knowledge is communicated. Reports often remain too technical and evaluator-facing. She called for evaluation findings to become multiple, tailored knowledge products that different audiences can actually use, while also warning that overly lean data approaches may miss emerging issues and strategic learning opportunities. 

      Taken together, the week’s exchanges suggest that future-informed evaluation may require not one single shift, but several at once…from prediction to preparedness, from singular to plural futures, from linear models to feedback-rich learning, from compliance to local ownership, and from static reports to more usable forms of knowledge. 

      On a technical note, Silva asked a practical platform question. To my knowledge you cannot pick a thread and contribute to another person’s comments. I’ll pass this on to the EvalForEarth team though.

    • Posted on 30/03/2026

      Welcome to Week 2

      I want to build on last week’s conversation with a short reflection from Scott Chaplowe and Joyce Mukoma’s Evaluation and the Transformational Imperative (see attachment). Their core argument is simple but important. The scale of today’s crises means evaluation cannot remain tied to business-as-usual thinking if it is going to support the wider transformational agenda reflected in the SDGs. They define transformational change not as incremental improvement, but as deep, systemic change in how a system functions. 

      What I find especially useful is that the article does not present transformation as a single new method. Instead, it asks what is holding evaluation back. It points to four familiar fixations: 1) project fixation, 2) temporal fixation, 3) quantitative fixation, and 4) accountability fixation. In other words, evaluation too often stays trapped inside linear projects, short funding timelines, metric-heavy logics, and compliance-oriented accountability.

      Scott and Joyce then suggest several pathways forward…complexity-adaptive methods, principle-focused evaluation, new transformational criteria, data science, and alternative paradigms, including Indigenous and feminist perspectives.

      So for this week, I’d like to ask, “If evaluation is to contribute to transformation, what exactly must change first…our methods, our criteria, our institutions, our underlying mindset, or something else?”

    • Posted on 28/03/2026

      Summary - Week 1

      This first week of discussion has made one thing very clear...the call for future-informed evaluation is not coming from a single camp or methodology. It is emerging from lived frustration across practice. Conny Rietdorf reminded us that the “L” in MEL is often the first casualty when evaluation becomes a compliance exercise rather than a space for reflection and learning. Carlos Tarazona then pushed the conversation further through FAO’s One Health evaluation, showing how retrospective analysis can be solid on the past and still insufficient for the futures now emerging. His reframing of relevance as future fitness, sustainability as resilience under change, and coherence as the ability to work across systems gave us a powerful language for thinking differently.

      Other contributors sharpened the picture. Serdar Bayryyev highlighted the institutional conditions needed for this shift (i.e., capacity, practical frameworks, and organisational change). Silva Ferretti challenged us not to treat foresight as a technical fix for a deeper cultural problem, asking the more difficult question "What is evaluation for?" Alexis Adébayo grounded the discussion in climate reality, where external shocks can destabilise attribution and weaken the usefulness of findings. Rhode Early Charles reminded us that predictive analytics and foresight are not rivals but complements, especially if we can overcome fragmented data systems. Emmanuel Erick Igiha and Amy Mara brought us back to purpose. Evaluation, at its best, should help people improve, adapt, and navigate what comes next.

      So the core thread emerging from Week 1 is this: the move from hindsight to foresight is methodological, yes, but also institutional and deeply cultural. It asks not only for new tools, but for a different orientation to evidence, uncertainty, learning, and change. That feels like an important place to begin.

      Looking ahead: In the coming week, we will turn to the transformational imperative and examine transformative foresight through a forthcoming article in the Journal of MultiDisciplinary Evaluation. If you are not familiar with the transformational imperative within evaluation ecosystems, I have attached a short four-page brief written by Scott Chaplowe and Joyce Mukoma.

    • Posted on 27/03/2026

      Amy, thank you for this. You've laid out the landscape beautifully. What strikes me most about your framing is the word transformation. You're not describing a tweak to evaluation methodology… you're describing a fundamental shift in what evaluation is for. Moving from verdict to navigation...from accountability to anticipation.

      The four pillars you identify are each compelling on their own. But I think what makes them powerful is how they reinforce each other. Scenario analysis without stakeholder participation risks becoming a technical exercise disconnected from lived realities. Real-time monitoring without a learning culture just generates data that no one acts on. Together, though, they start to describe something that feels genuinely different: evaluation as an ongoing, living conversation with the future.

      One question your post raises for me: Who drives this transformation? Evaluators can advocate for forward-looking approaches, but much depends on whether commissioners and decision-makers are willing to fund and use them. In your experience, where has the appetite for prospective evaluation been strongest, and what has made the difference?

      Your contribution also makes a nice segue to our focus next week on the transformational imperative. Really glad you're part of this discussion.

    • Posted on 27/03/2026

      Emmanuel, the framing you offer, moving from judging the past to enabling improvement for the future, captures the spirit of what forward-looking evaluation aspires to be. And the question you end with is exactly the right one to keep asking throughout this discussion.

      One example that comes to mind is WFP's Anticipatory Action work, where evaluation has been used not only to assess past performance but to refine the trigger systems and scenario models that activate pre-emptive responses before crises fully unfold. That strikes me as a case where evaluation genuinely shaped future action rather than simply recording past performance. But I think the deeper insight in your contribution is about orientation and intent…a forward-looking evaluation can be conducted with largely conventional methods, if the questions it asks and the way findings are framed consistently point toward adaptation and improvement rather than verdict. That cultural shift may be as important as any methodological innovation. What has enabled that orientation in the contexts where you have seen it work?

      I just completed a large foresight-informed evaluation for UNICEF, but it is too early to determine what difference it may make. Ask me in 2028!

    • Posted on 27/03/2026

      Rhode, thank you for this. The point about complementarity between foresight methods and predictive analytics is an important one that does not always get made explicitly. There is sometimes an implicit assumption that foresight is primarily qualitative and futures-oriented, while predictive modelling is the domain of harder data, but in practice strong evaluation benefits from both, and the logic of combining them is sound. Foresight helps us explore the uncertainty space, while predictive methods help us quantify likely trajectories where data allows.

      Your point about data fragmentation is well taken and, I would argue, is itself a systemic issue that evaluation has a role in addressing. If evaluations systematically produced structured, accessible data as a matter of course, rather than siloed project-level reports, the longitudinal datasets that would support the kind of modelling you describe would gradually accumulate. National ownership, as you suggest, is one pathway. But evaluation commissioning practices within international organisations could also change in ways that support this. This seems like a concrete institutional reform worth exploring further in the discussion. I also find the AI and machine learning dimension worth tracking carefully. The capacity for cross-project learning at scale is genuinely new, and its implications for evaluation design are still being worked out.

    • Posted on 27/03/2026

      Thank you, Alexis, for your contribution. The example you raise…costly infrastructure rendered ineffective or destroyed by extreme climate events….puts the attribution problem in very concrete terms. It is a scenario that exposes a fundamental limitation of the logic model at the heart of most retrospective evaluation. If the causal chain is severed by an external shock, the evaluation framework itself struggles to make sense of what happened, let alone offer useful guidance for what should come next.

      This connects to a broader issue in evaluation methodology, which is that our standard frameworks often assume a degree of stability in the operating environment that increasingly does not hold in climate-affected contexts. Integrated landscape management is a particularly interesting domain here, because it already operates with long time horizons and complex systems, which arguably makes it one of the areas where foresight-informed evaluation is not a luxury but a necessity. I am curious whether you have seen attempts to build scenario-based thinking into evaluation design in the contexts where you work, even informally, and whether those efforts have helped evaluators and stakeholders navigate the attribution challenges you describe.

    • Posted on 27/03/2026

      Silva, this is a provocation worth examining throughout this discussion, and I think it contributes something that we can continue to unpack in the coming weeks. You are right that foresight tools can simply be recruited into the service of compliance, i.e., anticipating futures to confirm a Theory of Change rather than genuinely interrogating it. That would be a sophisticated version of the same problem.

      The prior question you raise, “what is evaluation for?”, is one I believe this community needs to grapple with more directly. My own sense is that the shift from hindsight to foresight is not just technical, as it also requires a different relationship between evaluators, commissioners, and the programmes being evaluated. If evaluation is purely confirmatory, then foresight becomes window dressing. But if there is institutional appetite for evaluation as genuine exploration, then foresight tools, particularly when used participatorily as you describe, can open up the kind of reflective space that challenges rather than reinforces prevailing assumptions. Keep raising these questions, Silva!

    • Posted on 27/03/2026

      Serdar, thank you for your contribution. The examples you have drawn from WFP, GEF, CGIAR, and FAO's own foresight work are directly relevant to the discussion. It is encouraging to see these referenced alongside each other, since it reinforces that momentum for integrating foresight and evaluation is genuine, even if practical guidance remains thin. Your three-part framework (capacity building, practical frameworks, and institutional change) reflects a sequence that I think is right. Technical tools alone will not shift practice if the institutional incentives continue to reward retrospective accountability above all else. Evaluation mandates, commissioning processes, and the expectations set by donors are all part of the system that needs to shift. That is why I included the question about what institutional changes would be needed, for it seems to me that this is where the real bottleneck sits, not in the availability of foresight methods per se.

      The FAO futures of food and agriculture scenarios report you reference is a valuable resource, and it would be interesting to hear from colleagues whether and how evaluators have drawn on those scenarios in their own work, either for framing evaluations or for contextualising findings. Looking forward to continued exchange.

    • Posted on 26/03/2026

      Carlos, thank you for sharing this example. The FAO One Health evaluation is a genuinely instructive case, and your framing of the "temporal mismatch" between retrospective findings and forward-looking relevance captures something I think many evaluators intuitively recognise but struggle to articulate clearly in evaluation reports.

      What I find particularly insightful in your reflection is the reinterpretation of existing DAC criteria through a foresight lens. Framing relevance as "future fitness," sustainability as "resilience under change," and coherence as "the ability to work across systems" is not a radical departure from the criteria; I would argue it is a more honest application of them in contexts where conditions are already shifting during programme implementation. I have been thinking along similar lines, and your example reinforces the case that foresight doesn't necessarily require a separate methodology inserted into evaluation…it can be woven into the interpretive framing we already use (see my response below to Conny).

      Your point about path dependencies is also poignant. Institutional strengths become constraints when the future demands different configurations of expertise and partnership. This seems like fertile ground for scenario planning in particular, whereby it can help organisations like FAO stress-test their current operating models against emerging One Health futures. 

      Your comment also made me think of Michael Quinn Patton’s 2020 article “Evaluation Criteria for Evaluating Transformation: Implications for the Coronavirus Pandemic and the Global Climate Emergency” (see attached). MQP critiques the DAC criteria and offers six new criteria oriented around transformation. From his article abstract:
      Fundamental systems transformations are needed to address the global emergency brought on by climate change and related global trends, including the COVID-19 pandemic, which, together, pose existential threats to the future of humanity. Transformation has become the clarion call on the global stage. Evaluating transformation requires criteria. The revised Organization for Economic Cooperation and Development/Development Assistance Committee criteria are adequate for business as usual summative and accountability evaluations but are inadequate for addressing major systems transformations. Six criteria for evaluating transformations are offered, discussed, and illustrated by applying them to the pandemic and the Global Alliance for the Future of Food. The suggested criteria illustrate possibilities. The criteria for judging any intervention should be developed in the context of and aligned with the purpose of a specific evaluation and information needs of primary intended users. This article concludes that the greatest danger for evaluators in times of turbulence is not the turbulence—it is to act with yesterday’s criteria.

      I have used MQP’s transformational criteria in two evaluations. I’ll share later on how this worked and did not work in the context I was working in…a foresight lens definitely played a role…or I should say a lack thereof.

    • Posted on 26/03/2026

      Thank you, Rama. The role of changing perceptions of sustainability over time is a rich thread worth exploring further. One of the tensions I find most interesting in this space is that sustainability is often assessed at a fixed point in time (whether at programme design or close), against conditions that may look very different five or ten years later. A foresight lens invites us to ask not just whether a programme is sustainable under current conditions, but whether it is resilient to the range of futures that are plausible given climate trajectories, political shifts, or ecosystem dynamics. Would you be willing to share an example from your own experience where changing perceptions of sustainability, perhaps across funders, governments, or communities, shaped how evaluation findings were received or acted upon?

    • Posted on 26/03/2026

      Thank you all for your contributions. I was not seeing any posts on Tuesday, but yesterday many of you added your insights and responses. So thank you! Let me respond to each of you throughout today and tomorrow.

      Conny, thank you for starting the discussion. You are spot on. What you are describing is precisely the tension this discussion is trying to surface. The "L" in MEL/MEAL/MERL is often the first casualty when evaluation is treated as a compliance exercise rather than a genuine learning process. Your observation that evaluation findings are frequently "put aside" after delivery is one of the most persistent and frustrating patterns in our field, and it goes to the heart of why foresight integration matters: if learning isn't happening in real time, forward-looking evaluation becomes even more difficult to anchor institutionally.

      Your point about outcome harvesting (OH) and outcome mapping (OM) is well taken. I’ve toyed with new evaluation concepts like Anticipatory Outcome Fishing and Foresight-Infused Outcome Mapping, attempts to weave futures thinking into existing evaluation approaches. As you mentioned, I have found that these approaches do in fact create more active stakeholder engagement across reflection cycles, which can build the kind of evaluative culture that makes forward-looking thinking more natural, though it is not easy for some organisations to engage at this level. I would also add that the participatory dimension you describe, i.e., getting stakeholders to reflect on what worked, what didn't, and what was unexpected, is also a foundation for scenario thinking. Once people are comfortable sitting with uncertainty and identifying assumptions, introducing foresight tools like horizon scanning or the Three Horizons framework becomes a much shorter step. Looking forward to hearing more from you as the discussion evolves.

    • Steven Lynn Lichty

      Managing Partner, REAL Consulting Group, Kenya

      Posted on 23/03/2026

      Welcome to the discussion "From Hindsight to Foresight: How Evaluation Can Become Future-Informed". My name is Steven Lichty and I’ll be hosting this online discussion over the next five weeks. I live in Nairobi and have been working at the nexus of foresight and evaluation for over 20 years. I am looking forward to facilitating our conversations here, sharing resources, and learning from all of you.

      Evaluation has long helped us understand what happened, what worked, and what did not. But many of the systems we care about most (food, agriculture, climate, ecosystems, resilience, etc.) are now shaped by accelerating uncertainty, disruption, and long-term change. In that context, looking backward is no longer enough. The question is not only whether an intervention performed well in the past, but whether it is fit for the futures now emerging.

      Over the coming weeks, this forum is a place to test ideas, share examples, surface tensions, and learn across disciplines. Through shared experiences, optional readings, and honest reflection, we will explore what it looks like when evaluation starts looking forward. Not abandoning rigour, but expanding it. Not replacing the DAC criteria, but asking what criteria like relevance and sustainability really mean when the future may look nothing like the world in which a programme was initially designed.

      This discussion invites evaluators, foresight practitioners, commissioners, researchers, and decision-makers into a shared space of inquiry. How can evaluation become more future-informed, more adaptive, and more useful in times of volatility? What happens when we bring foresight tools like horizon scanning, scenarios, Three Horizons, Futures Triangle, or Causal Layered Analysis into evaluation design, interpretation, and use? How can deeper epistemologies and ontologies driving critical futures thinking inform how we do evaluation?

      This community includes some of the most thoughtful evaluators, commissioners, and practitioners working in food security, agriculture, and the environment. You have seen the limits of retrospective evaluation firsthand. You have also probably seen glimpses of something better. You do not need to be an expert in both fields to contribute. Practical experience, critical questions, promising cases, doubts, and provocations are all welcome.

      So let’s begin there: Where have you seen the limits of retrospective evaluation in a fast-changing world? And where do you see the most promising entry points for bringing a foresight lens into evaluation practice?

      I am glad you are here, and I look forward to an engaging and thought-provoking discussion.