Senior Evaluation Specialist in the UN system (FAO, IAEA, WFP)
Food security analysis and humanitarian assistance programme management in the field with international NGOs (1999-2003)
Posted on 20/02/2024
Dear Muriel,
I agree that A.I. brings great potential for evaluation, and many questions, all at once!
In the Office of Evaluation of WFP, as we have looked to become more responsive to colleagues’ needs for evidence, the recent advancements in artificial intelligence (A.I.) emerged as an obvious avenue to explore. I am therefore happy to share some of the experience and thoughts we have accumulated as we have started engaging with this field.
Our starting point for looking into A.I. was recognizing that we were limited in our capacity to make the most of the wealth of knowledge contained across our evaluations to address our colleagues’ learning needs. This was mainly because manually locating and extracting evidence on a given topic of interest, to synthesize or summarize it for them, takes so much time and effort.
So, we are working on developing an A.I.-powered solution to automate evidence search using Natural Language Processing (NLP) tools, allowing us to query our evidence with questions in natural language, much as we do in any web search engine. Then, building on recent technology leaps in generative A.I., such as ChatGPT, the solution could also deliver newly generated text based on the extracted passages, such as summaries of insights.
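To make the retrieval step concrete, here is a minimal toy sketch of how a natural-language query can be matched against evidence passages. It uses simple bag-of-words vectors and cosine similarity purely for illustration; a production system like the one described would use transformer-based sentence embeddings, and the passages and function names below are hypothetical examples, not WFP's actual system.

```python
# Toy natural-language evidence search: rank passages by cosine
# similarity between bag-of-words vectors (illustration only).
import math
import re
from collections import Counter

def vectorize(text):
    # Lowercase and keep alphabetic tokens only, counted as a sparse vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query, passages, top_k=2):
    # Return the top_k passages most similar to the query.
    qv = vectorize(query)
    return sorted(passages, key=lambda p: cosine(qv, vectorize(p)),
                  reverse=True)[:top_k]

# Hypothetical evidence passages for illustration.
passages = [
    "School feeding programmes improved attendance in rural areas.",
    "Cash transfers were delivered through mobile money platforms.",
    "Evaluation found school meals increased enrolment of girls.",
]
results = search("What did evaluations find about school feeding?", passages)
print(results[0])
```

In a full retrieval-augmented setup, the top-ranked passages would then be passed to a generative model to produce a summary, rather than returned verbatim.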
We also expect automated text retrieval to bring additional benefits. It can help tag documents automatically and more systematically than humans can, to support analytics and reporting; and it can direct relevant evidence straight to audiences based on their function, interests and location, much as Spotify or Netflix do.
Once we have a solution that performs well in the search results it offers, we hope it can be replicated to serve other, similar needs.
Beyond these uses that we are specifically exploring in the WFP Office of Evaluation, I see other benefits of A.I. for evaluation, such as:
- Automating processes routinely conducted in evaluations, such as synthesizing existing evidence to generate brief summaries that could feed into evaluations as secondary data.
- Improving access to knowledge and guidance, and facilitating the curation of evidence for reporting, e.g. in annual reporting exercises.
- Facilitating the generation of syntheses and the identification of patterns from evaluation or review-type exercises.
- Improving editing through automated text review tools to help enhance language.
I hope these inputs are useful, and I look forward to hearing the experiences of others. We are all learning as we go, and this field is indeed full of promise and risks, and surely moves us out of our comfort zones.
Best
Aurelie
Italy
Aurelie Larmoyer
Senior Evaluation officer
WFP
Posted on 22/08/2025
Dear Monica and colleagues,
Thank you for a very interesting discussion and insights.
Increasing the influence of evidence on decision-making is a long-standing challenge for all of us working in monitoring or evaluation functions. It connects to a series of factors at all levels of our organizations, even when the importance of evidence is well recognized.
In the World Food Programme's evaluation function, we have put in place dedicated strategies to encourage our stakeholders to use evidence as they make decisions. Looking back on a few years of experience, I can share some reflections on what we are starting to see as effective.
I certainly subscribe to many of the points made by colleagues on impeding factors, such as issues with data quality; the fragmentation of information systems; an organizational culture that still sees monitoring or evaluation as compliance exercises rather than learning opportunities; and the centrality of senior leadership's championing role.
Regarding options to better embed evidence into decisions, I also concur with the views shared by our colleague Brilliant Nkomo that formalizing processes is an essential first step. While many organizations do have clear M&E frameworks and develop associated guidance and capacities, it is also important to institutionalize the feeding of evidence-based knowledge into policy and programmatic discussions, by integrating it formally into strategic planning or regular programme stock-take and review processes.
Related to this, our experience has shown that efforts to sync our evidence generation with the needs of our stakeholders do pay off. As generators of evidence and knowledge, we have been more proactive in engaging with those whose decisions we think we can support, to hear what our colleagues are grappling with and understand where they have gaps. In parallel, we developed a capacity to respond to needs as they arise, by repurposing existing evidence into new products. Since we started developing this capacity, we have seen a culture shift slowly taking place, whereby evaluation comes to be considered a knowledge partner.
Last, the feedback we have received from stakeholders, when surveying them on how they value the evidence we provide, has also kept confirming that tailored communication pays off: different roles call for a different focus and form of evidence. So, the more we adapt to the variety of needs of those we serve, the better our products are received, and the more likely they are to be used.
Best to all,
Aurelie