Monitoring, Evaluation, Accountability, and Learning (MEAL); rapid-cycle and formative evaluation; mixed-methods evaluation design; utilization-focused evaluation; adaptive management and learning agendas; evidence use for decision-making; qualitative and quantitative data collection and analysis; rapid assessments; performance monitoring systems; equity, gender, and social inclusion (GESI); capacity strengthening for evaluation; development and humanitarian programming.
I have over 19 years of experience designing, managing, and using evaluations to support timely decision-making and program adaptation across Canada, the Caribbean, Latin America, and West and Central Africa. My work includes building organization-wide M&E frameworks, developing learning agendas, leading and overseeing evaluations, and supporting implementers and decision-makers to use evidence effectively. I have worked with international NGOs and partners such as DAI, MEDA, Cuso International, Catholic Relief Services, and World Vision across sectors including governance, livelihoods, youth, health, education, WASH, and emergency response.
Canada
Rhode Early Charles
Posted on 01/04/2026
I really appreciate the insights shared, especially the focus on adaptive learning and forward-looking evaluation. I would also argue that one critical limitation lies in how evaluation knowledge is produced and communicated. This time, my comment is less focused on methodology and more on offering my perspective in response to your question, Steven, about what needs to change for evaluation to truly contribute to transformation.
Evaluation reports are often written by evaluators, using technical and methodological language that mainly speaks to other evaluators or technically trained audiences. As a result, these reports tend to be most appealing and useful to those who already understand that language.
What I find interesting is that most projects have communication plans that clearly identify who needs what information, how it should be shared, and why. However, this logic is rarely applied when writing evaluation reports. Different stakeholders have very different needs depending on how they interact with the project, and a single report cannot effectively serve all of them.
To me, evaluation reports should be seen as a starting point, not the final product. The data and findings should serve as inputs into multiple targeted knowledge products, developed by communication experts or sector specialists (health, education, economic development, etc.), that speak directly to specific audiences. These tailored outputs would translate evaluation insights into formats and messages that are relevant and actionable, supporting implementation, positioning, and donor engagement.
In addition, while I understand the rationale behind lean data approaches, they can sometimes be too restrictive. Focusing only on what is needed for indicators or donor requirements may limit opportunities to explore emerging issues or strategic areas more deeply. Sector specialists could play a role here by collecting additional data for learning, positioning, thought leadership, or future programming, as long as there is clear accountability for why that data is being collected and how it will be used. Every additional question has a cost, but it can also bring real value when it is asked intentionally.
Overall, if evaluation is meant to support transformation, it is not just about improving methods or tools. It is also about making sure that the knowledge we produce is usable, relevant, and accessible to the people who need it. We need to better understand our audiences and what they need.
Canada
Rhode Early Charles
Posted on 25/03/2026
I find this discussion particularly relevant. In my work, I often use time series analysis and predictive modeling to estimate future trends based on historical data.
I would add that while foresight approaches that do not rely on past performance are essential, especially in contexts characterized by high uncertainty or limited data, predictive methods grounded in historical data remain among the most robust tools we have when sufficient and reliable data is available. These methods allow us to identify patterns, quantify trends, and generate evidence-based projections that can effectively complement more qualitative foresight approaches.
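As a minimal illustration of what I mean (not a full workflow), even a simple straight-line trend fitted to past indicator values already turns historical monitoring data into an explicit, checkable projection. The indicator, figures, and quarterly frequency in the sketch below are made up for the example.

```python
# Minimal illustration only: fit a straight-line trend to past indicator
# values and project it a few periods ahead. All data are invented.
import numpy as np
import pandas as pd

# Hypothetical quarterly values of a monitoring indicator.
history = pd.DataFrame({
    "quarter": pd.period_range("2022Q1", periods=8, freq="Q"),
    "value": [120, 132, 128, 141, 150, 158, 155, 167],
})

# Encode time as 0, 1, 2, ... and fit a least-squares trend line.
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history["value"], deg=1)

# Project the fitted trend four quarters beyond the observed data.
future_t = np.arange(len(history), len(history) + 4)
projection = intercept + slope * future_t
print(np.round(projection, 1))
```

In practice we would of course use richer models that capture seasonality and uncertainty, but even this shows how historical data can complement qualitative foresight with a quantified, evidence-based trend.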
With the advancement of AI and machine learning, we now have the capacity to go further by integrating large volumes of data across multiple projects, regions, and even donors. This creates important opportunities to build more accurate and context-sensitive predictive models, particularly when working with similar interventions within a country or sector.
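To make the pooling idea concrete in the simplest possible terms, here is a hypothetical sketch in which two projects share the same data structure and contribute to a single estimate. The tables, column names, and numbers are invented for the example and do not come from any real project.

```python
# Hypothetical sketch: pool monitoring data from two projects with a shared
# schema and estimate one relationship across both. All values are invented.
import numpy as np
import pandas as pd

project_a = pd.DataFrame({"project": "A", "baseline": [10, 12, 9], "outcome": [14, 17, 13]})
project_b = pd.DataFrame({"project": "B", "baseline": [11, 8, 13], "outcome": [15, 11, 18]})

# Combine the datasets so patterns are estimated across projects,
# not within a single (often too small) project.
pooled = pd.concat([project_a, project_b], ignore_index=True)

# Fit a simple shared relationship between baseline and outcome.
slope, intercept = np.polyfit(pooled["baseline"], pooled["outcome"], deg=1)
print(f"pooled estimate: outcome ≈ {intercept:.1f} + {slope:.1f} × baseline")
```

A real pooled analysis would need to account for differences between projects and contexts, but the basic point stands: several small datasets combined can support estimates that none of them could support alone.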
However, a major constraint remains data availability and fragmentation. Data is often siloed within individual projects or organizations, making it difficult to build sufficiently large and diverse datasets for robust modeling. In many cases, data from a single project is not sufficient to support reliable predictions.
One potential way forward would be to strengthen national ownership of project data. Governments could play a key role in consolidating data generated across projects into centralized and accessible databases. If well designed, such systems could support research, inform project design, and enable more rigorous ex-ante analysis of potential success or failure.
In that sense, I see strong complementarities between foresight methods and predictive analytics. Foresight helps us explore uncertainty and alternative futures, while predictive models help us quantify likely trends where data allows. Bringing both together could significantly strengthen evaluation practice and decision-making.