- Development of web-based monitoring database
- Organisational & Advocacy Capacity Assessments
- MEL
- Project Management
- Portfolio Management
- Storytelling
- Capacity Development of CSOs
- Development Cooperation
- Quality Management
- Climate Change Adaptation
Germany
Cornelia Rietdorf Member since 10/04/2019
German Environment Agency
Scientific Associate
My contributions
- In our last EvalForward Talks session on 30 April, I shared my experience of how to conduct a capacity-development project review from a monitoring and evaluation (M&E) perspective, and I would like to thank all those who participated for their comments and insights. Here is a brief overview of the key points that emerged on how to capture the effects of capacity development (trainings, workshops and mentoring activities) in a meaningful way. The twists and turns of capacity development: as you know, a broad approach is needed when evaluating advocacy and capacity-development projects ‒ and...
Cornelia Rietdorf, Scientific Associate, German Environment Agency (Germany)
Posted on 25/03/2026
Good morning / hello to everyone and thank you for starting this interesting discussion round, Steven!
I'm not an evaluator, but I have worked in M&E / MEL / MEAL / MERL in different contexts over the past 10+ years and have seen the challenges of mostly backward-looking evaluation far too many times. I don't have much experience with foresight evaluations, so here are just some general thoughts:
What makes an evaluation forward-looking? For me, in a way, it's creating awareness of the L in MEL / MEAL / MERL. Why do we do an evaluation? Far too often I saw that, for project teams, it's just a troublesome box to tick to satisfy donor or project requirements. The evaluation is done in whatever way and then put aside. It was often hard work to raise awareness of the importance and potential of evaluation: to uncover what worked well, what led to actual positive change, what didn't work and why, and what might even have brought about negative change, and then to use these learnings for better and improved projects, policies, strategies, measures, etc.
I wonder whether outcome harvesting and outcome mapping are one way to increase awareness of the potential of evaluations, as the many stakeholders involved take a more active part in the several reflection rounds built around outcome mapping and outcome harvesting. Ideally, this triggers an important reflection process on what might work and why, what has worked, what hasn't, and what was unexpectedly positive or negative, which can then in turn feed into an improved follow-up process.
So, in a way, don't good tools, arguments and practices that really focus on the learning aspect of evaluations, and that engage all the key stakeholders involved in reflection processes as much as possible, greatly support evaluation foresight? I hope I'm not completely off track here; I'm very much looking forward to the discussion threads around this topic and to learning from everyone here.
Cheers,
Conny