RE: Evaluability Assessments: An invitation to reflect and discuss

Silva Ferretti

Italy

Freelance consultant

Posted on 04/08/2024

I agree with you [Rick Davies contribution below] on many things, and yes, it's key to have a reality check on what an evaluation should and can do.

1. "All projects can and should be evaluated
 
Yes, but WHEN? EA-recommended delays can give time for data, theory and stakeholder concerns to be resolved, and thus lead to a more useful evaluation.
 
Yes, but HOW? Approaches need to be proportionate to resources, capacities and context. EAs can help here.
I am with you on this. And too many evaluations go on autopilot (i.e. cut and paste a template, with small adaptations, at the end of the project). I definitely welcome evaluability as an opportunity to broaden possibilities, such as suggesting real-time evaluations, promoting participatory approaches, or highlighting important questions or priorities that need to be addressed. But let's be realistic: most evaluability documents I have seen check whether the programme has collected preset indicators against an agreed logframe/theory of change and demand a check of which OECD/DAC criteria can be evaluated. The Austrian Development Agency has interesting points on the utility of the evaluation as perceived by different stakeholders - but the annexe is then missing :-( (is this still a working document?)
I dream of an evaluability document that provides a catalogue of possibilities rather than narrowing options to conventional approaches. It should suggest, for example, that a developmental evaluation is possible, rather than asking for preset theories of change. Otherwise, evaluability will stifle rather than promote innovation.
 
2. Re  "Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance."  and your comment that.. "We are confusing the project documentation with the reality of the work."
 
It would be an extreme position to argue that there will be no instances of good practice (however defined) on the ground in these circumstances.
But it would be equally extreme to argue the other way, that the state of design and data availability has no relevance at all. If you do take that position, the decision to evaluate is effectively a gamble with someone's time and resources.
At some stage, someone has to decide how much money to spend, when, and how.
Of course, we should spend evaluation resources wisely. And, of course, a good evaluability assessment can result in more efficient evaluations (and many evaluation managers are already good at doing that)
Thinking about cost-effectiveness... can we really justify spending a lot of money just to establish whether a programme can be evaluated, without providing any additional learning? I am not discarding this, and yes, it can be useful.
But, for suitable programmes, it is possible to have a phased evaluation in which the first phase assesses the programme and provides some preliminary learning, while identifying the best options for the next phases. Of course, you need an evaluator or team conversant with different methodologies, adaptable, and close to management, but it is very possible to do that. A phased evaluation will not always be suitable, but it is a real possibility that can be overshadowed by conventional evaluability approaches.
 
3. Re ""Can the program be evaluated with a standard toolbox" (which is what evaluability risks becoming) " 
 
I would like to see some evidence for this claim.
As counter-evidence, at least of intention, I refer you to this diagram from the Austrian Development Agency EA guide, and to the reference to the jigsaw nature of an EA, in the sense of having to fit different needs and capacities together rather than following any blueprint.
 
As I wrote before, the checklist confirms a very conventional understanding of a programme. I reiterate here that I have worked on programmes that did not have a logframe, set indicators or baselines (think, for example, of evolving models of work), and where the evaluation helped to systematize approaches. Such programmes would simply not have passed the proposed checklists, and it is unclear to what extent they could have reached important stakeholders who were only identified through snowballing. Yes, I am referring to special cases, as I tend to work more at the grassroots level and on innovative projects. But these are the ones most at risk of remaining under the radar with a conventional understanding of evaluability.
 
So I see how evaluability, as currently practised, can work for large, relatively standardized projects. But it still falls short of becoming a possibility for novel approaches and for liberating ideas and possibilities, which our sector badly needs.