Thank you for your active participation and feedback! I am reading and reflecting on all the contributions as they come in. I'll respond to Dreni-Mi and Daniel now, and I look forward to continued discussion.
Dear Dreni-Mi,
Many thanks for your posting. You’ve given a sound overview of the various phases of an evaluability assessment, a rationale for its implementation, and its benefits. Several sequences of evaluability assessment stages are mapped out in the evaluation literature, all somewhat related but stressing different aspects or values and categorizing the steps slightly differently. Wholey’s 8 steps come to mind, as well as Trevisan and Walser’s 4 steps. Your emphasis on cost-effectiveness and preparing for quality evaluation resonates, I think, across all the approaches.
One of our learnings from conducting evaluability assessments at CGIAR this past year was the value of a framework (checklist) to guide the work, along with the need to be flexible in its implementation. The jigsaw metaphor for EAs that Rick Davies shared in another posting is, I think, an especially helpful way of seeing an EA as a means of bringing together the various pieces in an approach best suited to a particular context. Clearly defining objectives for an EA and responding to specific needs leads to more effective use of the framework, bringing greater flexibility and nuance to the process.
Of the key stages you’ve outlined, which have you found the most challenging to implement? And what has been your experience with the use of evaluability assessment results?
Kind regards,
Amy
Dear Daniel,
Many thanks for your comments. I’ll respond to a few. I fully agree: evaluators should have a seat at the design table. In my experience, their participation and ability to facilitate evaluative thinking among colleagues usually strengthens an intervention’s evaluability. It can also support the development of sound monitoring and evaluation planning, and the capacity to use and learn from the data these processes generate. Such a role broadens what is typically understood, in some circles anyway, about what evaluators are and what they do, which I welcome as support for the further professionalization of our field.
In the 3rd edition of his Evaluation Thesaurus, Michael Scriven refers to the philosopher Karl Popper’s concept of “falsifiability” when discussing evaluability. The concept holds that a theory, hypothesis, or statement should always be capable of being proven wrong. For an evaluand (Scriven’s term for what is to be evaluated: a program, project, personnel, policy, etc.) to be evaluable, then, I understand broadly that it would be deemed falsifiable to the extent that it is designed, developed, or constructed in a way that allows evidence to be generated as “proof” of its value.
The religious connotation in Scriven’s reference to a first commandment is certainly intended to underscore the importance of the concept. Evaluability as “the first commandment in accountability” strikes me as something that is owed: a justification and, ultimately, a responsibility. Scriven notes that low evaluability carries a high price: “You can’t learn by trial and error if there’s no clear way to identify the errors.” And: “It is not enough that one be able to explain how one spent the money, but it is also expected that one be able to justify this in terms of the achieved results” (p. 1).
I believe Scriven discusses evaluability further in the 4th edition; I’m traveling and don’t have access to my hard-copy library back home. Perhaps others can point to further references on this.
RE: Evaluability Assessments: An invitation to reflect and discuss

Amy Jersild
PhD Candidate and evaluation consultant, Western Michigan University
United States of America
Posted on 04/08/2024