RE: Evaluability Assessments: An invitation to reflect and discuss
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 23/08/2024
Hi all,
I agree with the argument that the rigid application of a tool, whatever it may be, is unlikely to result in a positive outcome. This may be the rigid application of theories of change, an overused approach that has become synonymous with "doing" evaluation, yet one still rarely used to its fullest potential in the evaluation reports I read. Or the overvaluing of RCTs based on ideological interests. Or the rigid application of the OECD-DAC criteria based on an expected paradigm. Our field has expected pathways for what counts as "knowledge," which contribute to this rigidity, particularly when tools are applied mechanistically, and their overuse can indeed perpetuate the bureaucratic nature of our established systems. I fully agreed with the points raised by Dahler-Larsen and Raimondo in Copenhagen several years ago at EES.
Yet I would also argue that no tool, including an evaluability assessment, should be dismissed on this basis alone. A more useful line of inquiry may be to ask when and how EAs could be most useful. In my experience, EAs can in effect be a tool for breaking with mechanistic evaluation and bureaucratic systems (and yes, an attempt to break management's capture of evaluation) by better defining a meaningful and useful focus for an evaluation, or by supporting a decision not to conduct an evaluation at all based on their findings. I think the challenge lies at the organizational level, with the inevitable interest in standardizing and creating norms for EA use across complex realities.
Regards, Amy