Dear colleagues,
Have you heard of evaluability assessments (EAs)? Evaluability refers to the extent to which an intervention can be evaluated in a reliable and credible fashion (OECD, 2022).
Under the guidance of Rick Davies, CGIAR’s Evaluation Office developed an EA framework in 2022 and, over the past year, has been using it to assess the evaluability of four large-scale programmes known as Regional Integrated Initiatives. You can read a synthesis report here and a recent blog on CGIAR’s experience here.
The importance of EAs
At CGIAR, we have come to identify EAs as central to developing a culture of results. A strong focus on evaluability facilitates clear planning for monitoring and evaluation and its subsequent use. As the late evaluation theorist Michael Scriven noted, evaluability is analogous to requiring serviceability in a new car and may be thought of as “the first commandment in accountability” (Scriven, 1991, p. 138).
Invitation to discussion
We are keen to share our experience and learn from Eval Forward colleagues about their experiences of EAs. If you have commissioned or participated in an EA or used EA results when managing evaluations, please consider sharing your insights and reflections on (one or all of) the following questions:
- To conduct an EA or to not conduct an EA: What factors enter into a decision to conduct, or not to conduct an EA? What purpose have you formulated for an EA depending on what is being assessed and when?
- Distinguishing between EAs and other kinds of evaluative exercises: How can we bring clarity to the EA process for programmes and/or monitoring colleagues in order to help facilitate a successful outcome? How easy is it to convince funders of the relevance and usefulness of EAs in the context of budgetary constraints?
- Use of EA results for decision-making processes: Do you have examples of how EA results have been used for decision-making purposes? Have you encountered any challenges in facilitating the use of EA results?
- Use of EAs in assessing evaluability of interventions designed to contribute toward realization of the Sustainable Development Goals (SDGs): Are there examples of the use of EAs to determine the evaluability of country level or organizational level contributions to the realization of the SDGs? Are there any particular considerations for EAs within this circumstance?
Let’s connect and share
Notably, Evaluability Assessments have a designated page on the BetterEvaluation platform. Learning from this discussion will be shared on the BetterEvaluation page and at the upcoming European Evaluation Society (EES) conference. We are excited to explore these questions and share our experiences with you!
Amy
---
References and other resources
- CGIAR Independent Advisory and Evaluation Service (IAES). (2023). Regional Integrated Initiative-Evaluability Assessment Report. Rome: IAES Evaluation Function. https://cas.cgiar.org/
- Child, K. and Davies, R. (2021). Evaluability Assessments are an essential new tool for CGIAR Managers. Independent Advisory and Evaluation Service. Rome: CGIAR IAES Evaluation Function. https://iaes.cgiar.org/evaluation/news/evaluability-assessments-are-essential-new-tool-cgiar-managers
- Davies, R. and Payne, L. (2015). Evaluability assessments: Reflections on a review of the literature. Evaluation, 21(2), 216-231.
- Independent Advisory and Evaluation Service (IAES). (2022). CGIAR Evaluation Guidelines. Conducting and Using Evaluability Assessments within CGIAR. Rome: CGIAR.
- Jersild, A., Gullotta, G. and Negroustoueva, S. (2024). Enhancing Pathway to Impact: CGIAR’s Experience with Evaluability Assessments of Regional Programming. Rome, CGIAR Independent Advisory and Evaluation Service. https://iaes.cgiar.org/evaluation/news/enhancing-pathway-impact-cgiars-experience-evaluability-assessments-regional
- Organisation for Economic Co-operation and Development, Development Assistance Committee. (2022). Glossary of key terms in evaluation and results-based management. Paris. https://one.oecd.org/document/DCD/DAC/EV(2022)2/en/pdf
- Scriven, M. (1991). Evaluation Thesaurus (4th ed.). Newbury Park, CA: Sage Publications.
This discussion is now closed. Please contact info@evalforearth.org for any further information.
Chad
Dreni-Mi Mahamat Djimé
Assistant au Secrétaire Général du Ministère de l'Agriculture
Ministère de l'Agriculture/Ancien Directeur des Etudes de la Planification et du Suivi
Posted on 26/07/2024
[Originally posted in French]
Hello dear friends and colleagues of Eval Forward!
As we are in a learning process, I'd like to share a few points from my research to help us understand Evaluability Assessment.
Assessing evaluability: a crucial step before any evaluation
Evaluability assessment is a process that aims to determine whether an evaluation of a programme, project or policy is feasible and relevant. In other words, it involves checking whether the conditions are right for conducting a high-quality evaluation and obtaining reliable and useful results.
Why is this important?
The key stages in an evaluability assessment
The benefits of evaluating evaluability
In short, evaluability assessment is an essential tool for guaranteeing the quality and relevance of evaluations. It ensures that the resources invested in evaluation are used optimally and that the results obtained are useful for improving policies and programmes.
Thank you
Dreni-Mi from Chad
Canada
Steven Lam
Independent Research and Evaluation Consultant
Posted on 26/07/2024
Hi Amy and all,
To answer question 2 of how to facilitate the use of evaluability assessments, I find it helpful to do an evaluability assessment without calling it an evaluability assessment given the politics of "evaluability." I conceptualize it as an activity to prepare programs for evaluation rather than to determine "evaluability." This means making sure that the linkages in the program theory of change are logical, that the proposed outcomes are possible, etc. This approach to evaluability assessment is more of a process integrated within program planning and evaluation generally and as such does not often lead to a stand-alone output.
A few years ago my colleague and I reviewed evaluability frameworks and case studies, which might provide more insights into other gaps and opportunities.
Lam S, Skinner K. (2021). The use of evaluability assessments in improving future evaluations: a scoping review of 10 years of literature (2008-2018). American Journal of Evaluation. 42(4): 523-540.
Best,
Steven
Italy
Silva Ferretti
Freelance consultant
Posted on 29/07/2024
The issue I have with "evaluability" is that all programs can (and should) be evaluated.
Some programs lend themselves to more "rigorous" and mainstream approaches, while others, due to their inherent complexity, require different methodologies.
These alternative approaches can help build an understanding of dynamics that did not previously exist... which is a GOOD thing. Not all programs need to be preformatted. This can be especially beneficial for participatory and innovative/adaptive programs that need to determine the best way forward as they progress.
It is crucial that we do not use evaluability as another brick in the wall for making evaluations just about compliance with results or a cheaper process of standard research applied to arbitrary theories of change.
Instead, we should reclaim evaluation as an opportunity to assess changes in ways that are coherent with the program's intent and context. So the question is not "Can the program be evaluated with a standard toolbox" (which is what evaluability risks becoming) but rather "What evaluative approach is best suited to this context."
When I read, "*Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance. In these circumstances, an Evaluability Assessment would not be needed to decide whether to go ahead with an evaluation. Efforts need to focus on the more immediate tasks of improving project design and/or the M&E framework*."(https://www.betterevaluation.org/methods-approaches/themes/evaluability-assessment)
I really want to cry....
We are confusing the project documentation with the reality of the work. In my experience, many projects had sketchy logframes or theories of change, yet were terrific. The evaluation helped systematize them and unpack learning and value. If we keep looking only at what is preformatted, we miss the chance of real transformation.
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 29/07/2024
In response to Silva's comments below (also sent to email server)
"Hi Silva
Let me address some of your points
1. "All projects can and should be evaluated"
Yes, but WHEN? EA-recommended delays can give time for data, theory and stakeholder concerns to be resolved, and thus lead to a more useful evaluation.
Yes, but HOW? Approaches need to be proportionate to resources, capacities and context. EAs can help here.
2. Re "Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance." and your comment that.. "We are confusing the project documentation with the reality of the work."
It would be an extreme position to argue that there will be no instances of good practice (however defined) on the ground in such circumstances (i.e. where there are poor designs and poor M&E frameworks).
But it would be equally extreme to argue the other way: that the state of design and data availability has no relationship at all to outcomes on the ground. If you do take that position, the decision to evaluate is effectively a gamble with someone's time and resources.
At some stage someone has to decide how much money to spend, when, and how. EAs can help inform those decisions.
3. Re ""Can the program be evaluated with a standard toolbox" (which is what evaluability risks becoming) "
I would like to see some evidence for this claim.
As counter evidence, at least of intention, I refer you to this diagram, from the Austrian Development Agency EA guide, and to the reference to the jigsaw nature of an EA, in the sense of having to fit different needs and capacities together, rather than following any blueprint
regards, rick
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
freelance
Posted on 31/07/2024
Dear Amy,
Thanks for posting. I remember well reading Rick D's Synthesis of the Literature back in 2013. I had four observations:
Finally, some help: you quoted Michael Scriven as saying that "evaluability is analogous to requiring serviceability in a new car and may be thought of as 'the first commandment in accountability'". I know this must be a significant saying, but I don't get its import. What do you think he means?
Best wishes and thanks again,
Daniel
Norway
Lal - Manavado
Consultant
Independent analyst/synthesist
Posted on 01/08/2024
Dear Amy,
Many thanks for raising this question, which has the potential of doing much good as well as its opposite.
My point of departure regarding evaluation is very simple: was a set of actions intended to enhance the lives of a target group attainable, and has it been attained?
If the answer to this question is no, or only marginally or partially, the set of actions undertaken is flawed in some way. However, such unsatisfactory actions may still leave behind excellent motorways, specialist hospitals, bridges, international airports, etc.
Evaluating the extent to which some public good has been achieved is extremely difficult. When projects, i.e. sets of actions, are decided on, the actual desires of a target group are seldom consulted in a meaningful way. And even when there is general agreement in a target group on the desirability of a goal, there may be disagreement on how that objective may be attained. Often this is due to the cultural norms dominant in that social group, or to the influence of some outside vested interest skilled in propaganda.
Should the evaluator be willing and able to identify and gauge the extent and nature of those three variables during the execution and conclusion of a project, it is possible to undertake remedial adjustments in order to increase its success. True, it is difficult, but if undertaken it would, in the end, benefit a group of living people whose needs are not a mere abstraction.
Cheers!
Lal.
Italy
Carlos Tarazona
Senior Evaluation Officer
FAO
Posted on 01/08/2024
Hi Amy,
Thanks for starting this valuable exchange. I would like to share my experience of doing evaluability assessments of programmes in FAO. These have generally aimed at determining readiness for evaluating FAO’s work in a given area, and/or at informing decision-making on FAO evaluation plans.
For example, the evaluability assessment of FAO's rural poverty reduction programme (available at https://openknowledge.fao.org/server/api/core/bitstreams/161ba76c-daea-…) was instrumental in identifying which components of FAO's programme were feasible and worthwhile to evaluate for results. With this objective in mind, the evaluability assessment reviewed the clarity of the programme logic and the extent of completion of planned activities and use of resources, and resulted in proposals on the scope, approach, methodological design and work-plan of the evaluation.
More recently, we have been using evaluability assessments to inform our decisions on which evaluations to conduct, as well as to define their scope, objectives and approach. This development is aligned with our medium-term strategy, which seeks to ensure that the Office of Evaluation produces high-quality evaluations that meet the needs of FAO stakeholders (https://www.fao.org/evaluation/highlights/detail/fao-evaluation-strateg…).
Best regards,
Carlos
Italy
Silva Ferretti
Freelance consultant
Posted on 04/08/2024
I agree with you [Rick Davies contribution below] on many things, and yes, it's key to have a reality check on what an evaluation should and can do.
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 04/08/2024
Ethiopia
Hadera Gebru Hagos
Senior Consultant, Natural Resource Management and Livestock Specialist
Freelance Consultant
Posted on 06/08/2024
Thank you Amy for bringing up this topic. I also thank the CGIAR’s Evaluation Office for developing an EA framework for use.
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
freelance
Posted on 06/08/2024
Dear Amy,
Thanks for taking time to read through and reply.
My apologies but let me be terse and honest.....
Many thanks for explaining what Mr Scriven wrote. I now understand. That said, I remain none the wiser on the import and significance of what he wrote. Motherhood and apple pie, so clever but a bit thin. 😉
On EAs themselves, and as I alluded to, the purpose and scope of an EA's inquiry appear to be part and parcel of what most people would refer to as a competent ex ante evaluation or appraisal. As you say, it is great to have an evaluator or two on the team, yet how could you not look at the "evaluability" of an investment by appraising it and the evidential reasoning that informs its rationale and design factors, including a ToC and/or a results framework? Or are we saying that an investment an appraisal recommends against can nonetheless be judged evaluable or, indeed, vice versa (assuming the EA is conducted after the appraisal)?
Thus, and as Hadera points out, the incremental value generated by carrying out a discrete (some would say contrived) EA solely by evaluators appears marginal at best; it potentially fragments the team and comes across as rather extravagant and indulgent.
Many thanks for the post and the discussions that have prompted enquiry, debate, skepticism and doubt about EAs.
With best wishes,
Daniel
Cameroon
Norbert TCHOUAFFE TCHIADJE
Associate Professor / Senior Researcher
University of Ebolowa/Pan-African Institute for Development Cameroon
Posted on 06/08/2024
Thanks dear all for your insights.
For me, an EA is relevant to the theory of change.
It is a way to reassess and cross-check the evaluation process in order to verify whether the outcomes are credible, acceptable, reliable and consistent with what was expected.
Thanks.
Norbert TCHOUAFFE, PhD
Italy
Svetlana I Negroustoueva
Lead, Evaluation Function
CGIAR
Posted on 08/08/2024
Dear colleagues,
it is exciting to see such an insightful and passionate discussion. I represent CGIAR’s evaluation function, and we developed and have implemented EAs with Amy under the advisory of Rick Davies. The rationale for conducting EAs in CGIAR is summed up in the blog. Complementing and in response to some items raised here, it is important to note that all contexts are not created equal and level of unquestionable preparedness for an evaluation cannot be assumed in any type and size of intervention in any context. In the evolving contexts, some aspects may not be prioritized in time and to meet the needs of everyone involved. EAs have thus already brought us one step closer to facilitating learning and ensuring that accountability mechanisms (one might even call it an evaluative baseline) are in place before an intervention is launched or progresses too far. EAs have helped build confidence among stakeholders, including MEL professionals (often disempowered) and funders, that aspirational goals and objectives are feasible both to implement and to measure, and that MEL colleagues are key stakeholders in reconciling accountability and transparency with those who fund. EAs have enhanced the sense of ownership and the credibility of processes and results, which can be crucial for securing funding and support. By advising funders on timing and evaluation scope, expectations can be set properly, including the level at which an evaluation can be funded, the depth of inquiry, and the evaluative learning to expect.
Italy
Gaia Gullotta
Evaluation consultant
CGIAR
Posted on 08/08/2024
Dear All,
I am excited to have been part of the team for the Evaluability Assessment of the four Regional Integrated Initiatives of CGIAR. It was a valuable learning experience, revealing both strengths and challenges in using the EA framework for the first time, particularly in understanding evaluability and stakeholder engagement.
Many thanks, Hadera, for your posting. I appreciate you bringing up this important issue. I agree, Project Cycle Management (PCM) practices are essential, but they do not make standalone EAs redundant. EAs complement PCM by providing a dedicated, independent, and detailed assessment of a project's evaluability, thus enhancing overall project quality, accountability, and likelihood of success.
Indeed, EAs involve external evaluators who provide an impartial perspective. This independent assessment can identify potential issues that internal stakeholders might overlook due to familiarity or their interests in the project’s success.
Furthermore, standalone EAs specifically assess whether a project is ready to be evaluated. This includes reviewing the clarity and measurability of objectives, the robustness of data collection plans, the availability of baseline data, etc. These elements might not be thoroughly analyzed during regular PCM phases.
Finally, based on our experience with previous evaluations, an important issue is the quality of data. In fact, it has often been necessary to conduct extensive data cleaning, which required additional time and resources. This could be anticipated and prevented through an evaluability assessment.
With thanks and warm regards,
Gaia Gullotta
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 10/08/2024
Hi all
I would just like to add to Svetlana's important point: "it is important to note that all contexts are not created equal and level of unquestionable preparedness for an evaluation cannot be assumed in any type and size of intervention in any context. In the evolving contexts, some aspects may not be prioritized in time and to meet the needs of everyone involved".
I think this is especially the case with stakeholders' interests in an evaluation: what they want to know and what they are concerned about. These are very likely to change over time.
Czechia
Jindra Cekan
Founder
Valuing Voices at Cekan Consulting LLC
Posted on 12/08/2024
Everyone, glad to see this is being discussed. In 2017 we got a grant from Michael Scriven's Faster Forward Fund to look at 8 ex-post studies. Here is the blog about our report, linked: https://www.betterevaluation.org/tools-resources/building-evidence-base-for-post-project-evaluation-case-study-review-evaluability-checklists
Jindra čekan/ová, PhD
“By the power of your inner peace, you become the force that changes everything.”
― Thích Nhất Hạnh
"Hope is not the conviction that something will turn out well, but the certainty that it is worth doing, no matter how it turns out." - Vaclav Havel
Czechia
Jindra Cekan
Founder
Valuing Voices at Cekan Consulting LLC
Posted on 13/08/2024
Everyone, yes, as Gaia said: "EAs specifically assess whether a project is ready to be evaluated. This includes reviewing the clarity and measurability of objectives, the robustness of data collection plans, the availability of baseline data, etc." It astonishes me how many projects across the development spectrum are unevaluable (as EAs show us), an avoidable data debacle. For instance, in 2022 we shared our ex-post evaluation process and 4 case studies at the Adaptation Fund. On slide 8, we note that only 20% of the projects were evaluable based on the criteria we set, which included having been closed for at least 3 years but not more than 7, having good data to evaluate (methodological feasibility), and providing opportunities for learning across sectors/geographies, etc.
Canada
Jackie (Jacqueline) Yiptong Avila
Program Evaluator/Survey Methodologist
Independent Consultant
Posted on 19/08/2024
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 19/08/2024
Many thanks, Jindra, for sharing your experience and the useful links below. I read through your EA checklist for ex-ante evaluations with interest. Your experience of very few programs having sufficient data resonates. I’d be interested if you have any reflection on stakeholder reception and use of EA results based on your experience (question 3 above).
Warm regards, Amy
Canada
Elizabeth G Sweitzer
Senior Consultant
Guidehouse
Posted on 20/08/2024
Norway
Lal - Manavado
Consultant
Independent analyst/synthesist
Posted on 20/08/2024
Hi, Jackie!
If you are willing to regard evaluation as ascertaining the extent to which any given policy, its implementation strategy, operationalisation and field activity have succeeded in enhancing the quality of life of a target group, then your question becomes simple to answer. Put differently, the problem is to determine whether the physical end product of the above, for instance, a road, hospital, school, farming support etc., has actually benefited a group of people.
Success of an action to be evaluated then, is not measurable by the technical excellence of such end products, some of which may be intangible at first glance, for example, a policy. To evaluate the success or failure of a policy, one has to ascertain to what degree it has benefited a target group.
It is here one runs into a set of difficulties:
Best wishes!
Lal.
Italy
Silva Ferretti
Freelance consultant
Posted on 20/08/2024
Hello again...
is evaluability really about checking "can managerial requirements be satisfied?", or rather about "understanding what evaluation approach suits the project and its context"?
I see this tendency of using evaluability as a top-down assessment surfacing over and over, further narrowing the scope of the evaluation rather than expanding it.
Iraq
Ayman Ramsis
Program Director
Samaritan's Purse
Posted on 21/08/2024
Dear Jackie,
Great question,
In humanitarian interventions, in both immediate-response and rapid-response first-line interventions, when recipients are on the move and are not likely to be found again, we do a quick questionnaire to collect some data on the spot for review later on. Also, in short-term humanitarian responses, post-distribution monitoring surveys are considered a good-enough monitoring and evaluation intervention to capture participants' perspectives and to ensure accountability to the affected population and safeguarding aspects.
Someone could argue that in such types of response we did not do "project evaluation" per se, but rather self-evaluation to identify potential successes and lessons learned for future interventions.
With deep respect,
Ayman Ramsis
+201006712846
Czechia
Jindra Cekan
Founder
Valuing Voices at Cekan Consulting LLC
Posted on 21/08/2024
Amy and all-
So I think we need to differentiate between what is evaluable in terms of the aims of the evaluation and the quality of the evaluative data. On the first: the timeframe for ex-post evaluation typically excludes some projects, because we have found that we really need to look 3 to 5 years after closure, and if it has been more than seven years it is hard to evaluate; we have also found that the projects implemented longest had a greater likelihood of sustainability of results and emerging impacts. On the second: for instance, not just trainings given, but what were participants trained in? Was there any knowledge change or behaviour change as a result at the final evaluation, so that we could evaluate changes to it ex post?
So an evaluability assessment depends on the aim of that evaluation, which is different from how we evaluate. With ex-post evaluations, we want to make sure that partners and participants are still around to be asked (I did one 15 years post-closure in Uganda: the professors training teachers at the pedagogical university were still there, but all the ministry staff had changed, and local teachers and students had long moved on).
Yes, I too mandate in our evaluations that we must have boots on the ground to evaluate with local stakeholders, with partners at national and regional levels, with the villagers, and with the local leaders, and all of that is participatory. Yes, we use focus group interviews, key informant interviews, transect walks, and lots of participatory tools, as outlined in this toolkit we worked on (just out, please comment: 🔗 https://lnkd.in/gyfu_K_3 🌟 Share your comments and suggestions via af-terg-sec@adaptation-fund.org).
To the question of stakeholders creating the evaluation with us: we have asked local people if they would like us to evaluate their project to give them the answers they'd want to learn… but repeatedly we have gotten no answers. They look at us rather confused because, I think, they're just living their lives and they see the project as one input into their lives among many, whereas our trying to isolate the project and its results in itself is very strange to folks ;).
Hope this helps!
Jindra Čekan/ová PhD
Www.ValuingVoices.com and Jindracekan.com
Cameroon
Frankline OJONG Agbor
President
Cameroon Society of English Speaking Evaluators (CamSEE)
Posted on 21/08/2024
Dear team,
Nepal
Samriti Maharjan
Evaluation Analyst
CGIAR IAES
Posted on 21/08/2024
Hello All,
I worked as an Evaluation Analyst with the EA team on the Regional Integrated Initiative assessments. I gained both theoretical learning (the definition, uses and importance of Evaluability Assessments) and practical experience from the assignment. An Evaluability Assessment is definitely conducted to assess whether a program or portfolio is ready for evaluation. It is also used to improve country, organizational and program theories of change, and monitoring and evaluation frameworks. In evaluability assessments, I believe Gender Equality and Inclusiveness is equally important for bringing in diverse data points. In this EA exercise, we inclusively tracked the participation rates of both male and female respondents in the interviews and surveys.
Liberia
Musa K. Sanoe
National M&E Specialist
United Nations Food and Agriculture Organization (FAO)
Posted on 21/08/2024
Evaluability Assessment: I have only heard of this recently. While I haven't seen a concrete EA report, judging from what I have read, I find it difficult to accept EA as an important step towards credible evaluation. Consider a program/project with a sound logframe and an M&E framework in which indicators are clearly defined, with well-defined disaggregation methods, data sources, and methods of data collection and analysis, all defined before implementation. The project has implemented DQA throughout, so as to be aware of data quality issues, and has taken measures to improve the quality of its data. The project/program has implemented After Action Reviews (AAR) and other reflections to correct gaps.
Amid all this (a well-defined M&E framework, consistent DQA, etc.), I do not feel that an EA is important; it is rather a smart way of filling any loopholes that are likely to be picked up by the evaluation team. I do not think this is the best way of using donor funds. I would rather strengthen the M&E system so that it delivers and ensures the project/program is evaluable at all times, rather than putting resources into conducting an EA and, afterwards, an evaluation.
Italy
Silva Ferretti
Freelance consultant
Posted on 22/08/2024
Standing ovation, Musa. I completely agree with you.
Most projects today are already highly structured, incorporating all the elements you mentioned—to the point of straitjacketing them (but that’s a debate for another day!).
If substantial issues are still emerging despite these significant M&E investments, perhaps we need to re-evaluate our management and M&E approaches rather than introducing yet another external process.
Let me reiterate my point: resources spent on evaluability processes - that merely uncover gaps, which should already be apparent - are not the most effective use of our limited evaluation resources. A better investment would be in strengthening organizational capacity and localizing evaluative efforts, rather than continuously paying external consultants.
Or, instead of perpetually investing in processes that cage us further within the same frameworks we might use resources to ensure evaluations and evaluative processes are tailored, meaningful, and localized—not just 'cut and paste' exercises recycling the same ToRs over and over.
---
BONUS POINT: an Artificial Intelligence (AI) driven twist:
I asked AI to take the stance of a proponent of evaluability assessment and critique my response. You know what? I do see the points in the AI-generated response, and I am aware that proponents are motivated by strong and genuine aspirations. However, I also see how aspirational they are. The reality, as has also surfaced in some posts in this conversation, is that evaluability assessment risks becoming yet another layer of bureaucracy or narrow checks rather than the transformative tool it aspires to be. And it is likely to involve short-term stunts by external consultants. The investment in engagement, capacity, and support that this aspirational approach would entail is likely not to be there: if projects are not evaluable, there is rarely a quick fix that would make them so. Call me pessimistic, but I remain skeptical. And I have plenty of ideas about where the money could be spent instead.
[Below the AI response to my request mentioned above]
"While I appreciate the concerns raised about the potential redundancy of Evaluability Assessments (EA) in well-structured projects, I believe it's important to consider the unique value that EA brings to the table—beyond merely filling gaps in existing M&E frameworks.
First, EA serves as more than just a checklist or audit of what should already be in place; it provides an opportunity to rigorously test the logic and feasibility of a project before significant resources are expended. Even with a sound logframe and M&E framework, projects can sometimes miss critical assumptions or overlook potential risks that only become apparent through a dedicated evaluability process. EA offers a safeguard, ensuring that projects are genuinely ready for evaluation and that the data collected will be meaningful and actionable.
Moreover, while it's true that most projects today are highly structured, the complexity and scale of many interventions mean that gaps can still emerge despite the best-laid plans. EA allows for a focused examination of these potential gaps, offering a proactive approach to problem-solving rather than waiting for issues to arise during the evaluation phase. This proactive stance can ultimately save time and resources by identifying and addressing issues early on, rather than scrambling to fix them after the fact.
Regarding the investment in external consultants, it’s worth noting that EA can also be localized and integrated into the existing capacities of organizations. Rather than viewing EA as an additional, external process, it can be an opportunity to build internal capacity and foster a culture of continuous improvement. By embedding evaluability within the organization, we not only reduce reliance on external consultants but also ensure that evaluations are tailored, nuanced, and truly reflective of the project's context.
In conclusion, while strengthening M&E systems is undoubtedly important, EA should not be dismissed as redundant or unnecessary. Instead, it should be seen as a strategic investment in ensuring that evaluations are effective, relevant, and capable of delivering real insights that drive better outcomes."
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
freelance
Posted on 23/08/2024
Just pitching in, like Silva, to congratulate Musta on making such a great point: the seemingly marginal value and high opportunity costs of EAs.
At the 2022 European Evaluation Society conference, the keynote by Estelle Raimondo and Peter Dahler-Larsen was striking. They presented an interesting analysis of the indiscriminate application of evaluation and the diminishing returns to the practice of late through its "performative" use: bureaucratic capture.
Some argue EAs are the least of today’s evaluation community’s concerns.
The keynote’s reference to how "....sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it" reminds me of the analogy the famous classicist and poet A. E. Housman made in 1903:
"...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 23/08/2024
Hi all,
I agree with the argument that the rigid application of a tool, whatever it may be, is unlikely to result in a positive outcome. This may be the rigid application of theories of change, an overused approach that has become synonymous with “doing” evaluation, yet is still not used to its fullest in most evaluation reports I read. Or the overvaluing of RCTs based on ideological interests. Or the rigid application of the OECD-DAC criteria based on an expected paradigm. There are expected pathways to what “knowledge” is to be within our field that contribute to this rigidity, particularly when applied in a mechanistic way, and their overuse can indeed perpetuate the bureaucratic nature of our established systems. I fully agreed with the points raised by Dahler-Larsen and Raimondo in Copenhagen several years ago at EES.
Yet I would also argue that any tool, such as an evaluability assessment, should not be dismissed on this basis. I think a more useful line of inquiry may be to think about when and how EAs could be most useful. In my experience, EAs can in effect be a tool for breaking with mechanistic evaluation and bureaucratic systems, and yes, an attempt to break management’s capture of evaluation, through better defining a meaningful and useful focus for an evaluation, or through a decision not to evaluate at all based on an EA's findings. I think the challenge is at the organizational level, with the inevitable interest in standardizing and creating norms for its use across complex realities.
Regards, Amy
Mexico
América Elvira Hernández
Especialista en Monitoreo y Evaluación
Evaluadora independiente
Posted on 24/08/2024
[translated from original in Spanish]
Dear fellow evaluation experts. I am happy to affirm that conducting an evaluability assessment is fundamental not only for the planning and monitoring of the evaluation, but also, and particularly, for projecting the scope the evaluation could have and for managing the expectations of the stakeholders. As a result of one evaluation, it was suggested that evaluability steps be adopted for the follow-up of the project and, in particular, to ensure the conditions for external evaluation. So yes, the distinction between evaluability assessment and internal and external evaluation exercises can be facilitated, strengthening the mechanisms to achieve the former and prepare for the latter.
In the case mentioned above, not all the criteria included in the design of the instrument to measure the evaluability of the project were available and this led to some differences of opinion regarding the expected evaluation among stakeholders, particularly between the funder and the evaluand. Agreements were negotiated and reached, and as a result the evaluation process was able to continue.
The evaluated project was funded by multilateral agencies, so yes, it is possible to use the results of the EA in an evaluation and both complement each other's information to measure contributions to the SDGs.
Benin
Emile Nounagnon HOUNGBO
Agricultural Economist, Associate Professor, Director of the School of Agribusiness and Agricultural Policy
National University of Agriculture
Posted on 26/08/2024
[Translated from French original]
Ladies and gentlemen,
I have come to take part in the debate. The evaluability of a development project is automatically established by the quality of the project's specific objective and expected results. Clarity and precision at these two levels ipso facto justify the evaluability of the project. Everything therefore depends on how the project is set up, and as a result there is no need for a special evaluability assessment. With a SMART specific objective and expected results, all with clear, measurable indicators, the project's evaluability is guaranteed. The rest will depend on the planning and implementation of monitoring and evaluation. From a financial point of view, it is not appropriate to invest further resources in evaluability assessment, as funding monitoring and evaluation has often been a major challenge in itself. Resources should therefore be mobilised for the effective and efficient implementation of project monitoring and evaluation.
Thank you for your support.
Guinea
Jonas SAGNO
Programs and Resource Mobilization Director
Gigantesq Connexion NGO
Posted on 26/08/2024
[Translated from French original]
Dear Amy,
First of all, I would like to thank you for initiating this reflection and discussion on a subject as relevant as evaluability assessments (EA). Your invitation to explore this topic has given me much food for thought.
Evaluability assessments are a powerful tool for ensuring the quality and usefulness of evaluations. They help to clarify expectations, identify potential challenges, and ensure that interventions are designed in a way that allows for rigorous and constructive evaluation. To maximise their impact, I believe it would be crucial to improve communication about their added value, train stakeholders in their usefulness, and ensure their systematic integration into the planning of interventions, particularly those linked to the SDGs. In doing so, evaluability assessments can make a significant contribution to the continuous improvement of development programmes and the achievement of expected results.
I look forward to reading the thoughts of other members of the community on this topic.
Yours sincerely
Jonas SAGNO
Sociologist & Director of Programmes and Resource Mobilisation at the NGO Gigantesq Connexion (Incubator & Accelerator of Projects focused on Sustainable Development)
Mauritania
Ahmedou El Bah
Consultant
Posted on 01/09/2024
[Translated from French original]
Dear Amy,
First of all, I would like to thank you for initiating this reflection and discussion on a subject as relevant as evaluability assessments (EA).
I have had the opportunity to participate in several evaluability assessments (EAs), including those recently carried out by the CGIAR, as well as others related to FAO's work on gender and its strategic objective of reducing rural poverty. These varied experiences have convinced me of the great value of EAs, especially when it comes to tackling complex issues.
In these contexts, where the scope of the evaluation is often blurred, the EA process is essential. It enables key interventions to be mapped out, encouraging early evaluative thinking within the programme team. It also provides a concrete test of the theory of change, the results framework and the monitoring and evaluation tools, highlighting any gaps in dedicated resources, including the budget allocated to the evaluation itself.
Carrying out EAs upstream of the intervention cycle has proved particularly judicious. This approach encourages a reflective pause in the early stages of implementation, enabling adjustments and improvements to be proposed before the programme has progressed too far. This can potentially generate savings in resources and increase the chances of success.
In addition, the EA process provides an excellent opportunity to revisit and refine the theory of change and results framework, aligning them with the realities on the ground and budgetary constraints. This ensures that evaluation remains relevant and feasible throughout the implementation of the programme.
Finally, the EA facilitates the establishment of an early relationship between the evaluation office and those responsible for programme implementation, encouraging smooth collaboration and communication throughout the evaluation. EAs also help to determine the right time for evaluation and the type of evaluation best suited to the intervention.
In sum, my experience with EAs, both within the CGIAR and other organisations, highlights their crucial role in clarifying the scope of the evaluation, strengthening programme design and ensuring the availability of resources for effective evaluation. They encourage early collaboration between stakeholders and help to define the timing and nature of the evaluation, ensuring its relevance and appropriateness to the context of the intervention.
Italy
Gaia Gullotta
Evaluation consultant
CGIAR
Posted on 02/09/2024
Dear Jackie,
Thank you for your message. Our EA checklist is available on page 20 of the guidelines Conducting and Using Evaluability Assessments in CGIAR: CGIAR Evaluation Guidelines | IAES | CGIAR Independent Advisory and Evaluation Services. There are 14 core questions out of a total of 40. The document is also available in Spanish.
Kind regards,
Gaia Gullotta
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 03/09/2024
Thank you all for your participation. There’s been a lot of discussion on the pros and cons of EAs, with strong perspectives on either side of the debate. We have varied experiences with EAs as a group, some of us having implemented EAs, some of us not; and some of us having read reports, some of us not. And we have strong perspectives ranging from seeing them as an unnecessary use of scarce M&E resources to identifying specific benefits for their use in planning and maximizing the outcome of an evaluation.
We will wrap up this discussion by September 10th. Before then, I’d like to invite more reflection on when to implement an EA and when not to - the question of both cost-benefit and perceived benefit to stakeholders, relating to questions 1 and 2 above. I would suggest that EAs need to be proportionate to the cost of a subsequent evaluation, both as good use of financial resources and for stakeholder buy-in. Does anyone have any thoughts to contribute on this, both in terms of actual ratios, and/or addressing organizational policy on EAs on when and how they should be implemented? I know of some UN agencies that have developed an approach of making EAs mandatory for programs with large budgets over a specified amount. It seems to me that in addition to a checklist for implementing an EA, which provides important concepts to think about and address, a checklist for whether to implement an EA could also be useful in providing what to consider in deciding whether one is applicable and/or feasible.
Kind regards, Amy
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 04/09/2024
Responding to Amy's post.
1. Yes, EAs need to be proportionate to the likely costs of an evaluation. I agree 100%. Some historical research on past ratios should now be possible, to give us ball-park ideas of the range and median values.
2. Yes, a checklist on the need for an EA is at least worth thinking about. One pointer in this direction is the comment that "some UN agencies that have developed an approach of making EAs mandatory for programs with large budgets over a specified amount". Here I think budget size may in effect be being used as a proxy for programme complexity. But it is not a straightforward correlation. A large simple programme, i.e. one with a few simple, clear objectives and interventions, may not need an EA. Would an immunisation programme fit this description? But as a general rule, I think large-budget programmes do tend to be more complex in terms of objectives, interventions, partners and stakeholders, and geographic locations. Consider these as nodes in a network: the more of them there are, the more numerous are the possible and actual causal linkages, and the types and sources of information about what is happening. If a checklist of EA need is to be developed, perhaps these sources of complexity could be items to consider?
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 04/09/2024
Many thanks, Rick, for your comments. Such historical data on past ratios would be interesting to examine. And yes, budget size may be one of the items on a checklist, considered as a proxy for complexity, but I agree it should not be the only one depicting complexity, for the reason you pointed out. Your suggestion about depicting nodes in a network makes sense to me. More numerous possible causal linkages and sources of data might then result in a higher score, which may in turn lead to a “yes” decision on an EA.
Perhaps such a checklist might also help focus an EA, or include a follow-on set of items that can initially explore the four primary areas depicted in the jigsaw diagram you shared below - https://mande.co.uk/wp-content/uploads/2022/05/Austria-diagram.png (institutional and physical context, intervention design, stakeholder demand, and data availability). Such a checklist, if needed, may then not only guide a decision on whether to conduct an EA, but it may also help focus an EA and its priority areas, thus making it a more cost-effective and focused exercise.
I’d be interested to hear from others on this forum who manage evaluations/EAs. How do you decide in your organization whether or not to conduct an EA? And how are decisions made in how to focus an EA?
Regards, Amy
Australia
DOROTHY LUCKS
EXECUTIVE DIRECTOR
SDF GLOBAL PTY LTD
Posted on 06/09/2024
Thanks Amy and others for this interesting thread.
We have been involved in many EAs for different organisations: international financing institutions, UN agencies, NGOs and the private sector. I agree with Rick that complexity, rather than size of investment, is most critical in terms of an EA's value. Institutions with a clear mandate and operational procedures, and often a menu of performance indicators and guidelines, usually do not require an EA.
The most useful EAs we have been engaged with are for complex, developmental projects where the expected outcomes may be emergent, with process as well as output and outcome indicators. Another useful context for EAs has been where there is limited M&E capacity within the implementation team and they are unsure how to measure what is outlined in the design. So it comes down to the incremental value of the EA and the balance of cost to benefit; two recent examples below.
One was a very complex natural resource management programme that had reached its final years, covering policy, institutional and physical results. The implementation team realised that they did not know how to measure all of the final outcomes: they had assumed that an impact assessment team would produce all the data required, but did not have the budget for the extent of data gathering needed. We did a (very belated) EA and found that the team needed to reconstruct a range of raw implementation data to enable tracking of outcomes, a huge job. If they had had an EA, and capacity development, earlier in the programme, they would have been in a much stronger position and the costs involved in solving the issues would have been much lower.
Another was a complex youth and indigenous project, close to commencement, where a culturally sensitive approach to indicators and monitoring processes was required. That EA was carried out in a very participatory (and inexpensive) way, designed to engage participants in safe and appropriate ways of recording data that would demonstrate levels of progress and learning, which would in turn feed back into improving the design for later stages of implementation. The early time invested in the EA reaped huge benefits for both programme outcomes and the evaluation.
I also like the idea of the decision-making nodes for whether an EA is required or not. Thanks again for all the points raised.
United Kingdom
Rick Davies
Evaluation Consultant
Posted on 07/09/2024
At the significant risk of creating a too-rigid conceptualisation, I suggest it could be useful to ask these questions when considering the relevance of an EA:
1. Institutional and physical context: How numerous and diverse are the locations?
2. Intervention design: How numerous and diverse are the interventions, and their interconnections?
3. Stakeholder demand: How numerous and diverse are the funders, implementing partners and beneficiary groups, and their interconnections?
4. Data availability: How numerous and diverse are the M&E systems and their interconnections?
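To make the idea concrete, the four questions above could be turned into a simple scoring checklist along the lines Amy and Rick discuss: count the nodes in each dimension, band the counts into scores, and recommend an EA when total complexity is high. The sketch below is purely illustrative; the bands, weights, and threshold are hypothetical assumptions, not part of any established EA guidance.

```python
# Illustrative sketch: a complexity-scoring checklist for deciding whether
# an EA is warranted. The four dimensions follow the questions above; the
# 0-3 banding and the threshold are invented for illustration only.

from dataclasses import dataclass


@dataclass
class ComplexityProfile:
    locations: int           # distinct geographic locations
    interventions: int       # distinct intervention types
    stakeholder_groups: int  # funders, implementing partners, beneficiary groups
    me_systems: int          # separate M&E systems feeding the programme


def band(count: int) -> int:
    """Map a raw count onto a 0-3 complexity score (0 = one, 3 = many)."""
    if count <= 1:
        return 0
    if count <= 3:
        return 1
    if count <= 6:
        return 2
    return 3


def ea_recommended(profile: ComplexityProfile, threshold: int = 8) -> bool:
    """Recommend an EA when total complexity meets an illustrative threshold."""
    total = sum(band(c) for c in (
        profile.locations,
        profile.interventions,
        profile.stakeholder_groups,
        profile.me_systems,
    ))
    return total >= threshold


# A large but simple programme (e.g. a single-intervention immunisation
# campaign) scores low despite its budget:
simple = ComplexityProfile(locations=2, interventions=1,
                           stakeholder_groups=3, me_systems=1)

# A multi-country, multi-partner programme with several M&E systems scores high:
complicated = ComplexityProfile(locations=8, interventions=5,
                                stakeholder_groups=10, me_systems=4)
```

The point of the sketch is Rick's observation that budget is only a proxy: the simple profile above would fail the threshold however large its budget, while the complicated one would pass it. In practice the same item scores could also be used to focus the EA on the highest-scoring dimensions.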
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 13/09/2024
Thank you all for an interesting and engaging dialogue on evaluability assessments. Please check back soon for an excellent summary of our discussion drafted by Gaia Gullotta of CGIAR. It will be provided in English, Spanish and French.
Cheers!
Nigeria
Esosa Tiven Orhue
Founder/CEO
E-Warehouse Consulting
Posted on 16/09/2024
Thank you.