
Evaluability Assessments: An invitation to reflect and discuss

Posted on 24/07/2024 by Amy Jersild

Dear colleagues,

Have you heard of evaluability assessments (EAs)? Evaluability refers to the extent to which an intervention can be evaluated in a reliable and credible fashion (OECD, 2022). 

Under the guidance of Rick Davies, CGIAR's Evaluation Office developed an EA framework in 2022 and, over the past year, has been using it to assess the evaluability of four large-scale programmes known as Regional Integrated Initiatives. You can read a synthesis report here and a recent blog on CGIAR's experience here.

The importance of EAs 

At CGIAR, we have come to identify EAs as central to developing a culture of results. A strong focus on evaluability facilitates clear planning for monitoring and evaluation and its subsequent use. As the late evaluation theorist Michael Scriven noted, evaluability is analogous to requiring serviceability in a new car and may be thought of as “the first commandment in accountability” (Scriven, 1991, p. 138).  

Invitation to discussion

We are keen to share our experience and learn from Eval Forward colleagues about their experiences of EAs. If you have commissioned or participated in an EA or used EA results when managing evaluations, please consider sharing your insights and reflections on (one or all of) the following questions: 

  1. To conduct an EA or to not conduct an EA: What factors enter into a decision to conduct, or not to conduct, an EA? What purpose have you formulated for an EA, depending on what is being assessed and when?
  2. Distinguishing between EAs and other kinds of evaluative exercises: How can we bring clarity to the EA process for programmes and/or monitoring colleagues in order to help facilitate a successful outcome? How easy is it to convince funders of the relevance and usefulness of EAs in the context of budgetary constraints?
  3. Use of EA results for decision-making processes: Do you have examples of how EA results have been used for decision-making purposes? Have you encountered any challenges in facilitating the use of EA results?
  4. Use of EAs in assessing evaluability of interventions designed to contribute toward realization of the Sustainable Development Goals (SDGs): Are there examples of the use of EAs to determine the evaluability of country-level or organizational-level contributions to the realization of the SDGs? Are there any particular considerations for EAs in this circumstance?
Let’s connect and share

Notably, evaluability assessments have a designated page on the BetterEvaluation platform. Learning from this discussion will be shared on the BetterEvaluation page and at the upcoming European Evaluation Society (EES) conference. We are excited to explore these questions and share our experiences with you!

Amy

---


This discussion is now closed. Please contact info@evalforearth.org for any further information.

Dreni-Mi Mahamat Djimé

Chad

Dreni-Mi Mahamat Djimé

Assistant to the Secretary General, Ministère de l'Agriculture

Ministère de l'Agriculture / Former Director of Planning and Monitoring Studies

Posted on 26/07/2024

[Originally posted in French]

Hello dear friends and colleagues of Eval Forward!

As we are in a learning process, I'd like to share a few points from my research to help us understand Evaluability Assessment.

Assessing evaluability: a crucial step before any evaluation

Evaluability assessment is a process that aims to determine whether an evaluation of a programme, project or policy is feasible and relevant. In other words, it involves checking whether the conditions are right for conducting a high-quality evaluation and obtaining reliable and useful results.

Why is this important?

  • Save time and resources: By identifying potential obstacles in advance, you can avoid launching an evaluation that will not produce the expected results.
  • Improve the quality of the evaluation: By clarifying the objectives, research questions and available data, you increase the chances of obtaining solid, relevant results.
  • Enhance the credibility of the evaluation: By demonstrating that the evaluation has been designed in a rigorous manner, the confidence of decision-makers and other stakeholders is strengthened.

 

The key stages in an evaluability assessment

  1. Defining the objectives of the evaluation: What changes are we trying to measure?
  2. Identification of research questions: What are the questions that the evaluation needs to answer?
  3. Analysis of the context: What is the context in which the programme is being implemented? What external factors could influence the results?
  4. Assessment of available data: What existing data can be used to answer the research questions? Is it sufficient and of good quality?
  5. Identification of constraints and challenges: What are the potential obstacles to carrying out the evaluation (e.g. lack of resources, difficulties in accessing data, etc.)?
  6. Proposed evaluation plan: If the evaluation is deemed feasible, a detailed plan can be drawn up, including the methodology, timetable and resources required.
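
To make these six stages concrete, here is a minimal illustrative sketch of how they could be captured as a checklist with a pass/fail answer per stage. The stage names, questions and the helper function are hypothetical, for discussion only, not an official EA tool:

```python
# Illustrative only: the six key stages above as a simple checklist.
# All names and questions are hypothetical, not an official EA tool.

STAGES = [
    ("Objectives", "What changes are we trying to measure?"),
    ("Research questions", "What questions must the evaluation answer?"),
    ("Context", "What external factors could influence the results?"),
    ("Available data", "Is existing data sufficient and of good quality?"),
    ("Constraints", "What obstacles could block the evaluation?"),
    ("Evaluation plan", "Are methodology, timetable and resources defined?"),
]

def unresolved_stages(answers: dict) -> list:
    """Return the names of stages not yet satisfactorily resolved.

    An empty result suggests the evaluation is feasible and a detailed
    plan (stage 6) can be drawn up."""
    return [name for name, _question in STAGES if not answers.get(name)]

if __name__ == "__main__":
    answers = {
        "Objectives": True,
        "Research questions": True,
        "Context": True,
        "Available data": False,   # e.g. baseline data missing
        "Constraints": True,
        "Evaluation plan": False,  # not yet drafted
    }
    gaps = unresolved_stages(answers)
    print("Feasible" if not gaps else "Unresolved: " + ", ".join(gaps))
```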

 

The benefits of evaluating evaluability

  • Better understanding of the programme: Evaluability assessment provides a better understanding of the programme, its objectives and how it works.
  • Improved programme design: The results of an evaluability assessment can help to improve programme design and make it more effective.
  • Better use of resources: By avoiding unnecessary or poorly designed evaluations, resources can be used more efficiently.

 

In short, evaluability assessment is an essential tool for guaranteeing the quality and relevance of evaluations. It ensures that the resources invested in evaluation are used optimally and that the results obtained are useful for improving policies and programmes.

Thank you

Dreni-Mi from Chad

Steven Lam

Canada

Steven Lam

Independent Research and Evaluation Consultant

Posted on 26/07/2024

Hi Amy and all,

To answer question 2 of how to facilitate the use of evaluability assessments, I find it helpful to do an evaluability assessment without calling it an evaluability assessment given the politics of "evaluability." I conceptualize it as an activity to prepare programs for evaluation rather than to determine "evaluability." This means making sure that the linkages in the program theory of change are logical, that the proposed outcomes are possible, etc. This approach to evaluability assessment is more of a process integrated within program planning and evaluation generally and as such does not often lead to a stand-alone output. 

A few years ago my colleague and I reviewed evaluability frameworks and case studies, which might provide more insights into other gaps and opportunities.

Lam S, Skinner K. (2021). The use of evaluability assessments in improving future evaluations: a scoping review of 10 years of literature (2008-2018). American Journal of Evaluation. 42(4): 523-540.

Best,

Steven

Silva Ferretti

Italy

Silva Ferretti

Freelance consultant

Posted on 29/07/2024

The issue I have with "evaluability" is that all programs can (and should) be evaluated.

Some programs lend themselves to more "rigorous" and mainstream approaches, while others, due to their inherent complexity, require different methodologies.

These alternative approaches can help build an understanding of dynamics that did not previously exist... which is a GOOD thing. Not all programs need to be preformatted. This can be especially beneficial for participatory and innovative/adaptive programs that need to determine the best way forward as they progress.

It is crucial that we do not use evaluability as another brick in the wall, making evaluations just about compliance with results or a cheaper process of standard research applied to arbitrary theories of change.
Instead, we should reclaim evaluation as an opportunity to assess changes in ways that are coherent with the program's intent and context. So the question is not "Can the program be evaluated with a standard toolbox?" (which is what evaluability risks becoming) but rather "What evaluative approach is best suited to this context?"

When I read "*Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance. In these circumstances, an Evaluability Assessment would not be needed to decide whether to go ahead with an evaluation. Efforts need to focus on the more immediate tasks of improving project design and/or the M&E framework*" (https://www.betterevaluation.org/methods-approaches/themes/evaluability-assessment), I really want to cry...

We are confusing the project documentation with the reality of the work. In my experience, many projects had sketchy logframes or theories of change, yet were terrific. The evaluation helped systematize them and unpack learning and value. If we keep looking only at what is preformatted, we miss the chance of real transformation.

Rick Davies

United Kingdom

Rick Davies

Evaluation Consultant

Posted on 29/07/2024

In response to Silva's comments below (also sent to email server)

"Hi Silva

Let me address some of your points

1. "All projects can and should be evaluated

             Yes but WHEN? EA recommended delays can give time to address data, theory and stakeholder concerns time to be resolved and thus lead to a more useful evaluation

              Yes, but HOW? Approaches need to be proportionate to resources and capacities and context. EAs can help here

2. Re "Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance", and your comment that "We are confusing the project documentation with the reality of the work":

It would be an extreme position to argue that there will be no instances of good practice (however defined) on the ground in such circumstances (i.e. where there are poor designs and poor M&E frameworks).

But it would be equally extreme to argue the other way: that the state of design and data availability has no relationship at all to outcomes on the ground. If you do take that position, the decision to evaluate is effectively a gamble with someone's time and resources.

At some stage someone has to decide how much money to spend, when, and how. EAs can help inform those decisions.

3. Re "Can the program be evaluated with a standard toolbox?" (which is what evaluability risks becoming):

I would like to see some evidence for this claim.

As counter-evidence, at least of intention, I refer you to this diagram from the Austrian Development Agency EA guide, and to the reference to the jigsaw nature of an EA, in the sense of having to fit different needs and capacities together rather than following any blueprint.

regards, rick


 

Daniel Ticehurst

United Kingdom

Daniel Ticehurst

Monitoring > Evaluation Specialist

freelance

Posted on 31/07/2024

Dear Amy,

Thanks for posting. I remember well reading Rick D's Synthesis of the Literature back in 2013. I had four observations:

  1. Rick says: "Many problems of evaluability have their origins in weak project design. Some of these can be addressed by engagement of evaluators at the design stage, through evaluability checks or otherwise". A great point. Too often, theories of change and/or results frameworks are developed without any decent enough problem analysis;
  2. And it is easy to tell: any competent person would be able to spot these and recommend improvements to the coherence and validity of what is to be evaluated. They used to call this an ex ante evaluation, or appraisal. Mr Quinn Patton has usefully re-introduced such an assessment through developmental evaluation. In this context, one would have to question the need for a special or separate evaluability assessment;
  3. What about monitoring, i.e. management involvement at design and implementation? An evaluator's perspective, and the information needs of evaluators and donors, are but one consideration. M and E serve different objectives, answer different questions at different times, and have different requirements for comparative analysis; and
  4. If the results (outputs, outcomes and impacts) of the entity being evaluated are not relevant to those in need, as opposed to those in power, then it matters little whether it can be evaluated, or even whether it was approved in the very first place. No? An obvious point, I know.

 

Finally, some help needed: you quoted Michael Scriven as saying that evaluability is analogous to requiring serviceability in a new car and may be thought of as "the first commandment in accountability". I know this must be a significant saying, but I don't understand/get it and its importance. What do you think he means?

Best wishes and thanks again,

Daniel 

Lal - Manavado

Norway

Lal - Manavado

Consultant

Independent analyst/synthesist

Posted on 01/08/2024

Dear Amy,

Many thanks for raising this question, which has the potential to do much good as well as its opposite.

My point of departure regarding evaluation is very simple: is a set of actions intended to enhance the lives of a target group attainable, and has it been attained?

If the answer to this question is no, or only marginally or partially, the set of actions undertaken is flawed in some way. However, such unsatisfactory actions may leave behind excellent motorways, specialist hospitals, bridges, international airports, etc.

Evaluating the extent to which some public good has been achieved is extremely difficult. When projects, i.e. sets of actions, are decided on, the actual desires of a target group are seldom consulted in a meaningful way. And even when there is general agreement in a target group on the desirability of a goal, there may be disagreement on how that objective may be attained. Often this is due to the cultural norms dominant in that social group, or to the influence of some outside vested interest skilled in propaganda.

Should the evaluator be willing and able to identify and gauge the extent and nature of those three variables during the execution and conclusion of a project, it is possible to undertake remedial adjustments to increase its success. True, it is difficult, but if undertaken it would, in the end, benefit a group of living people who, and whose needs, are anything but a mere abstraction.

Cheers! 

Lal.

Carlos Tarazona

Italy

Carlos Tarazona

Senior Evaluation Officer

FAO

Posted on 01/08/2024

Hi Amy,

Thanks for starting this valuable exchange. I would like to share my experience doing evaluability assessments of programmes in FAO. These have generally aimed to determine the readiness for evaluating FAO's work in a given area and/or to inform decision-making on FAO evaluation plans.

For example, the evaluability assessment of FAO's rural poverty reduction programme (available at https://openknowledge.fao.org/server/api/core/bitstreams/161ba76c-daea-…) was instrumental in identifying which components of FAO's programme were feasible and worthwhile to evaluate for results. With this objective in mind, the evaluability assessment reviewed the clarity of the programme logic and the extent of completion of planned activities and use of resources, and resulted in proposals on the scope, approach, methodological design and work plan of the evaluation.

More recently, we have been using evaluability assessments to inform our decisions on which evaluations to conduct, as well as to define their scope, objectives and approach. This development is aligned with our medium-term strategy, which seeks to ensure that the Office of Evaluation produces high-quality evaluations that meet the needs of FAO stakeholders (https://www.fao.org/evaluation/highlights/detail/fao-evaluation-strateg…).

Best regards,

Carlos

 

Silva Ferretti

Italy

Silva Ferretti

Freelance consultant

Posted on 04/08/2024

I agree with you [Rick Davies' contribution below] on many things, and yes, it's key to have a reality check on what an evaluation should and can do.

1. "All projects can and should be evaluated
 
Yes but WHEN? EA recommended delays can give time to address data, theory and stakeholder concerns time to be resolved and thus lead to a more useful evaluation
 
Yes, but HOW? Approaches need to be proportionate to resources and capacities and context. EAs can help here
I am with you on this. And too many evaluations go on autopilot (i.e. cut and paste a template with small adaptations at the end of the project). I definitely welcome evaluability as an opportunity to broaden possibilities, such as suggesting real-time evaluations, promoting participatory approaches, or highlighting important questions that need to be addressed or priorities to address. But let's be realistic: most evaluability documents I saw check if the programme had collected preset indicators on an agreed log frame/theory of change and demand to check which OECD⁄DAC criteria can be evaluated. The Austrian Development Agency has interesting points on the utility of the evaluation as perceived by different stakeholders - but the annexe is then missing :-( (is this still a working document?)  
I dream as an evaluability document that provides a catalogue of possibilities rather than narrowing to conventional approaches. They should suggest that a developmental evaluation is possible, for example, rather than asking for preset theories of change. Otherwise, evaluability will stifle rather than promote innovation.
 
2. Re "Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance", and my comment that "We are confusing the project documentation with the reality of the work":

"It would be an extreme position to argue that there will be no instances of good practice (however defined) on the ground in these circumstances. But it would be equally extreme to argue the other way: that the state of design and data availability has no relevance at all. If you do take that position, the decision to evaluate is effectively a gamble with someone's time and resources. At some stage someone has to decide how much money to spend, when, and how."

Of course, we should spend evaluation resources wisely. And, of course, a good evaluability assessment can result in more efficient evaluations (and many evaluation managers are already good at doing that).

Thinking about cost-effectiveness: can we really justify spending a lot of money only to understand whether a programme can be evaluated, without providing any additional learning? I am not discarding this, and yes, it can be useful.

But, for suitable programmes, it is possible to have a phased evaluation, where the first phase assesses the programme and provides some preliminary learning while identifying the best options for the next phases. Of course, you need an evaluator or a team conversant with different methodologies, adaptable, and close to management, but it is very possible to do that. A phased evaluation will not always be suitable, but it is a real possibility that can be overshadowed by conventional evaluability approaches.
 
3. Re ""Can the program be evaluated with a standard toolbox" (which is what evaluability risks becoming) " 
 
I would like to see some evidence for this claim?
As counter evidence, at least of intention, I refer you to this diagram, from the Austrian Development Agency EA guide, and to the reference to the jigsaw nature of an EA, in the sense of having to fit different needs and capacities together, rather than following  any blueprint
 
As I wrote before, the checklist confirms a very conventional understanding of a programme. I reiterate here that I worked on programs that did not have a log frame, or set indicators and baselines (think, for example, about evolving models of work), and where the evaluation helped to systematize approaches. Such programs would simply not have passed the proposed checklists. And to what extent they could have reached important stakeholders that were only identified through snowballing.  Yes. I am referring to special cases, as I tend to work more grassroots and on innovative projects. But these are the ones most at risk of remaining under the radar with a conventional understanding of evaluability.
 
So I see how evaluability as it is can work for large, relatively standardized projects. But as it is now, it still falls short to become a possibility for novel approaches and for liberating ideas and possibilities, which our sector badly needs.
 
Amy Jersild

United States of America

Amy Jersild

PhD Candidate and evaluation consultant

Western Michigan University

Posted on 04/08/2024
Dear all,
 
Thank you for your active participation and feedback! I am reading and reflecting on all as they come in. I'll respond to Dreni-Mi and Daniel now, and I look forward to continued discussion. 
 
Dear Dreni-Mi,
 
Many thanks for your posting. You've given a sound overview of the various phases of an evaluability assessment, a rationale for its implementation, and its benefits. There are several sequences of evaluability assessment steps mapped out in the evaluation literature, all somewhat related but stressing different aspects or values, and with slightly different categorization. Wholey's 8 steps come to mind, as well as Trevisan and Walser's 4 steps. Your emphasis on cost-effectiveness and preparing for quality evaluation resonates, I think, across all the approaches.
 
One of our learnings from conducting evaluability assessments at CGIAR this past year was the value of having a framework (checklist) to use, along with the need to be flexible in its implementation. The jigsaw nature of EAs that Rick Davies described in another posting is, I think, an especially helpful way to think about EAs as a means of bringing the various pieces together in an approach best suited to a given context. Clearly defining objectives for an EA, and responding to specific needs, leads to more effective use of the framework, providing more flexibility and nuance in the process.
 
From the key stages you’ve outlined, what have you found the most challenging to implement? And what is your experience with use of evaluability assessment results?
 
Kind regards,
Amy
 
Dear Daniel,
 
Many thanks for your comments. I'll respond to a few. I fully agree: evaluators should have a seat at the design table. Their participation and ability to facilitate evaluative thinking among colleagues usually contributes to greater evaluability of an intervention, based on my experience. It can also facilitate the development of sound monitoring and evaluation planning, and the capacity to use and learn from the data these processes generate. Such a role broadens what is typically understood, in some circles anyway, about what evaluators are and what they do, which I really like in terms of supporting the further professionalization of our field.
 
In his 3rd edition of the Evaluation Thesaurus, Michael Scriven makes reference to the philosopher Karl Popper's concept of "falsifiability" when discussing evaluability. This concept relates to the idea that there should always be a capacity for some theory, hypothesis, or statement to be proven wrong. For an evaluand (Scriven's term for what is to be evaluated: a program, project, personnel, policy, etc.) to be evaluable, then, I understand broadly that it would be deemed falsifiable to the extent that it is designed, developed, or constructed in a way whereby evidence may be generated as "proof" of its value.
 
The religious connotation in Scriven's reference to a first commandment is certainly intended to place importance on the concept. Evaluability as "the first commandment in accountability" strikes me as something that is owed: a justification, and ultimately a responsibility. Scriven notes that low evaluability has a high price in terms of the cost borne: "You can't learn by trial and error if there's no clear way to identify the errors." And "It is not enough that one be able to explain how one spent the money, but it is also expected that one be able to justify this in terms of the achieved results" (p. 1).
 
I think Scriven provides further discussion on evaluability in his 4th edition. I’m traveling and don’t have access to my hard copy library back home. Perhaps others can reference further inputs on this.
 
Thoughts?
 
Best,
Amy
Hadera Gebru Hagos

Ethiopia

Hadera Gebru Hagos

Senior Consultant, Natural Resource Management and Livestock Specialist

Freelance Consultant

Posted on 06/08/2024

Thank you, Amy, for bringing up this topic. I also thank CGIAR's Evaluation Office for developing an EA framework for use.
 

To my understanding, if the project cycle management of a given project is effectively carried out, the envisaged benefits of an evaluability assessment are built into the project cycle management itself. Thus, all such projects are evaluable, and I do not see the need for a standalone evaluability assessment. Usually, projects are funded after they pass through various steps/phases of validation by stakeholders to ensure their feasibility and smooth implementation, and to ensure that the envisaged outcomes and impacts are realized. As we understand them, these phases/steps include: problem identification; feasibility study; stakeholder analysis; initial conceptualization; project planning; implementation plan; and evaluation and monitoring plan. In addition, projects are appraised before implementation. Further, during implementation, timely and regular monitoring and reporting are done and communicated among key stakeholders, with the objective of checking that projects are implemented rightly and of taking timely corrective measures if otherwise. So, to conclude: given the widely reported financial constraints on funding projects, why do we need to conduct a standalone EA if sound, effective, informed project cycle management is employed and projects are appraised properly?
 
With regards,
 
Hadera Gebru
Senior Consultant, Livestock and Natural 
Resource Management Specialist
Addis Ababa, Ethiopia
Daniel Ticehurst

United Kingdom

Daniel Ticehurst

Monitoring > Evaluation Specialist

freelance

Posted on 06/08/2024

Dear Amy,

Thanks for taking time to read through and reply. 

My apologies, but let me be terse and honest...

Many thanks for explaining what Mr Scriven wrote. I now understand. That said, I remain none the wiser about the import and significance of what he wrote. Motherhood and apple pie: clever, but a bit thin. 😉

On EAs themselves, and as I alluded to, the purpose and scope of an EA's inquiry appear to be part and parcel of what most people would refer to as a competent ex ante evaluation, or appraisal. As you say, it is great to have an evaluator or two on the team, yet... how could you not look at the "evaluability" of an investment by appraising it, along with the evidential reasoning that informs its rationale and design factors, including a ToC and/or a results framework? Or are we saying that an investment which an appraisal recommends as not worthwhile can still be judged evaluable or, indeed, vice versa (assuming the EA is conducted after the appraisal)?

Thus, and as Hadera points out, the incremental value generated by carrying out a discrete (some would say contrived) EA solely by evaluators appears marginal at best; it potentially fragments the team and comes across as rather extravagant and indulgent.

Many thanks for the post and the discussions, which have prompted enquiry, debate, skepticism and doubt about EAs.

With best wishes,

Daniel      

 

Norbert TCHOUAFFE TCHIADJE

Cameroon

Norbert TCHOUAFFE TCHIADJE

Associate Professor / Senior Researcher

University of Ebolowa/Pan-African Institute for Development Cameroon

Posted on 06/08/2024

Thanks dear all for your insights.

For me, EA is closely tied to the theory of change.

It is a way to reassess and cross-check the evaluation process in order to verify whether the outcomes are credible, acceptable, reliable and consistent with what was expected.

Thanks.

Norbert TCHOUAFFE, PhD

 

Svetlana I Negroustoueva

Italy

Svetlana I Negroustoueva

Lead, Evaluation Function

CGIAR

Posted on 08/08/2024

Dear colleagues, 

It is exciting to see such an insightful and passionate discussion. I represent CGIAR's evaluation function; we developed and have implemented EAs with Amy, under the advisory guidance of Rick Davies. The rationale for conducting EAs in CGIAR is summed up in the blog.

Complementing, and in response to, some items raised here: it is important to note that not all contexts are created equal, and an unquestionable level of preparedness for an evaluation cannot be assumed for any type or size of intervention in any context. In evolving contexts, some aspects may not be prioritized in time to meet the needs of everyone involved. EAs have thus already brought us one step closer to facilitating learning and to ensuring that accountability mechanisms (you could even call them an evaluative baseline) are in place before an intervention is launched or progresses too far.

EAs have helped build confidence among stakeholders, including MEL professionals (often disempowered) and funders, that the aspirational goals and objectives are feasible to implement and measure, and that MEL colleagues are key stakeholders in reconciling accountability and transparency with those who fund. EAs have helped enhance the sense of ownership and the credibility of processes and results, and can thus be crucial for securing funding and support. By making recommendations to funders on timing and evaluation scope, expectations can be set properly, including the level at which an evaluation can be funded, the depth of inquiry, and the evaluative learning to expect.

Gaia Gullotta

Italy

Gaia Gullotta

Evaluation consultant

CGIAR

Posted on 08/08/2024

Dear All,

I am excited to have been part of the team for the Evaluability Assessment of the four Regional Integrated Initiatives of CGIAR. It was a valuable learning experience, revealing both strengths and challenges in using the EA framework for the first time, particularly in understanding evaluability and stakeholder engagement. 

Many thanks, Hadera, for your posting. I appreciate you bringing up this important issue. I agree, Project Cycle Management (PCM) practices are essential, but they do not make standalone EAs redundant. EAs complement PCM by providing a dedicated, independent, and detailed assessment of a project's evaluability, thus enhancing overall project quality, accountability, and likelihood of success.

Indeed, EAs involve external evaluators who provide an impartial perspective. This independent assessment can identify potential issues that internal stakeholders might overlook due to familiarity or their interest in the project's success.

Furthermore, standalone EAs specifically assess whether a project is ready to be evaluated. This includes reviewing the clarity and measurability of objectives, the robustness of data collection plans, the availability of baseline data, etc. These elements might not be thoroughly analyzed during regular PCM phases.

Finally, based on our experience with previous evaluations, an important issue is the quality of data. In fact, it has often been necessary to conduct extensive data cleaning, which required additional time and resources. This could be anticipated and prevented through an evaluability assessment.

With thanks and warm regards,

Gaia Gullotta

Rick Davies

United Kingdom

Rick Davies

Evaluation Consultant

Posted on 10/08/2024

Hi all

 

I would just like to add to Svetlana's important point: "not all contexts are created equal, and an unquestionable level of preparedness for an evaluation cannot be assumed for any type or size of intervention in any context. In evolving contexts, some aspects may not be prioritized in time to meet the needs of everyone involved".
I think this is especially the case with stakeholders' interests in an evaluation: what they want to know and what they are concerned about. These are very likely to change over time.

Jindra Cekan

Czechia

Jindra Cekan

Founder

Valuing Voices at Cekan Consulting LLC

Posted on 12/08/2024

Everyone, glad to see this being discussed. In 2017 we got a grant from Michael Scriven's Faster Forward Fund to look at 8 ex-post studies. Here is the blog about our report: https://www.betterevaluation.org/tools-resources/building-evidence-base-for-post-project-evaluation-case-study-review-evaluability-checklists

As part of that work, we created Evaluability Checklists, which we have adapted and used over the last 7 years in many ex-post evaluations, including these for the Adaptation Fund: https://www.adaptation-fund.org/about/evaluation/publications/evaluations-and-studies/ex-post-evaluations/
 
Warmly, Jindra


 

Jindra Čekan/ová, PhD
 

Valuing Voices: Sustainable Solutions for Excellent Impacts
and Development Finance + Development Aid for Sustained Impacts
CEKAN CONSULTING LLC
President/ Catalyst
 


Jindra Cekan

Czechia

Jindra Cekan

Founder

Valuing Voices at Cekan Consulting LLC

Posted on 13/08/2024

Everyone, yes, as Gaia said: "EAs specifically assess whether a project is ready to be evaluated. This includes reviewing the clarity and measurability of objectives, the robustness of data collection plans, the availability of baseline data, etc." It astonishes me how many projects across the development spectrum are unevaluable (thanks to EAs, which show us this), an avoidable data debacle. For instance, in 2022 we shared our ex-post evaluation process and 4 case studies at the Adaptation Fund. On slide 8, we note that only 20% of the projects were evaluable based on the criteria we set, which included: being closed at least 3 years but not more than 7; having good data to evaluate (methodological feasibility); and providing opportunities for learning across sectors/geographies.
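
Purely for illustration, those three screening criteria could be encoded as a simple filter; the field names and example records below are invented and this is not our actual checklist tool:

```python
# Illustrative encoding of the three ex-post screening criteria named
# above. Field names and example records are invented, not a real tool.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    years_since_closure: float
    has_good_data: bool    # methodological feasibility
    offers_learning: bool  # learning across sectors/geographies

def evaluable_ex_post(p: Project) -> bool:
    # Closed at least 3 years but not more than 7, with good data
    # and an opportunity for cross-sector/geography learning.
    return (3 <= p.years_since_closure <= 7
            and p.has_good_data
            and p.offers_learning)

projects = [
    Project("A", 4, True, True),   # passes all three criteria
    Project("B", 9, True, True),   # closed too long ago
    Project("C", 5, False, True),  # data not good enough
]
print([p.name for p in projects if evaluable_ex_post(p)])  # ['A']
```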

Warmly, Jindra
Jackie (Jacqueline) Yiptong Avila

Canada

Jackie (Jacqueline) Yiptong Avila

Program Evaluator/Survey Methodologist

Independent Consultant

Posted on 19/08/2024
Hi Everyone, 
 
Thanks for sharing your experience on EAs. I would be interested to hear what makes a program "unevaluable". Is there a checklist for such a thing?
 
Kind regards, 
 
Jackie
Amy Jersild

United States of America

Amy Jersild

PhD Candidate and evaluation consultant

Western Michigan University

Posted on 19/08/2024

Many thanks, Jindra, for sharing your experience and the useful links. I read through your EA checklist for ex-ante evaluations with interest. Your experience of very few programs having sufficient data resonates. I'd be interested in any reflections you have on stakeholder reception and use of EA results, based on your experience (question 3 above).

Warm regards, Amy

Elizabeth G Sweitzer

Canada

Elizabeth G Sweitzer

Senior Consultant

Guidehouse

Posted on 20/08/2024
Hi Jackie,
 
I'm not sure if I've seen a checklist (though one would be great!), but it usually comes down to what type of evaluation is being requested and whether the needed information is available. For example, if the client wants a process evaluation but is not willing to let the evaluator conduct interviews or focus groups, that could be an issue.
 
The main points I would consider are whether the information and buy-in are available to satisfy (1) the type of evaluation requested, (2) the contractual requirements, (3) the ethical needs, and (4) the budget/resources allotted.
 
Hope this helps.
 
Best,
Lizzy
Lal - Manavado

Norway

Lal - Manavado

Consultant

Independent analyst/synthesist

Posted on 20/08/2024

Hi, Jackie!

If you are willing to regard evaluation as ascertaining the extent to which any given policy, its implementation strategy, operationalisation and field activity have succeeded in enhancing the quality of life of a target group, then your question becomes simple to answer. Put differently, the problem is to determine whether the physical end product of the above (for instance, a road, hospital, school, farming support, etc.) has actually benefited a group of people.

The success of an action to be evaluated, then, is not measurable by the technical excellence of such end products, some of which may be intangible at first glance, for example a policy. To evaluate the success or failure of a policy, one has to ascertain to what degree it has benefited a target group.

It is here one runs into a set of difficulties:

  • Is the majority of a target group willing to _use_ the end product of a project? If not, an excellent road, a hospital, etc. may be able to serve its intended purpose, but it would not benefit the people. Cultural norms and certain kinds of vested interests may bring about an adverse public opinion against it, which would render the end product useless. Such situations are very difficult to identify, and hence to evaluate.
  • A target group may be unable to make appropriate use of an end product owing to a majority of its members lacking the competence to do so. Lack of relevant training, and its neglect in project plans, are the main causes of this. Evaluation does not seem to consider the importance of a target group possessing this competence.

Best wishes!

Lal.

Silva Ferretti

Italy

Silva Ferretti

Freelance consultant

Posted on 20/08/2024

Hello again...

Is evaluability really about checking whether managerial requirements can be satisfied, or rather about understanding what evaluation approach suits the project and its context? I see this tendency to use evaluability as a top-down assessment surfacing over and over, further narrowing the scope of the evaluation rather than expanding it.

Ayman Ramsis

Iraq

Ayman Ramsis

Program Director

Samaritan's Purse

Posted on 21/08/2024

Dear Jackie,

Great question,

In humanitarian interventions, in both immediate-response and rapid-response first-line interventions where recipients are on the move and are not likely to be found again, we administer a quick questionnaire to collect some data on the spot, for review later on. Also, in short-term humanitarian responses, post-distribution monitoring surveys are considered a good-enough monitoring and evaluation intervention to capture participants' perspectives and ensure accountability to the affected population, as well as safeguarding aspects.

Someone could argue that in such types of response we did not do "project evaluation" per se, but rather self-evaluation to identify potential successes and lessons learned for future interventions.

With deep respect,

Ayman Ramsis 
+201006712846

Jindra Cekan

Czechia

Jindra Cekan

Founder

Valuing Voices at Cekan Consulting LLC

Posted on 21/08/2024

Amy and all,

I think we need to differentiate between what is evaluable in terms of the aims of the evaluation and how we evaluate. The timeframe for ex-posts typically excludes some projects: we have found that we really need to look 3 to 5 years after closure, and if it has been more than seven years, it is hard to evaluate. We have also found that the projects implemented longest had a greater likelihood of sustainability of results and emerging impacts. We also need to look at the quality of the evaluative data: for instance, not just the trainings given, but what were participants trained in? Was there any knowledge change or behavior change recorded at the final evaluation, so we could evaluate changes to it ex-post?

So an evaluability assessment depends on the aim of that evaluation, which is different from how we evaluate. With ex-post evaluations, we want to make sure that partners and participants are still around to be asked (I did one in Uganda 15 years after closure: the professors training teachers at the pedagogical university were still there, but all the ministry staff had changed, and local teachers and students had long moved on).

Yes, I too mandate in our evaluations that we must have boots on the ground to evaluate with local stakeholders and partners at national and regional levels, with the villagers and the local leaders, and all of that is participatory. Yes, we use focus group interviews, key informant interviews, transect walks and lots of participatory tools, as outlined in this toolkit we worked on (just out, please comment: https://lnkd.in/gyfu_K_3, and share your comments and suggestions via af-terg-sec@adaptation-fund.org).

On the question of stakeholders creating the evaluation with us: we have asked local people if they would like us to evaluate their project to give them answers they would want to learn... but repeatedly we have gotten no answers. They look at us rather confused because, I think, they are just living their lives and they see the project as one input into their lives among many, whereas our attempt to isolate the project and its results is, in itself, very strange to folks ;).

Hope this helps!

Jindra Čekan/ová, PhD
www.ValuingVoices.com and Jindracekan.com

Frankline OJONG Agbor

Cameroon

Frankline OJONG Agbor

President

Cameroon Society of English Speaking Evaluators (CamSEE)

Posted on 21/08/2024

Dear team,

Evaluability assessment, or evaluation readiness assessment, has more to do with the organization's capacity to monitor and evaluate an intervention, assessing the following:
 
1. The organization's M&E office
2. The organization's M&E staffing
3. The capacity of M&E staff (training and skills)
4. The organization's M&E policy and framework
5. Management's M&E knowledge and use
There are 10 points to look at here.
 
Whatever the type of project, humanitarian or developmental, I think all projects should be monitored and evaluated; but the question in evaluability assessment is: does the team have sufficient capacity and processes to deliver timely and useful evaluations?
 
Best regards 
Samriti Maharjan

Nepal

Samriti Maharjan

Evaluation Analyst

CGIAR IAES

Posted on 21/08/2024

Hello All,

I worked as an Evaluation Analyst with the EA team on the Regional Integrated Initiatives assessment. I gained both theoretical learning (the definition, uses and importance of evaluability assessments) and practical experience from the assignment. An evaluability assessment is definitely conducted to assess whether a program or portfolio is ready for evaluation. It is also used to improve the organization's and program's theory of change and the monitoring and evaluation framework. In evaluability assessments, I believe gender equality and inclusiveness are equally important for bringing in diverse data points. In this EA exercise, we tracked the participation rates of both male and female participants in the interviews and surveys.

Musa K. Sanoe

Liberia

Musa K. Sanoe

National M&E Specialist

United Nations Food and Agriculture Organization (FAO)

Posted on 21/08/2024

Evaluability Assessment: I have heard of this only recently. While I haven't seen a concrete EA report, judging from what I have read, I find it difficult to accept EA as an important step towards credible evaluation. Consider a program/project with a sound logframe and an M&E framework in which indicators are clearly defined, with well-defined disaggregation methods, data sources, and methods of data collection and analysis, all defined before implementation. The project has implemented data quality assessments (DQA) throughout, so it is aware of data quality issues and has taken measures to improve the quality of its data. The project/program has implemented After Action Reviews (AAR) and other reflections to correct gaps.

Amid all these (a well-defined M&E framework, consistent DQA, etc.), I do not feel that an EA is important; it is rather a smart way of filling any loopholes that are likely to be picked up by the evaluation team. I do not think this is the best way of using donor funds. I would rather strengthen the M&E system that will deliver, and ensure that the project/program is evaluable at all times, than put resources into conducting an EA and, afterwards, an evaluation.

Silva Ferretti

Italy

Silva Ferretti

Freelance consultant

Posted on 22/08/2024

Standing ovation, Musa. I completely agree with you.

Most projects today are already highly structured, incorporating all the elements you mentioned—to the point of straitjacketing them (but that’s a debate for another day!). 

If substantial issues are still emerging despite these significant M&E investments, perhaps we need to re-evaluate our management and M&E approaches rather than introducing yet another external process.

Let me reiterate my point: resources spent on evaluability processes that merely uncover gaps which should already be apparent are not the most effective use of our limited evaluation resources. A better investment would be in strengthening organizational capacity and localizing evaluative efforts, rather than continuously paying external consultants.

Or, instead of perpetually investing in processes that cage us further within the same frameworks, we might use resources to ensure evaluations and evaluative processes are tailored, meaningful, and localized, not just 'cut and paste' exercises recycling the same ToRs over and over.

---

BONUS POINT: an Artificial Intelligence (AI) driven twist: 

I asked AI to take the stance of a proponent of evaluability assessment and critique my response. You know what? I do see the points in the AI-generated response, and I am aware that proponents are motivated by strong and genuine aspirations. However, I also see how aspirational they are. The reality, as has also surfaced in some posts in this conversation, is that evaluability assessment risks becoming yet another layer of bureaucracy or narrow checks rather than the transformative tool it aspires to be. And it is likely to involve short-term stints by external consultants. The investment in engagement, capacity, and support that this aspirational approach would entail is likely not to be there: if projects are not evaluable, it is often not a quick fix that will make them so. Call me pessimistic, but I remain skeptical. And I have plenty of ideas about where the money could be spent instead.

[Below the AI response to my request mentioned above] 

"While I appreciate the concerns raised about the potential redundancy of Evaluability Assessments (EA) in well-structured projects, I believe it's important to consider the unique value that EA brings to the table—beyond merely filling gaps in existing M&E frameworks.

First, EA serves as more than just a checklist or audit of what should already be in place; it provides an opportunity to rigorously test the logic and feasibility of a project before significant resources are expended. Even with a sound logframe and M&E framework, projects can sometimes miss critical assumptions or overlook potential risks that only become apparent through a dedicated evaluability process. EA offers a safeguard, ensuring that projects are genuinely ready for evaluation and that the data collected will be meaningful and actionable.

Moreover, while it's true that most projects today are highly structured, the complexity and scale of many interventions mean that gaps can still emerge despite the best-laid plans. EA allows for a focused examination of these potential gaps, offering a proactive approach to problem-solving rather than waiting for issues to arise during the evaluation phase. This proactive stance can ultimately save time and resources by identifying and addressing issues early on, rather than scrambling to fix them after the fact.

Regarding the investment in external consultants, it’s worth noting that EA can also be localized and integrated into the existing capacities of organizations. Rather than viewing EA as an additional, external process, it can be an opportunity to build internal capacity and foster a culture of continuous improvement. By embedding evaluability within the organization, we not only reduce reliance on external consultants but also ensure that evaluations are tailored, nuanced, and truly reflective of the project's context.

In conclusion, while strengthening M&E systems is undoubtedly important, EA should not be dismissed as redundant or unnecessary. Instead, it should be seen as a strategic investment in ensuring that evaluations are effective, relevant, and capable of delivering real insights that drive better outcomes."

Daniel Ticehurst

United Kingdom

Daniel Ticehurst

Monitoring > Evaluation Specialist

freelance

Posted on 23/08/2024

Just pitching in, like Silva, to congratulate Musa on making such a great point: the seemingly marginal value and high opportunity costs of EAs.

At the 2022 European Evaluation Society conference, the keynote by Estelle Raimondo and Peter Dahler-Larsen was striking. They rehearsed an interesting analysis of the indiscriminate application of, and diminishing returns to, evaluation practice of late through its "performative" use. Bureaucratic capture.

Some argue EAs are the least of today’s evaluation community’s concerns. 

The keynote's reference to how "...sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it" reminds me of the analogy the famous classicist and poet A. E. Housman made in 1903:

"...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”

 

Amy Jersild

United States of America

Amy Jersild

PhD Candidate and evaluation consultant

Western Michigan University

Posted on 23/08/2024

Hi all,

I agree with the argument that the rigid application of a tool, whatever it may be, is unlikely to result in a positive outcome. This may be the rigid application of theories of change, an overused approach that has become synonymous with "doing" evaluation, yet is still not used to its fullest in most evaluation reports I read. Or the overvaluing of RCTs based on ideological interests. Or the rigid application of the OECD-DAC criteria based on an expected paradigm. There are expected pathways to what "knowledge" is to be within our field that contribute to this rigidity, particularly when applied in a mechanistic way, and their overuse can indeed perpetuate the bureaucratic nature of our established systems. I fully agreed with the points raised by Dahler-Larsen and Raimondo in Copenhagen several years ago at EES.

Yet I would also argue that a tool such as an evaluability assessment should not be dismissed on this basis. I think a more useful line of inquiry may be to consider when and how EAs could be most useful. In my experience, EAs can in effect be a tool for breaking with mechanistic evaluation and bureaucratic systems (and yes, an attempt to break management's capture of evaluation) through better defining a meaningful and useful focus for an evaluation, or through the decision not to do an evaluation at all, based on EA findings. I think the challenge lies at the organizational level, with the inevitable interest in standardizing and creating norms for EA use across complex realities.

Regards, Amy

América Elvira Hernández

Mexico

América Elvira Hernández

Especialista en Monitoreo y Evaluación

Evaluadora independiente

Posted on 24/08/2024

[translated from original in Spanish]

Dear fellow evaluation experts. I am happy to subscribe to the view that conducting an evaluability assessment is fundamental not only for the planning and monitoring of the evaluation, but also, and particularly importantly, for projecting the scope that the evaluation could have and managing the expectations of the stakeholders. As a result of one evaluation, it was suggested to adopt evaluability steps for the follow-up of the project and, in particular, to ensure the conditions for external evaluation. So yes, the distinction between evaluability assessment and internal and external evaluation exercises can be facilitated, strengthening the mechanisms to achieve the former and prepare for the latter.

In the case mentioned above, not all the criteria included in the design of the instrument to measure the evaluability of the project could be satisfied, and this led to some differences of opinion among stakeholders regarding the expected evaluation, particularly between the funder and the evaluand. Agreements were negotiated and reached, and as a result the evaluation process was able to continue.

The evaluated project was funded by multilateral agencies, so yes, it is possible to use the results of the EA in an evaluation, and the two can complement each other's information to measure contributions to the SDGs.

Emile Nounagnon HOUNGBO

Benin

Emile Nounagnon HOUNGBO

Agricultural Economist, Associate Professor, Director of the School of Agribusiness and Agricultural Policy

National University of Agriculture

Posted on 26/08/2024

[Translated from French original]

Ladies and gentlemen,

I have come to take part in the debate. The evaluability of a development project is automatically established by the quality of the project's specific objective and expected results. Clarity and precision at these two levels ipso facto justify the evaluability of the project. Therefore, everything depends on how the project is set up, and as a result there is no need for a special evaluability assessment. With a SMART specific objective and expected results, all with clear, measurable indicators, the project's evaluability is guaranteed. The rest will depend on the planning and implementation of monitoring and evaluation. From a financial point of view, it is not appropriate to invest further resources in evaluability assessment, as funding monitoring and evaluation has often been a major challenge in itself. Resources should therefore be mobilised for the effective and efficient implementation of project monitoring and evaluation.

Thank you for your support.

Jonas SAGNO

Guinea

Jonas SAGNO

Programs and Resource Mobilization Director

Gigantesq Connexion NGO

Posted on 26/08/2024

[Translated from French original]

Dear Amy,

First of all, I would like to thank you for initiating this reflection and discussion on a subject as relevant as evaluability assessments (EA). Your invitation to explore this topic has given me much food for thought.

Evaluability assessments are a powerful tool for ensuring the quality and usefulness of evaluations. They help to clarify expectations, identify potential challenges, and ensure that interventions are designed in a way that allows for rigorous and constructive evaluation. To maximise their impact, I believe it would be crucial to improve communication about their added value, train stakeholders in their usefulness, and ensure their systematic integration into the planning of interventions, particularly those linked to the SDGs. In doing so, evaluability assessments can make a significant contribution to the continuous improvement of development programmes and the achievement of expected results.

I look forward to reading the thoughts of other members of the community on this topic.

Yours sincerely

Jonas SAGNO

Sociologist & Director of Programmes and Resource Mobilisation at the NGO Gigantesq Connexion(Incubator & Accelerator of Projects focused on Sustainable Development)

Ahmedou El Bah

Mauritania

Ahmedou El Bah

Consultant

Posted on 01/09/2024

[Translated from French original]

Dear Amy,

First of all, I would like to thank you for initiating this reflection and discussion on a subject as relevant as evaluability assessments (EA).

I have had the opportunity to participate in several evaluability assessments (EAs), including those recently carried out by the CGIAR, as well as others related to FAO's work on gender and its strategic objective of reducing rural poverty. These varied experiences have convinced me of the great value of EAs, especially when it comes to tackling complex issues.

In these contexts, where the scope of the evaluation is often blurred, the EA process is essential. It enables key interventions to be mapped out, encouraging early evaluative thinking within the programme team. It also provides a concrete test of the theory of change, the results framework and the monitoring and evaluation tools, highlighting any gaps in dedicated resources, including the budget allocated to the evaluation itself.

Carrying out EAs upstream of the intervention cycle has proved particularly judicious. This approach encourages a reflective pause in the early stages of implementation, enabling adjustments and improvements to be proposed before the programme has progressed too far. This can potentially generate savings in resources and increase the chances of success.

In addition, the EA process provides an excellent opportunity to revisit and refine the theory of change and results framework, aligning them with the realities on the ground and budgetary constraints. This ensures that evaluation remains relevant and feasible throughout the implementation of the programme.

Finally, the EA facilitates the establishment of an early relationship between the evaluation office and those responsible for programme implementation, encouraging smooth collaboration and communication throughout the evaluation. EAs also help to determine the right time for evaluation and the type of evaluation best suited to the intervention.

In sum, my experience with EAs, both within the CGIAR and other organisations, highlights their crucial role in clarifying the scope of the evaluation, strengthening programme design and ensuring the availability of resources for effective evaluation. They encourage early collaboration between stakeholders and help to define the timing and nature of the evaluation, ensuring its relevance and appropriateness to the context of the intervention.

Amy Jersild

United States of America

Amy Jersild

PhD Candidate and evaluation consultant

Western Michigan University

Posted on 03/09/2024

Thank you all for your participation. There has been a lot of discussion on the pros and cons of EAs, with strong perspectives on either side of the debate. As a group we have varied experiences: some of us have implemented EAs, some have not; some have read EA reports, some have not. Our views range from seeing EAs as an unnecessary use of scarce M&E resources to identifying specific benefits for planning and for maximizing the outcome of an evaluation.

We will wrap up this discussion by September 10th. Before then, I'd like to invite more reflection on when to implement an EA and when not to: the question of both cost-benefit and perceived benefit to stakeholders, relating to questions 1 and 2 above. I would suggest that EAs need to be proportionate to the cost of a subsequent evaluation, both as good use of financial resources and for stakeholder buy-in. Does anyone have thoughts to contribute, whether on actual ratios or on organizational policy for when and how EAs should be implemented? I know of some UN agencies that have made EAs mandatory for programs with budgets over a specified amount. It seems to me that in addition to a checklist for implementing an EA, which provides important concepts to think about and address, a checklist for whether to implement an EA could also be useful, setting out what to consider in deciding whether one is applicable and/or feasible.
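To make the proportionality idea concrete, here is a minimal sketch, assuming purely hypothetical figures; I know of no agreed ratios, and the threshold and cap below are illustrations only, not any agency's actual policy:

```python
# A minimal sketch of the proportionality idea. All figures are
# hypothetical illustrations, not agreed standards or agency policy.

def ea_decision_guidance(programme_budget: float,
                         evaluation_budget: float,
                         mandatory_threshold: float = 10_000_000,
                         max_ea_ratio: float = 0.10) -> dict:
    """Return a rough EA budgeting guide: whether an EA would be
    mandatory under a (hypothetical) programme-size rule, and a
    spending ceiling proportionate to the evaluation's own cost."""
    return {
        "ea_mandatory": programme_budget >= mandatory_threshold,
        "max_ea_spend": evaluation_budget * max_ea_ratio,
    }

# Example: a $15m programme with a $150,000 evaluation budget would
# trigger a mandatory EA and cap EA spending at $15,000 under these
# assumed figures.
print(ea_decision_guidance(15_000_000, 150_000))
```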

Kind regards, Amy

Rick Davies

United Kingdom

Rick Davies

Evaluation Consultant

Posted on 04/09/2024

Responding to Amy's post. 
1. Yes, EAs need to be proportionate to the likely costs of an evaluation. I agree 100%. Some historical research on past ratios should now be possible, to give us ball-park ideas on the range and median values.
2. Yes, a checklist on the need for an EA is at least worth thinking about. One pointer in this direction is the comment that "some UN agencies have made EAs mandatory for programs with budgets over a specified amount". Here I think budget size may in effect be being used as a proxy for programme complexity. But it is not a straightforward correlation. A large, simple programme, i.e. one with a few clear objectives and interventions, may not need an EA. Would an immunisation programme fit this description? But as a general rule, I think large-budget programmes do tend to be more complex in terms of objectives, interventions, partners and stakeholders, and geographic locations. Consider these as nodes in a network: the more of them there are, the more numerous the possible and actual causal linkages, and the types and sources of information about what is happening. If a checklist of EA need is to be developed, perhaps these sources of complexity could be items to be considered?

Amy Jersild

United States of America

Amy Jersild

PhD Candidate and evaluation consultant

Western Michigan University

Posted on 04/09/2024

Many thanks, Rick, for your comments. Such historical data on past ratios would be interesting to examine. And yes, budget size may be one checklist item used as a proxy for complexity, but I agree it should not be the only one depicting complexity, for the reason you pointed out. Your suggestion about depicting nodes in a network makes sense to me: more numerous possible causal linkages and sources of data might result in a higher score, which could then lead to a "yes" decision on an EA.

Perhaps such a checklist might also help focus an EA, or include a follow-on set of items that initially explore the four primary areas depicted in the jigsaw diagram you shared (https://mande.co.uk/wp-content/uploads/2022/05/Austria-diagram.png): institutional and physical context, intervention design, stakeholder demand, and data availability. Such a checklist may then not only guide the decision on whether to conduct an EA, but also help focus the EA on its priority areas, making it a more cost-effective exercise.

I’d be interested to hear from others on this forum who manage evaluations/EAs. How do you decide in your organization whether or not to conduct an EA? And how are decisions made in how to focus an EA? 

Regards, Amy

DOROTHY LUCKS

Australia

DOROTHY LUCKS

EXECUTIVE DIRECTOR

SDF GLOBAL PTY LTD

Posted on 06/09/2024

Thanks Amy and others for this interesting thread.

We have been involved in many EAs for different organisations: international financing institutions, UN agencies, NGOs and the private sector. I agree with Rick that complexity, rather than size of investment, is most critical to an EA's value. Institutions with a clear mandate, established operational procedures, and often a menu of performance indicators and guidelines usually do not require an EA.

The most useful EAs we have been engaged with involved complex developmental projects where the expected outcomes may be emergent, with process indicators as well as output and outcome indicators. EAs have also been useful where there is limited M&E capacity within the implementation team and the team is unsure how to measure what is outlined in the design. So what matters is the incremental value of the EA and its cost relative to the benefit - two recent examples below.

One was a very complex natural resource management programme that had reached its final years, covering policy, institutional and physical results. The implementation team realised that they did not know how to measure all of the final outcomes: they had assumed that an impact assessment team would produce all the data required, but did not have the budget for the extent of data gathering involved. We did a (very belated) EA and found that the team needed to reconstruct a range of raw implementation data to enable tracking of outcomes - a huge job. If they had had an EA, and capacity development, earlier in the programme, they would have been in a much stronger position and the costs of solving these issues would have been much lower.

Another was a complex youth and indigenous project, close to commencement, where a culturally sensitive approach to indicators and monitoring processes was required. That EA was carried out in a very participatory (and inexpensive) way, designed to engage participants in safe and appropriate methods of recording data. The data would demonstrate levels of progress and learning, feeding back into improved design for later stages of implementation. The early time invested in the EA reaped huge benefits for both programme outcomes and evaluation.

I also like the idea of the decision-making nodes for whether an EA is required or not. Thanks again for all the points raised.

Rick Davies

United Kingdom

Rick Davies

Evaluation Consultant

Posted on 07/09/2024

At the significant risk of creating a too-rigid conceptualisation, I suggest it could be useful to ask these questions when considering the relevance of an EA (a rough scoring sketch follows the list):
1. Institutional and physical context: How numerous and diverse are the locations?
2. Intervention design: How numerous and diverse are the interventions, and their interconnections?
3. Stakeholder demand: How numerous and diverse are the funders, implementing partners and beneficiary groups, and their interconnections?
4. Data availability: How numerous and diverse are the M&E systems and their interconnections?
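To show how such a checklist might be operationalised, here is a minimal scoring sketch; the four dimensions follow the questions above, but the 0-3 rating scale and the decision threshold are entirely hypothetical, not a validated instrument:

```python
# A minimal sketch of the four-question checklist as a scoring aid.
# The 0-3 scale and the decision threshold are hypothetical choices.

from typing import Dict

DIMENSIONS = [
    "context",       # how numerous/diverse are the locations?
    "design",        # how numerous/diverse are the interventions?
    "stakeholders",  # funders, partners, beneficiary groups
    "data",          # how numerous/diverse are the M&E systems?
]

def ea_recommended(scores: Dict[str, int], threshold: int = 6) -> bool:
    """Sum 0-3 complexity ratings across the four dimensions and
    recommend an EA when the total reaches the (assumed) threshold."""
    for dim in DIMENSIONS:
        if not 0 <= scores[dim] <= 3:
            raise ValueError(f"{dim} must be rated 0-3")
    return sum(scores[dim] for dim in DIMENSIONS) >= threshold

# Example: a multi-country, multi-partner programme with patchy M&E
# systems scores high on most dimensions, so an EA is recommended.
print(ea_recommended({"context": 3, "design": 2,
                      "stakeholders": 3, "data": 2}))  # True
```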

Amy Jersild

United States of America

Amy Jersild

PhD Candidate and evaluation consultant

Western Michigan University

Posted on 13/09/2024

Thank you all for an interesting and engaging dialogue on evaluability assessments. Please check back soon for an excellent summary of our discussion drafted by Gaia Gullotta of CGIAR. It will be provided in English, Spanish and French.

Cheers! 

Esosa Tiven Orhue

Nigeria

Esosa Tiven Orhue

Founder/CEO

E-Warehouse Consulting

Posted on 16/09/2024
Dear colleagues and members, here is my contribution.
To my knowledge and understanding, I think this is a more proactive and faster way of contributing to the Sustainable Development Goals, given the existing order of programmes and projects. It helps to eliminate bottlenecks associated with programme or project management by serving as a backstop for data management and evaluation.
It is a redirection mechanism for the programme/project workplace and workforce that helps to actualise and facilitate goals and objectives. Its systematic contribution to effectiveness and efficiency in programme and project management is a strength. In particular:
- It helps to close gaps that may have been created by colleagues and redirects them to the programme/project purpose and objectives.
- It improves programme/project delivery and outcomes, attracting better funding and stakeholder participation.
- It brings visibility, clarity and faster, actionable approaches and mechanisms for delivery.
- It serves as an information management hub, integrating existing programmes/projects in line with funders' and stakeholders' goals and aims.
In practice, both organisations and nations can successfully use this kind of backstop assessment to support policy makers and programme implementation. It is, in short, a strength and a driving force for programme/project management and delivery in the industry and in society at large.

Thank you.
Esosa.