Nearly 25 years of experience in the international development sector as an evaluator, manager, technical advisor, and educator, working in partnership with donors, governments, and civil society organizations in Asia, Africa, and the Middle East on development effectiveness and quality programming and policies. Deeply interested in co-creative and evaluative processes that support self-determination and development that is sustainable for both people and planet. Holds an MA in sustainable development and is a PhD candidate in interdisciplinary evaluation studies at Western Michigan University.
Amy's development evaluation experience includes serving as:
• Doctoral Candidate, Western Michigan University, Interdisciplinary PhD program in Evaluation, with completion of the degree anticipated in 2024. Research interests include meta-evaluation and the professionalization and internationalization of evaluation.
• Assistant Professor at SIT Graduate Institute in Washington DC, designing and teaching graduate-level theory and practice-based courses on evaluation in Washington DC, India, and Jordan;
• Independent evaluator since 1997, advising international agencies and donors on social development issues and programming. Clients include FCDO, ILO, USDOL, UNDP, CGIAR, the Rockefeller Foundation, and the Adaptation Fund.
• Internal evaluator as Deputy Director of Quality Programming from 2008 to 2012 in Bangkok, leading an international team effort to develop an M&E framework for Hewlett Packard’s flagship global entrepreneurship education program.
Active member of EvalPartners (member of EVALSDG group)
Active member of AEA (member of the International Working Group)
Managerial experience in directly negotiating with and reporting to bilateral donors (USPRM, CIDA, SDC, GIZ, AusAID), multilateral agencies (UNICEF, World Bank), and corporate partners (HP, Adidas, Deutsche Bank); in coordinating with military bodies (KFOR, Kosovo); and in partnering with civil society organizations (Cambodia, Laos, Thailand, China, and India).
In-country work and living experience in Bangladesh, Cambodia, China, Japan, Kosovo, Lao PDR, Thailand, and USA; additional work experience in Egypt, Ethiopia, India, Israel, Jordan, Kenya, Nepal, Philippines, Sri Lanka, Turkey, Uganda, and Vietnam.
Mandarin Chinese proficiency; basic to intermediate skills in French, Khmer, Lao, Spanish and Thai.
Posted on 13/06/2023
Thank you, Svetlana, for the opportunity to participate in this discussion. I respond to two of your questions below.
Do you think the Guidelines respond to the challenges of evaluating quality of science and research in process and performance evaluations?
The Guidelines appear to respond to the challenges of evaluating quality of science and research in process and performance evaluations through a flexible and well-researched framework. I am not sure, however, whether a single evaluation criterion can capture the essence of research and development. I think the answer will be found by reflecting on the criterion's application in upcoming and varied evaluative exercises at CGIAR, as well as on previous organizational experience. This may involve identifying how it is interpreted in different contexts and whether further development of the recommended criteria should be considered for a possible second version of the Guidelines.
How can CGIAR support the roll-out of the Guidelines with the evaluation community and like-minded organizations?
I agree with others that workshops and/or training could be a means of rolling out the Guidelines and engaging with the evaluation community. Emphasizing the Guidelines' flexibility and fostering reflection on their use in different organizational contexts would be productive.
In line with my response to the first question above, I would suggest that a meta-evaluative exercise be undertaken once there is more organizational experience applying the Guidelines. There would be obvious value for CGIAR, possibly leading to an improved second version. It would also be of great value to the evaluation community, with CGIAR taking an important role in facilitating continued learning through the use of meta-evaluation, which the evaluation theorist Michael Scriven has called both an important scientific and moral endeavor for the evaluation field.
At Western Michigan University, we are engaged in a synthesis review of meta-evaluation practice over a 50-year period. We assumed very little meta-evaluation was being done, but were surprised to find plenty of interesting examples in both the grey and academic literature, including meta-evaluations of evaluation systems in different contexts. Documenting such meta-evaluative work would further strengthen the Guidelines and their applicability, as well as add significant value to continued engagement with the international evaluation community.
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 13/09/2024
Thank you all for an interesting and engaging dialogue on evaluability assessments. Please check back soon for an excellent summary of our discussion drafted by Gaia Gullotta of CGIAR. It will be provided in English, Spanish and French.
Cheers!
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 04/09/2024
Many thanks, Rick, for your comments. Such historical data on past ratios would be interesting to examine. And yes, budget size may be one of the items on a checklist used as a proxy for complexity, but I agree it should not be the only one, for the reason you pointed out. Your suggestion about depicting nodes in a network makes sense to me. More numerous possible causal linkages and sources of data might then result in a higher score, which may in turn lead to a “yes” decision on an EA.
Perhaps such a checklist might also include a follow-on set of items that initially explore the four primary areas depicted in the jigsaw diagram you shared below - https://mande.co.uk/wp-content/uploads/2022/05/Austria-diagram.png (institutional and physical context, intervention design, stakeholder demand, and data availability). If needed, such a checklist may then not only guide the decision on whether to conduct an EA but also help focus its priority areas, making it a more cost-effective exercise.
I’d be interested to hear from others on this forum who manage evaluations and EAs. How do you decide in your organization whether or not to conduct an EA? And how do you decide how to focus one?
Regards, Amy
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 03/09/2024
Thank you all for your participation. There’s been a lot of discussion on the pros and cons of EAs, with strong perspectives on either side of the debate. As a group we have varied experience with EAs: some of us have implemented them and some have not; some have read EA reports and some have not. And our perspectives range from seeing EAs as an unnecessary use of scarce M&E resources to identifying specific benefits for their use in planning and maximizing the outcome of an evaluation.
We will wrap up this discussion by September 10th. Before then, I’d like to invite more reflection on when to implement an EA and when not to: the question of both cost-benefit and perceived benefit to stakeholders, relating to questions 1 and 2 above. I would suggest that the cost of an EA needs to be proportionate to the cost of the subsequent evaluation, both as a good use of financial resources and for stakeholder buy-in. Does anyone have thoughts to contribute on this, whether in terms of actual cost ratios or organizational policy on when and how EAs should be implemented? I know of some UN agencies that have made EAs mandatory for programs with budgets over a specified amount. It seems to me that in addition to a checklist for implementing an EA, which provides important concepts to think about and address, a checklist for deciding whether to implement an EA could also be useful, setting out what to consider in determining whether one is applicable and/or feasible.
Kind regards, Amy
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 23/08/2024
Hi all,
I agree with the argument that the rigid application of a tool, whatever it may be, is unlikely to result in a positive outcome. This may be the rigid application of theories of change, an overused approach that has become synonymous with “doing” evaluation yet is still not used to its fullest potential in most evaluation reports I read; or the overvaluing of RCTs based on ideological interests; or the rigid application of the OECD-DAC criteria based on an expected paradigm. There are expected pathways to what “knowledge” is to be within our field that contribute to this rigidity, particularly when tools are applied in a mechanistic way, and their overuse can indeed perpetuate the bureaucratic nature of our established systems. I fully agree with the points raised by Dahler-Larsen and Raimondo at EES in Copenhagen several years ago.
Yet I would also argue that a tool such as an evaluability assessment should not be dismissed on this basis. A more useful line of inquiry may be to think about when and how EAs could be most useful. In my experience, EAs can in effect be a tool for breaking with mechanistic evaluation and bureaucratic systems, and yes, an attempt to break management's capture of evaluation, by better defining a meaningful and useful focus for an evaluation, or by informing a decision not to evaluate at all based on the EA's findings. I think the challenge lies at the organizational level, with the inevitable interest in standardizing and creating norms for EA use across complex realities.
Regards, Amy
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 19/08/2024
Many thanks, Jindra, for sharing your experience and the useful links below. I read through your EA checklist for ex-ante evaluations with interest. Your experience of very few programs having sufficient data resonates. I’d be interested in any reflections you have on stakeholder reception and use of EA results, based on your experience (question 3 above).
Warm regards, Amy
United States of America
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
Posted on 04/08/2024