Posted on 19/09/2023
Dear colleagues, and thank you, Jean, for provoking this discussion.
Please bear with me while I bring a drop of academic content to the discussion, hoping we can expand it a bit.
What are mixed methods, after all? I think the quantitative vs. qualitative debate is quite reductionist; and honestly, after all these decades, I cannot believe we are still discussing whether RCTs are the gold standard.
I would like to bring in an approach that caught my attention, presented by Professor Paul Shaffer from Trent University (Canada). His approach focuses on mixed methods for impact assessment, but I understand it can be extrapolated to other types of studies, such as outcome assessment. What I like about his proposal is that it goes beyond and deeper than the quantitative + qualitative debate.
In his view, the categories that supposedly differentiate quantitative and qualitative approaches are collapsing. For example: (i) qualitative data is often quantified; (ii) large qualitative studies can allow for generalization (while scale/generalization is supposedly a characteristic of quantitative studies); and (iii) inductive and deductive inferences are almost always both present.
In light of that, what are “mixed methods”?
What “mixed methods” means is combining approaches that bring robustness to your design and different perspectives/angles from which to look at the same object. Depending on the questions you want to answer or what you want to test, ‘mixed methods’ for impact assessment could mean combining two or more quantitative methods. Similarly, different qualitative methods could be combined to improve the robustness of an evaluation or research study, and this would also be called ‘mixed methods’.
And then, going a bit beyond that: couldn’t we also consider the mix of “colonizers’” with “indigenous” approaches to be “mixed methods”?
I hope this can contribute to the reflection.
Cheers
Emilia
Emilia Bretan
Evaluation Specialist
FAO Office of Evaluation (OED)
Italy
Posted on 15/02/2024
Dear colleagues,
Thank you so much for your active participation and engagement in this discussion. A few words on the latest contributions:
Nea-Mari, thank you for the information and links on Finnish SDG M&E at the subnational level. Inspiring examples for other countries and cities! Thanks, Esosa, for highlighting the critical role of evaluation in evidence-based development programmes. And Mark, yes, acknowledging the limitations of our studies/evaluations is always good practice; thanks for highlighting this point.
This discussion comes to a close for now, but there will be more opportunities ahead to further exchange ideas and knowledge on supporting progress towards the SDGs through evaluation.
Wishing you all the best, and stay tuned for future updates.
Emilia
Emilia Bretan
Evaluation Manager
FAO
Italy
Posted on 06/02/2024
Dear colleagues,
Thanks for contributing to a lively discussion.
I comment on a few points, not with the intention of exhausting the conversation (nor of fully summarizing it!), but in the hope of provoking some additional reflection.
1. We should go beyond the focus on measuring contribution or progress towards the SDGs: there is a range of dedicated studies/evaluations and indicators (including proxy indicators) which also contribute to understanding development progress. The SDGs do not sit in isolation, and there can be several pathways leading in the same direction. Dorothy and John Akwetey particularly articulated this topic, but it is present in various contributions. They also emphasize the significance of evaluations at national, institutional and subnational levels, beyond large-scale SDG evaluations.
The approach of the study on evaluation evidence shared by Mark Engelbert, which drew on impact evaluations as a key input, seems to speak to this last point.
Along the same lines is the work that the Global SDG Synthesis Coalition is conducting. A synthesis can be used either as an alternative to an SDG-focused evaluation or as part of a larger study. The syntheses follow a systematic and transparent approach to identifying, collating and appraising the quality of individual evaluations, and then synthesizing findings and lessons from bodies of evaluative evidence. The approach includes evidence gap maps and other tools, including a rigorous process (and corresponding framework) for including or excluding studies.
2. The challenges of evaluating the SDGs encountered by most countries and development actors, and shared through different lenses by Ram Khanal, Lovemore Mupeta and Hadera Gebru, include: limited resources, insufficient data, a lack of appropriate evaluation techniques, and complex, interlinked targets. In light of these challenges, we should (i) consider and search for other approaches (synthesis is one of them) rather than launching ourselves into potentially daunting evaluations, (ii) start small, and (iii) scope wisely for studies that can be useful. Engaging country-based professionals (evaluators and implementers from different sectors) in the process could help increase awareness and build evaluative capacity.
Unfortunately, major political unrest and challenges can result in a complete setback for any attempt to evaluate progress towards the Sustainable Development Goals (SDGs), as exemplified by the situation in Ethiopia, where post-COVID crises and civil war have undermined all development progress.
3. The subnational level (the local level in particular) is another challenge recognized by many. Nea-Mari, I am curious to hear a couple of examples of what Finland has been doing at the local level: which types of digital solutions have you adopted for the M&E of SDG progress? I am also positively surprised by the influence of the evaluations on parliamentary elections and on the planning of the new government programme. What would you say, Nea-Mari, are the key elements that make these evaluations powerful in Finland?
4. Examples of reports: Pelagia Monou, Fabandian Fofana, I wonder if the reports of the evaluations you have been involved in are public, and whether you could share the links with us? Pelagia, were you able to go beyond the number of projects and budget to tap into contributions or results? Fabandian, did you measure contributions to the SDGs at the local level? Who was involved, and how?
5. And last but not least (though on a bit of a side note), a comment about the finding of the 3ie report shared by Mark that evaluation work on the “Planet” SDGs (SDGs 6 and 12 to 15) has been neglected. The report notes that very little (impact) evaluation research was found covering SDG 12 (Responsible Consumption and Production), SDG 14 (Life Below Water) and SDG 15 (Life on Land). While I have my own hypothesis as an explanation for this finding, I wonder if Stefano D’Errico, Ram Khanal and other colleagues with expertise in the environmental sector would like to chip in on the reasons? 😊
Still a long way to go: Chris, Olivier and Lal remind us that the post-Agenda 2030 framework is rapidly approaching!
Thanks all for contributing!!
Warm regards
Emilia