RE: How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making
Georgia
Dea Tsartsidze
International Research, Monitoring, Evaluation and Learning Practitioner
SAI - Solution Alternatives International
Posted on 25/08/2025
Dear Monica and everyone,
Thank you for initiating such a valuable discussion, and I appreciate the rich insights shared by everyone. The perspectives offered truly echo many of the challenges we face in our field.
I find myself agreeing with most of the points raised, particularly Aurelie's emphasis on syncing evidence generation with stakeholder needs and Brilliant's focus on formalizing feedback processes. However, I'd like to add another layer to this conversation: the critical importance of establishing clear organizational processes that position evaluation not as a standalone exercise, but as an integral component interconnected with program implementation and broader M&E systems.
In my experience, one of the fundamental barriers we haven't fully addressed is the question of ownership. Too often, it feels like evaluations are "owned" exclusively by evaluators or M&E staff, while project teams remain somewhat detached from the process. This disconnect may significantly impact how recommendations are received and, ultimately, implemented.
This raises my first question: how do we better involve project staff in the finalization of recommendations? I've found it particularly valuable to organize joint sessions where evaluators, M&E staff, and project teams come together not just to discuss findings, but to collaboratively shape the final recommendations. When project staff are actively involved in interpreting findings and crafting recommendations that reflect their understanding of operational realities, ownership naturally increases. Have others experimented with this approach?
Beyond the ownership challenge, I think there's another fundamental issue we need to confront more directly: the quality of evidence we produce as evaluators. Are we consistently striving to generate robust, credible data? When we bring evidence to decision-makers, are we confident it truly meets quality standards, or might we sometimes, unintentionally, be adding another barrier to utilization, especially for already skeptical decision-makers who may use questionable evidence quality as grounds to dismiss findings entirely?
This leads to my second question, which has two parts: First, do we have adequate quality assurance instruments and internal processes to genuinely track and ensure the quality of evaluations we conduct? I've found quality frameworks useful, though admittedly, there's still considerable room for improvement in making these more objective and systematic.
Second, even when our evidence is solid, how effectively are we translating and communicating our findings to decision-makers? Are our reports and presentations truly accessible and relevant to how decisions are actually made within organizations, or are we producing technically sound but practically unusable outputs?
These questions aren't meant to shift responsibility away from the systemic barriers we've identified, but rather to encourage a more reflective approach to our own role in the utilization challenge.
Looking forward to your perspectives on this.