
RE: How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making

Posted on 09/08/2025

My response to the discussion topic "How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making":

I share observations from my own evaluation work, highlight common barriers I have encountered, and propose practical actions that can help organisations make better use of evaluation findings in their decision-making processes.

Introduction
Evaluations are intended to do more than assess performance: they are designed to inform better decisions, strengthen programmes, and improve accountability. In practice, however, the journey from feedback to action does not always happen as intended. Findings may be presented, reports submitted, and then the process slows or ends before their potential is fully realised. This does not necessarily mean that the findings are irrelevant. Often, the reasons lie in a mix of cultural, structural, and practical factors: how evaluations are perceived, when they are conducted, how findings are communicated, the incentives in place, the systems for follow-up, and whether leadership feels equipped to champion their use.
 

From my work as an evaluator, I have noticed recurring patterns across different contexts: the tendency to focus on positive findings and sideline challenges, the loss of momentum when project teams disperse, reports that are seldom revisited after presentation, and leadership teams that value learning but are unsure how to embed it into everyday decision-making. I explore these patterns, illustrate them with grounded examples, and offer possible actions for making evaluation a more consistent part of organisational decision cycles.
Barrier 1: Perception as Judgment
One of the most persistent barriers to the effective use of evaluation results lies in how findings are perceived. Too often, they are seen as a verdict on individual or organisational performance rather than as a balanced body of evidence for learning and improvement. This framing can influence not only the tone of discussions but also which parts of the evidence receive attention. When results are largely positive, they are sometimes treated as confirmation of success, prompting celebrations that, while important for morale, may overshadow the role of these findings as part of an evidence base. Positive results are still findings, and they should be interrogated with the same curiosity and rigour as less favourable results. For example, strong performance in certain areas can reveal underlying drivers of success that could be replicated elsewhere, just as much as weaker performance signals areas needing attention. However, when the focus remains solely on reinforcing a success narrative, particularly for external audiences, recommendations for further improvement may receive less follow-through.
On the other hand, when evaluations reveal significant challenges, conversations can become defensive. Stakeholders may invest more energy in contextualising the results, explaining constraints, or questioning certain data sources and measures. In settings where evaluations are closely tied to accountability, especially when reputations, funding, or career progression are perceived to be at stake, such responses are understandable. This is not necessarily resistance for its own sake, but a natural human and organisational reaction to perceived judgment. The challenge is that both patterns, celebrating positive results without deeper analysis and responding defensively to difficult findings, limit the opportunity to learn from the full spectrum of evidence. By prioritising how results reflect on performance over what they reveal about processes, systems, and external factors, organisations risk narrowing the space for honest reflection.
Barrier 2: Timing and the End-of-Project Trap
The timing of an evaluation can make all the difference in whether its findings are put to use or simply filed away. Too often, evaluations are completed right at the end of a project, just as staff contracts are ending, budgets are already spoken for, and most attention is focused on closing activities or preparing the next proposal. By the time the findings are ready, there is little room to act on them. I have been part of evaluations where valuable and innovative ideas were uncovered, but there was no next phase or active team to carry them forward. Without a plan for handing over or transitioning these ideas, the recommendations stayed in the report and went no further.
Staff changes make this problem worse. As projects wind down, team members who understand the history, context, and challenges often move on. Without a clear way to pass on this knowledge, new teams are left without the background they need to make sense of the recommendations. In some cases, they end up repeating the same mistakes or design gaps that earlier evaluations had already highlighted. The "end-of-project trap" is not just about timing; it is about how organisations manage the link between evaluation and action. If evaluations are timed to feed into ongoing work, with resources and systems in place to ensure knowledge is passed on, there is a far better chance that good ideas will be used rather than forgotten.
Barrier 3: Report Format, Accessibility, and Feasibility
The format and presentation of evaluation reports can sometimes make them less accessible to the people who need them most. Even where executive summaries are provided, many reports are lengthy, technical, and written in a style that suits academic or sector specialists rather than busy managers or community partners.

Another challenge is that recommendations do not always take into account the resources available to the stakeholders expected to implement them. A recommendation may be sound in principle yet impractical in context. For example, suggesting that a small partner organisation establish a dedicated monitoring unit may be beyond reach if it has only a few staff members and no additional budget.

It is also worth noting that findings are often introduced in a presentation before the full report is shared. These sessions are interactive and help bring the data to life. However, once the meeting is over, the detailed report can feel like a repetition of what has already been discussed. Without a clear reason to revisit it, some stakeholders may not explore the more nuanced explanations and qualifiers contained in the main text and annexes.
Barrier 4: Incentives and Accountability Gaps
Organisational systems and incentives play a significant role in whether evaluation findings are acted upon. In many cases, evaluators are rewarded for producing a thorough report on time, implementers are measured against the delivery of activities and outputs, and donors focus on compliance and risk management requirements. What is often missing is direct accountability for implementing recommendations. Without a designated department or manager responsible for follow-through, action can rely on the goodwill or personal drive of individual champions. When those individuals leave or when priorities change, momentum can quickly be lost and progress stalls.
Barrier 5: Limited Dissemination and Cross-Learning
The way evaluation findings are shared strongly influences their reach and use. In some organisations, reports are not published or stored in a central, easily accessible database. As a result, lessons often remain within a single team, project, or country office, with no structured mechanism to inform the design of future initiatives. I have seen cases where innovative approaches, clearly documented in one location, never reached colleagues tackling similar issues elsewhere. Without a deliberate system for sharing and discussing these findings, other teams may unknowingly duplicate work, invest resources in already-tested ideas, or miss the chance to adapt proven methods to their own contexts. This not only limits organisational learning but also slows the spread of good practice that could strengthen results across programmes.
Barrier 6: Weak Governance and No Management Response System
When there is no structured process for responding to recommendations, there is a real risk that they will be acknowledged but not acted upon. A Management Response System (MRS) provides a framework for assigning ownership, setting timelines, allocating resources, and tracking progress on agreed actions. I have seen situations where workshops produced strong consensus on the importance of certain follow-up steps. However, without a clear mechanism to record these commitments, assign responsibility, and revisit progress, they gradually lost visibility. Even well-supported recommendations can stall when they are not tied to a specific department or manager and monitored over time.
Barrier 7: Leadership Capacity to Use Evaluation Findings
Clear evaluation recommendations will only be useful if leaders have the capacity to apply them. In some cases, managers may lack the necessary skills to translate recommendations into practical measures that align with organisational priorities, plans, and budgets.
Where this capacity gap exists, recommendations may be formally acknowledged but remain unimplemented. The challenge lies not in the clarity of the evidence, but in the ability to convert it into concrete and context-appropriate actions.


Recommendations
1. Evaluation as a Learning Process

Shifting the perception of evaluation from a judgment to a learning tool requires deliberate organisational strategies that address both culture and process. The following approaches can help create an environment where all findings, whether positive, negative, or mixed, are used constructively.
a. Set the Learning Tone from the Outset
Evaluation terms of reference, inception meetings, and communications should emphasise that the purpose is to generate actionable learning rather than to pass judgment. This framing needs to be reinforced throughout the process, including during data collection and dissemination, so that stakeholders are primed to see findings as evidence for growth.
b. Analyse Positive Findings with the Same Rigour
Treat favourable results as opportunities to understand what works and why. This includes identifying enabling factors, strategies, or contextual elements that led to success and assessing whether these can be replicated or adapted in other contexts. Documenting and communicating these drivers of success helps shift the focus from defending performance to scaling good practice.
c. Create Safe Spaces for Honest Reflection
Organise debriefs where findings can be discussed openly without the pressure of immediate accountability reporting. When teams feel safe to acknowledge weaknesses, they are more likely to engage with the evidence constructively. Senior leaders play a key role in modelling openness by acknowledging gaps and inviting solutions.
d. Separate Accountability Reviews from Learning Reviews
Where possible, distinguish between processes that assess compliance or contractual performance and those designed to generate strategic learning. This separation reduces defensiveness and allows evaluation spaces to remain focused on improvement.
2. Avoiding the End-of-Project Trap
Addressing the end-of-project trap means planning evaluations so they lead to action while the time, staff, and resources needed to follow through are still in place. The approaches below can help ensure that findings are not left behind once a project ends.
a. Match Evaluation Timing to Key Decisions
Schedule evaluations so findings are ready before important moments such as work planning, budgeting, or donor discussions. Midline or quick-turn evaluations can capture lessons early enough for teams to act on them.
b. Include Handover and Follow-Up in the Plan
From the start, be clear about who will take responsibility for each recommendation and how progress will be tracked. Build follow-up steps into the evaluation plan so the work continues beyond the final report.
c. Keep Knowledge When Staff Change
Hold debrief sessions before staff leave to pass on the stories, context, and reasoning behind recommendations. Let outgoing and incoming staff work together briefly, and store key information where it can be easily found later.
d. Share Findings Before the Project Closes
Present key findings while the team is still in place. Use short, focused briefs that highlight what needs to be done. This gives the team a chance to act immediately or link recommendations to other ongoing work.
3. Improving Report Accessibility and Feasibility
Evaluation findings are more likely to be used when they are presented in a way that people can easily understand and act on, and when recommendations are realistic for those expected to implement them.
a. Make Reports Usable for Different Audiences
Prepare different versions of the findings: a concise action brief for decision-makers, a plain-language summary for community partners, and the full technical report for specialists. This ensures that each audience can engage with the content in a way that suits their needs and time.
b. Check Feasibility Before Finalising Recommendations
Hold a short “recommendation review” meeting after sharing preliminary findings. Use this time to confirm whether recommendations are practical given available staff, budgets, and timelines, and adjust them where needed.
c. Link Recommendations to Action Plans
Where possible, show exactly how each recommendation can be implemented, including suggested steps, timelines, and responsible parties. This makes it easier for organisations to move from reading the report to acting on it.
4. Closing the Incentive and Accountability Gaps
To improve the likelihood that evaluation findings are acted on, responsibilities for implementation should be made clear in the recommendation statements.
a. Name the Responsible Department or Manager in the Evaluation Report
Each recommendation should specify the department or manager expected to lead its implementation. This ensures clarity from the moment the report is delivered.
b. Confirm Feasibility Before Finalising Recommendations
During the validation of preliminary findings, engage the relevant departments or managers to confirm that the recommendations are realistic given available resources and timelines.
5. Strengthening Dissemination and Cross-Learning
Evaluation findings are more likely to be used across and beyond an organisation when they are shared widely, stored in accessible formats, and actively connected to future programme design.
a. Share Findings Beyond the Immediate Project Team
Circulate the evaluation report and summary briefs to other departments, country offices, and relevant partners. Use internal newsletters, learning forums, or staff meetings to highlight key lessons.
b. Store Reports in a Central, Accessible Location
Ensure that the final report, executive summary, and any related briefs are uploaded to a shared organisational repository or knowledge management platform that all relevant staff can access.
c. Create Opportunities for Cross-Learning Discussions
Organise short learning sessions where teams from different projects or countries can discuss the findings and explore how they might be applied elsewhere.
d. Publish for Wider Access Where Possible
Where there are no confidentiality or data protection constraints, make the report or key findings available on the organisation’s website or other public platforms so that the wider community can benefit from the lessons.
6. Strengthening Governance and Management Response
A Management Response System (MRS) can help ensure that recommendations are acted on by assigning clear responsibilities, setting timelines, and tracking progress from the moment the evaluation is completed.
a. Establish a Management Response Process Before the Evaluation Ends
Agree with the commissioning organisation on the structure and timing of the MRS so it is ready to accompany the final report. This ensures that follow-up starts immediately.
b. Name Responsible Departments or Managers in the Evaluation Report
For each recommendation, clearly state which department or manager is expected to lead implementation. This creates ownership from the outset.
c. Set Review Points to Monitor Progress
Agree on dates for follow-up reviews within the organisation’s governance framework to check on progress against the Management Response.
d. Link the MRS to Existing Planning and Budgeting Cycles
Where possible, align recommended actions with upcoming planning, budget allocation, or reporting timelines so that they can be resourced and implemented without delay.
7. Strengthening Leadership Capacity to Apply Evaluation Findings
Building the skills of managers to apply evaluation recommendations is essential to ensure that findings are used to guide organisational decisions.
a. Include Capacity-Building in the Evaluation Design
Ensure the Terms of Reference (ToR) require activities aimed at strengthening managers’ ability to apply recommendations, such as practical workshops or scenario-based exercises.
b. Provide Decision-Oriented Briefs
Prepare concise, action-focused briefs alongside the evaluation report, outlining specific options and steps for integrating recommendations into planning, budgeting, and operational processes.
c. Facilitate Joint Planning Sessions
Organise sessions with managers to translate recommendations into concrete action plans, ensuring alignment with organisational priorities and available resources.
d. Offer Targeted Support Materials
Develop templates, checklists, or guidance notes to assist managers in systematically incorporating recommendations into decision-making.


Conclusion
Making full use of evaluation findings requires timely evaluations, clear communication, strong follow-up systems, and leadership capacity to act on recommendations. By addressing barriers such as end-of-project timing, inaccessible reports, unclear accountability, limited sharing, and weak governance, organisations can turn evaluations into a practical tool for learning and improvement.

 

By Supreme Edwin Asare 

Monitoring, Evaluation, Research & Learning (MERL) Consultant (Remote | Onsite | Hybrid) | Author of the AI Made in Africa Newsletter. Experienced in conducting performance and process evaluations using theory-based and quasi-experimental methods, including Contribution Analysis and mixed-methods designs. Skilled in Theory of Change development, MEL system design, and training on AI integration in evaluation. 📩 kdztrains@gmail.com