Background and Rationale
In the evolving landscape of development work, the ability to adapt and respond to community needs through data-driven decision-making is more crucial than ever. Feedback and recommendations generated from monitoring and evaluation (M&E) processes are intended to inform strategic decisions, improve programming, and foster accountability. However, despite their potential, these insights are often underutilized or sidelined in decision-making processes.
Development organizations face a range of challenges in effectively integrating feedback. These include resistance to organizational change, lack of resources for analysis, and a culture that may not prioritize openness or continuous learning. Additionally, leadership that fails to model and reinforce feedback use often contributes to a cycle where feedback is collected but not acted upon.
When feedback systems are poorly integrated, the result is a disconnect between communities and the programs designed to serve them. This can lead to ineffective or misaligned interventions, diminishing both impact and stakeholder trust. Addressing these challenges is essential for increasing the relevance, responsiveness, and effectiveness of development efforts.
Discussion Purpose: This discussion aims to generate actionable insights into how development organizations can better ensure the effective utilization of evaluation feedback in their decision-making processes. It will bring together practitioners, evaluators, researchers, and organizational leaders to reflect on current barriers and identify practical strategies for enhancing learning and accountability.
Problem Statement: Although development organizations recognize the importance of stakeholder feedback, many fail to meaningfully incorporate it into decision-making. Organizational culture, resource limitations, lack of leadership engagement, and resistance to change all contribute to this issue. Without systematic mechanisms to collect, analyze, and apply feedback, opportunities for learning and improvement are lost, which leads to reduced impact, disengaged stakeholders, and diminished accountability.
Discussion Objectives
- To identify the root causes of barriers to feedback use in development decision-making.
- To explore and assess strategies for overcoming these barriers, such as designing enabling systems and processes.
- To examine the critical role of leadership in fostering an organizational culture that values feedback and promotes its use for learning and strategic growth.
Guiding Questions
- What are the most common barriers to feedback use in development organizations?
- How can organizational culture and leadership influence feedback responsiveness?
- What practical steps can organizations take to embed feedback use into their decision-making cycles?
- What tools, incentives, and systems have proven effective in bridging the gap between feedback and action?
- How can stakeholder trust and engagement be maintained and strengthened through feedback use?
The discussion is open for contributions until 05 September 2025.
This discussion is now closed. Please contact info@evalforearth.org for any further information.
Zimbabwe
Chamisa Innocent
EvalforEarth CoP Facilitator/Coordinator
EvalForward
Posted on 08/09/2025
Thank you all for sharing such rich and diverse insights during the discussion on “How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making.”
A special mention goes to Monica, who initiated this discussion. We now officially close it.
Please note: A comprehensive summary of the discussion will be made available shortly in English, Spanish, and French. Check back soon for updates.
Interested in contributing or initiating a discussion? Propose your topic here: https://www.evalforearth.org/propose-your-topic. Our team will support you throughout the process.
On behalf of EvalforEarth Team
Norway
Lal - Manavado
Consultant
Independent analyst/synthesist
Posted on 08/09/2025
How to Ensure the Use of Evaluation Results in Decision Making
A historical retrospective of the evolution of policies would convince one of their fragmented and less than coherent emergence. A concerted attempt to ascertain their public utility by evaluation is fairly new. Thus, decision-making seems to have had a logical priority over evaluation.
Before one proceeds, it is important to distinguish between decision-making and implementation. Owing to its general character and political considerations, the former is all too often guided by expediency or an obvious public need. This may be far from being ideal, but one would have to take political reality into consideration were one to make a worthwhile contribution to public well-being.
This obvious distinction between decision-making and implementation introduces a temporal component to the question of what kind of evaluation would be of significant utility to decision makers. These may be called pre- and post-decision-making evaluation feedback respectively. When a decision is to be made, pre-decision evaluation feedback could provide some useful guidance as to its appropriateness with reference to the following criteria:
• Competence of decision makers and decision implementers; this ranges from the national and regional to the local field levels.
• Cost of procurement, operation, and maintenance of the tools and other materials required for implementing a decision.
• Its effects on the environment, national employment figures and the equitability of its results.
• Its implications for local culture, public health, etc.
The perceptive reader will have noticed at once that the evaluation of those four criteria can only be undertaken with reference to a specific decision; hence the decision's logical priority over evaluation.
Here, evaluation faces two major challenges:
• Degree of political devolution in a country: in Canada, for instance, the provinces have a great deal of political autonomy, so regional policies ought to be dovetailed into their national counterparts. In Scandinavia, local authorities have a great deal of autonomy; hence, in those countries, decision design applicable to an area is carried out locally. In such cases, decision evaluation has to be very flexible, because what food each area may successfully produce can vary significantly.
• Differences in the type of data on which an evaluation could justifiably be based vary considerably. While policies and their implementation strategies are concerned with overall national benefit, at the local or field level one has to pay attention to what contribution a plan/project may make, first to the well-being of an area and then to the country as a whole.
When a previous decision to achieve comparable objectives has been implemented, the actual public utility of its results would provide the evaluator some very useful guidelines on what recommendations would be most useful to the designers and implementers of the next decision on the same subject. The utility of such recommendations depends on the willingness and ability of the political leaders and their decision makers to learn better ways of doing things.
Next, one encounters the problem of identifying the data on which an adequate evaluation may be based. Obviously, what ought to be monitored depends on the level at which an evaluation is carried out. For instance, an evaluation of a decision and its implementation strategy will require the pertinent information relative to the four criteria described earlier. It will be noted that at regional and local levels, the relevant data will also vary according to the political powers vested in them.
Finally, at the plan/project level one needs to distinguish clearly between monitoring the 'objective' facts that may indicate its successful completion and the actual benefits it offers the target group. A multi-million-dollar motorway hardly used by vehicular traffic has been cited in this forum as an example of the former.
This sketch of the multi-layered evaluation required at national, regional and local levels provides a glimpse of the way forward. It has two dimensions:
• National decision-makers have neither the time nor the inclination to peruse evaluations of plans/projects; what they need to know are the achievable goals of national importance, such as a way to enhance food production. The decision and implementation strategies needed here are general in character. One might say that they provide a framework aimed at a general goal, while successful plans/projects can be seen as the pieces of a jigsaw puzzle; if the general goal is attained, those pieces will fit snugly into the picture of success.
• The four criteria discussed earlier will guide those pieces of the jigsaw puzzle as to their place in the whole, i.e. their suitability with reference to a national goal. Therefore, the challenge one faces in incorporating evaluation as an adjunct to national planning is how to make decision-makers understand its usefulness and persuade them to use it appropriately. Unfortunately, their unwillingness or inability to apprehend the need for completeness in the means of implementation in use, and their failure to grasp the need for intra- and inter-decision harmony, make the present task rather difficult.
Lal Manavado.
Indonesia
Monica Azzahra
MELIA Specialist
Center for International Forestry Research (CIFOR)
Posted on 03/09/2025
Dear all the contributors,
I extend my sincere gratitude to all who have shared their thoughtful perspectives, insights, and experiences, as well as to those who kindly provided links and references that further enrich our understanding! ❤️❤️❤️ I truly appreciate the collective effort to strengthen the effective utilization of feedback in our work.
Additional key takeaways derived from our contributors:
The contributions highlight that while evaluations generate valuable findings, their impact depends on deliberate strategies to ensure feedback is effectively integrated into decision-making. Key enablers include fostering stakeholder ownership, embedding recommendations into organizational systems, and aligning evaluation processes with institutional culture and leadership.
Contributors emphasized the importance of accessible communication formats, participatory co-creation of recommendations, and mechanisms such as management response systems, after-action reviews, and capacity-building. Structural and cultural barriers—such as siloed work, resistance to feedback, and political contexts—were also noted as critical challenges.
Overall, the discussion underscores the need for institutionalized, context-sensitive, and inclusive approaches that transform evaluations from accountability tools into drivers of learning, adaptation, and sustainable impact.
We will bring this discussion to a close at the end of this week, and additional contributions are most welcome until then. ♥♥♥
Kind Regards,
Monica
Italy
Ibtissem Jouini
Senior Evaluation Manager
CGIAR
Posted on 02/09/2025
Dear Monica,
Practice shows that even the most rigorous and high-quality evaluations do not guarantee that their findings and recommendations will be used, underscoring the need for deliberate efforts to embed evaluation use within decision-making processes. While research on the factors influencing the use of evaluations continues to evolve, several best practices have emerged to enhance their uptake. These include producing more accessible formats such as short reports, briefs, and videos, as well as offering one-on-one briefings to engage stakeholders effectively and meaningfully. Participatory approaches, such as workshops to refine or co-create recommendations, have also gained attention as strategies to foster ownership and relevance.
I would like to contribute to the discussion by sharing a few resources that the CGIAR IAES Evaluation Function has been working on to improve the use of evidence and evaluations in CGIAR and beyond:
Link: https://iaes.cgiar.org/sites/default/files/pdf/Mapping%20management%20practices_final%2029%20August_0.pdf
This study explores how evaluation management practices affect the use of evaluation findings and recommendations in decision-making, policies, and actions. It maps practices across independent evaluation entities and highlights perceptions of evaluation use in multilateral organizations. Beyond CGIAR, the study seeks to spark wider dialogue within the evaluation community on how management arrangements can enhance the relevance and influence of evaluations. Drawing on a literature review and the mapping of evaluation management practices, it offers conclusions and recommendations organized around the key phases of an evaluation.
Link: https://cgspace.cgiar.org/server/api/core/bitstreams/46ead1fc-5f24-45a6-a908-478cced8a811/content
This benchmarking study maps existing Management Response structures, processes, and review methodologies, exploring best practices in implementation, oversight mechanisms, and tracking systems to support evaluation uptake.
Link: https://iaes.cgiar.org/sites/default/files/pdf/Review%20of%20CGIAR%20Management%20Response%20System%20to%20Independent%20Evaluations%20_1.pdf
The CGIAR IAES Evaluation Function conducted this review of the CGIAR Management Response System. Aligned with both accountability and learning objectives, the review assessed the effectiveness and efficiency of the CGIAR MR system.
Best regards,
Ibtissem Jouini
Nigeria
Esosa Tiven Orhue
Founder/CEO
E-Warehouse Consulting
Posted on 30/08/2025
Dear members,
Emphatically, the effective utilization of feedback and recommendations from evaluation reports is determined by the level of interest and engagement of the stakeholders involved.
It is the report that determines the input to the impact and outcome of implementation (i.e. whether it is evidence-based and implementable), and it must pass through the following systems or have these mechanisms in place:
1. Information preservation system
2. Continuous implementation system
3. Continuous funding system
These help to sustain the national or international institutionalization of evaluation systems, which manage data systems for programmes, projects and policies on behalf of national or international decision-makers.
However, if harnessed, this becomes an integrated system for economic growth and development: data will be fully utilized, and policy implementation will become more effective and efficient in society.
The cardinal points are the interest, engagement and involvement of stakeholders. Feedback strengthens recommendations; when decisions on it are consistently effective and actionable for the people, it leads to economic growth and development and, invariably, strengthens evaluation systems nationally and globally.
Evaluation is largely a continuous research effort. It should be likened to research work because of the instruments employed and deployed for investigation, which yield feedback, findings and recommendations.
The call for effective feedback and recommendations also includes the following:
- Innovative technological systems in the workplace, which will strengthen the evaluation system for feedback and recommendations and support information management.
- Continuous support and protection of the workforce for an efficient and effective information management system.
- Strong synergy amongst stakeholders (researchers, funders, government, NGOs, etc., with agreement on inputs, impacts and outcomes), which should be strictly adhered to. This will strengthen and guarantee further investigation and the implementation of evaluation reports.
Evaluating a programme, project or policy is meant to ensure progress, impact and outcomes in society. This can only be achieved through consistent decisions and actions taken on feedback and recommendation reports.
Therefore, consistency should be emphasized and treated as paramount in the evaluation reporting system.
Esosa Orhue.
Germany
Ines Freier
Senior consultant for NRM and biodiversity, Green economy
consultant
Posted on 29/08/2025
The organisational culture and strategic management decisions influence the uptake of recommendations.
I mainly recommend 4-8 actions that the client can implement in a given timeframe, and I ask staff what kind of recommendations they want to hear, would give themselves, or would give to other projects on the same topic.
Some international organisations or project teams see evaluation reports as a tool for accountability rather than for learning, so giving recommendations for learning would be a waste of time; recommendations for data gathering and improving the monitoring system can still be given.
Further, recommendations which fall outside the theory of change, or outside the client's and evaluation users' understanding of the evaluation object, cannot be given even if they are useful. One example: if a project manager of an international organisation acts like the manager of an INGO despite holding a UN passport, it is hard to communicate to clients what the role of the UN in a field of action can or should be beyond implementing a one-country project. Or environmental INGOs reinvent the wheel in promoting small green businesses, unable to draw on the body of learning from organisations for SME promotion because it lies outside their social network.
Mexico
Rasec Niembro
Evaluation Analyst
GEF IEO
Posted on 28/08/2025
From my experience as an evaluation analyst at the GEF Independent Evaluation Office working with the LDCF and SCCF, I have seen that the real challenge is not producing recommendations, but ensuring they are used. In our evaluation of GEF support to Climate Information and Early Warning Systems (CIEWS), we emphasized that sustainability requires more than technical outputs—it depends on embedding financial and institutional strategies that continue after projects close. For recommendations to be useful, they must strike the right balance: too prescriptive and they risk irrelevance; too vague and they are ignored. Incentives matter, too—whether through accountability mechanisms or integration into decision cycles, organizations need to create conditions that make feedback actionable. While “champions” can accelerate uptake, relying on individuals is risky in institutions with high turnover. Instead, systems approaches—such as requiring sustainability strategies across all projects or institutionalizing follow-up mechanisms—provide continuity and resilience. Ultimately, feedback gains influence when it is institutionalized, incentivized, and framed in ways that decision-makers can adapt to their realities.
Guatemala
PATRICK DUMAZERT
Lead
ANUBIS
Posted on 28/08/2025
REFLECTION ON SELF-EVALUATION AND ADMINISTRATIVE COMMUNICATION
Patrick Dumazert, August 2025
In the document “The impact of new technologies on internal communication methods in companies” one sentence caught my attention: “Traditional methods such as the hierarchical network guaranteed the transmission of directives from management to the lower levels…”, and I would like to take this opportunity to raise a key issue for results-based public management.
At the outset, let us note that this statement, centered on companies, can easily be extended to the issue of public management. I do not share the view that this is the case for all subjects, far from it, but yes, in this case, the general principles of administrative communication are valid for both worlds, public and private.
That said, this dominant form of administrative communication, vertical and top-down, which prevails in public administration, especially in countries with an authoritarian political culture, must be complemented by an upward form so that evaluation can play the role expected of it. This role is not only one of ex post control but also one of learning, feedback, and the generation of better practices.
It is not enough to generate evidence at the level where facts occur. Already it is no small matter to select facts and convert them into compiled and organized data, and even more so to “make them speak,” as I like to say—that is, to transform them into information that responds to judgment criteria (and indicators) and ultimately to evaluation questions, themselves organized according to evaluation criteria (the classic DAC criteria, among others). But that alone is not sufficient.
It is then necessary to communicate this information, analyzed and reasoned, upwards to the various levels of management and decision-making. And since there cannot be as much space at the top as at the bottom (otherwise it would no longer be a pyramid; no one has ever seen an army with as many generals as soldiers), this means that upward administrative communication must be organized, shaped to meet the expectations of different levels of management, and adapted to the institutional culture particular to each entity or even each state.
Some cultures are highly centralist, where the highest-level authorities want to know every Monday what happened in each municipality the previous week, while other forms of government are much more decentralized.
Beyond this undoubtedly essential aspect of the specific institutional cultures of each context, there is also a crucial dimension concerning the nature of evaluation objects. These, as we know, are organized (or at least can be organized) within the conceptual framework widely accepted by the evaluation community: the theory of change. With all the variations one might wish, this theory posits that actions produce outputs, which generate outcomes, which in turn lead to impacts.
As a reminder, though this is not the main subject here: outputs are substantivized verbs (the act of cutting apples produces cut apples), while outcomes involve changes in practices by actors (consumers may prefer to buy pre-cut apples—or not, which is where one judges the relevance of the action). As for impacts, for which I use the accepted term “contribution,” these are transformational changes. In my trivial example, one might imagine that, over time, children end up believing that apples already grow on trees in pieces.
This reminder makes it clear that evaluation objects, in particular the result indicators at the three major levels of the logical chain, evolve over time, each according to its own temporality. Therefore, the evaluation system must also have a dimension adequate to this temporality. For a long time, external evaluations of results were conducted at the end of projects, not only to provide accountability at the close of operations but also to “allow time” for impacts to materialize.
The problem is that these external, ex post evaluations, which primarily sought impacts through “quantitative” means using quasi-experimental methods, ceased to be useful for management, since the operations were already concluded by the time reports arrived. Their fate was therefore (and probably still often is) to become archived documents, with virtual repositories having replaced shelves without changing their function. Those of us who dream of doing science, including myself, would even like to conduct post-project evaluations a few years later, but no one pays for that, at least not in the development world.
Hence my proposal (formulated more than twenty years ago), which mainly consists of this: to evaluate during implementation the generation of outcomes, thereby reconciling the temporal and logical dimensions, by focusing on the essential link at the center of the theory of change, the outcomes, while also ensuring that the reality transformed by the action is interrogated, beyond the simple monitoring of “outputs.”
Based on these analyses carried out at different moments of implementation, evaluation can feed upward administrative communication mechanisms with generative analyses, that is, analyses informed by those conducted at the previous evaluation moment. It is not enough to state at any given moment that a particular indicator has reached a certain value; one must also explain how this current value compares with that of the previous evaluation moment, accompanied by the interpretation it deserves, without neglecting the usual comparisons with baseline values and with those expected at the end.
Postscript. The proposal also included an additional element: one of the upward communication mechanisms in question could, in certain cases, be the “situation room,” an idea inspired by health management and crisis management in general. However, it is clear that with virtual technologies and even augmented reality, this becomes by nature obsolete, since being “on site” or staying at home in front of a screen ultimately amounts to the same.
Guatemala
Angela M. Contreras
Founder, Director, and Principal Consultant.
Verapax Solutions Inc.
Posted on 27/08/2025
Top of mind for me is the complex issue of leadership and organizational culture.
As Monica rightly concludes, leadership is critical. But in my experience as an external consultant helping small civil society organizations, junior staff members are in many instances the ones with the most refreshing perspectives on issues and solutions. I agree with the conclusion that 'senior leaders must model learning, reward follow-through, and create no-blame spaces for reflection so findings, positive and negative, drive improvement', but I also believe we need to design participatory, collaborative and empowerment-based approaches to monitoring and evaluation in which time is periodically scheduled for team members (junior and senior) to engage in retrospective reflective reviews and discussion of case studies and lessons to repeat or avoid.
In my small evaluation practice, I have been finding increasing receptivity from small organizations to select past evaluation reports and study the documents through the lens of the organization's statement of shared core values and principles. The exercise is part of our capacity development activities. At designated project team meetings or organization-level meetings, 15-30 minutes are allocated to discuss the assigned evaluation report and to co-generate a list of lessons to formally integrate into management and evaluation systems, or failures to learn from and keep working on; all this sense-making is done seeking alignment with the organization's shared core values and principles.
I would be interested to hear whether anyone else has been using a principles-based approach to developing an organizational culture that values learning and the action-oriented exchange of feedback.
Kenya
Eddah Kanini (Board member: AfrEA, AGDEN & MEPAK)
Monitoring, Evaluation and Gender Consultant/Trainer
Posted on 26/08/2025
Thank you for initiating this important topic on underutilised feedback in development decision-making.
From my experience, some of the barriers include an organisational culture of resistance, where feedback is seen as criticism rather than a learning tool. Working in silos is another barrier, where feedback remains within one entity or one project instead of feeding into broader institutional decisions.
Leadership and culture play a critical role in influencing feedback responsiveness. Good leadership sets the tone for valuing evidence, allowing open dialogue and reflection, and encouraging adaptive management.
Some practical steps for embedding feedback use include institutionalising after-action reviews and learning forums. Other steps involve integrating feedback into planning cycles and building staff capacity not only in data collection but also in interpretation and application.
Brazil
Daniela Maciel Pinto
Manager
Embrapa
Posted on 26/08/2025
Dear Monica and all,
Thank you for this topic. This debate on “feedback-to-action” is very relevant. In my PhD research, I have been exploring precisely this gap between evaluations and the effective use of their results, especially in the context of agricultural research. In a recent paper, we showed that the barriers are not only technical, but also structural, cultural, and relational — something that strongly resonates with what has been raised here.
In the same study, we discussed how the use of research impact evaluation results remains limited and often associated more with accountability than with learning or strategic management (Pinto & Bin, 2025). Building on this, we developed AgroRadarEval (Pinto et al., 2024) — an interactive tool that seeks to systematize this use, aligning evaluation with the principles of Responsible Research and Innovation (RRI) and Responsible Research Assessment (RRA).
To complement the discussion: I see the notion of a “culture of impact” as an important bridge. It shifts the perspective from “evaluation as compliance” to “evaluation as a driver of transformation,” connecting the institutionalization of processes with real changes in planning, collaboration, and communication. This concept can broaden the perspective on feedback-to-action by articulating not only processes and systems, but also values and organizational reflexivity.
Georgia
Dea Tsartsidze
International Research, Monitoring, Evaluation and Learning Practitioner
SAI - Solution Alternatives International
Posted on 25/08/2025
Dear Monica and everyone,
Thank you for initiating such a valuable discussion, and I appreciate the rich insights shared by everyone. The perspectives offered truly echo many of the challenges we face in our field.
I find myself agreeing with most of the points raised, particularly Aurelie's emphasis on syncing evidence generation with stakeholder needs and Brilliant's focus on formalizing feedback processes. However, I'd like to add another layer to this conversation: the critical importance of establishing clear organizational processes that position evaluation not as a standalone exercise, but as an integral component interconnected with program implementation and broader M&E systems.
In my experience, one of the fundamental barriers we haven't fully addressed is the question of ownership. Too often, it feels like evaluations are "owned" exclusively by evaluators or M&E staff, while project teams remain somewhat detached from the process. This disconnect may significantly impact how recommendations are received and, ultimately, implemented.
This raises my first question: how do we better involve project staff in the finalization of recommendations? I've found it particularly valuable to organize joint sessions where evaluators, M&E staff, and project teams come together not just to discuss findings, but to collaboratively shape the final recommendations. When project staff are actively involved in interpreting findings and crafting recommendations that reflect their understanding of operational realities, ownership naturally increases. Have others experimented with this approach?
Beyond the ownership challenge, I think there's another fundamental issue we need to confront more directly: the quality of evidence we produce as evaluators. Are we consistently striving to generate robust, credible data? When we bring evidence to decision-makers, are we confident it truly meets quality standards, or might we sometimes, unintentionally, be adding another barrier to utilization—especially for already skeptical decision-makers who may use questionable evidence quality as grounds to dismiss findings entirely?
This leads to my second question, which has two parts: First, do we have adequate quality assurance instruments and internal processes to genuinely track and ensure the quality of evaluations we conduct? I've found quality frameworks useful, though admittedly, there's still considerable room for improvement in making these more objective and systematic.
Second, even when our evidence is solid, how effectively are we translating and communicating our findings to decision-makers? Are our reports and presentations truly accessible and relevant to how decisions are actually made within organizations, or are we producing technically sound but practically unusable outputs?
These questions aren't meant to shift responsibility away from the systemic barriers we've identified, but rather to encourage a more reflective approach to our own role in the utilization challenge.
Looking forward to your perspectives on this.
Canada
Astrid Brousselle
Professor
School of Public Administration
Posted on 25/08/2025
Thank you all for your insights and to Monica for summarizing the ideas! As evaluators, we often consider it our responsibility to do better and to improve our practices. However, if the context is not receptive to collaboration, even best practices will have no impact. Contexts are not equal: some are favourable to collaborative approaches and will be eager to use evaluation results; others are political, even polarized, and will likely use evaluations only when strategically relevant; and some are simply not interested in evaluation results. In these last contexts, whatever efforts you make to implement knowledge-transfer best practices, nothing will happen. We called these contexts "knowledge swamps" in the article we published on evaluation use some years ago: Contandriopoulos, D., & Brousselle, A. (2012). Evaluation models and evaluation use. Evaluation, 18(1), 61-77.
Understanding the characteristics of the context in order to anticipate the implications for evaluation use is, in my view, probably the most important thing to do. It will also help relieve the burden placed on evaluators' shoulders, since they often bear most of the responsibility for the use (or non-use) of their results.
India
Kavitha Moorthy
Founder cum Managing Trustee
Vallabhbhai Patel Farmers Welfare Trust
Posted on 24/08/2025
I agree with Dr Ravindran Chandran that stakeholders are most important in the evaluation process and in the submission of reports for policy decision-making, and he rightly pointed out that farmers' next generations may be appointed as evaluators to speed up the feedback received from stakeholders. Evaluation reports should be circulated and updated among the farmers concerned / Directors of Farmer Producer Organizations so that the status of the feedback given by stakeholders is known. Our trust, Vallabhbhai Patel Farmers Welfare Trust, works on the education and employment of farming generations. I am happy to participate in this important discussion, which is essential and needs-based for the effective utilization of reports prepared by administrators, and I hope this forum's reports are effectively included at policy level as early as possible.
India
Ravindran Chandran
Associate Professor and Head
Tamil Nadu Agricultural University
Posted on 24/08/2025
Stakeholder feedback is regularly collected at the monthly grievance meetings conducted by District Collectors in Tamil Nadu, India, and the actions taken and recommendations are updated at the next meeting based on priority. Effective utilization of feedback and recommendations from evaluation reports prepared by scientists or through surveys depends entirely on the administrative officers and ministers of the government of the day and the system prevailing in the region. Evaluation reports should be periodically re-evaluated until they are adopted or benefit the stakeholders, for which manpower and financial support are essential. Stakeholders or their next generations (sons and daughters) from Farmer Producer Organizations should be included as evaluators in every evaluation process to reduce manpower needs, speed up the process, and support the effective utilization of feedback and recommendations.
Indonesia
Monica Azzahra
MELIA Specialist
Center for International Forestry Research (CIFOR)
Posted on 24/08/2025
Dear contributors,
I extend my gratitude to each of you for sharing your valuable experiences and knowledge, which have brought significant learning.
Below you will find the initial conclusions drawn from our discussion. If there are additional points that have not yet been taken into account, I hope further contributions can be shared before our discussion closes at the end of this month.
Across diverse contexts and organizations, contributors identify a persistent gap between collecting feedback and putting it into practice. This gap lies less in technical aspects than in structural, cultural, and relational weaknesses. Frequently mentioned obstacles include poor data quality and fragmented information systems, an evaluation culture that treats findings as a compliance requirement or a judgement rather than as learning, evaluations conducted at the end of projects with no link to decisions, inaccessible or impractical report formats, unclear incentives and responsibilities, weak governance or the absence of management response systems, limited dissemination and a lack of cross-learning, as well as resource constraints. These problems weaken trust with stakeholders and reduce the perceived value of feedback.
To overcome these obstacles, practitioners recommend institutionalizing clear, formal processes for turning feedback into action, defining responsibilities, timelines, resources, and follow-up from the outset. Practical measures include aligning evaluations with planning and budgeting cycles, using management response systems and dashboards to track the status of recommendations, and converting recommendations into concise, actionable plans with identified owners. Leadership is decisive: leaders must model learning, reward implementation, and create no-blame spaces for reflection so that findings, positive and negative alike, feed improvement. Capacity building for managers and staff is needed so that they can interpret evidence and translate it into feasible actions.
The effective use of feedback also depends on communication and inclusion. This means tailoring products to different audiences (one-page briefs, infographics, action tables), involving decision-makers and communities from the start, organizing reflection and co-creation workshops with stakeholders, and closing the loop by reporting transparently on the changes made. Technological and participatory tools, such as real-time dashboards, SMS surveys, and community scorecards, shorten feedback loops, while centralized repositories and cross-learning forums spread lessons across teams and countries. Several organizations, such as CRS and WFP, illustrate that embedding evaluation in routine planning, monitoring implementation, and aligning recommendations with resources and governance strengthen ownership and use.
In short, turning feedback into practice requires combining formal systems and tools with leadership, adequate resources, accessible communication, participatory processes, and a learning culture that links evidence to concrete, monitored actions.
Kind regards,
Monica
Nigeria
Victoria Onyelu Ola
Research and Grant Intern
HTSF Global Nigeria Limited
Posted on 23/08/2025
The gap between feedback collection and feedback utilization in development organizations is often not just technical but also structural and relational. The following points can help address it:
Organizations should adopt clear feedback-to-action protocols where every evaluation recommendation is formally logged, assigned to a responsible unit, tracked, and reviewed periodically. This makes feedback actionable, not optional.
Senior leaders must demonstrate that feedback matters by referencing evaluation insights in strategic decisions, rewarding teams that act on feedback, and holding managers accountable for implementation. Leadership modeling creates a culture where feedback is not symbolic but practical.
Beyond collecting data, organizations must budget for analysis, dissemination, and learning workshops that bring staff, stakeholders, and community representatives together to co-interpret findings. This builds ownership and enhances trust.
Digital dashboards, SMS surveys, and community scorecards can create real-time feedback loops, especially in low-resource contexts. Importantly, participatory approaches give communities a voice in shaping interventions, ensuring alignment with local realities.
Transparent communication back to stakeholders is essential. Without closing the feedback loop, communities may become disillusioned, perceiving evaluations as extractive rather than transformative.
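To make the "feedback-to-action protocol" described above concrete, here is a minimal sketch, assuming a simple record per recommendation with an owner, due date, status, and review trail (the field names, statuses, and example entries are hypothetical, not any specific organization's system):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class Recommendation:
    """One evaluation recommendation, formally logged and assigned for follow-up."""
    text: str
    responsible_unit: str
    due: date
    status: str = "open"  # open -> in_progress -> implemented / dropped
    review_notes: List[Tuple[date, str]] = field(default_factory=list)

    def review(self, note: str, new_status: Optional[str] = None) -> None:
        """Record a periodic review and, optionally, update the status."""
        self.review_notes.append((date.today(), note))
        if new_status:
            self.status = new_status

def overdue(log: List[Recommendation]) -> List[Recommendation]:
    """Recommendations past their due date that are not yet implemented or dropped."""
    today = date.today()
    return [r for r in log if r.status not in ("implemented", "dropped") and r.due < today]

# Hypothetical usage: log a recommendation, record a quarterly review, list what is overdue.
log = [Recommendation("Report changes made back to communities", "MEAL unit", date(2025, 12, 31))]
log[0].review("Dissemination workshop scheduled", new_status="in_progress")
print([r.text for r in overdue(log)])
```

Whatever the tooling, the discipline is the same: every recommendation carries an owner, a deadline, a status, and a review trail that periodic reviews can act on.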
India
Ravindran Chandran
Associate Professor and Head
Tamil Nadu Agricultural University
Posted on 22/08/2025
Feedback and recommendations from research scholars, scientists, entrepreneurs and stakeholders are very important in any scheme, conference or workshop after its completion, so that recommendations can be given to policymakers. In recent years, the collection of feedback has been lacking in course curricula as well as from participants of conferences, symposia, workshops and training programmes. Most of the time, the recommendations given by session chairs of a conference or symposium are not compiled and submitted to policymakers, which leads to a waste of resources. Some academic institutions do collect feedback and submit reports to the government, but these receive little consideration for adoption because of financial problems. To ensure effective utilization of feedback and recommendations, the organizer of the event, scientific society or council should have a follow-up programme or set up a committee to follow up until the evaluation reports are included at policy level or adopted at field level.
For example, the use of plastics for ornamental purposes, which works against flower growers, the floriculture industry and nature, has been presented at international events, but no recommendation was made and the issue has yet to reach policy level. Hence, the feedback and recommendations of any evaluation report should be taken up at policy level as early as possible.
Italy
Aurelie Larmoyer
Senior Evaluation officer
WFP
Posted on 22/08/2025
Dear Monica and colleagues,
Thank you for a very interesting discussion and insights.
Increasing the influence of evidence on decision-making is a long-standing challenge for all of us working on Monitoring or Evaluation functions, which connects to a series of factors at all levels of our Organizations, even when the importance of evidence is well recognized.
In the World Food Programme's evaluation function, we have put in place dedicated strategies to encourage our stakeholders to use evidence as they make decisions. Looking back on a few years of experience, I can share some reflections on what we are starting to see as effective.
I certainly subscribe to many of the points made by colleagues on impeding factors, such as issues with the quality of data, the fragmentation of information systems, an organizational culture that still sees monitoring or evaluation as compliance exercises rather than learning opportunities, or the centrality of senior leadership's championing role.
Regarding options to better embed evidence into decisions, I also concur with the views shared by our colleague Brilliant Nkomo that formalizing processes is an essential first step. While many organizations do have clear M&E frameworks and develop associated guidance and capacities, it is also important to institutionalize the feeding of evidence-based knowledge into any policy or programmatic discussion, by integrating it formally into strategic planning or regular programme stock-take and review processes.
Related to this, our experience has shown that efforts to sync our evidence generation with the needs of our stakeholders do pay off: as generators of evidence and knowledge, we have been more proactive in engaging with those whose decisions we think we can support, to hear what our colleagues are grappling with and understand where they have gaps. In parallel, we developed a capacity to respond to needs as they arise by repurposing existing evidence into new products. Since we started developing this capacity, we have seen a culture shift slowly taking hold, whereby evaluation comes to be considered a knowledge partner.
Lastly, the returns we have had from our stakeholders, as we surveyed them on how they value the evidence we provide, have kept confirming that tailored communication pays off: different roles call for different focuses and forms of evidence. So, the more we adapt to the variety of needs of those we serve, the better our products are received and the more likely they are to be used.
Best to all,
Aurelie
Zimbabwe
Brilliant Nkomo
Senior Evaluation Advisor
Posted on 22/08/2025
Most common barriers to feedback use in development organizations: Most organizations grapple with a lack of confidence in the quality of their data, which undermines its credibility and utility for decision-making. Another significant issue is the fragmentation of information systems, leading to data silos that prevent a holistic view of a program's performance. Sometimes, a pervasive cultural barrier exists in organizations, donors included, as evaluation is viewed as a compliance exercise rather than a genuine opportunity for learning and adaptation.
Organizational culture and leadership influence: Leadership is arguably the most critical factor. When senior leaders visibly champion the use of evidence and model a commitment to continuous learning, it sends a clear signal throughout the organization. This top-down reinforcement helps to shift the organizational culture from one that is risk-averse to one that embraces feedback as a tool for iterative improvement. Leaders must also empower their teams with the necessary skills and resources, such as data literacy training, to cultivate an environment where evidence-based decision-making is the norm.
Practical steps to embed feedback use into their decision-making cycles: To embed feedback, organizations must first formalize the process. This involves establishing clear, well-documented monitoring and evaluation frameworks that provide a strategic roadmap from data collection to strategic review. It is also essential to invest in sustained capacity building, moving beyond one-off training to long-term mentoring that builds deep, practical skills in data analysis and adaptive management. By integrating feedback loops into existing strategic planning processes, organizations can ensure that learning becomes a continuous, core activity rather than an ad hoc afterthought. Institutionalizing the monitoring of data use and evaluation-results use could be another critical layer added to the M&E framework.
Effective tools, incentives, and systems in bridging the gap between feedback and action: Effective tools and systems are those that shorten the feedback loop and make data available, affordable, accessible and usable. Real-time data collection tools, for instance, are invaluable for ensuring feedback is timely and relevant for immediate programmatic adjustments. Similarly, data visualization tools transform complex information into clear, actionable insights that can be understood by non-technical audiences. The most potent incentive is a demonstrated link between feedback and tangible improvements. When staff and stakeholders see that their input directly leads to more effective and impactful programmes, the intrinsic motivation to engage with the process is significantly strengthened.
Maintaining and Strengthening stakeholder trust and engagement through feedback use: Stakeholder trust is built through transparent, two-way communication. It is not enough to simply collect data from communities and partners; organizations must share what they have learned and demonstrate how stakeholder input has led to tangible changes in programming. This process validates their perspectives and strengthens their sense of ownership.
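As an illustration of the point above about tools that shorten the feedback loop and make data usable by non-technical audiences, here is a minimal sketch of rolling raw community scorecard or SMS survey responses up into a plain-language summary for a programme review (the sites, service areas, ratings, and threshold are hypothetical):

```python
from collections import Counter

def summarize(responses, threshold=3.0):
    """Average ratings per service area and flag areas below the agreed threshold."""
    totals, counts = Counter(), Counter()
    for r in responses:
        totals[r["area"]] += r["rating"]
        counts[r["area"]] += 1
    for area in sorted(totals):
        avg = totals[area] / counts[area]
        flag = "ACTION NEEDED" if avg < threshold else "ok"
        print(f"{area}: average {avg:.1f} from {counts[area]} responses - {flag}")

# Hypothetical responses collected via SMS survey or community scorecards.
responses = [
    {"site": "Site A", "area": "waiting time", "rating": 2},
    {"site": "Site A", "area": "staff attitude", "rating": 4},
    {"site": "Site B", "area": "waiting time", "rating": 1},
]
summarize(responses)
```

Flagging areas that fall below an agreed threshold is one simple way to tie real-time feedback directly to the next programmatic adjustment.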
Senegal
Ndiacé DANGOURA
MEAL & Knowledge Management - Regional Advisor
Catholic Relief Services (CRS) - West and Central Africa
Posted on 21/08/2025
The challenge of translating evaluation findings into actionable organizational change remains one of our sector's most persistent issues. From my experience in program management and MEAL systems in West Africa, I believe we must fundamentally shift how we conceptualize evaluation within the project lifecycle.
One of the primary barriers to feedback utilization lies in our prevailing evaluation culture that treats evaluations as compliance deliverables rather than strategic learning tools. Once an evaluation report receives internal or donor approval, organizations often consider the process complete, missing critical opportunities for adaptive management and program quality improvement. This "tick-box mentality" creates a disconnect between evaluation investment and organizational learning outcomes.
Systematic Solution:
At Catholic Relief Services (CRS), we have institutionalized a systematic approach that demonstrates how feedback can be effectively integrated into decision-making cycles:
Key processes:
This approach has been successfully implemented across multiple contexts, with practical examples of use in CRS Mali, Burkina Faso, Guinea-Bissau, Benin, and Togo, particularly as part of the McGovern-Dole Food for Education program and other interventions.
Ndiacé DANGOURA - MEAL & KM Regional Advisor -CRS West Africa
United Kingdom
Lea Corsetti
MERL Consultant
Posted on 20/08/2025
As a YEE with experience spanning program implementation and external evaluation, I've observed how the current system can inadvertently create a "game" dynamic between stakeholders and donors, where feedback becomes a procedural requirement rather than a catalyst for meaningful change.
Common Barriers to Feedback Use (Q1) Beyond implementer resistance, I don't think we have fully solved the challenge of context-insensitive recommendations. External evaluators can have limited understanding of on-ground dynamics that program staff navigate daily. Additionally, organizational capacity varies significantly as some are structurally equipped to integrate change while others struggle with even small modifications due to complex dynamics which can be both internal and external.
Culture and Leadership Influence (Q2) I increasingly view evaluators as facilitators for change rather than purely technical professionals. Organizations can still default to "we've always done it this way" thinking. A family friend who runs a business recently shared that she hesitates to hire graduates because "when they join they immediately want to change everything without realizing that even a small change requires months and endless meetings with staff." I don't think this experience is exclusive. I have spoken to organizations that feel similarly about evaluation recommendations—they'll complete the exercise as a donor requirement, but changing aspects of projects or programs feels overwhelming, especially if they are still ongoing. I also wonder if it is enough for leadership to create space to accompany change and thoughtfully consider what and how recommendations are implemented or if there is a more creative approach beyond leadership which might be more successful in implementing change at all levels.
Practical Steps for Embedding Feedback (Q3) My most effective program experience was with a fantastic manager and head of country operations who integrated learning into our weekly cycles. We didn't just check project progression—there were constant feedback loops for improvement and creative problem-solving around delicate stakeholder management for a long-term project. Beyond taking recommendations seriously, I wonder if we as evaluation professionals also need better presentation formats. One evaluation office I have worked for provides recommendations in table format so that organisational decision-makers can cross-check implementation. This is less the case when I have worked as an independent evaluator, where we may have run a sense-making workshop and integrated feedback, but once the report is handed in it is out of our hands. I wonder whether, as evaluators, we need to figure out effective communication beyond reports. A Korean policy evaluator I met at a conference a few years ago found that presenting summaries of at most one page to decision-makers was most effective in her experience—anything longer wasn't useful.
Effective Tools and Systems (Q4) Building low-barrier feedback systems that attend to entire program cycles and all stakeholders proves most effective. Context-appropriate feedback generation is already a significant step, as is implementing adaptive management with monitoring officers equipped with evaluation knowledge (real time learning approach).
Maintaining Stakeholder Trust (Q5) True co-creation is difficult but achievable. As many colleagues can probably attest, we have probably all observed an intervention claiming to be participatory without real meaningful stakeholder involvement. I remember one research project where I realized very quickly that I was told what the beneficiaries thought I wanted to hear due to my positionality as an outsider and stakeholder fears about funding loss if things were deemed to not be working. Building trust requires overcoming these dynamics while fostering cultures of responsiveness to stakeholder needs across different project scales, as the needs of a local community intervention will differ from a large multi country program.
Burkina Faso
Ismaël Naquielwiendé KIENDREBEOGO
Ouagadougou
UNALFA
Posted on 13/08/2025
In evaluation, the effective integration of feedback and recommendations from an evaluation into decision-making relies on three essential levers: removing barriers, structuring utilization mechanisms, and embedding a culture of learning.
To overcome common obstacles to the use of evaluation feedback, it is essential to ensure that recommendations are clear, relevant, and directly actionable, formulated in a specific, measurable, and action-oriented way. Decision-makers’ ownership should be strengthened by involving them from the earliest stages of the evaluation’s design and implementation. Finally, to avoid decision-making being slowed by delays or overly lengthy reports, it is preferable to produce concise, user-friendly formats that facilitate the rapid and effective use of results.
To promote the effective use of evaluation results, it is crucial to establish structured systems. This includes creating a post-evaluation action plan that clearly defines responsibilities, deadlines, and monitoring arrangements. Findings should be integrated into strategic reviews and budget planning cycles to directly inform decisions and resource allocation. The use of interactive channels, such as debriefing workshops or collaborative platforms, also enables discussion, contextualization, and adjustment of recommendations to ensure their relevance and implementation.
Leadership and organizational culture play a central role in the use of evaluation feedback. Leaders must set an example by explicitly integrating evaluation findings into their decisions, thereby demonstrating the importance given to this evidence. It is also about promoting a culture of learning in which mistakes are not stigmatized but regarded as opportunities for continuous improvement. Finally, the use of feedback should be closely linked to accountability, by integrating relevant monitoring indicators into performance reports in order to measure tangible progress.
In summary, the effectiveness of feedback use depends not only on the quality of evaluations but above all on how they are owned, integrated, and followed up within decision-making mechanisms and the organization’s culture itself.
Indonesia
Monica Azzahra
MELIA Specialist
Center for International Forestry Research (CIFOR)
Posted on 12/08/2025
Thank you all for the valuable insights shared!!
I would greatly appreciate any tools, materials, or references on strategies to overcome barriers, particularly those related to leadership, built-in feedback mechanisms, and fostering a culture of trust, that can be effectively implemented within an organization, specifically when supported by evidence from evaluation reports.
Could anyone share a link or a document related to this?
Benin
Emile Nounagnon HOUNGBO
Agricultural Economist, Associate Professor, Director of the School of Agribusiness and Agricultural Policy
National University of Agriculture
Posted on 11/08/2025
1. The main barrier to the use of feedback in development organizations is negligence; that is, the lack of systematic programming of validated recommendations resulting from the monitoring and evaluation activities of projects and programmes. The results of monitoring and evaluation should, by default, constitute another set of activities to complement the ordinary planned activities. Each recommendation should become a planned activity at the operational and organizational level, on the same footing as other regular activities.
2. Generally, leaders are reluctant to take recommendations into account because they represent new flagship activities that are binding for them. Given that the stakes differ for the various actors in projects and programmes, it is ultimately the monitoring and evaluation team that is concerned with these recommendations. Partners and stakeholders are often complicit, while beneficiaries comply without much understanding. This can lead to conflicts if the monitoring and evaluation team insists, which often results in the abandonment of proper follow-up of recommendations.
3. The only cases where the implementation of recommendations has worked well, to my knowledge, are those in which the financial partner has been very demanding on this aspect of recommendations, with coercive provisions for all project/programme actors. This was the case in the implementation of the Millennium Challenge Account (MCA) programmes in Africa, which were highly successful.
4. The instrument that enabled this success was the establishment of favourable conditions to be met for any activity, combined with the systematic involvement of the judiciary (bailiffs) for the validation of recommendations and monitoring of their implementation.
5. The trust and involvement of stakeholders can only be maintained and strengthened through the use of feedback if such a legal arrangement is in place, supported by the financial partner(s) of the programme/project.
Turkey
Esra AKINCI
Programme/Project Evaluator; Management Consultant; Finance
United Nations and European Commission
Posted on 11/08/2025
"Closing the Feedback Loop – From Insight to Action"
We all know feedback and evaluation findings are meant to guide better decisions. Yet in practice, too many valuable insights never make it past the report stage.
From my experience, turning feedback into action depends on:
1. Leadership that models learning – when leaders actively act on recommendations, teams follow.
2. Built-in feedback pathways – integrating insights directly into planning, budgeting, and review cycles.
3. A culture of trust – where feedback is welcomed as a tool for growth, not as a threat.
The result? More relevant programmes, stronger community trust, and teams that feel ownership over improvement.
If we can make feedback use a habit rather than an afterthought, we don’t just improve projects – we strengthen accountability and credibility across the board.
How have you embedded evaluation findings into your decision-making cycles? Practical examples are welcome.
Ms. Esra AKINCI- European Commission and United Nations Program/Project Management Consultant and Evaluator
Ethiopia
Hailu Negu Bedhane
Cementing Engineer
Ethiopian Electric Power
Posted on 11/08/2025
How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making.
Give recommendations top priority. Sort them according to their potential impact, viability, and urgency.
4. Encourage Ownership by Stakeholders
Promote feedback loops. Permit managers to debate and modify suggestions to make them more realistic without sacrificing their core ideas.
5. Monitor and Report on Implementation Progress
6. Establish a Culture of Learning
Practical Example
If a manufacturing plant's quality audit suggests improved scheduling for equipment maintenance:
Ghana
Edwin Supreme Asare
Posted on 09/08/2025
My response to: How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making:
I share observations from my own evaluation work, highlight common barriers I have encountered, and propose practical actions that can help organisations make better use of evaluation findings in their decision-making processes.
Introduction
Evaluations are intended to do more than assess performance: they are designed to inform better decisions, strengthen programmes, and improve accountability. In practice, however, the journey from feedback to action does not always happen as intended. Findings may be presented, reports submitted, and then the process slows or ends before their potential is fully realised. This does not necessarily mean that the findings are irrelevant. Often, the reasons lie in a mix of cultural, structural, and practical factors: how evaluations are perceived, when they are conducted, how findings are communicated, the incentives in place, the systems for follow-up, and whether leadership feels equipped to champion their use.
From my work as an evaluator, I have noticed recurring patterns across different contexts: the tendency to focus on positive findings and sideline challenges, the loss of momentum when project teams disperse, reports that are seldom revisited after presentation, and leadership teams that value learning but are unsure how to embed it into everyday decision-making. I explore these patterns, illustrate them with grounded examples, and offer possible actions for making evaluation a more consistent part of organisational decision cycles.
Barrier 1: Perception as Judgment
One of the most persistent barriers to the effective use of evaluation results lies in how findings are perceived. Too often, they are seen as a verdict on individual or organisational performance rather than as a balanced body of evidence for learning and improvement. This framing can influence not only the tone of discussions but also which parts of the evidence receive attention. When results are largely positive, they are sometimes treated as confirmation of success, prompting celebrations that, while important for morale, may overshadow the role of these findings as part of an evidence base. Positive results are still findings, and they should be interrogated with the same curiosity and rigour as less favourable results. For example, strong performance in certain areas can reveal underlying drivers of success that could be replicated elsewhere, just as much as weaker performance signals areas needing attention. However, when the focus remains solely on reinforcing a success narrative, particularly for external audiences, recommendations for further improvement may receive less follow-through.
On the other hand, when evaluations reveal significant challenges, conversations can become defensive. Stakeholders may invest more energy in contextualising the results, explaining constraints, or questioning certain data sources and measures. In settings where evaluations are closely tied to accountability, especially when reputations, funding, or career progression are perceived to be at stake, such responses are understandable. This is not necessarily resistance for its own sake, but a natural human and organisational reaction to perceived judgment. The challenge is that both of these patterns (celebrating positive results without deeper analysis, and responding defensively to difficult findings) can limit the opportunity to learn from the full spectrum of evidence. By prioritising how results reflect on performance over what they reveal about processes, systems, and external factors, organisations risk narrowing the space for honest reflection.
Barrier 2: Timing and the End-of-Project Trap
The timing of an evaluation can make all the difference in whether its findings are put to use or simply filed away. Too often, evaluations are completed right at the end of a project, just as staff contracts are ending, budgets are already spoken for, and most attention is focused on closing activities or preparing the next proposal. By the time the findings are ready, there is little room to act on them. I have been part of evaluations where valuable and innovative ideas were uncovered, but there was no next phase or active team to carry them forward. Without a plan for handing over or transitioning these ideas, the recommendations stayed in the report and went no further.
Staff changes make this problem worse. As projects wind down, team members who understand the history, context, and challenges often move on. Without a clear way to pass on this knowledge, new teams are left without the background they need to make sense of the recommendations. In some cases, they end up repeating the same mistakes or design gaps that earlier evaluations had already highlighted. The "end-of-project trap" is not just about timing; it is about how organisations manage the link between evaluation and action. If evaluations are timed to feed into ongoing work, with resources and systems in place to ensure knowledge is passed on, there is a far better chance that good ideas will be used rather than forgotten.
Barrier 3: Report Format, Accessibility, and Feasibility
The format and presentation of evaluation reports can sometimes make them less accessible to the people who need them most. Many reports are lengthy, technical, and written in a style that suits academic or sector specialists, but not necessarily busy managers or community partners, even when executive summaries are provided. Another challenge is that recommendations may not always take into account the resources available to the stakeholders who are expected to implement them. This means that while a recommendation may be sound in principle, it may not be practical. For example, suggesting that a small partner organisation establish a dedicated monitoring unit may be beyond reach if it has only a few staff members and no additional budget. It is also worth noting that findings are often introduced in a presentation before the full report is shared. These sessions are interactive and help bring the data to life. However, once the meeting is over, the detailed report can feel like a repetition of what has already been discussed. Without a clear reason to revisit it, some stakeholders may not explore the more nuanced explanations and qualifiers contained in the main text and annexes.
Barrier 4: Incentives and Accountability Gaps
Organisational systems and incentives play a significant role in whether evaluation findings are acted upon. In many cases, evaluators are rewarded for producing a thorough report on time, implementers are measured against the delivery of activities and outputs, and donors focus on compliance and risk management requirements. What is often missing is direct accountability for implementing recommendations. Without a designated department or manager responsible for follow-through, action can rely on the goodwill or personal drive of individual champions. When those individuals leave or when priorities change, momentum can quickly be lost and progress stalls.
Barrier 5: Limited Dissemination and Cross-Learning
The way evaluation findings are shared strongly influences their reach and use. In some organisations, reports are not published or stored in a central, easily accessible database. As a result, lessons often remain within a single team, project, or country office, with no structured mechanism to inform the design of future initiatives. I have seen cases where innovative approaches, clearly documented in one location, never reached colleagues tackling similar issues elsewhere. Without a deliberate system for sharing and discussing these findings, other teams may unknowingly duplicate work, invest resources in already-tested ideas, or miss the chance to adapt proven methods to their own contexts. This not only limits organisational learning but also slows the spread of good practice that could strengthen results across programmes.
Barrier 6: Weak Governance and No Management Response System
When there is no structured process for responding to recommendations, there is a real risk that they will be acknowledged but not acted upon. A Management Response System (MRS) provides a framework for assigning ownership, setting timelines, allocating resources, and tracking progress on agreed actions. I have seen situations where workshops produced strong consensus on the importance of certain follow-up steps. However, without a clear mechanism to record these commitments, assign responsibility, and revisit progress, they gradually lost visibility. Even well-supported recommendations can stall when they are not tied to a specific department or manager and monitored over time.
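To make the tracking element of an MRS concrete, here is a minimal, purely illustrative sketch in Python of the kind of record such a system could keep for each recommendation: its wording, an accountable owner, a deadline, a status, and progress notes, together with a simple check for overdue items. The field names and example entries are hypothetical, not drawn from any particular organisation's system; the same structure could just as easily live in a spreadsheet or a dedicated platform.

# Illustrative sketch only: a minimal tracker of the kind an MRS might use.
# All names, fields, and example entries below are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Recommendation:
    text: str                # the recommendation as worded in the evaluation report
    owner: str               # department or manager accountable for follow-through
    due: date                # agreed deadline in the management response
    status: str = "open"     # e.g. "open", "in progress", "completed", "rejected"
    notes: list[str] = field(default_factory=list)  # decisions, constraints, progress updates

def overdue(items: list[Recommendation], today: date) -> list[Recommendation]:
    """Return recommendations past their deadline that are not yet closed."""
    return [r for r in items if r.status not in ("completed", "rejected") and r.due < today]

# Example: two recommendations logged when the evaluation is finalised.
mrs = [
    Recommendation("Integrate findings into the next budget cycle", "Finance Unit", date(2025, 12, 1)),
    Recommendation("Hold a cross-country learning session", "Knowledge Management", date(2025, 10, 15), "in progress"),
]

for r in overdue(mrs, date(2026, 1, 10)):
    print(f"OVERDUE: {r.text} (owner: {r.owner}, due {r.due}, status: {r.status})")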
Barrier 7: Leadership Capacity to Use Evaluation Findings
Clear evaluation recommendations will only be useful if leaders have the capacity to apply them. In some cases, managers may lack the necessary skills to translate recommendations into practical measures that align with organisational priorities, plans, and budgets.
Where this capacity gap exists, recommendations may be formally acknowledged but remain unimplemented. The challenge lies not in the clarity of the evidence, but in the ability to convert it into concrete and context-appropriate actions.
Recommendations
1. Evaluation as a Learning Process
Shifting the perception of evaluation from a judgment to a learning tool requires deliberate organisational strategies that address both culture and process. The following approaches can help create an environment where all findings, whether positive, negative, or mixed, are used constructively.
a. Set the Learning Tone from the Outset
Evaluation terms of reference, inception meetings, and communications should emphasise that the purpose is to generate actionable learning rather than to pass judgment. This framing needs to be reinforced throughout the process, including during data collection and dissemination, so that stakeholders are primed to see findings as evidence for growth.
b. Analyse Positive Findings with the Same Rigour
Treat favourable results as opportunities to understand what works and why. This includes identifying enabling factors, strategies, or contextual elements that led to success and assessing whether these can be replicated or adapted in other contexts. Documenting and communicating these drivers of success helps shift the focus from defending performance to scaling good practice.
c. Create Safe Spaces for Honest Reflection
Organise debriefs where findings can be discussed openly without the pressure of immediate accountability reporting. When teams feel safe to acknowledge weaknesses, they are more likely to engage with the evidence constructively. Senior leaders play a key role in modelling openness by acknowledging gaps and inviting solutions.
d. Separate Accountability Reviews from Learning Reviews
Where possible, distinguish between processes that assess compliance or contractual performance and those designed to generate strategic learning. This separation reduces defensiveness and allows evaluation spaces to remain focused on improvement.
2. Avoiding the End-of-Project Trap
Addressing the end-of-project trap means planning evaluations so they lead to action while there is still time, people, and resources to follow through. The approaches below can help ensure that findings are not left behind once a project ends.
a. Match Evaluation Timing to Key Decisions
Schedule evaluations so findings are ready before important moments such as work planning, budgeting, or donor discussions. Midline or quick-turn evaluations can capture lessons early enough for teams to act on them.
b. Include Handover and Follow-Up in the Plan
From the start, be clear about who will take responsibility for each recommendation and how progress will be tracked. Build follow-up steps into the evaluation plan so the work continues beyond the final report.
c. Keep Knowledge When Staff Change
Hold debrief sessions before staff leave to pass on the stories, context, and reasoning behind recommendations. Let outgoing and incoming staff work together briefly, and store key information where it can be easily found later.
d. Share Findings Before the Project Closes
Present key findings while the team is still in place. Use short, focused briefs that highlight what needs to be done. This gives the team a chance to act immediately or link recommendations to other ongoing work.
3. Improving Report Accessibility and Feasibility
Evaluation findings are more likely to be used when they are presented in a way that people can easily understand and act on, and when recommendations are realistic for those expected to implement them.
a. Make Reports Usable for Different Audiences
Prepare different versions of the findings: a concise action brief for decision-makers, a plain-language summary for community partners, and the full technical report for specialists. This ensures that each audience can engage with the content in a way that suits their needs and time.
b. Check Feasibility Before Finalising Recommendations
Hold a short “recommendation review” meeting after sharing preliminary findings. Use this time to confirm whether recommendations are practical given available staff, budgets, and timelines, and adjust them where needed.
c. Link Recommendations to Action Plans
Where possible, show exactly how each recommendation can be implemented, including suggested steps, timelines, and responsible parties. This makes it easier for organisations to move from reading the report to acting on it.
4. Closing the Incentive and Accountability Gaps
To improve the likelihood that evaluation findings are acted on, responsibilities for implementation should be made clear in the recommendation statements.
a. Name the Responsible Department or Manager in the Evaluation Report
Each recommendation should specify the department or manager expected to lead its implementation. This ensures clarity from the moment the report is delivered.
b. Confirm Feasibility Before Finalising Recommendations
During the validation of preliminary findings, engage the relevant departments or managers to confirm that the recommendations are realistic given available resources and timelines.
5. Strengthening Dissemination and Cross-Learning
Evaluation findings are more likely to be used across and beyond an organisation when they are shared widely, stored in accessible formats, and actively connected to future programme design.
a. Share Findings Beyond the Immediate Project Team
Circulate the evaluation report and summary briefs to other departments, country offices, and relevant partners. Use internal newsletters, learning forums, or staff meetings to highlight key lessons.
b. Store Reports in a Central, Accessible Location
Ensure that the final report, executive summary, and any related briefs are uploaded to a shared organisational repository or knowledge management platform that all relevant staff can access.
c. Create Opportunities for Cross-Learning Discussions
Organise short learning sessions where teams from different projects or countries can discuss the findings and explore how they might be applied elsewhere.
d. Publish for Wider Access Where Possible
Where there are no confidentiality or data protection constraints, make the report or key findings available on the organisation’s website or other public platforms so that the wider community can benefit from the lessons.
6. Strengthen Governance and Management Response
A Management Response System (MRS) can help ensure that recommendations are acted on by assigning clear responsibilities, setting timelines, and tracking progress from the moment the evaluation is completed.
a. Establish a Management Response Process Before the Evaluation Ends
Agree with the commissioning organisation on the structure and timing of the MRS so it is ready to accompany the final report. This ensures that follow-up starts immediately.
b. Name Responsible Departments or Managers in the Evaluation Report
For each recommendation, clearly state which department or manager is expected to lead implementation. This creates ownership from the outset.
c. Set Review Points to Monitor Progress
Agree on dates for follow-up reviews within the organisation’s governance framework to check on progress against the Management Response.
d. Link the MRS to Existing Planning and Budgeting Cycles
Where possible, align recommended actions with upcoming planning, budget allocation, or reporting timelines so that they can be resourced and implemented without delay.
7. Strengthen Leadership Capacity to Apply Evaluation Findings
Building the skills of managers to apply evaluation recommendations is essential to ensure that findings are used to guide organisational decisions.
a. Include Capacity-Building in the Evaluation Design
Ensure the Terms of Reference (ToR) require activities aimed at strengthening managers’ ability to apply recommendations, such as practical workshops or scenario-based exercises.
b. Provide Decision-Oriented Briefs
Prepare concise, action-focused briefs alongside the evaluation report, outlining specific options and steps for integrating recommendations into planning, budgeting, and operational processes.
c. Facilitate Joint Planning Sessions
Organise sessions with managers to translate recommendations into concrete action plans, ensuring alignment with organisational priorities and available resources.
d. Offer Targeted Support Materials
Develop templates, checklists, or guidance notes to assist managers in systematically incorporating recommendations into decision-making.
Conclusion
Making full use of evaluation findings requires timely evaluations, clear communication, strong follow-up systems, and leadership capacity to act on recommendations. By addressing barriers such as end-of-project timing, inaccessible reports, unclear accountability, limited sharing, and weak governance, organisations can turn evaluations into a practical tool for learning and improvement.
By Supreme Edwin Asare
Monitoring, Evaluation, Research & Learning (MERL) Consultant (Remote | Onsite | Hybrid) | Author of the AI Made in Africa Newsletter. Experienced in conducting performance and process evaluations using theory-based and quasi-experimental methods, including Contribution Analysis and mixed-methods designs. Skilled in Theory of Change development, MEL system design, and training on AI integration in evaluation. 📩 kdztrains@gmail.com
Italy
Serdar Bayryyev
Senior Evaluation Officer
FAO
Posted on 04/08/2025
“…Organizations can shape their employees' feedback orientation by fostering a feedback culture. Furthermore, organizational feedback develops from a task-based approach to an organizational practice....” (Fuchs et al., 2021)
What do you think about the statement above?
Response: The statement suggests that organizations play a crucial role in influencing how employees perceive and engage with feedback by fostering organizational culture that values and encourages feedback. It also implies that feedback within an organization evolves from being just a task-related activity to becoming an integral part of the organizational learning culture and practice of acting upon feedback.
I think this is a valuable perspective. Cultivating a feedback culture can indeed help employees become more open, receptive, and proactive about giving and receiving feedback. When feedback is embedded into the organizational environment, it moves beyond isolated tasks and becomes a continuous, shared practice that supports learning and improvement across the organization.
Overall, open and transparent communication can help foster such a culture and lead to increased trust and ongoing development, which are essential for organizational growth and sustainable outcomes.
Italy
Serdar Bayryyev
Senior Evaluation Officer
FAO
Posted on 04/08/2025
The success of development agencies depends heavily on their ability to incorporate evaluative evidence into strategic decision-making. While evaluation offices gather valuable insights from monitoring and evaluation activities, turning this feedback into meaningful program improvements remains a challenge. Overcoming these barriers is crucial to ensure that development efforts truly address the needs of vulnerable communities and partners worldwide.
Common barriers in a complex development landscape include:
- Resource Constraints: Limited capacity for thorough monitoring, quality data analysis, and processing—especially in remote, crisis-affected, or resource-limited settings.
- Cultural Factors: Attitudes that prioritize technical expertise over participatory approaches can hinder open dialogue with stakeholders.
- Leadership Engagement: Without committed leadership advocating for the effective use of evaluative evidence, efforts often remain superficial or fragmented.
Leadership plays a vital role in fostering a culture that values transparency, inclusiveness, and continuous learning. When senior management actively supports feedback mechanisms—such as planning, follow-up processes, consultations, and adaptive management—staff and partners are more likely to see feedback as essential to operational success. Creating an organizational environment that rewards openness and learning encourages innovation, supports corrective actions, and enhances accountability.
Strategies for improvement may include:
- Strengthening Feedback Systems: Develop user-friendly, multilingual digital platforms to present evaluation findings and recommendations. Ensure management responses are transparent, monitored for compliance, and acted upon in a timely manner.
- Capacity Building: Offer targeted training for staff and partners on analyzing feedback, making data-driven (results-based) decisions, and adopting participatory approaches.
- Institutionalizing Feedback Loops: Embed structured processes—such as adaptive management frameworks and learning agendas—within project cycles to ensure evaluative insights inform adjustments, scaling, and policy development. Make these adjustments visible and attributable.
- Incentivizing Feedback Use: Recognize and reward offices that effectively integrate evaluation insights into their work.
- Leveraging Technology: Use mobile data collection tools, real-time dashboards, and remote engagement platforms to monitor follow-up actions and facilitate ongoing learning.
Indonesia
Monica Azzahra
MELIA Specialist
Center for International Forestry Research (CIFOR)
Posted on 04/08/2025
“…Organizations can shape their employees' feedback orientation by fostering a feedback culture. Furthermore, organizational feedback develops from a task-based approach to an organizational practice....” (Fuchs et al., 2021)
What do you think about the statement above?
Indonesia
Monica Azzahra
MELIA Specialist
Center for International Forestry Research (CIFOR)
Posted on 31/07/2025
Welcome, everyone!
We're here to reflect on the root causes of barriers to feedback use in development decision-making, explore strategies to overcome them, and highlight the role of leadership in building a feedback-driven culture.
Please feel free to share your thoughts, experiences, or any useful references to help enrich our exchange.
Let’s make this space collaborative, open, and action-oriented. Your voice matters; let’s learn and grow together!