Posted on 23/08/2024
Just pitching in, like Silva, to congratulate Musta on making such a great point about the seemingly marginal value and high opportunity costs of EAs.
At the 2022 European Evaluation Society conference, the keynote by Estelle Raimondo and Peter Dahler-Larsen was striking. They presented an interesting analysis of the indiscriminate application of evaluation and its diminishing returns of late through "performative" use: bureaucratic capture.
Some argue EAs are the least of today’s evaluation community’s concerns.
The keynote’s reference to how "....sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it" reminds me of the analogy the famous classicist and poet A. E. Housman drew in 1903:
"...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
freelance
Posted on 08/12/2025
Thanks for posting this. Here are my rather terse answers to two of the questions above.
Question 1: Bridging Evidence and Action: Strengthening the Link Between Impact Evaluation and Real-Time Decision-Making
For impact evaluation to inform real-time decisions, it must be purpose-built for use, not produced as a retrospective record. As Patton would emphasise, the initial step is decision mapping: recognise what decisions lie ahead, who will make them, and what evidence they require. Evaluations must align with operational tempo, not outlast it. This requires:
a) Embedded evaluators who sit with programme teams and leadership, translating emerging findings into operational options;
b) Short, iterative learning products—rapid briefs, real-time analysis, adaptive monitoring—that inform decisions long before the final report emerges; and
c) Structured feedback loops that carry findings directly into planning, budgeting, and resource allocation.
Arguably, none of this matters unless the system is willing to let evidence disrupt comfort. If findings never unsettle a meeting, challenge a budget line, or puncture institutional vanity, they are not evidence but décor. Evidence influences action only when it is permitted to disturb complacency and contradict consensus.
Question 2: Localizing Evidence: Making Impact Evaluations Serve Local Priorities and Power Structures
Localization begins with local ownership of purpose, not merely participation in process. Impact evaluations should be co-created with national governments, civil society, and communities, ensuring that questions reflect local priorities and political realities. They must strengthen national data systems, not bypass them with parallel UN structures. And interpretation of findings must occur in-country, with local actors who understand the lived context behind the indicators.
Yet there is a harder truth: calling something “localised” while designing it in Geneva and validating it in New York is an exercise in bureaucratic self-deception. True localisation demands surrendering control over agenda-setting, the objectives and evaluation questions, data ownership, and interpretive authority. Anything less is a ritual performance of inclusion rather than a redistribution of power.
To bridge evidence and action, and to localise evaluation meaningfully, the UN must pair the discipline of use with a discipline of honesty. Evidence must be designed for decisions, delivered at the speed of operations, and empowered to unsettle institutional habits. And localisation must shift from rhetoric to reality by making national actors, not UN agencies, the primary authors, owners, and users of impact evaluation.
Otherwise, the system risks perfecting its internal coherence while leaving the world it serves largely unchanged.