RE: Global Impact Evaluation Forum 2025: Forging evidence partnerships for effective action
United Kingdom
Daniel Ticehurst
Monitoring > Evaluation Specialist
Freelance
Published on 08/12/2025
Thanks for posting this. Here are my (rather terse) answers to two of the above questions.
Question 1: Bridging Evidence and Action: Strengthening the Link Between Impact Evaluation and Real-Time Decision-Making
For impact evaluation to inform real-time decisions, it must be purpose-built for use, not produced as a retrospective record. As Patton would emphasise, the first step is decision mapping: identify what decisions lie ahead, who will make them, and what evidence they require. Evaluations must match operational tempo, not outlast it. This requires:
a) Embedded evaluators who sit with programme teams and leadership, translating emerging findings into operational options;
b) Short, iterative learning products—rapid briefs, real-time analysis, adaptive monitoring—that inform decisions long before the final report emerges; and
c) Structured feedback loops that carry findings directly into planning, budgeting, and resource allocation.
Arguably, none of this matters unless the system is willing to let evidence disrupt comfort. If findings never unsettle a meeting, challenge a budget line, or puncture institutional vanity, they are not evidence but décor. Evidence influences action only when it is permitted to disturb complacency and contradict consensus.
Localizing Evidence: Making Impact Evaluations Serve Local Priorities and Power Structures
Localization begins with local ownership of purpose, not merely participation in process. Impact evaluations should be co-created with national governments, civil society, and communities, ensuring that questions reflect local priorities and political realities. They must strengthen national data systems, not bypass them with parallel UN structures. And interpretation of findings must occur in-country, with local actors who understand the lived context behind the indicators.
Yet there is a harder truth: calling something “localized” while designing it in Geneva and validating it in New York is an exercise in bureaucratic self-deception. True localisation demands surrendering control over agenda-setting, the framing of objectives and evaluation questions, data ownership, and interpretive authority. Anything less is a ritual performance of inclusion rather than a redistribution of power.
Conclusion
To bridge evidence and action, and to localize evaluation meaningfully, the UN must pair the discipline of use with a discipline of honesty. Evidence must be designed for decisions, delivered at the speed of operations, and empowered to unsettle institutional habits. And localization must shift from rhetoric to reality by making national actors, not UN agencies, the primary authors, owners, and users of impact evaluation.
Otherwise, the system risks perfecting its internal coherence while leaving the world it serves largely unchanged.