Evaluation
Posted on 08/04/2025
Reflections on Large Projects and Evaluation Practices
Based on my experience working with over a dozen large-scale projects, I have observed several recurring design and implementation flaws that hinder their effectiveness. These projects often adopt global frameworks with minimal adaptation to the local context. As a result, they struggle to deliver contextually relevant solutions. They tend to be rigid, lacking the flexibility to respond to dynamic, unforeseen changes that arise during implementation. Moreover, they rarely make targeted efforts to address the unique needs of specific groups, leading to exclusion and inequity.
In essence, such projects are not well-grounded in local realities. Their internal structures often lack cohesion, and when internal coordination is weak, interagency collaboration suffers as well. They frequently fall short on timely delivery, and extensions inflate project costs. Effectiveness is often compromised, and sustainability remains a significant concern—local resources and capacities are typically insufficient to maintain results once external support ends.
Monitoring and evaluation (M&E) systems in these projects are predominantly top-down and rigid, relying heavily on quantitative methods. This approach often fails to uncover the root causes of success or failure and misses critical learning opportunities. Indicators are typically standardized and global, making them difficult for local stakeholders to understand or relate to—and sometimes irrelevant to local contexts. Community participation in M&E is usually minimal and superficial, and large projects struggle to address the diverse needs across all the areas they influence.
To enhance evaluation effectiveness, I recommend the following:
- Use a mixed-methods approach that combines quantitative and qualitative insights.
- Co-create indicators with local stakeholders to ensure relevance and ownership.
- Focus on micro-macro linkages as well as horizontal and vertical partnerships.
- Adopt an iterative learning process that allows for course correction.
- Assess project design to ensure that inclusiveness is not just a principle but is operationalized through tools and metrics that capture who is left behind.
- Look beyond the logframe and examine real changes on the ground using participatory, in-depth evaluation tools.
Nepal
Gana Pati Ojha
Community of Evaluators
Posted on 24/04/2026
The #EvalforEarth discussion comes at exactly the right time. Many evaluations still tell us how projects performed yesterday, while leaders increasingly need evidence on how systems can survive tomorrow.
Across food security, agriculture, climate resilience, and governance, one lesson repeatedly emerges: outcomes are shaped less by individual projects than by the systems in which they operate—institutions, incentives, partnerships, learning cultures, and political ownership. Strong projects often fail inside weak systems; modest interventions can succeed when embedded in adaptive and trusted institutions.
This is why retrospective evaluation alone is no longer enough. It may accurately assess past outputs and efficiency, yet miss the critical forward-looking questions:
• Will this programme remain relevant under climate shocks or market volatility?
• Can institutions adapt when assumptions change?
• Are partnerships resilient under stress?
• Will gains endure after funding ends?
Strategic foresight offers practical tools to strengthen evaluation: horizon scanning, scenario planning, Three Horizons, and causal layered analysis. These methods can help evaluators move from static judgement to dynamic learning.
Three practical entry points:
Perhaps we also need to reinterpret OECD-DAC criteria through a future lens:
• Relevance = future fit
• Sustainability = resilience under shocks
• Impact = contribution to long-term system transformation
The future of evaluation does not lie in abandoning hindsight. It lies in combining hindsight, insight, and foresight so that evidence can guide action in an uncertain world.