Eddah Kanini (Board member: IDEAS, AGDEN & MEPAK; former Board member, AfrEA 2021-2026)

Kenya

Member since 26/03/2019

Monitoring, Evaluation and Gender Consultant/Trainer

  • Resource mobilization
  • Governance and public policy
  • Strategic leadership and planning
  • Information, communication & technology skills
  • Organizational skills: Program/Project Management of large and complex teams and Strategic Plan development
  • Conducting surveys: Evaluations, research, assessments and reviews
  • Analytic & visualization skills (quantitative & qualitative) with STATA, SPSS, Excel, QGIS, GIS, ODK, R, Qualtrics and Google Forms
  • Team and interpersonal skills: Coaching, mentoring, and supportive supervision in diverse, multicultural teams
  • Presentation skills: Capacity Building and facilitating trainings
  • Writing: Technical reports, Grants, proposals, success stories and stories of change

My contributions

    • Posted on 03/05/2026

      The topic is very relevant, timely and thought-provoking. 

      In the context of climate uncertainty and its implications for the use of evaluation findings, it is evident that retrospective evaluation is a weak predictor of future outcomes due to climate variability. Methodologically, static measurements are designed for relatively stable environments and no longer fit the dynamic, volatile ecosystems we are experiencing today.

      For example, in food security, the more we measure project success based on yield increases, the more we risk overlooking critical factors such as soil depletion trends, climate shocks, and indigenous community mobility patterns. As a result, findings may quickly become underutilised. It is therefore clear that yesterday’s solutions cannot remain valid for tomorrow’s problems.

      Foresight Tools and Methods

      Some foresight tools and methods I have encountered include Outcome Harvesting, the 3 Horizons Framework, systems mapping, scenario planning, horizon scanning, and trend analysis. These approaches are valuable because they capture uncertainty and expand thinking beyond linear outcomes, thereby supporting adaptability in programmes.

      However, their application comes with challenges because they require time, skills, and facilitation capacity, which many evaluators may not yet possess. Additionally, some stakeholders perceive these approaches as less rigorous. Another constraint is that they are not often embedded in donor terms of reference, which discourages evaluators from applying them.

      DAC Criteria through a Foresight Lens

      Reinterpreting the DAC criteria through a foresight lens is important. For instance, we need to move from static relevance to dynamic relevance. Traditionally, we ask whether an intervention was aligned at the design stage; we should also be asking whether it will remain relevant under future scenarios.

      Similarly, sustainability is often framed as the continuation of benefits after programme completion. In today’s volatile context, this needs to shift toward assessing whether systems can adapt, absorb shocks, and transform. This includes examining resilience and adaptive capacity at both system and community levels.

      Opportunities in Food Security, Environmental, and Agricultural Sectors

      These sectors are inherently future-facing. In Kenya, since childhood we have often been reminded, “Huu ni uti wa mgongo wa uchumi”, meaning “this is the backbone of our economy”. Importantly, Indigenous knowledge systems already function as foresight systems.

      There are significant opportunities to integrate foresight into areas such as agroecology, climate-resilient agriculture, pastoralist mobility systems, and early warning systems. Indigenous forecasting methods, such as interpreting weather patterns, seasonal cycles, animal behaviour, and land use patterns, offer valuable insights that can strengthen evaluation practice.

      What Needs to Change to Integrate Foresight into Evaluation

      To make foresight a regular part of evaluation, we need to strengthen skills in systems thinking, futures literacy, facilitation of uncertainty, and multidisciplinary approaches.

      At the institutional level, there is a need for flexible terms of reference, adaptive and real-time evaluation designs, and learning-focused commissioning processes that allow for cumulative learning over time.

      Overall, we need to shift from a narrow focus on accountability to a balanced focus on learning and anticipation, supported by investments in digital data tools.

      My conclusion

      I would conclude by urging the adoption of indigenous scenario work, that is, planning for multiple possible futures rather than assuming a single predictable path. This involves asking critical questions such as: What might happen? What could change? What if things go differently?

    • Posted on 26/08/2025

      Thank you for initiating this important topic on underutilised feedback in development decision-making. 

      From my experience, barriers include an organisational culture of resistance, where feedback is seen as criticism rather than a learning tool. Working in silos is another barrier, where feedback remains within one entity or one project instead of feeding into broader institutional decisions.

      Leadership and culture play a critical role in influencing feedback responsiveness. Good leadership sets the tone for valuing evidence, allowing open dialogue and reflection, and encouraging adaptive management.

      Practical steps for embedding feedback use include institutionalising after-action reviews and learning forums, integrating feedback into planning cycles, and building staff capacity not only in data collection but also in interpretation and application.

    • Posted on 09/05/2025

      While my evaluation experience cannot be formally labelled as SSTC, I have assessed several multi-country and regional initiatives built on the same principles as SSTC, such as peer learning, mutual accountability, and capacity exchange between countries in the Global South. Notably, I participated in an evaluation of a regional health systems strengthening program that involved technical cooperation between countries, which shared innovations in health models, data use for decision-making, and integrated service delivery. Although the cooperation was organically South-South in nature, the absence of an SSTC-specific evaluation framework made it hard to fully capture the unique dimensions of reciprocal learning and ownership.

      With no SSTC-specific guidance, several challenges arise. For example, it becomes difficult to articulate what success looks like for SSTC, especially when the value lies more in process and relationships; what comes out is mostly outputs. SSTC successes also seem to be largely undocumented, probably due to limited tracking of the process. In this context, narrative and story-based methods come in handy to capture mutual benefit and capacity exchange. Qualitative methods such as outcome harvesting, storytelling, and appreciative inquiry are therefore very relevant in evaluating SSTC.

      Evaluators can contribute to a more impactful use of SSTC through co-creation of frameworks with partners to ensure they reflect Southern values and definitions of success; through documentation and dissemination of learning about successful processes and what success looked like, the how and the why; and through intentionally embedding equity and inclusion lenses and integrating systems thinking when evaluating SSTC.

    • Posted on 07/04/2025

      In certain thematic areas, big projects are indeed effective at delivering solutions, particularly in emergency response, humanitarian aid, and large-scale infrastructure such as roads, railways, and major construction efforts.

      However, in most other thematic areas, especially those relating to social development, big projects often struggle to achieve sustainable effectiveness. Examples include healthcare programs addressing malaria, tuberculosis (TB), and Universal Health Coverage (UHC), certain types of gender-based violence (GBV) interventions, and agriculture and food security initiatives. Despite their substantial financial resources, technical expertise, human resources, and capability to scale activities broadly, big projects frequently face challenges such as limited local context sensitivity, reduced community ownership, and difficulties in maintaining long-term sustainability.

      In contrast, smaller projects implemented by local organizations, despite having fewer resources and a limited operational scope, demonstrate a notable advantage. They tend to be highly responsive to immediate community needs, deeply anchored in local knowledge and cultural practices, and foster strong community ownership and relevance. These small-scale projects possess an inherent flexibility and agility that allow rapid adjustments based on direct feedback from community members, significantly enhancing their overall sustainability and effectiveness in addressing complex and evolving social issues.

      Therefore, while big projects play a crucial role in specific contexts, the complementary strengths of smaller, locally driven initiatives are indispensable for achieving sustainable solutions in the complex social domains of development.

    • Posted on 08/02/2023

      Thank you for the good quiz; it has rated me well. Allow me to suggest one improvement: some questions can have more than one correct answer, so checkboxes allowing multiple selections would work better than the single-choice circles.

      Questions like the ones below need multiple answers:

      ...the right type of Eval...

      Which indicator should I use....

      If the evidence is not useful, it is because....

      In order to contribute to better and useful Evaluation.....

    • Posted on 08/12/2022

      Thank you, Eriasafu, for this relevant topic. To develop an inclusive MEAL system, the best starting point is an all-inclusive theory of change and a baseline that captures inclusivity issues. This ensures inclusivity in creating the logframes and work plans, and eventually trickles down to indicators that are both qualitative and quantitative; inclusive interventions; collection of disaggregated data using diverse qualitative and quantitative methods; inclusive reporting; and inclusivity in gathering feedback from beneficiaries.

      The project/program team may fail to understand what is meant by inclusivity if it is introduced only at a later stage of implementation.

    • Posted on 22/07/2020

      Thank you for this conversation. In fact, the same discussion was started on Twitter by Tom Archibald (https://twitter.com/tgarchibald), with very fascinating points coming out. He also shared this: https://t.co/ynI88BlvZp?amp=1

      Unfortunately, racism is present in evaluation.

      Looking at evaluations conducted in African countries, you realize that most consultancies are given to consultants from the global North, even those with less experience or just starting out. The more experienced evaluator from the South is given an opportunity as a data collector (in some instances), and only because of existing protocols, language barriers, or terrain challenges. The same goes for evaluations conducted in the global North: opportunities still go to the same evaluators, leaving very minimal chances for global South evaluators.

      Payment is also unequal. Holding all other factors constant, a consultant from the global North is remunerated more highly than one from the South. This is in addition to the expenses already incurred in bringing them into the country, expensive accommodation, and DSAs.

      Some donor-funded programs and international NGOs bring in consultants from their own countries to conduct evaluations of projects in the global South.

      As an evaluator, I once looked at an evaluation report of a program conducted by a consultant from the global North and was surprised. The report did not set out any evaluation criteria or methodology. Its contents included complaints about an officer who arrived late and another who fell sick during the evaluation process, and it expressed anger that, at a certain point, an FGD interviewee used the word mzungu (Swahili for a “white person”).

      On another occasion, an organization simply brought in a photographer from the global North to take pictures for the report, but the person ended up submitting our own pictures, taken on our smartphones and shared in the WhatsApp group we had created to communicate while in the field for data collection. To make it worse, he labelled them with his own name.

      I could go on and on, but racism in evaluation is present, and it runs deep; those most affected are the evaluators in the global South.

       

    • Posted on 29/04/2020

      Thank you, Nick Maunder, for bringing this up. The Covid-19 pandemic put every evaluator to the test on how innovative and resilient one can be. The President’s directive to work from home reached us while we were collecting data in the field. This was an outcome harvesting exercise. We held a replanning meeting overnight and decided to prioritize the focus group interviews (the stories) to avoid being locked down away from our homes. Focus group discussions would have been challenging to conduct via Skype or telephone, as the storytellers (FGD participants) are mostly community members who have difficulty accessing data bundles, telephones, the internet, and most other forms of communication.

      The rest of the data, from the key informants (substantiating), was collected through Skype meetings, telephone calls, and WhatsApp calls. It was not as easy as it sounds, but eventually we were happy with the effort we put in and the data collected.

      Therefore, evaluators need to continually remind themselves of the goal of a particular evaluation, and of how best they can gather the data.

    • Posted on 28/04/2020

      The practice of monitoring cannot be replaced by evaluation; rather, it should feed into evaluation. Monitoring should go hand in hand with the implementation of development programs, projects, and interventions. It is through monitoring that inputs, outputs, and the processes applied are checked, and that timeliness is maintained while the program awaits evaluation to measure results and draw important conclusions.

      Therefore, as a monitoring and evaluation consultant, I would say that both the M and the E are critical and complement each other.