Edwin Supreme Asare

Ghana

Member since 25/06/2025

PROFESSIONAL EXPERIENCE

Power BI Consultant

UK-Ghana Jobs and Economic Transformation (JET) Programme, FCDO & Palladium

November 2024 – March 2025

Supported the £12 million UK-funded JET Programme, which aims to promote inclusive economic growth and industrialisation in Ghana. The programme targets five strategic sectors: Automotive, Pharmaceuticals, Textiles and Garments, Agro-processing, and Light Manufacturing.

Main Achievements:

  • Conducted a comprehensive review of the existing MEL tracker, tools, and dashboard to identify gaps, inefficiencies, and opportunities for improvement.
  • Designed and configured a dynamic Power BI dashboard integrated with the MEL tracker and consolidated reporting documents, enhancing real-time program monitoring and performance tracking.
  • Delivered targeted training sessions for program staff, strengthening internal capacity to use the MEL system and Power BI dashboard for evidence-based decision-making and adaptive program management.

Evaluation Consultant Analyst

CGIAR | Independent Advisory and Evaluation Service (IAES) September 2023 – November 2024

Served as Functional Lead for the evaluation of CGIAR’s Genetic Innovation Science Group, commissioned by IAES. Provided technical and operational leadership for the evaluation, with responsibility for coordinating fieldwork across multiple countries, ensuring adherence to IAES evaluation protocols and reporting formats, facilitating stakeholder engagement and consultations, supporting data collection and analysis using qualitative and quantitative research methods, and contributing to data interpretation, synthesis of evidence, and drafting of evaluation reports and recommendations. Managed a core team of four evaluators and coordinated inputs from three subject matter experts from Germany, the USA, and the UK.

Main Achievements:

  • Conducted a scoping mission with the Senior Evaluation Manager of IAES to ILRI in Nairobi to engage stakeholders and gather their input for the evaluation of the three Science Groups: Genetic Innovation, Resilient Agrifood Systems, and Systems Transformation.
  • Supported the drafting of the inception report for the Genetic Innovation Science Group Evaluation.
  • Led the development of the stakeholder survey for the evaluation and converted it into a digital format using SurveyMonkey for data collection.
  • Conducted field missions to Ghana with two subject matter experts, and subsequently to Kenya with two additional subject matter experts and the Evaluation Function Lead of IAES, to carry out in-person interviews. Across both missions, data was collected from over 60 stakeholders.
  • Produced detailed notes for each interview, along with full transcriptions to support analysis and reporting.
  • Led the analysis of interview data using MAXQDA for qualitative coding and Jamovi for quantitative analysis.
  • Led the development of CGIAR’s Africa Brief, a knowledge product summarizing key findings from the evaluations of the three Science Groups, and supported its presentation to governance stakeholders.
  • Led the post-evaluation response process for the Genetic Innovation evaluation, including compiling the management response.

Evaluator

IOM – UN Migration | Asia-Pacific Regional Office June 2022 – August 2022

Led the final independent evaluation of IOM’s multi-phase Document Support Centre Initiative across the Asia-Pacific. Responsible for the full scope of the evaluation, from responding to the RFP and developing the inception report to conducting interviews and delivering a final report.

Main Achievements:

  • Produced an inception report that provided a robust evaluation framework, guiding all subsequent stages and earning endorsement from IOM regional leadership.
  • Designed and deployed a digital survey using KoboCollect, streamlining data collection across diverse country contexts.
  • Collected and triangulated evidence from over 30 stakeholders through interviews and desk review, ensuring a well-rounded assessment of program effectiveness.
  • Applied OECD-DAC criteria to deliver actionable findings and practical recommendations for improving document support services in the region.
  • Led the drafting and dissemination of the final evaluation report, which was presented and validated during a stakeholder workshop, contributing to the redesign and scaling of the initiative’s next phase.

Co-Evaluator (Ghana)

Madiba Consult (GIZ Pan-African E-Commerce Initiative) June 2023 – March 2024

Supported the Lead Evaluator in conducting an independent evaluation of the BMZ-funded Pan-African E-Commerce Initiative in Ghana. The evaluation employed contribution analysis and was guided by the OECD-DAC criteria. Contributed to all phases of the country-level evaluation, including inception reporting, fieldwork, analysis, and dissemination.

Main Achievements:

  • Contributed to the Ghana section of the inception report, aligning country-specific questions with the regional evaluation framework and contribution analysis methodology.
  • Facilitated over 25 stakeholder interviews and focus group discussions across government, private sector, and development partners.
  • Supported the analysis of qualitative data using NVivo, generating evidence to assess program relevance, coherence, effectiveness, and sustainability.
  • Assisted in drafting the Ghana national evaluation report and contributed to the co-authoring of the regional synthesis report.
  • Supported the presentation of evaluation findings at a stakeholder validation workshop with GIZ leadership, helping to inform future programming on digital trade and SME development in Africa.

MEL Consultant

Sabon Sake – Renewing the Earth Initiative June 2023 – September 2023

Provided strategic MERL support to Sabon Sake, a clean-tech initiative focused on regenerative agriculture and carbon capture solutions. Led the development of the MERL framework, aligned measurement tools to project outcomes, and advised on integrating scientific evidence and real-time feedback into adaptive program management.

Main Achievements:

  • Conducted an organizational needs assessment and operational model review.
  • Developed a results framework and indicators to assess impact in regenerative agriculture.
  • Designed and implemented real-time data collection systems and digital monitoring tools.
  • Analyzed stakeholder performance data to inform continuous program learning.
  • Produced a comprehensive MEL report for internal learning and donor engagement.

MEL Trainer

Union for International Cancer Control (UICC) April 2023 – December 2023

Engaged as a global MEL training consultant to design and deliver tailored monitoring, evaluation, and learning content for over 1,200 civil society organisations involved in cancer control across 170 countries. Training materials were adapted and translated into Spanish and French to support global reach and accessibility.

Main Achievements:

  • Developed and produced practitioner-focused MEL video lessons and downloadable evaluation tools to support application in diverse contexts.
  • Delivered interactive, multilingual training sessions that strengthened MEL capacity across UICC’s global civil society network.
  • Contributed to improved MEL practices among participating organizations by emphasizing practical, outcomes-based approaches aligned with real-world program implementation.

Data Analyst Consultant

Invest in Africa (Mastercard Foundation SME Program) January 2021 – March 2021

Served as a short-term MEL consultant supporting the Mastercard Foundation’s SME resilience program across Ghana, Senegal, and Kenya. Focused on data cleaning, analysis, and synthesis to enhance program learning and reporting.

Main Achievements:

  • Produced visual infographics and concise summary reports to communicate program performance and trends.
  • Analysed SME program data to identify emerging patterns, risks, and opportunities for improvement.
  • Supported donor decision-making by delivering evidence-based insights to guide adaptive program strategies.

Lead Researcher – Ghana

MERL Tech (USA) & Data Innovators (SA) March 2022 – July 2022

Led the Ghana component of a regional study mapping African-led digital monitoring, evaluation, and learning (MEL) innovations. Oversaw the full research process, from design and stakeholder engagement to field coordination, analysis, and reporting.

Main Achievements:

  • Produced a detailed Ghana country profile highlighting local innovation trends, actors, and use cases in digital MEL solutions.
  • Conducted stakeholder interviews and synthesised findings to generate actionable insights for the broader regional analysis.
  • Contributed to the regional synthesis report submitted to the Mastercard Foundation and its partners to inform future investment in African-grown MEL technologies.

RESEARCH PUBLICATIONS / CONFERENCE PAPER

  • Asare, E. S., Akudugu, J. A., Addaney, M., Apraku, A., & Appiah, A. O. (2021). Mushroom Cultivation as an Alternative Livelihood in Artisanal Goldmining Affected Communities in Ghana. JENRM, Vol. 7, No. 2, 13–20.
  • Asare, E. S., Addaney, M., & Akudugu, J. A. (2016). Prospects and Challenges of Rural Small Scale Industries in the Sunyani Municipality of Ghana. Asian Development Policy Review, Vol. 4, 111–126.
  • Asare, E. S., & Stanfill, C. J. (2019, March). Change takes time: Transitioning to evaluating student performance longitudinally [Conference presentation]. 10th AfrEA International Conference, Abidjan, Côte d’Ivoire.

EDUCATION

  • MSc in Development Management (Development Impact Assessment Major) | University for Development Studies, Tamale (2014 – 2016)
  • BSc in Actuarial Science | Kwame Nkrumah University of Science and Technology, Kumasi (2006 – 2010)
  • Certificate in Advanced Project Monitoring and Evaluation | KNUST-AACE (2018)
  • Certificate in Evaluation of Development Policies | Italian Center for International Development (ICID), June 2024
  • Cambridge Center of Excellence Certificate: Project Management, 2023
  • YALI Emerging Leaders Training Program Alumnus – Civic Leadership (Cohort 15)

My contributions

    • Posted on 09/08/2025

      My response to: How to Ensure Effective Utilization of Feedback and Recommendations from Evaluation Reports in Decision-Making: 

      I share observations from my own evaluation work, highlight common barriers I have encountered, and propose practical actions that can help organisations make better use of evaluation findings in their decision-making processes.

      Introduction
      Evaluations are intended to do more than assess performance: they are designed to inform better decisions, strengthen programmes, and improve accountability. In practice, however, the journey from feedback to action does not always happen as intended. Findings may be presented, reports submitted, and then the process slows or ends before their potential is fully realised. This does not necessarily mean that the findings are irrelevant. Often, the reasons lie in a mix of cultural, structural, and practical factors: how evaluations are perceived, when they are conducted, how findings are communicated, the incentives in place, the systems for follow-up, and whether leadership feels equipped to champion their use.
       

      From my work as an evaluator, I have noticed recurring patterns across different contexts: the tendency to focus on positive findings and sideline challenges, the loss of momentum when project teams disperse, reports that are seldom revisited after presentation, and leadership teams that value learning but are unsure how to embed it into everyday decision-making. I explore these patterns, illustrate them with grounded examples, and offer possible actions for making evaluation a more consistent part of organisational decision cycles.
      Barrier 1: Perception as Judgment
      One of the most persistent barriers to the effective use of evaluation results lies in how findings are perceived. Too often, they are seen as a verdict on individual or organisational performance rather than as a balanced body of evidence for learning and improvement. This framing can influence not only the tone of discussions but also which parts of the evidence receive attention. When results are largely positive, they are sometimes treated as confirmation of success, prompting celebrations that, while important for morale, may overshadow the role of these findings as part of an evidence base. Positive results are still findings, and they should be interrogated with the same curiosity and rigour as less favourable results. For example, strong performance in certain areas can reveal underlying drivers of success that could be replicated elsewhere, just as much as weaker performance signals areas needing attention. However, when the focus remains solely on reinforcing a success narrative, particularly for external audiences, recommendations for further improvement may receive less follow-through.
      On the other hand, when evaluations reveal significant challenges, conversations can become defensive. Stakeholders may invest more energy in contextualising the results, explaining constraints, or questioning certain data sources and measures. In settings where evaluations are closely tied to accountability, especially when reputations, funding, or career progression are perceived to be at stake, such responses are understandable. This is not necessarily resistance for its own sake, but a natural human and organisational reaction to perceived judgment. The challenge is that both of these patterns, celebrating positive results without deeper analysis and responding defensively to difficult findings, can limit the opportunity to learn from the full spectrum of evidence. By prioritising how results reflect on performance over what they reveal about processes, systems, and external factors, organisations risk narrowing the space for honest reflection.
      Barrier 2: Timing and the End-of-Project Trap
      The timing of an evaluation can make all the difference in whether its findings are put to use or simply filed away. Too often, evaluations are completed right at the end of a project, just as staff contracts are ending, budgets are already spoken for, and most attention is focused on closing activities or preparing the next proposal. By the time the findings are ready, there is little room to act on them. I have been part of evaluations where valuable and innovative ideas were uncovered, but there was no next phase or active team to carry them forward. Without a plan for handing over or transitioning these ideas, the recommendations stayed in the report and went no further.
      Staff changes make this problem worse. As projects wind down, team members who understand the history, context, and challenges often move on. Without a clear way to pass on this knowledge, new teams are left without the background they need to make sense of the recommendations. In some cases, they end up repeating the same mistakes or design gaps that earlier evaluations had already highlighted. The “end-of-project trap” is not just about timing; it is about how organisations manage the link between evaluation and action. If evaluations are timed to feed into ongoing work, with resources and systems in place to ensure knowledge is passed on, there is a far better chance that good ideas will be used rather than forgotten.
      Barrier 3: Report Format, Accessibility, and Feasibility
      The format and presentation of evaluation reports can sometimes make them less accessible to the people who need them most. Many reports are lengthy, technical, and written in a style that suits academic or sector specialists rather than busy managers or community partners, even when executive summaries are provided. Another challenge is that recommendations may not always take into account the resources available to the stakeholders who are expected to implement them. This means that while a recommendation may be sound in principle, it may not be practical. For example, suggesting that a small partner organisation establish a dedicated monitoring unit may be beyond reach if they have only a few staff members and no additional budget.
      It is also worth noting that findings are often introduced in a presentation before the full report is shared. These sessions are interactive and help bring the data to life. However, once the meeting is over, the detailed report can feel like a repetition of what has already been discussed. Without a clear reason to revisit it, some stakeholders may not explore the more nuanced explanations and qualifiers contained in the main text and annexes.
      Barrier 4: Incentives and Accountability Gaps
      Organisational systems and incentives play a significant role in whether evaluation findings are acted upon. In many cases, evaluators are rewarded for producing a thorough report on time, implementers are measured against the delivery of activities and outputs, and donors focus on compliance and risk management requirements. What is often missing is direct accountability for implementing recommendations. Without a designated department or manager responsible for follow-through, action can rely on the goodwill or personal drive of individual champions. When those individuals leave or when priorities change, momentum can quickly be lost and progress stalls.
      Barrier 5: Limited Dissemination and Cross-Learning
      The way evaluation findings are shared strongly influences their reach and use. In some organisations, reports are not published or stored in a central, easily accessible database. As a result, lessons often remain within a single team, project, or country office, with no structured mechanism to inform the design of future initiatives. I have seen cases where innovative approaches, clearly documented in one location, never reached colleagues tackling similar issues elsewhere. Without a deliberate system for sharing and discussing these findings, other teams may unknowingly duplicate work, invest resources in already-tested ideas, or miss the chance to adapt proven methods to their own contexts. This not only limits organisational learning but also slows the spread of good practice that could strengthen results across programmes.
      Barrier 6: Weak Governance and No Management Response System
      When there is no structured process for responding to recommendations, there is a real risk that they will be acknowledged but not acted upon. A Management Response System (MRS) provides a framework for assigning ownership, setting timelines, allocating resources, and tracking progress on agreed actions. I have seen situations where workshops produced strong consensus on the importance of certain follow-up steps. However, without a clear mechanism to record these commitments, assign responsibility, and revisit progress, they gradually lost visibility. Even well-supported recommendations can stall when they are not tied to a specific department or manager and monitored over time.
      Barrier 7: Leadership Capacity to Use Evaluation Findings
      Clear evaluation recommendations will only be useful if leaders have the capacity to apply them. In some cases, managers may lack the necessary skills to translate recommendations into practical measures that align with organisational priorities, plans, and budgets.
      Where this capacity gap exists, recommendations may be formally acknowledged but remain unimplemented. The challenge lies not in the clarity of the evidence, but in the ability to convert it into concrete and context-appropriate actions.


      Recommendations
      1. Evaluation as a Learning Process

      Shifting the perception of evaluation from a judgment to a learning tool requires deliberate organisational strategies that address both culture and process. The following approaches can help create an environment where all findings, whether positive, negative, or mixed, are used constructively.
      a. Set the Learning Tone from the Outset
      Evaluation terms of reference, inception meetings, and communications should emphasise that the purpose is to generate actionable learning rather than to pass judgment. This framing needs to be reinforced throughout the process, including during data collection and dissemination, so that stakeholders are primed to see findings as evidence for growth.
      b. Analyse Positive Findings with the Same Rigour
      Treat favourable results as opportunities to understand what works and why. This includes identifying enabling factors, strategies, or contextual elements that led to success and assessing whether these can be replicated or adapted in other contexts. Documenting and communicating these drivers of success helps shift the focus from defending performance to scaling good practice.
      c. Create Safe Spaces for Honest Reflection
      Organise debriefs where findings can be discussed openly without the pressure of immediate accountability reporting. When teams feel safe to acknowledge weaknesses, they are more likely to engage with the evidence constructively. Senior leaders play a key role in modelling openness by acknowledging gaps and inviting solutions.
      d. Separate Accountability Reviews from Learning Reviews
      Where possible, distinguish between processes that assess compliance or contractual performance and those designed to generate strategic learning. This separation reduces defensiveness and allows evaluation spaces to remain focused on improvement.
      2. Avoiding the End-of-Project Trap
      Addressing the end-of-project trap means planning evaluations so that they lead to action while the time, people, and resources needed to follow through are still in place. The approaches below can help ensure that findings are not left behind once a project ends.
      a. Match Evaluation Timing to Key Decisions
      Schedule evaluations so findings are ready before important moments such as work planning, budgeting, or donor discussions. Midline or quick-turn evaluations can capture lessons early enough for teams to act on them.
      b. Include Handover and Follow-Up in the Plan
      From the start, be clear about who will take responsibility for each recommendation and how progress will be tracked. Build follow-up steps into the evaluation plan so the work continues beyond the final report.
      c. Keep Knowledge When Staff Change
      Hold debrief sessions before staff leave to pass on the stories, context, and reasoning behind recommendations. Let outgoing and incoming staff work together briefly, and store key information where it can be easily found later.
      d. Share Findings Before the Project Closes
      Present key findings while the team is still in place. Use short, focused briefs that highlight what needs to be done. This gives the team a chance to act immediately or link recommendations to other ongoing work.
      3. Improving Report Accessibility and Feasibility
      Evaluation findings are more likely to be used when they are presented in a way that people can easily understand and act on, and when recommendations are realistic for those expected to implement them.
      a. Make Reports Usable for Different Audiences
      Prepare different versions of the findings: a concise action brief for decision-makers, a plain-language summary for community partners, and the full technical report for specialists. This ensures that each audience can engage with the content in a way that suits their needs and time.
      b. Check Feasibility Before Finalising Recommendations
      Hold a short “recommendation review” meeting after sharing preliminary findings. Use this time to confirm whether recommendations are practical given available staff, budgets, and timelines, and adjust them where needed.
      c. Link Recommendations to Action Plans
      Where possible, show exactly how each recommendation can be implemented, including suggested steps, timelines, and responsible parties. This makes it easier for organisations to move from reading the report to acting on it.
      4. Closing the Incentive and Accountability Gaps
      To improve the likelihood that evaluation findings are acted on, responsibilities for implementation should be made clear in the recommendation statements.
      a. Name the Responsible Department or Manager in the Evaluation Report
      Each recommendation should specify the department or manager expected to lead its implementation. This ensures clarity from the moment the report is delivered.
      b. Confirm Feasibility Before Finalising Recommendations
      During the validation of preliminary findings, engage the relevant departments or managers to confirm that the recommendations are realistic given available resources and timelines.
      5. Strengthening Dissemination and Cross-Learning
      Evaluation findings are more likely to be used across and beyond an organisation when they are shared widely, stored in accessible formats, and actively connected to future programme design.
      a. Share Findings Beyond the Immediate Project Team
      Circulate the evaluation report and summary briefs to other departments, country offices, and relevant partners. Use internal newsletters, learning forums, or staff meetings to highlight key lessons.
      b. Store Reports in a Central, Accessible Location
      Ensure that the final report, executive summary, and any related briefs are uploaded to a shared organisational repository or knowledge management platform that all relevant staff can access.
      c. Create Opportunities for Cross-Learning Discussions
      Organise short learning sessions where teams from different projects or countries can discuss the findings and explore how they might be applied elsewhere.
      d. Publish for Wider Access Where Possible
      Where there are no confidentiality or data protection constraints, make the report or key findings available on the organisation’s website or other public platforms so that the wider community can benefit from the lessons.
      6. Strengthen Governance and Management Response
      A Management Response System (MRS) can help ensure that recommendations are acted on by assigning clear responsibilities, setting timelines, and tracking progress from the moment the evaluation is completed.
      a. Establish a Management Response Process Before the Evaluation Ends
      Agree with the commissioning organisation on the structure and timing of the MRS so it is ready to accompany the final report. This ensures that follow-up starts immediately.
      b. Name Responsible Departments or Managers in the Evaluation Report
      For each recommendation, clearly state which department or manager is expected to lead implementation. This creates ownership from the outset.
      c. Set Review Points to Monitor Progress
      Agree on dates for follow-up reviews within the organisation’s governance framework to check on progress against the Management Response.
      d. Link the MRS to Existing Planning and Budgeting Cycles
      Where possible, align recommended actions with upcoming planning, budget allocation, or reporting timelines so that they can be resourced and implemented without delay.
      7. Strengthen Leadership Capacity to Apply Evaluation Findings
      Building the skills of managers to apply evaluation recommendations is essential to ensure that findings are used to guide organisational decisions.
      a. Include Capacity-Building in the Evaluation Design
      Ensure the Terms of Reference (ToR) require activities aimed at strengthening managers’ ability to apply recommendations, such as practical workshops or scenario-based exercises.
      b. Provide Decision-Oriented Briefs
      Prepare concise, action-focused briefs alongside the evaluation report, outlining specific options and steps for integrating recommendations into planning, budgeting, and operational processes.
      c. Facilitate Joint Planning Sessions
      Organise sessions with managers to translate recommendations into concrete action plans, ensuring alignment with organisational priorities and available resources.
      d. Offer Targeted Support Materials
      Develop templates, checklists, or guidance notes to assist managers in systematically incorporating recommendations into decision-making.


      Conclusion
      Making full use of evaluation findings requires timely evaluations, clear communication, strong follow-up systems, and leadership capacity to act on recommendations. By addressing barriers such as end-of-project timing, inaccessible reports, unclear accountability, limited sharing, and weak governance, organisations can turn evaluations into a practical tool for learning and improvement.

       

      By Supreme Edwin Asare 

      Monitoring, Evaluation, Research & Learning (MERL) Consultant (Remote | Onsite | Hybrid) | Author of the AI Made in Africa Newsletter. Experienced in conducting performance and process evaluations using theory-based and quasi-experimental methods, including Contribution Analysis and mixed-methods designs. Skilled in Theory of Change development, MEL system design, and training on AI integration in evaluation. 📩 kdztrains@gmail.com