Monira Ahsan

Bangladesh

Member since 02/05/2025

Independent

Independent Postdoctoral Researcher
  • Self/decentralized evaluations assessing OECD DAC criteria.
  • Programme and Partnership Management, Institutional Capacity Strengthening, Political Economy and Context Analysis, Legal and Policy Frameworks Analysis, and developing Policy Briefs, Papers, and Policy Advocacy.
  • Intersectional and Multidimensional Analysis of Gender, mainstreaming Gender Equality, Diversity, and Social Inclusion (GESI), and Protection across Health, Education, Food Security, Livelihoods, and Nutrition programmes.
  • Gendered Impact of Forced Migration and Displacements on Health, Human Security issues of Forced Migrants and Refugees, and Impact of Climate Challenges on Health.
  • Gender-Based Violence (GBV), Protection and Response to Sexual Abuse and Exploitation (PSEA), alongside Child Protection and Social Protection.
  • Mixed Methods, Human Rights, Social Relations, Capability Approaches, Feminist Methodologies, Gender and Intersectionality, Participatory Research, and Policy-Relevant Academic, Applied, and Implementation Research.


My contributions


      Posted on 20/05/2025

      Dear Fellow Colleagues,

      Thank you for initiating and contributing to this exciting dialogue. Although I have not evaluated SSTC initiatives directly, I am drawing on my 20 years of experience, including the first 10 years as a development practitioner and the last 10 years as a researcher and evaluator in development and humanitarian emergencies across South Asia, Central Asia, Southeast Asia, and East Africa.

      • What evaluation approaches or tools have you found effective in assessing SSTC initiatives?

      While leveraging the relative strengths of mixed methods for specific projects, I have found in my work that a qualitative and participatory approach, focusing on both process and impact, offers deeper understanding, heightened rigour, and greater insight. When assessing against the OECD/DAC criteria, I find Intersectional Feminist Methodologies a particularly powerful framework for evaluating projects that address gender, intersectionality, decolonization, and participatory research, as they effectively challenge power structures and inequalities. This framework is grounded in lived experience, values cultural and ethical considerations, and promotes social justice.

      In my most recent decentralized evaluation, I found that a light internal assessment, combined with mapping outcomes through Outcome Harvesting and Outcome Mapping, was particularly helpful in gaining a holistic understanding of the effectiveness and impacts of the projects of four implementing Partners. Instead of grounding the findings in the Partners’ Results Frameworks, we collected and classified key outcomes at different levels: the individual level, leading to the empowerment of women and girls; the community level, where changes in social norms, values, and practices occur; the organizational level, which provides the settings for implementation; and the political and policy level, which offers the overall enabling environment. These tools, with their contribution-focused approach and resource mapping, were also instrumental in addressing attribution complexities in identification, analysis, and reporting, and in exploring both tangible and intangible outcomes. Additionally, a utilization-focused approach yielded findings that were both relevant and impactful, ultimately helping to shape user decision-making and policies.

      https://cdn.sida.se/app/uploads/2024/08/13134122/62720_DE2024_17_Evaluation-of-Swedens-support-to-Womens_WEB.pdf

      • What challenges have you faced—methodologically, politically, or operationally?

      Methodological Challenges

      One recurring methodological challenge I face in decentralized evaluations is the constraint of resources, in terms of both time and budget. Across commissioning organizations, whether international, national, or local, disproportionately little time is typically allocated to collecting empirical evidence, even as rigour is expected in analysis and reporting. Additionally, evaluating multiple, complex outcomes across various dimensions with several partners and producing an aggregated report as per the ToR presents challenges in measuring, interpreting data, and generalizing effectively. In the evaluation above, a further layer of complexity arose when the four Partners each expected an organization-specific report. Moreover, in confined refugee camp contexts, such as with the Rohingya refugees in Cox’s Bazar, Bangladesh, familiarity, congested space, and overcrowding create distinct ethical challenges in maintaining privacy and confidentiality, posing risks of harm to study participants.

      Additionally, when working in a team or commissioning an evaluation, it is often challenging to confront capacity deficiencies among both junior and senior evaluators in the South, particularly in terms of knowledge, technical skills, and adherence to ethical practices. For instance, during the inception phase of a decentralized evaluation of a health partnership in a humanitarian emergency, the senior team lead acknowledged that they had never used the OECD/DAC criteria and were uncertain how to apply them, despite their application and experience documents claiming otherwise. Consequently, I had to provide extensive technical support for the evaluation, from designing the evaluation framework to data analysis and reporting, which conflicted with my role as Program and Partnerships Manager at the commissioning organization. Similarly, while evaluating Women’s Rights Organizations in the political context of shrinking civic space in Bangladesh a few years ago, a junior team member remarked, “my today’s FGD group was very critical about the government actions against NGOs human rights activism and how their organizations had been suffered at the hands of the government officials from not releasing the overseas funding to not approving the NGOs activities under Foreign Donation, FD-6. Obviously, I have not included this ‘negative discussion’ in my FGD note today.” Thus, without formal education and training in research and evaluation approaches, methodologies, methods, techniques, and ethical practices, professionals engaged in evaluation in Southern contexts risk generating data and reports that lack reliability and rigour, compromising the quality of findings.

      Political Challenges

      I felt vulnerable as an evaluator on several occasions due to the potential physical and psychological dangers associated with collecting data in the Rohingya refugee camps in Bangladesh. Assessing human rights issues in the Rohingya context was challenging, particularly during the initial years of the influx. While it supports humanitarian actors in facilitating the humanitarian response, the Bangladesh Government recognizes Rohingya people not as refugees but as ‘forcibly displaced Myanmar nationals,’ which creates political sensitivities around discussing the human rights issues affecting them. During an initial-year assessment to monitor progress on commitments made under the Global Compact on Refugees (GCR), a group of Rohingya women was trained as Peer Researchers to collect data from their peers at the community level. Lacking political sensitivity to the local context, my three expatriate colleagues dismissed my suggestion to exclude human rights discussions from the training. Our training was interrupted by the Intelligence Forces of the Bangladesh Government on the second day, resulting in significant changes to our methodology and data collection. As the only native Bangladeshi among the team, I was concerned for my security. Similarly, I felt vulnerable and insecure while collecting data recently in the Rohingya camps, where various political factions were engaged in constant gunfighting, murder, and abduction, creating access constraints.

      Moreover, I have encountered political challenges in evaluation stemming from stakeholder interests, stakeholder resistance, and power dynamics, which impaired the independence and utilization of evaluation findings. I evaluated the NGO Platform, a forum for NGOs working with Rohingya refugees in Cox’s Bazar, Bangladesh. Once the data were collected and the report was presented to key stakeholders through a validation workshop, I received both in-person and written comments from some key stakeholders indicating that the report effectively explored the challenges behind the NGO Platform's dysfunction and proposed a way forward for effective functioning. Later, however, the Coordinator of the NGO Platform asked me to revise some of the key findings in ways that contradicted the empirical evidence, stating, “the donor will be highly disappointed with the report, given its current findings.”

      Operational Challenges

      I also have experience with inadequate data and data-quality issues, including a lack of consistent and comparable data, such as baseline data for key indicators, as well as ineffective process monitoring and evaluation systems. For example, in an African context, I was asked to produce baseline data and discovered that the project had not used a well-designed Results Framework at any point in its three-year duration. These issues are particularly stark in fragile humanitarian contexts, as I experienced while assessing projects in the Rohingya humanitarian response: management can be weak, staff turnover is high among both expatriate and local staff, and leadership changes frequently, resulting in poor institutional memory. The availability of adequate, high-quality data for evaluating projects in humanitarian emergencies is also linked to gaps in translating the localization agenda into practical action: United Nations agencies and International NGOs have entered into partnerships with local NGOs without necessarily investing in the latter's technical skills and institutional capacity, leading to dysfunctional monitoring, evaluation, and reporting systems, procedures, and practices.

      I have experience involving relevant stakeholders, such as implementing Partners, in designing evaluation frameworks in a limited capacity, contributing to the evaluation schedule, and identifying participants. In my experience, however, meaningfully engaging all concerned stakeholders, particularly the primary stakeholders who are the beneficiaries, in co-designing evaluation frameworks and criteria is nearly impossible unless donors' and commissioning organizations’ policies, attitudes, and values shift towards inclusive practices, with implications for increased resources, including budget, time, and personnel. Finally, I faced difficulties involving policy planners from the national government and academics, despite a timely schedule.

      • How can we, as evaluators, enhance the visibility, learning, and impact of SSTC in this shifting aid architecture?

      A few ideas to enhance the visibility, learning, and impact of SSTC:

      Taking a Systems Perspective in evaluation can be highly impactful, as this approach emphasizes the interconnectedness and complexity of systems, thereby considering the overall relational context in the South, which is often complex and intertwined with political, economic, social, and environmental dynamics. Furthermore, a systems-based evaluation encourages adaptive evaluation management approaches to address the inherently complex and evolving challenges of humanitarian and development sectors, facilitating continuous reflection and learning. Additionally, a systems perspective involves all relevant stakeholders, including beneficiaries, not only as study participants in data collection but also in agenda setting, designing evaluation frameworks, identifying the roots and underlying causes of problems, and considering broader impacts, including unintended consequences. 

      Integrating a systems perspective into self/process evaluation is therefore essential for developing Southern capacity and can be applied readily there. In impact or decentralized evaluations, by contrast, external independent evaluators may find it challenging to influence donors or commissioning organizations to design an inclusive evaluation framework that actively involves relevant stakeholders, including beneficiaries, given the implications for resources such as time, budget, and personnel.

      A systems-thinking perspective in evaluation can yield greater insight and nuance when underpinned by relevant theoretical and conceptual lenses and analytical frameworks. These may include, but are not limited to, the Capabilities Approach, which emphasizes critically examining socio-cultural, economic, and political environments and how they shape individuals’ capabilities to achieve their potential beings and doings. Similarly, the Social Relations Approach can help deconstruct institutional practices by examining rules, activities, resources, people, and power to analyze how gender inequalities are produced and reproduced within various institutions, contributing to inequality and injustice for specific individuals and groups. Additionally, the Intersectional Feminist Framework is powerful for examining how multiple socio-cultural, politico-economic, and other factors, coupled with broader historical and current systems of discrimination such as colonialism and globalization, determine conditions of inequality and social exclusion among individuals and groups, and thereby shape questions of social and economic justice.

      Setting up an Evaluation Quality Assurance System, conducting external reviews for quality assurance of methodology and content, and engaging Peer Review of the evaluation function to examine credibility, independence, and utility can strengthen SSTC evaluation and promote learning. Strategic dissemination of findings, tailored to different audiences through various communication channels, is crucial for enhancing both learning and visibility. Likewise, establishing strong linkages between evaluation, management response, follow-up, knowledge management, and the utilization of evaluation findings, such as integrating evaluation concerns into policy initiatives or establishing an evaluation resource centre as a public platform, is critical for strengthening learning and increasing the visibility of SSTC. Moreover, conducting meta-evaluations, meta-analyses, evaluation syntheses, and reviews of recommendations on the same issues, topics, or themes has the potential to yield far richer, more nuanced, and complex evidence and findings, thereby enhancing the learning, visibility, and ultimately the impact of SSTC.

      Looking forward to continuing this critical conversation.

      Kind Regards,

      Monira Ahsan, PhD