Dea Tsartsidze

Georgia

Member since 17/08/2025

SAI - Solution Alternatives International

International Research, Monitoring, Evaluation and Learning Practitioner

International research and evaluation practitioner with 15+ years of experience spearheading comprehensive MEL initiatives across over 100 countries. Currently serves as a Faculty member at the University of Georgia and Founding Partner at Solution Alternatives International. Accomplished in designing mixed-method research and establishing evidence-based evaluation and monitoring systems for projects funded by major international organizations, including UN agencies, USAID, MCC, EU, FCDO, and other key donors.

Passionate about driving impact through data-driven strategy and advancing evaluation methodologies to deliver rigorous, credible evaluations that matter strategically. Believes that evaluation drives meaningful impact when rigorous methodologies transform complex data into actionable insights for strategic decision-making and organizational learning. Specializes in impact evaluation, evaluation capacity building, and institutionalizing adaptive learning systems. Recognized as a finalist for the Molly Hageboeck Award for MEL Innovation for developing a groundbreaking monitoring and evaluation methodology for governance strategy implementation.

Previously established MEL systems across the South Caucasus region for Action Against Hunger and led the development of Georgia's first policy monitoring and evaluation methodology for the OGP National Action Plan, setting new standards recognized by the EU and OECD. Member of the international evaluation community with extensive experience in the food security, agriculture, rural development, governance, humanitarian, and media sectors.

My contributions

    • Dea Tsartsidze

      Posted on 25/08/2025

      Dear Monica and everyone,

      Thank you for initiating such a valuable discussion, and I appreciate the rich insights shared by everyone. The perspectives offered truly echo many of the challenges we face in our field.

      I find myself agreeing with most of the points raised, particularly Aurelie's emphasis on syncing evidence generation with stakeholder needs and Brilliant's focus on formalizing feedback processes. However, I'd like to add another layer to this conversation: the critical importance of establishing clear organizational processes that position evaluation not as a standalone exercise, but as an integral component interconnected with program implementation and broader M&E systems.

      In my experience, one of the fundamental barriers we haven't fully addressed is the question of ownership. Too often, it feels like evaluations are "owned" exclusively by evaluators or M&E staff, while project teams remain somewhat detached from the process. This disconnect may significantly impact how recommendations are received and, ultimately, implemented.

      This raises my first question: how do we better involve project staff in the finalization of recommendations? I've found it particularly valuable to organize joint sessions where evaluators, M&E staff, and project teams come together not just to discuss findings, but to collaboratively shape the final recommendations. When project staff are actively involved in interpreting findings and crafting recommendations that reflect their understanding of operational realities, ownership naturally increases. Have others experimented with this approach?

      Beyond the ownership challenge, I think there's another fundamental issue we need to confront more directly: the quality of evidence we produce as evaluators. Are we consistently striving to generate robust, credible data? When we bring evidence to decision-makers, are we confident it truly meets quality standards, or might we sometimes, unintentionally, be adding yet another barrier to utilization, especially for already skeptical decision-makers who may use questionable evidence quality as grounds to dismiss findings entirely?

      This leads to my second question, which has two parts: First, do we have adequate quality assurance instruments and internal processes to genuinely track and ensure the quality of evaluations we conduct? I've found quality frameworks useful, though admittedly, there's still considerable room for improvement in making these more objective and systematic.

      Second, even when our evidence is solid, how effectively are we translating and communicating our findings to decision-makers? Are our reports and presentations truly accessible and relevant to how decisions are actually made within organizations, or are we producing technically sound but practically unusable outputs?

      These questions aren't meant to shift responsibility away from the systemic barriers we've identified, but rather to encourage a more reflective approach to our own role in the utilization challenge.

      Looking forward to your perspectives on this.