Cristian Maneiro

Uruguay

Member since 16/09/2022

UNWOMEN, Plan Eval

Evaluation Consultant
Evaluation specialist with over ten years of experience, both conducting and commissioning evaluations for UN agencies and international organizations. Regional work experience in Latin American and African countries. MA in Sociology and a postgraduate diploma in Public Policy and Evaluation, along with several short courses in development economics, research methods, and software. Spanish native speaker, fluent in English and Portuguese, with a working knowledge of French.

My contributions

    • Cristian Maneiro

      Uruguay

      Evaluation Consultant

      UNWOMEN, Plan Eval

      Posted on 24/10/2025

      Thanks, Silvio, for raising this important issue.

      For me, communicating effectively means accepting that there’s no one-size-fits-all approach. The traditional evaluation report still has its place, especially for accountability and documentation purposes—it’s necessary and should remain part of the package. But to really reach different audiences, we need to go beyond that. Short, tailored products like one-pagers, infographics, slide decks, or even social media content can make a big difference. Depending on the audience, this might mean a Twitter thread, an Instagram carousel, or even a short TikTok reel summarizing key messages.

      AI tools have also made this much easier. Platforms like NotebookLM now allow you to create podcasts and other multimedia products from your source documents, often at no cost. UN Women’s Evaluation Unpacked series is a great example of how evaluation findings can be turned into engaging, accessible content. I think there’s a lot of untapped potential in these newer formats to make evaluation results more relatable and widely shared.

      One of the biggest challenges is that communication and dissemination often aren’t built into the evaluation process from the start. They’re usually treated as an afterthought—something to do at the end if there’s time or budget left. As a result, dissemination either happens in a very limited way or doesn’t happen at all.

      Ideally, communication should be part of the planning and resourced properly, just like data collection or analysis. It should also be seen as something that continues beyond the final report—helping keep the findings alive and relevant. I think clients and institutions could give more importance to this, treating communication as a core part of learning and follow-up, not just the “last step” of an evaluation.

    • Cristian Maneiro

      Uruguay

      Evaluation Consultant

      UNWOMEN, Plan Eval

      Posted on 09/05/2024

      Hello Colleagues,

      Greetings from Uruguay!

      Thank you, Ibtissem, for bringing up this intriguing topic. Having experienced both sides of the evaluation process (commissioning evaluations for WFP and conducting them as an independent consultant for UNICEF and UNFPA), I completely agree that the Evaluation Manager plays a pivotal role and bears significant responsibility for ensuring the quality of the evaluation results, which ultimately determines their usefulness.

      Building on what other colleagues have already mentioned, I'd like to offer a couple of additional points that haven't been raised yet about the EM's support structure and role in the evaluation:

      Ideally, the EM shouldn't shoulder the entire burden alone. It's advantageous for them to be supported by at least one Evaluation Analyst. This team composition mimics the structure of an external evaluation team and facilitates smoother communication and coordination. Evaluation Analysts can handle bilateral meetings with data analysts or other external evaluation team members, allowing the EM to focus on overseeing the calendar, meeting deadlines, and making high-level decisions in consultation with the team leader.

      Furthermore, it's important to acknowledge that when discussing evaluation independence, we often assume we're referring to external evaluations. However, certain evaluation approaches (e.g., Developmental Evaluation) emphasize a more formative focus. In these cases, the Evaluation Manager's involvement as an integral part of the program being evaluated is essential. This approach fosters greater ownership and promotes internal learning within the organization.

      Thanks and best regards,

      Cristian


    • Cristian Maneiro

      Uruguay

      Cristian Maneiro

      Evaluation Consultant

      UNWOMEN, Plan Eval

      Posted on 19/02/2024

      Greetings, colleagues,

      Thank you, Muriel, for bringing up this topic, and I appreciate all the contributors. I believe that AI holds real promise for evaluators, and it's crucial for us to be aware of it. Personally, the prospect of conducting fast, interactive quantitative analysis without expertise in code-based software (e.g., R or Python) would be a game-changer for professionals like myself with backgrounds in the human sciences.
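
      To give a sense of the kind of output I mean, here is a minimal sketch of the descriptive analysis an AI assistant could generate on request; the file name and column names are hypothetical:

      ```python
      import pandas as pd

      # Load survey responses (file and column names are hypothetical)
      df = pd.read_csv("survey_responses.csv")

      # Descriptive overview of all numeric variables
      print(df.describe())

      # Satisfaction ratings by region, shown as row percentages
      crosstab = pd.crosstab(df["region"], df["satisfaction"], normalize="index")
      print(crosstab.round(2))
      ```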

      Additionally, the capability of summarizing extensive raw texts, such as interview or focus group discussion transcripts, and facilitating accurate analysis of key points has the potential to save a significant amount of time. However, it's essential to highlight that the evaluator's experience, prior knowledge of the field, insights from stakeholders, and a sense of the evaluation's purpose will continue to be crucial and valued.
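
      As a rough sketch of how such summarization could be scripted (assuming the OpenAI Python SDK with an API key in the environment; the model name, file name, and prompt are illustrative, and any real use would need to safeguard informant confidentiality):

      ```python
      from openai import OpenAI

      # Assumes the OPENAI_API_KEY environment variable is set
      client = OpenAI()

      # Read a transcript from a local file (name is illustrative)
      with open("interview_transcript.txt", encoding="utf-8") as f:
          transcript = f.read()

      # Ask the model for a structured summary of key themes
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model name
          messages=[
              {"role": "system",
               "content": "You summarize evaluation interviews into key themes."},
              {"role": "user",
               "content": f"Summarize the main themes and notable quotes:\n\n{transcript}"},
          ],
      )
      print(response.choices[0].message.content)
      ```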

      Moreover, ethical dilemmas and decisions on how to present results won't be solved by AI, no matter how powerful it becomes.

      I would love to see examples of AI used in both quantitative and qualitative approaches.

    • Cristian Maneiro

      Uruguay

      Evaluation Consultant

      UNWOMEN, Plan Eval

      Posted on 22/09/2023

      Dear Colleagues:

      Greetings from Uruguay!

      I believe that the discussion brought up by Jean is very relevant. Mixed methods are undoubtedly a powerful strategy for addressing an evaluation object from different angles, and they are almost standard practice in most evaluation Terms of Reference (ToRs) seen today, whether for UN agencies or other organizations.

      However, I agree that the term sometimes becomes a cliché and is used without considering whether a mixed methods strategy is genuinely the most appropriate. It is assumed that different techniques (typically Key Informant Interviews and surveys) will provide complementary information, but commissioners often lack a clear idea of how this information will be integrated and triangulated. In my view, successful cases occur when the integration process is well defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions, or selecting cases for in-depth interviews based on a survey).

      Furthermore, I understand that with current technological developments, mixed methods have new potential. It's no longer just the typical combination of Key Informant Interviews and Focus Group Discussions with surveys; it can now include big data analysis using machine learning, sentiment analysis, and more.
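
      To make the sentiment analysis point concrete, here is a minimal sketch using NLTK's VADER lexicon (assuming English-language open-ended survey responses; the sample texts are invented):

      ```python
      import nltk
      from nltk.sentiment import SentimentIntensityAnalyzer

      # One-time download of the VADER lexicon
      nltk.download("vader_lexicon", quiet=True)

      analyzer = SentimentIntensityAnalyzer()

      # Invented open-ended survey responses
      responses = [
          "The training was excellent and very practical.",
          "Sessions started late and the materials were confusing.",
      ]

      for text in responses:
          scores = analyzer.polarity_scores(text)
          # 'compound' ranges from -1 (most negative) to +1 (most positive)
          print(f"{scores['compound']:+.2f}  {text}")
      ```

      Rule-based scoring like this is crude, of course, but it shows how open-ended responses can feed into the quantitative strand of a mixed methods design.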