Alena Lappo Voronetskaya

Italy

Alena Lappo Voronetskaya Member since 30/03/2020

Evaluation Officer (IAEA); Board Member (European Evaluation Society)

Alena Lappo Voronetskaya is an evaluation specialist with over eight years of experience in evaluation and research, most recently for the OECD, the World Bank IEG, FAO's Independent Office of Evaluation and IFAD. Alena is a Board Member of the European Evaluation Society (EES) and of the Board of Trustees of the International Organization for Cooperation in Evaluation (IOCE). In her current role, Alena is responsible for assessing the quality, usage and impact of OECD products by Member States, and for the reconceptualization of the OECD's Value for Money Initiative.

My contributions

    • Alena Lappo Voronetskaya

      Italy

      Evaluation Officer (IAEA); Board Member (European Evaluation Society)

      Published on 06/03/2023

      Hi Silvia! 

      Thank you for the blog. Your review of ChatGPT, highlighting the strengths and weaknesses of this software, is greatly insightful. We shared the blog through the European Evaluation Society Newsletter today so that more members of the evaluation community can participate in the discussion on technology and evaluation that you raised through this blog.

      Regarding the question you posed on whether AI in general is smart enough to make our evaluation work easier, my answer is yes, provided that we as evaluation practitioners understand its applications and limitations.

      Firstly, good evaluation always starts with the right evaluation questions and a methodology designed to answer them that takes into account contextual factors, limitations, sensitivities, etc. I understand that your coffee example suggests the value should have been placed on "time saving" rather than on fewer "chores". Methodologies such as Social Return on Investment (SROI) could go further in this example and place value, even financial value, on the social activity of drinking coffee with friends beyond "time saving", if this is relevant to answering the evaluation questions. This is to say that "extra free time" and an enriched "social life" should be of interest to the evaluation before deciding whether to use AI technology for data collection and analysis, and before searching for the most appropriate software package.

      Secondly, it is important to understand the limitations of applying AI and innovative technologies in our evaluation practice. To provide more insights into the implications of new and emerging technologies in evaluation, we discuss these subjects in the EvalEdge Podcast I co-host with my EES colleagues. The first episodes of the podcast focus on the limitations of big data and ways to overcome them.

      Thirdly, the iterative interaction between evaluator and technology is important. As a good practice, data collected through innovative data collection tools should be triangulated with data collected through other sources. Machine learning algorithms applied, for example, to text analytics, as discussed in one of the EES webinars on "Emerging Data Landscapes in M&E", need to be "trained" by humans to code documents and to recognise the desired patterns.

      Best regards,

      Alena

    • Alena Lappo Voronetskaya

      Italy

      Evaluation Officer (IAEA); Board Member (European Evaluation Society)

      Published on 24/04/2020

      Dear members,

      Following this timely discussion thread on how to adapt our evaluations in the time of Covid-19, I am sharing a blog by World Bank colleagues and experts on evaluation methods. The blog provides a decision tree, "Making Choices about Evaluation Design in times of COVID-19", and some practical examples.

      Alena

    • Alena Lappo Voronetskaya

      Italy

      Evaluation Officer (IAEA); Board Member (European Evaluation Society)

      Published on 30/03/2020

      Dear colleagues,

      Many thanks to Nick for starting this very relevant topic and discussion.

      In particular, I would like to underline the ethical responsibility we have as evaluators, mentioned by Carlos. Some countries may not yet have significant restrictions in place. Indeed, it might be legal for a local team to conduct focus groups and face-to-face interviews. However, it is up to the evaluator to decide whether it is ethical. This could mean that even local consultants would have to carry out data collection through online interaction tools. This recently happened to a colleague of mine who was managing an evaluation in Indonesia and Brazil, where the team decided to avoid face-to-face data collection by consultants in both countries, as they considered it unethical.

      Since much remains unknown about Covid-19, any decision we make regarding our current and future evaluations will be based on imperfect data. Science presents different scenarios, but some of them suggest that the health situation could take up to 1.5 years to stabilize. This health emergency could be a good opportunity to learn how to design a methodology for credible remote evaluation.

      On 1 April, our colleagues at USAID are offering a free webinar, "Discussion on Challenges and Strategies for M&E in the Time of COVID-19". Interested members can register here:
      https://www.eventbrite.com/e/discussion-on-challenges-and-strategies-for-me-in-the-time-of-covid-19-registration-100817255124

      Regards,

      Alena