I have been doing evaluation since 1996 and have conducted evaluations in the CIS and Eastern Europe region.
Posted on 10/03/2020
I would like to share my experience with applying a “change maps” participatory technique within the framework of the evaluation of an economic empowerment project that worked with female farmers in Kyrgyzstan, Central Asia. The project provided female farmers with training on growing and preserving vegetables and supported them in establishing self-help groups and village associations to pool resources, e.g. for the procurement of quality seeds and cattle. In some villages the project also introduced instruments of the Gender Action Learning System (GALS). The evaluation was conducted at the end of the first phase of the project and was intended to inform preparation of the second phase.
“Change maps” is a participatory technique where small groups of project participants are offered blank maps (e.g. flipchart sheets) divided into several sections – one for each area where the project was or could be expected to create change – and asked to fill them in based on their actual project experiences. In my case the potential change areas were identified in consultation with the project team. For the second phase the team wanted to align the project with the Women’s Empowerment in Agriculture Index (WEAI), so we agreed to focus the discussion of the changes induced by the project on the WEAI domains. As a result, our change maps included the following sectors:
• Do you see any changes in how decisions about agricultural production are made?
• Do you see any changes in access to and decision-making power over productive resources?
• Do you see any changes in control over use of income?
• Do you see any changes in leadership in the community?
• Do you see any changes in time use?
• Do you see any other changes?
During meetings in the villages we had up to 45 women involved in the project. Breaking them into small groups was easy – each woman was a member of a small self-help group, and each self-help group developed a separate map. We then gave each woman three beans and asked her to identify priority changes among those identified in her group. Next, each group shared its perspective on the key changes that emerged from the project with the other groups. In the end we asked the women to assess the “merit” of the project for them on a 10-point scale.
The lessons that we learned from applying this approach include:
• The “change map” technique allowed us to turn data collection into a semi-structured discussion among the female farmers supported by the project about what changed in their lives as a result of the project and about its worth and merit. This helped me distance the evaluation from the “control” visits the women were used to and enabled a more open conversation about their project experiences.
• The WEAI domains did not exactly match the way the female farmers perceived their daily experiences, but they addressed this challenge by reinterpreting the change sectors of the map. In the future, however, I would use change sectors based on what the project was actually doing rather than on external theoretical constructs.
• The filled change maps and the discussions around them provided the evaluation team with rich material for analysis. For example, based on the content of the maps I was able to identify more nuanced types of changes induced by the project and how common these changes were. One interesting finding was that engaging women in productive agricultural practices left them with no free time. This was seen as a positive change by the female farmers and their families but came as a negative surprise for the project team.
Sincerely,
Natalia Kosheleva
Evaluation consultant
Russian Federation
Natalia Kosheleva
Evaluation Consultant
Process Consulting Company
Posted on 18/03/2020
Dear Mustapha, thanks for raising this important topic.
In my opinion, monitoring and evaluation are complementary, and both are necessary for assessing and correcting the performance of development interventions. They may seem mutually exclusive because in most cases monitoring is fully embedded in intervention management, with monitoring specialists being part of the intervention team, while evaluation is often positioned as external and independent, and the evaluation policies adopted by many major players in the development field include serious safeguards to ensure the independence of the evaluation team.
To my knowledge, in many less developed countries there is a growing number of M&E departments in national executive agencies, which may be interpreted as a sign that monitoring and evaluation are seen as complementary. Still, at present these M&E departments reportedly focus more on monitoring than evaluation, and the evaluation they do is often limited to assessing the extent to which targets for a set of pre-selected indicators have been achieved.
I would agree that monitoring is not receiving much attention within the evaluation community, but it is positioned as an integral part of Results-Based Management (RBM) and is part of discussions within the RBM community.
I also think that both monitoring and evaluation could benefit if we talked more about the complementarity of the two practices. For example, in my experience theories of change, an instrument that emerged from evaluation practice, are most useful when they are developed during the planning phase of an intervention and serve as the basis for developing its monitoring system. And evaluations could be more useful in terms of generating lessons from intervention practice if evaluation ToRs and evaluation questions were informed by the questions that intervention teams have when looking at their monitoring data.
When it comes to SDG implementation, given the complexity of the issues that countries and development partners have to tackle to achieve the SDGs – and hence the need for innovative approaches and constant adaptation of interventions – I think we should be talking about further integration of monitoring and evaluation, so that an intervention team can commission an evaluation when its monitoring data indicate that the intervention may be getting off track, and use the results of this evaluation to decide whether any adaptation is necessary.
Natalia Kosheleva
Evaluation consultant