Posted on 05/05/2023
Dear members,
This is an interesting discussion!
Reporting is part of communication, and a report is one communication tool. Ideally, every project, program, or intervention should have a clear communication plan informed by a stakeholder analysis (one that clearly identifies roles, influence, and management strategies). A specific communication plan for evaluations can also be developed. A communication plan typically has activity lines, budget lines, and responsibilities, and should be part of the overall project, program, or intervention budget. It may not be practical for the evaluator to assume all the responsibilities in the evaluation communication plan, but they can take up some, particularly the primary ones. Communication can be a long-haul undertaking, especially when it targets policy influence or behaviour change, and, as we all know, evaluators are normally constrained by time. Secondary evaluation communication can be handled by the evaluation managers and commissioners with the technical support of communication partners.
My take.
Gordon
Kenya
Gordon Wanzare
Monitoring, Evaluation, & Learning Expert
Posted on 09/10/2023
Dear Jean and colleagues,
Thanks for clarifying that the discussion is not limited to programs but also includes projects and any humanitarian or development interventions. This is a very informative and rich discussion; I am learning a lot in the process!
When I say "when something is too complicated or complex, simplicity is the best strategy" in the context of evaluations, I mean we do not need to use an array of methodologies and data sources for an evaluation to be complexity-aware. We can keep the data, both quantitative and qualitative, lean and focused on the evaluation objectives and questions. For example, using complexity-aware evaluation approaches such as Outcome Harvesting (OH), Process Tracing, Contribution Analysis, or Social Network Analysis (SNA) does not necessarily mean that several quantitative and qualitative data collection methods have to be applied. In OH, you can use document review and key informant interviews (KIIs) to develop outcome descriptors, then do a survey and KIIs during substantiation. I have used SNA and KIIs to evaluate change in relationships among actors in a market system, and SNA followed by in-depth interviews in a social impact study of a rural youth entrepreneurship development program. In essence, you can keep the data collection methods to three (the three-legged stool, or triangle, concept) and still achieve your evaluation objectives with lean and sharp data. A lot has been written on overcoming complexity with simplicity in different spheres of life: management, leadership, etc.
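To make the SNA idea concrete, here is a minimal sketch of one common analysis: comparing network density (the share of possible ties among actors that actually exist) between a baseline and an endline network mapped from interviews. The actor names and ties below are purely illustrative assumptions, not data from any actual evaluation.

```python
# Illustrative sketch only: actors and ties are invented for the example.
def density(edges):
    """Share of possible ties that exist (undirected graph, no self-loops)."""
    nodes = {n for edge in edges for n in edge}
    possible = len(nodes) * (len(nodes) - 1) / 2
    unique_ties = set(map(frozenset, edges))  # ignore direction and duplicates
    return len(unique_ties) / possible if possible else 0.0

# Hypothetical market-system actors mapped from baseline and endline KIIs
baseline = [("coop", "trader"), ("trader", "processor")]
endline = baseline + [("coop", "processor"), ("processor", "retailer"),
                      ("coop", "retailer")]

print(round(density(baseline), 2))  # 0.67 — sparse ties at baseline
print(round(density(endline), 2))   # 0.83 — denser ties at endline
```

A rising density score would be one quantitative signal that relationships among market actors have strengthened, which the follow-up KIIs can then explain and substantiate.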
On the issue of who decides on the methodology, the evaluator or the program team: in my experience, a MEL plan is very clear on the measurement and evaluation methods, and MEL plans are developed by the program team. Evaluators are asked to propose an evaluation methodology in their technical proposals to serve two purposes: to assess their technical competence and to identify the best fit with the evaluation plan. Typically, the evaluator and program team will consultatively agree on the best-fit methodology during the inception phase of the evaluation, and this forms part of the inception report, which is normally signed off by the program team.
My thoughts.
Gordon
Kenya
Gordon Wanzare
Monitoring, Evaluation, & Learning Expert
Posted on 29/09/2023
Greetings to all!
Great discussion question from Jean and very insightful contributions!
First, I think Jean's question is very specific: how mixed methods are used not just in evaluations but in PROGRAM evaluations, right? We know that a program consists of two or more projects, i.e. a collection of projects. Therefore, programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know). The Oxford English Dictionary tells us that a method is a particular procedure for accomplishing or approaching something; tools are used in procedures. I am from the school of thought that believes that when something is too complicated or complex, simplicity is the best strategy!
Depending on the context, program design, program evaluation plan, and evaluation objectives and questions, the evaluator and program team can agree on the method(s) that best achieve the evaluation objectives and comprehensively answer the evaluation questions. I like what happens in the medical field: in hospitals, except in some emergency situations, a patient will go through triage, clinical assessment and history-taking by the doctor, laboratory examination, radiology, etc., and the doctor then triangulates these information sources to arrive at a diagnosis, prognosis, and treatment/management plan. Based on circumstances and resources, judgements are made as to whether all these information sources are essential.
Mixed methods are great, but the extent and sequencing of their use should be based on program and evaluation circumstances; otherwise, instead of answering the evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once, i.e. open-ended surveys, KIIs (key informant interviews), community reflection meetings, observations, document reviews, etc., in addition to quantitative methods, may not be that smart.
In any case, perhaps the individual projects within the program have already been comprehensively evaluated and their contribution to program goals documented, and something simple, like a review, is all that is necessary at the program level.
When complicated or complex, keep it simple. Lean data.
My thoughts.
Thanks.
Gordon