REFLECTION ON SELF-EVALUATION AND ADMINISTRATIVE COMMUNICATION
Patrick Dumazert, August 2025
In the document “The impact of new technologies on internal communication methods in companies”, one sentence caught my attention: “Traditional methods such as the hierarchical network guaranteed the transmission of directives from management to the lower levels…” I would like to take this opportunity to raise a key issue for results-based public management.
At the outset, let us note that this statement, centered on companies, can easily be extended to public management. I do not believe this holds for every subject, far from it, but in this case the general principles of administrative communication are indeed valid for both worlds, public and private.
That said, this dominant form of administrative communication, vertical and top-down, which prevails in public administration, especially in countries with an authoritarian political culture, must be complemented by an upward form so that evaluation can play the role expected of it. This role is not only one of ex post control but also one of learning, feedback, and the generation of better practices.
It is not enough to generate evidence at the level where the facts occur. It is already no small matter to select facts and convert them into compiled and organized data, and even more so to “make them speak,” as I like to say: to transform them into information that responds to judgment criteria (and indicators) and, ultimately, to evaluation questions, themselves organized according to evaluation criteria (the classic DAC criteria, among others). But that alone is not sufficient.
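For readers who think in code, the chain just described might be sketched as follows. This is a purely illustrative Python fragment, and every name in it (Fact, Indicator, EvaluationQuestion) is a hypothetical label of my own, not an established schema.

```python
from dataclasses import dataclass, field

# Purely illustrative sketch of the chain described above:
# selected facts -> compiled data -> information answering
# judgment criteria -> evaluation questions (DAC criteria).

@dataclass
class Fact:
    description: str   # a raw observation selected in the field
    value: float       # its compiled, organized datum

@dataclass
class Indicator:
    name: str          # serves a judgment criterion
    facts: list[Fact] = field(default_factory=list)

    def measure(self) -> float:
        # "Making the facts speak": aggregate data into information.
        if not self.facts:
            return 0.0
        return sum(f.value for f in self.facts) / len(self.facts)

@dataclass
class EvaluationQuestion:
    text: str            # the question the information must ultimately answer
    dac_criterion: str   # e.g. "effectiveness" or "relevance"
    indicators: list[Indicator] = field(default_factory=list)
```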
It is then necessary to communicate this information, analyzed and reasoned, upwards to the various levels of management and decision-making. And since there cannot be as much space at the top as at the bottom (otherwise it would no longer be a pyramid; no one has ever seen an army with as many generals as soldiers), this means that upward administrative communication must be organized, shaped to meet the expectations of different levels of management, and adapted to the institutional culture particular to each entity or even each state.
Some cultures are highly centralist: the highest-level authorities want to know every Monday what happened in each municipality the previous week. Other forms of government are much more decentralized.
Beyond this undoubtedly essential aspect of the specific institutional cultures of each context, there is also a crucial dimension concerning the nature of evaluation objects. These, as we know, are organized (or at least can be organized) within the conceptual framework widely accepted by the evaluation community: the theory of change. With all the variations one might wish, this theory posits that actions produce outputs, which generate outcomes, which in turn lead to impacts.
As a reminder, though this is not the main subject here: outputs are substantivized verbs (the act of cutting apples produces cut apples), while outcomes involve changes in practices by actors (consumers may prefer to buy pre-cut apples—or not, which is where one judges the relevance of the action). As for impacts, for which I use the accepted term “contribution,” these are transformational changes. In my trivial example, one might imagine that, over time, children end up believing that apples already grow on trees in pieces.
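For the same readers, the three levels of the apple example can be restated as a toy fragment; again, the names and strings are illustrative only and carry no methodological authority.

```python
from enum import Enum

# A toy restatement of the three result levels, reusing the apple example.
class ResultLevel(Enum):
    OUTPUT = "the act of cutting apples produces cut apples"
    OUTCOME = "consumers prefer (or not) to buy pre-cut apples"
    IMPACT = "over time, children believe apples grow on trees in pieces"

# The logical chain of the theory of change: each level feeds the next.
for level in ResultLevel:
    print(f"{level.name}: {level.value}")
```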
This reminder makes it clear that evaluation objects, in particular the result indicators at the three major levels of the logical chain, evolve over time, each according to its own temporality. Therefore, the evaluation system must also have a dimension adequate to this temporality. For a long time, external evaluations of results were conducted at the end of projects, not only to provide accountability at the close of operations but also to “allow time” for impacts to materialize.
The problem is that these external, ex post evaluations, which primarily sought impacts through “quantitative” means using quasi-experimental methods, ceased to be useful for management, since the operations were already concluded by the time the reports arrived. Their fate was therefore (and probably often still is) to become archived documents, virtual repositories having replaced shelves without changing their function. Those of us who dream of doing science, myself included, would even like to conduct post-project evaluations a few years later, but no one pays for that, at least not in the development world.
Hence my proposal (formulated more than twenty years ago), which mainly consists of this: to evaluate during implementation the generation of outcomes, thereby reconciling the temporal and logical dimensions, by focusing on the essential link at the center of the theory of change, the outcomes, while also ensuring that the reality transformed by the action is interrogated, beyond the simple monitoring of “outputs.”
Based on these analyses carried out at different moments of implementation, evaluation can feed upward administrative communication mechanisms with generative analyses, that is, analyses informed by those conducted at the previous evaluation moment. It is not enough to state at any given moment that a particular indicator has reached a certain value; one must also explain how this current value compares with that of the previous evaluation moment, accompanied by the interpretation it deserves, without neglecting the usual comparisons with baseline values and with those expected at the end.
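To fix ideas, such a comparison might be sketched as follows; the function and its wording are hypothetical illustrations of the principle, not a prescribed method.

```python
# A minimal sketch of a "generative" comparison for a single indicator:
# the current value is situated against the previous evaluation moment,
# the baseline, and the value expected at the end.

def compare_indicator(current: float, previous: float,
                      baseline: float, target: float) -> str:
    delta = current - previous                 # movement since the last moment
    progress = ((current - baseline) / (target - baseline)
                if target != baseline else 0.0)
    trend = "improved" if delta > 0 else "declined or stalled"
    return (f"Since the previous moment the indicator has {trend} "
            f"({delta:+.1f}); it now stands at {progress:.0%} of the "
            f"distance from baseline to the expected end value.")

# Example: baseline 20, end target 80, previous moment 35, current 50.
print(compare_indicator(current=50, previous=35, baseline=20, target=80))
```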
Postscript. The proposal also included an additional element: one of the upward communication mechanisms in question could, in certain cases, be the “situation room,” an idea inspired by health management and crisis management in general. With virtual technologies and even augmented reality, however, the physical nature of such a room becomes obsolete, since being “on site” and sitting at home in front of a screen ultimately amount to the same thing.