
If No One Reads It, Did We Communicate? Rethinking Evaluation Reporting

Posted on 04/12/2025 by Sibongile Sithole

The Current Challenge

Although communication is always discussed as a key soft skill in any organisation, the jarring finding that many UN reports go unread reveals that this skill is often lacking, or simply ignored.

According to the UN Secretary-General, in 2024 alone the UN Secretariat produced approximately 1 100 reports; however, nearly 65% of these were downloaded fewer than 2 000 times (United Nations, 2025). While 2 000 may be a significant number of downloads, depending on the context, the type of evaluation and the number of stakeholders, it is important to note that downloading does not necessarily equate to reading the report (Reuters, 2025). According to Munyayi (2025), evaluation practice increasingly suffers from a “shelf-ware dilemma,” in which reports fulfil accountability requirements but remain largely unused. While they meet donor or organisational mandates, their findings rarely translate into actionable changes or inform subsequent programme decisions.

Furthermore, reports have become significantly longer over time, with an average word count increase of around 40% since 2005, now averaging 11 300 words per report (United Nations, 2025). This suggests that the traditional reporting methods used by the UN no longer appeal to a global audience, and a similar trend is evident in many other organisations. For example, the World Bank’s Independent Evaluation Group (IEG) has been critiqued for producing evaluation reports that are overly long, outdated, and insufficiently adapted to the needs of operational teams (World Bank, 2014). This trend signals a broader challenge: the traditional, document-heavy reporting model no longer works.

Evaluation communication needs to become audience-centred and designed for how different users actually consume information.

One of the main issues with evaluation reports more broadly is that many are too long and filled with heavy jargon (Fatima, 2025). They are mostly written in language that may be understood by evaluation specialists but not by the average stakeholder, community member, or even the programme beneficiary. This raises a critical question:

Who is the report actually communicating with?

 

The Possible Solution

We’re living in a world where one annual report is no longer enough. Audiences now expect 365-reporting: short, powerful, continuous storytelling across formats. From podcasts to infographics, from LinkedIn carousels to 30-second video snippets, meaningful engagement comes from stories people remember, not statistics they forget (Jarrar, 2025). This insight was strongly reinforced during a 2025 GLocal session I attended, titled Reimagining Evaluation Reports to Create Real Impact, presented by The Social Impact Consultancy (TSIC) (the methodology can be downloaded here). The presenters highlighted an essential shift: evaluations must begin with the user, not with the evaluator. They introduced a four-stage methodology that reframes how evaluation findings should be communicated:

  1. Map (identify stakeholder use cases),
  2. Match (select suitable reporting products),
  3. Method (design context-appropriate evaluation approaches), and
  4. Make (create clear, engaging, impactful reporting products) (TSIC, 2025).

Applying this method in a recent evaluation project I was part of proved transformative. In the mapping stage, we identified our key audiences, which were youth, women, community leaders, and the elderly, and considered how each group would realistically use the findings. This step made us look beyond generic stakeholder labels and recognise their unique needs, motivations and digital behaviours. The matching phase was equally insightful, requiring us to pair each audience with a communication format that would genuinely reach them. For youth, we considered short videos, Instagram posts, and WhatsApp story summaries; for women’s groups, community dialogues supported by one-page visual summaries; for elderly participants, printed posters and community noticeboard summaries; and for programme staff, a concise learning brief accompanied by an interactive dashboard. This stage emphasised that good communication is not one-size-fits-all: different audiences require different ways of seeing and absorbing information.

In the method phase, we designed context-specific approaches for collecting and interpreting data. For example, we used story-based interviews with elderly participants, who were more comfortable sharing information conversationally. Finally, in the make phase, we decided how to translate findings into visually appealing, accessible, and practical products. The reporting products included infographics designed in Canva and a simple interactive dashboard built in Google Sheets for programme managers. Each product was created with intention, to ensure the findings were not just available but communicable to the targeted audience.

Using this approach, we saw a noticeable increase in engagement: programme staff used the dashboard during planning sessions, and youth members engaged more with programme events and workshops. This demonstrated that the communication products were not only created, but actually used.

Overall, applying TSIC’s methodology improved our approach to evaluation communication, and I encourage other evaluators to adopt it.

Reports are not meant to be just wordy documents. They are experiences, stories, and strategic tools for influence. When they are shared and communicated effectively, they create room for greater impact.