Why Publishing Results Matters

First published on the BEAM Exchange, 20 May 2015

“Presenting results in development invariably is a bit scary because sadly we know little about what constitutes a ‘good’ result.”

Application of inadequate monitoring and evaluation techniques has resulted in a lack of reliable data. Sometimes we see success claims based on little evidence. And when figures do become available, they can be used in ways that may not be entirely appropriate – as when the Katalyst programme in Bangladesh published its results in 2005 and the figures were informally used to set targets for programmes working in very different economies in Africa, effectively taking an apple to determine the shape and colour of an orange.

The Market Development Facility (MDF) is uniquely positioned to contribute to a better understanding of results because the facility will soon run market development interventions in five very different economies. The same organisation, applying the same mix of approaches – albeit in a tailor-made manner – will yield very different results in these countries. Years of agricultural work in Fiji may generate less outreach than a single intervention in Pakistan. Despite both Timor-Leste and Sri Lanka experiencing post-conflict booms, investment and employment creation in Timor-Leste may be a fraction of what is feasible in Sri Lanka. The lowest-ranking countries on the ease of doing business index, Timor-Leste and Papua New Guinea, are growing the fastest and may offer more opportunities than Fiji, the highest-ranking country in which MDF is active.

It is not despite the relatively small impact that can be achieved in countries such as Fiji and Timor-Leste that MDF decided to be very open about its results, but because of it. We had to pre-empt the risk that one day someone would make a quick comparison of results between MDF in Fiji and PrOpCom in Nigeria, or Katalyst in Bangladesh, and draw the wrong conclusions.

Not everyone who compares these programmes will take into account that Bangladesh is far more populous than Fiji, that its economy grows twice as fast, and that its businesses are typically bigger and more established. All this matters for what a programme can achieve; in context, an effective outreach of 15,000 households in Fiji may be as significant as 3 million in Bangladesh.

Thus, very early on in MDF we set up a results measurement system able to capture results in near real-time, in line with the DCED Standard. We also defined headline indicators along the results pathway – from private sector investment leveraged, to the value of additional market transactions, to additional jobs and income for the poor – so that in time we could communicate progression along this pathway. We wrote a technical note explaining which factors can influence the scale of results in a country, and developed results estimates per country and per headline indicator to manage expectations. And we started to tell our story, including the delays and struggles that each programme faces.

In December 2014, the results measurement systems in MDF’s two most established countries, Fiji and Timor-Leste, were audited. In February this year we published our second Annual Aggregation of Results, which presents the results and projections for our ‘partnership portfolio’ against six headline indicators. Publishing our results, most of which were intermediate, was nerve-racking. But the reaction has been positive, or at least ‘patient’ – our story was understood and, for the time being, accepted.

Routine results

With the right systems in place and, importantly, a credible story to tell, presenting results can be routine for any large organisation that needs or wants to be accountable for the trust and resources invested in it. Results in Fiji and Timor-Leste did not come easily or quickly, but because everyone knew where we stood with implementation and that results were coming, this was never a problem.

Is results measurement difficult, time-consuming, and resource-intensive? No – it requires a programme to have its results measurement system in place and to make an annual effort to aggregate and analyse (both of which should exist anyway). But it also requires openness: the courage to admit in public what worked and what did not – something our industry is not very good at. Numbers without a compelling and credible story soon become meaningless.

Many factors influence what can be achieved in market systems development, and there is much to be learned. Telling a credible story, admitting to what worked and what did not, and being fairly judged for it all get easier if more programmes do the same. I hope that more programmes will proceed to audit their results measurement systems and publish results on an annual basis – with a good story!