RBM Training

Greg Armstrong

Performance Measurement Framework

We have all seen solid projects that are doing good work but have serious problems communicating their results to the people supporting them or the organizations funding them. 

Even good projects can have problems with results-based monitoring and evaluation. 

  • This is not a project problem. It is a problem with RBM.  


When capable people do not use an idea that could serve them well, the problem is usually not with the people, but with the way the idea is being communicated.

Why We Need Clear Language


Suzie LeBlanc, Field Manager for the Teacher Accreditation and Certification Project, in Kabul, Afghanistan, has written a slim, elegant precis of the reasons why clear language is needed in cross-cultural training, in a memo titled Working Across Translation. What she says is applicable not just in Afghanistan, but across the world, and applies in particular to results-based management, whether it is presented directly in English, or in translation.


RBM Jargon

Results frameworks and results-based monitoring and evaluation systems, such as managing for development results, are too often developed in isolation from the realities of what happens on the ground. The terms for results, as they are presented in these RBM frameworks by a host of different donor agencies and government departments, are too often simply bureaucratic jargon.


Academic in appearance, or vague in meaning, the RBM terms are hard to interpret and implement in the real world where line agencies and their partners work.  They are confusing not just in translation, but also for native English speakers, as Mark Schacter pointed out in 2014 in a clear argument for simplifying the language of planning, results and reporting.


Project implementing agencies that just want to get on with the job get snowed under by a wide range of RBM terminology, used in different ways by different agencies, as they struggle to demonstrate development effectiveness.

Some examples we have all come across, with different agencies using their agency-specific jargon:


  • Resources
  • Inputs
  • Activities
  • Outputs
  • Outcomes (immediate, intermediate and ultimate)
  • Impact
  • Purpose
  • Objectives
  • Risk
  • Risk Management
  • Assumptions
  • Indicators (for activities, results and risk)
  • Targets
  • Measures
  • Baseline data
  • Results chains
  • Logic Model
  • Logical Framework
  • Results Framework
  • Performance Measurement Framework
  • Performance Monitoring Framework
  • Monitoring and evaluation system
  • Outcome evaluations
  • Development effectiveness reviews
  • Impact assessment
  • Theory of Change



Monitors, analysts and evaluators trying to use Results-Based Management do not find this terminology any easier than do the project managers or implementers.

Despite some efforts to come up with common RBM definitions, the most commonly used terms in RBM -- "Outputs", "Outcomes", and "Impacts" -- in practice mean different things for different agencies, in different cultures, and most certainly, in different languages.

The term "Results" is often used synonymously with "goals" and "objectives" and therefore expressed in an ambitious, vague, or long-term manner.

Project planners and implementers, busy with the practical issues of delivering complex services, simply do not have time -- or often the inclination -- to work their way through unclear, confusing and, in the case of an organization with multiple funding agencies, often conflicting RBM terminology. Although several donor agencies are now trying to improve their RBM frameworks, the procedures and terms are becoming longer and more complex, rather than shorter, simpler and more practical.


Donors are confused about RBM terms too

Implementing agencies such as NGOs or national government agencies are not alone in being puzzled or intimidated by the terms used in results-based management, monitoring or evaluation. It is common to see staff of both bilateral and multilateral donor agencies who are confused by what the jargon means.

One donor agency training session on RBM, in 2010, for example, included a PowerPoint presentation with 153 slides, and almost a thousand lines of dense text. From what we know of how adults learn, this is not an effective means of encouraging understanding or motivating people to use a new idea.


The RBM specialists and trainers within these organizations understand what the terms mean, but that is their job.  Many of the project officers and field staff who have to manage dozens of projects, on the other hand, are often just as confused by RBM as are the implementing agencies.  This is true among multilateral and bilateral donors.

At least one former Minister responsible for development aid, the UK's Alan Duncan, called in 2012 for less results jargon and more clear talk at DFID. Even if this worked at DFID, a policy of clear language has certainly not become visible in other agencies.


And if the aid agency staff themselves do not apply the terms consistently, what hope is there for the implementing agencies and their partners, national government agencies or NGOs, who have to deal with multiple different sets of aid agency terms describing results?

Trapped by Jargon


Given the problems, it is no wonder people on the implementation end of international development programming ask whether results-based management is worth the trouble it often causes them.

In effect, people who are actually interested in explaining their results get trapped by the jargon.

But it is reasonably easy to deconstruct RBM, to break down the logjam of dense terminology into its components, and to make the tasks of planning, managing and reporting by results simple -- and even enjoyable -- for the people doing it.



The Problem with RBM