In modern government, measurement is all the rage. The performance of our public services is measured numerically, schools and universities judged by their positions on a league table, hospitals rated successful or failing based on the time taken to respond to emergencies. In an era of reduced budgets, new policy ideas need to show results measured by the amount of money saved or percentage of the population reached.
In a new book, The Tyranny of Metrics, Professor Jerry Muller of the Catholic University of America has written about the damage that measuring performance against numerical targets can cause to organisations. He argues that, rather than increasing accountability and improving outcomes, tracking numbers that don’t fit reality can lead public servants to chase arbitrary performance metrics.
What is the problem with people making policy decisions based solely on testing and evidence?
That which can be rigorously measured is often not the most important element of an organisation.
It’s fairly easy to measure inputs in quantitative terms, and you can measure certain outputs, but much of what actually makes an organisation function well lies in qualities of the personnel that are essential yet hard to capture: cooperation between fellow staff, mentoring, initiative and innovation. Those things are almost by definition impossible to measure in quantitative terms.
What kind of problems can this lead to?
If those quantitative measures are developed by the people within the organisation, and used by the practitioners themselves to see what sort of things seem to be working and what sort of things don’t, that’s fine: it’s legitimate and useful.
What more typically happens in governmental situations is a very different use of measurements. That’s where some civil servant who is very far away from the actual practice comes up with a set of criteria to measure. That’s the first problem.
Secondly, if measurement is attached to rewards and punishments, that’s a problem.
Thirdly, if the results are to be made public, on a website or a league table in the interests of so-called transparency or accountability, that also creates a problem.
What effect do these have on government?
When you attach the measures to reward or punishment, then you invite two kinds of behaviour, both of which are often counterproductive for the organisation. The first kind of behaviour is goal diversion, when people within the organisation will focus their attention and efforts on the things that are getting measured, often at the expense of other essential functions.
The second danger is that they will engage in one or another form of gaming, that is to say that they will manipulate the measures to meet some sort of metric target.
“I’m calling for a greater reliance on professionalism and expertise”
To take a famous example, when the National Health Service (NHS) in Great Britain decided that a major problem was that people were waiting too long to be admitted to emergency wards, it declared that hospitals would be evaluated on whether patients were admitted within four hours.
Some hospitals responded by having ambulances carrying patients circle the hospital until those patients could be admitted within the four-hour window. Meanwhile, people were waiting at home for ambulances to pick them up while those ambulances circled the hospital to help it meet this metric. There are infinite varieties of gaming of that sort.
The third thing that happens is that professionals within these organisations become demoralised if they feel that the things that are being measured aren’t the most relevant.
Last but not least, gathering and analysing this information takes up a lot of time and effort, and often that’s the time of the practitioners: doctors and nurses who have to enter the information. The civil servants who want to institute these measures rarely take that diversion of time into account.
Where has this trend towards quantitative evaluation come from?
A lot of this has to do with the rise of a cultural trend of managerialism. That is the notion that management is not based on deep knowledge of a particular field, but rather that every organisation is fundamentally the same, and a manager is a person who has a set of standardised tools they can apply anywhere. Those tools turn out to be of this metric sort, not least because applying them in a mechanical way requires less knowledge of the specific organisation.
Why are we seeing so much of this now?
Thanks to new forms of IT, there’s a great groundswell of publicity/propaganda on the utility of data. Sometimes data really are useful, but it’s often oversold, and the question of what sort of data is relevant, and in which ways it’s relevant, often gets ignored.
“Metrics can be profitably used if they are used to inform judgement, as opposed to replacing judgement”
There are lots of consultancies that are in the business of selling their expertise and data without knowing what is relevant to a particular organisation or sector: it’s being oversold for self-interested purposes.
What’s the solution?
Metrics can be profitably used if they are used to inform judgement, as opposed to replacing judgement. I’m calling for a greater reliance on professionalism and expertise. That is made possible in part by allowing decisions to be made at a more local level: local not just in a geographic sense but within the organisations themselves, at a lower level, by people who actually know what’s going on, as opposed to trying to orchestrate everything from the centre.
It’s in part, I think, necessary to have a recognition and a reckoning of the weaknesses of the system that’s now become dominant.
Is this reckoning happening?
I haven’t seen it yet – one of the reasons that I wrote the book frankly was to provide people with a short and clear presentation of the problems and dysfunctions that are caused by this kind of metric fixation.
I think it’s something that many practitioners in many fields have a sense of, but they’re sometimes not able to articulate it to themselves, or they’re loath to go public with it, as doing so could bring negative consequences for them.