Australia’s Labor Party has just pledged to transform policymaking by creating an Evaluator General if it wins the election. In his announcement, shadow assistant treasurer Andrew Leigh credited the work of economist Dr Nicholas Gruen.
Here, Dr Gruen, CEO of Lateral Economics and former Chair of the Australian Centre for Social Innovation, considers systemic obstacles to government experimentation, and argues that introducing formal accountability into evidence and evaluation should be part of the solution.
Everyone is entitled to his own opinion, but not his own facts.
— Daniel Patrick Moynihan
The first principle is that you must not fool yourself, and you are the easiest person to fool.
— Richard Feynman
Through my work in and around government for over three decades, I’ve become increasingly wary of the fads that run through the system.
Remember “reinventing government”, in which government would work more like the private sector? Today it’s design thinking, putting the intended beneficiaries of programs — what we now call “users” — at the centre of program design and delivery. We’ve been talking like this for at least two decades — since the “third way” came and went, after asserting that “one size fits all” would no longer do — except that it still does.
And while promising demonstration programs proliferate, proving that there are some real insights and prospects in the fad, we’re still miles from making any big systemic improvements.
Why? Because with the status and the big career rewards going to the strategisers of high policy at the top, we look to them for leadership. Given this, they demonstrate that leadership not by learning from the field, but by embracing the fads doing the rounds on the conference circuit and in TED talks.
The more difficult but more promising alternative, and the only way to get our systems to transform themselves around their users’ needs, requires that learning rise up through the organisation from below — from users and workers in the field. The role of those in more senior positions would then be to facilitate the structural transformation that this new learning portends.
A nervous system for public services
But how do we build a system that is able to truly understand its own impact, to learn about the actual experiences of its users and practitioners and to use this knowledge to continuously improve its performance?
Let’s start by considering the agencies that provide foundational information and integrity in Britain — for instance, the National Audit Office, the Office for National Statistics and the Office for Budget Responsibility.
We provide them with independence to insulate them from those with a vested interest in “good news”. If our system is to function, let alone learn, they must “tell it like it is”.
I propose a new independent agency — the Evaluator General. Its role goes well beyond sitting atop government systems. Rather, it enacts a new binary distinction running from the top of the hierarchy to the bottom, out in the field: between delivering programs on the one hand and understanding how those programs are performing on the other.
Thus a government agency might deliver or fund a program. But officers of the Evaluator General embedded within service delivery agencies would design and resource the monitoring and evaluation system that constitutes the program’s “nervous system”. This work would be closely collaborative with, but formally independent of, the delivery agency, providing the agency and the wider system with reliable feedback about the performance of its various parts.
Towards a new professionalism
Toyota revolutionised the efficiency and quality of car manufacture by building its own production system up from the self-accountability of production teams striving to improve their performance on the line. The Evaluator General aims to achieve something similar.
Too often the needs of senior managers and politicians dominate management systems and institutional imperatives in service delivery. The Evaluator General would help those in the field build and optimise their practice around validated evidence, which would then carry the imprimatur of the Evaluator General, ensuring it received due weight further up the management hierarchy.
This would engineer a new professionalism in which validated knowledge from the field and the custodians of that knowledge (whatever their formal position in the hierarchy) were given greater autonomy within the system than is currently the case.
Publicly building the evidence and driving its adoption at the front line
The Evaluator General’s mission would be to publicly build the evidence about what is or isn’t working, and ensure that this was given due regard. It would act as an independent and critical friend, helping those designing and delivering services to measure, understand and so continually improve their impact.
Making evidence generated under the Evaluator General publicly available would be central to its success. This finely disaggregated performance transparency would enable practitioners, their managers and agencies and the politicians directing them to be held to account, even where this disturbed the web of acquired habits and vested interests.
It would institute a rich “knowledge commons” in human services and local solutions (“What Works Where?”) that could tackle the “siloing” of information and effort within agencies.
It would also facilitate more expert and disinterested estimates of the long-term impact of programs to enable a long-term “investment approach” to services.
This is a bold vision, but if we really want to move past our current, scattered approaches to innovation towards a system that continuously improves and holds itself to account, then we’ll need to change the system itself: to nurture the creation of practical, robust evidence to enable improvement, and incentives to use that evidence which cannot be ignored. — Nicholas Gruen
Updated 4 Jan 2019.
(Picture credit: Flickr/Andrew)