This opinion piece was written by Jacqueline Tweedie, a Policy Wonk in Canada’s federal public service. Full disclosure: JT is no enthusiast of deliverology, but is one of the scientific method. This piece also appears in our government innovation newsfeed.
Ask a public servant if they know about “deliverology” and what it stands for, and most will acknowledge at least a passing acquaintance with it. Further comments can usually be sorted into two groups: the enthusiasts and the disaffected.
The enthusiast will gush about how deliverology focuses on delivering results for citizens, in an open and transparent way, with well-defined targets and concrete measures towards progress. The disaffected will be familiar with the above and go on to note any number of issues: perverse incentives; old wine, new bottle; false transparency; gaming the results, etc.
For our purposes, deliverology is about “doing” public policy, and being seen to get results for people while sharing with those people the work being done on their behalf.
The Canadian federal public service has taken this approach to delivering public services for some time now. What we until recently called “results-based management”, since re-branded as “results delivery” (the Canadian version of “deliverology”), turns up in frameworks from 2002 onwards.
We’ve had a few decades to “push out” evidence-based policy-making practices and to create theories of change for policy interventions, using performance data to track and report on results delivery.
We’ve “done” accountability with our Departmental Plans (DPs, formerly Reports on Plans and Priorities (RPPs)) and Departmental Results Reports (DRRs, formerly Departmental Performance Reports (DPRs)) — have pity on us as we try to keep track of the acronym salad through all the rebranding.
Have you checked out the Government of Canada’s Mandate Letter Tracker: Delivering results for Canadians? You can use it to keep watch on the numerous (364) commitments made and tracked by the Government of Canada, reported publicly to citizens. The Tracker captures commitments in ministerial mandate letters in support of government priorities — the mandate letters themselves are also available for citizens’ access.
You can also go to the GC InfoBase and search through the data for all departments across three themes — finances, people, and results — using a variety of analytical filters to explore relationships. To satisfy your thirst for data evidence, you can download many a CSV data file over at the Open Data portal.
We are open, accountable, and transparent. We use plain language and consult with our stakeholders. We have strategic plans, operational plans and implementation plans. We use SMART indicators and ensure that relevance and performance (value for money) drive our organisations to deliver results of benefit to Canadians. We do risk assessments, audits, and evaluations according to clear, published criteria. Modern public policy governance? Nailed it.
An April 2002 Report of the Auditor General of Canada found that, as of 2000, federal departments and agencies had made slow progress over the preceding seven years in improving the quality of their performance reporting to Parliament. It was followed in 2003 by a status update, noting again that the pace of change was too slow. Managing government — for this is what our results focus is all about — surfaced again in 2004. And again in 2005. By the fall of 2017, the Auditor General had this to say:
“When I look at these audits together, I find that once again, I’m struck by the fact that departments don’t consider the results of their programs and services from the point of view of the citizens they serve. I find myself delivering this message audit after audit, and year after year because we still see that departments are focused on their own activities, and not on the citizen’s perspective. The audits we’ve delivered today are no exception, as you will see…[emphasis added]”
Canada is not alone in its experiences in adopting deliverology or some other form of results-based management and then discovering it has caused a host of unintended consequences (typically negative).
So what’s going on? What’s going — arguably — wrong, with delivering on results? Why is delivering results so hard to get right?
Are we beset by scientism? By managerialism?
These and other questions (a smattering shared below) could be discussed as we share perspectives and realities around doing things for our citizens:
- What is the role of judgement in the face of all these management systems and processes? What is the interrelationship between judgement and those systems?
- How do reputational and monetary rewards attached to measurement and metrics pervert results and performance regimes?
- Transparency and accountability — is that really what’s being delivered/shared with citizens? Or is it all an elaborate shell game?
- What are the worst unintended consequences of a results and performance management regime?
- Do we have a clear sense of what it is about measurement that is desirable?
- Given the cost of the results performance exercise, what value do we obtain for those expenditures? What insights are gained? How are programs re-designed to be more effective?
- Is results management the same thing as keeping track of the performance of a government?
(Picture credit: Pixabay)