Policy evaluations fail too often. Here’s how to make them more nimble

Randomised controlled trials don't always help with implementation problems

Development fields like early childhood have, in recent years, coalesced around a gold standard of policy evidence: the randomised controlled trial. Borrowed from clinical medicine, the method lets researchers rigorously compare outcomes for people who receive an intervention against a control group who don’t, eliminating bias and confounding. But there’s one problem: the results are often useless.

A million-dollar evaluation of a home visiting program for new mothers might find it didn’t improve health outcomes — not because it’s a bad program, but simply because workers didn’t make all the scheduled visits. Costly, multi-year trials can end up revealing only that you can’t help people with a policy that’s not actually implemented.

That is starting to change. A new, “nimble” approach to policy evaluation, spearheaded by a handful of economists, could give policymakers the information they really need: how to stop their programs falling at the first hurdle.

Failing without learning

The Strategic Impact Evaluation Fund, or SIEF, is an arm of the World Bank which funds policy trials to gather crucial evidence in areas like early childhood, health and sanitation. These evaluations are meant to inform subsequent work by the Bank, governments and other development agencies.

But they don’t always deliver on that promise, according to Alaka Holla, SIEF’s program manager.

In Bangladesh, for example, SIEF funded a trial of a program to give mothers more information about the benefits of stimulation for their children. At the end of the two-year evaluation, almost half of mothers involved said they hadn’t received any informational materials, and 85% had had fewer sessions with community workers than intended.

Trials like these were a source of frustration for Holla. “To me, it was a sign that it was premature to do the full impact evaluation,” she said.

Even a policy with transformative potential won’t show much effect if its implementation is a disaster. Evaluations can pick up some evidence about difficulties in implementing a policy, but that evidence isn’t fed back until the trial is finished — often years later. By that stage, it’s too late to be of much use.

And full impact evaluations have another difficulty: they can be extremely expensive. The cost of gathering data — particularly in developing countries — can be prohibitive. Randomised controlled trials may become rarer and rarer, Holla suggested, if they’re thought to inevitably cost “at least a million dollars and three or four years”.

Get nimble

But the techniques of the randomised controlled trial could themselves be the solution to these challenges. The new concept of a “nimble evaluation” would see researchers use the same methods to study the initial implementation of policies, rather than their final outcomes.

Championed by American economist Dean Karlan, and now adopted by Holla for SIEF’s latest round of work, nimble evaluations have two key elements: focusing on implementation rather than welfare outcomes, and finding cheaper ways to gather data.

Holla described a project to help people suffering from high blood pressure. One approach could be getting more people to take their medication by increasing the share of its cost that is reimbursed.

A nimble evaluation could test that method alongside a range of others — different reimbursement rates, or approaches focused on increasing the convenience of getting medication from a doctor rather than lowering its price, for example.

That would leave some questions, about the program’s ultimate health impacts, unanswered. But its results would be immediately helpful — and the trial can be much shorter than a full outcomes evaluation. “If someone is not going to deliver a program in six months, they’re probably not going to deliver it — they’re not going to be responding to your intervention with a lag,” Holla said.
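
To make that concrete, here is a minimal sketch of what such a multi-arm implementation trial might look like in code. The reimbursement rates, sample size and uptake model are illustrative assumptions rather than details of SIEF’s project; the point is that each participant is randomised to one arm, and the outcome measured is implementation (did they pick up their medication?) rather than long-term health.

```python
import random

random.seed(42)

# Hypothetical reimbursement arms (share of drug cost refunded); the rates
# and the assumed effect on uptake below are illustrative, not from SIEF.
ARMS = {"control_0pct": 0.0, "refund_50pct": 0.5, "refund_80pct": 0.8}

def filled_prescription(arm: str) -> bool:
    """Simulate the implementation outcome actually measured: did this
    participant pick up their medication? Uptake is assumed to rise with
    the reimbursement rate (a made-up relationship for this sketch)."""
    base_uptake, slope = 0.3, 0.5
    return random.random() < base_uptake + slope * ARMS[arm]

# "Run" the trial: randomise 3,000 participants across arms, record uptake.
counts = {arm: [0, 0] for arm in ARMS}  # arm -> [filled, assigned]
for _ in range(3000):
    arm = random.choice(list(ARMS))  # simple randomisation to one arm
    counts[arm][1] += 1
    counts[arm][0] += filled_prescription(arm)

for arm, (filled, n) in counts.items():
    print(f"{arm}: {filled / n:.1%} uptake (n={n})")
```

Because the outcome here is observable soon after enrolment, a design like this can report in months rather than years.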

The other key is to reduce the costs of running such experiments. In the least developed countries, commissioning household surveys may be necessary.

But in middle-income areas, government data may be good enough, especially for implementation efforts. Or projects can be designed so that they generate the data as they go, rather than requiring additional surveys.
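
As a hypothetical illustration of that second route, the sketch below computes an implementation metric (the share of scheduled home visits actually delivered) straight from routine administrative records rather than a commissioned survey. The record format and field names are invented for the example.

```python
from collections import defaultdict

# Invented administrative records: routine home-visit logs, e.g. exported
# from a government health information system (field names hypothetical).
visit_log = [
    {"district": "North", "scheduled": 4, "completed": 1},
    {"district": "North", "scheduled": 4, "completed": 4},
    {"district": "South", "scheduled": 4, "completed": 2},
]

totals = defaultdict(lambda: [0, 0])  # district -> [completed, scheduled]
for rec in visit_log:
    totals[rec["district"]][0] += rec["completed"]
    totals[rec["district"]][1] += rec["scheduled"]

for district, (done, planned) in sorted(totals.items()):
    print(f"{district}: {done}/{planned} visits delivered ({done / planned:.0%})")
```

Because the data already exist, a metric like this costs almost nothing to compute and can be refreshed while the program is still running.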

“In the field of development economics, these field trials started in the lowest income settings where there was no data,” Holla said. “That somehow became the norm, even in countries and contexts where maybe you could have relied on administrative data.”

These kinds of evaluations — nimble both in their relative brevity and in their much lower costs — make it much easier to conduct quick follow-up research and iterative experiments on the best way to make policies actually work.

Nudging towards nimbleness

Most researchers and officials at the World Bank and other development organisations have been enthusiastic about the idea of nimble evaluations, Holla said.

But it’s still taken some work to make them a reality. Holla recruited Karlan to present at the World Bank to convince her colleagues of the importance of implementation-focused research.

And now she’s using SIEF’s power as a funding body to get researchers on board. The fund’s latest call for proposals is focused exclusively on nimble evaluations.

SIEF has also set aside additional funding so that teams which complete their evaluations on time can run a second trial building on the lessons of the first. That kind of iteration is difficult with longer, more expensive evaluations — and building it into the structure of a funding offer is a global first.

Holla hopes that this first major round of nimble evaluations will serve as a proof of concept for a cheaper and more dynamic approach to policy research. Recipients will be announced at the end of this month.

“It’s not like we’ve split the atom,” she said. “But this is a nice way for us to signal that we’re willing to put money behind this, because it’s important.” — Fergus Peace

This piece has been updated to clarify some criticisms of randomised controlled trials made by Alaka Holla.

(Picture credit: Pexels)
