The use of robot advisors in government is no longer a novelty. For years public servants have used computational tools, known as “automated decision systems”, to advise and guide their decisions. These programs process masses of data to assess the likelihood of certain events occurring, or to measure performance. Algorithms are used to judge where crime is likely to occur, to estimate the chances of convicted criminals reoffending, or to score teacher performance in schools.
Although at face value using computers as advisors might sound like a way to remove human prejudice from judgements, critics fear that many of these automated decision systems contain inbuilt biases. A 2016 ProPublica investigation, for example, found that algorithms used in courts across many US states were more likely to rate black defendants as being at high risk of reoffending.
Now, a group of academics and public policy researchers in New York has established a set of guidelines to prevent the encoding of biases in algorithmic decision-making tools before they are used. This follows the city’s decision in December 2017 to establish a dedicated team to look at making the use of algorithms more transparent, and builds on AI Now’s work lobbying the city to adopt stronger rules for accountability.
The new proposals could help public servants take advantage of the technology’s clear benefits, using it to crunch through masses of relevant information automatically and reduce pressure on time and resources, without building discrimination into the process. But if workable guidelines aren’t taken up, injustices could go unreported and mistrust of government could rise.
Breaking open the black box
The guidelines, devised by New York University’s recently opened AI Now Institute, recommend that every public agency that employs automated decision systems should complete what they call “Algorithmic Impact Assessments”.
The idea is based on the Environmental Impact Assessments required when governments embark on conventional projects. As part of the assessment, agencies should study the impact of the systems they use, evaluate them for bias and allow the public and external auditors to examine their impact on different communities.
The guidelines suggest that, before adopting an automated decision system, agencies complete an internal review explaining the reasons for using it. The report also says public servants should publish details of the systems already in use. And once a tool is up and running, the new advice continues, government should allow external experts to audit its effects: not only specialists able to interrogate the system’s mechanics, but also journalists, political scientists and sociologists.
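To make those components easier to picture, here is a purely illustrative sketch in Python of what a machine-readable disclosure for a single system might contain; it is not a template from the AI Now report, and every field name below is hypothetical.

```python
# Hypothetical sketch of a per-system disclosure record; field names are
# illustrative only and are not taken from the AI Now report.
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str               # the automated decision system being disclosed
    agency: str                    # agency responsible for the deployment
    purpose: str                   # internal review: why the system was adopted
    decisions_informed: list[str]  # which decisions it advises or makes
    inputs_used: list[str]         # data and factors fed into the system
    bias_evaluation: str           # how the agency tested for disparate impact
    external_audit_access: str     # how outside experts can examine the system
    public_notice: str             # how affected individuals are informed

# Example entry (all details invented for illustration).
assessment = AlgorithmicImpactAssessment(
    system_name="Pre-trial risk screening tool",
    agency="City Department of Corrections (hypothetical)",
    purpose="Prioritise reviews of probation and medical-care eligibility",
    decisions_informed=["probation eligibility review"],
    inputs_used=["prior offences", "age", "home location"],
    bias_evaluation="Score distributions compared across demographic groups",
    external_audit_access="Researchers may request model documentation and test data",
    public_notice="Written notice included with each decision letter",
)
print(assessment.system_name)
```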
“In many cases the various factors which influence an algorithm’s judgement are kept secret”
“I think our proposal is one of the first to actually address algorithmic accountability outside of the academic context,” said Dillon Reisman, technical fellow at the AI Now Institute and one of the report’s authors. “In the US, governments have failed to address this.”
“You could call it fundamental justice,” Reisman added. “Basically people should have notice when major decisions are being made about them by these algorithmic systems, and should have the opportunity to respond and contest the use of them.”
In many cases, the various factors which influence an algorithm’s judgement, and how they are weighted within the algorithm, are unclear, or are kept secret by the private companies that manufacture them for government in order to maintain a competitive advantage. This has earned such algorithms the nickname “black boxes”, and it makes it difficult for those on the receiving end of their decisions to challenge them.
Skewing the system
Recidivism algorithms used by many states’ justice systems in the US, for example, produce a score from one to ten predicting the likelihood that an offender will commit a future offence. Such algorithms work by extrapolating from thousands of previous cases to identify the factors that commonly precede reoffending, and applying those patterns to new cases.
The algorithms do not use race explicitly as one of these factors. But because of racial disparities in the US, factors that are used, such as home location, income and whether immediate family members have committed crimes, can serve as proxies for race.
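To make the mechanism concrete, the following is a minimal, purely hypothetical sketch in Python. It builds synthetic data and a simple logistic-regression risk score (a stand-in, not any vendor’s actual model) in which the group label is never given to the model, yet a correlated “neighbourhood” feature combined with unevenly recorded outcomes is enough to push one group’s average score higher.

```python
# Hypothetical sketch, not any vendor's real model: synthetic data illustrating how
# a risk score can differ by group even when the group label is never a model input,
# because an included feature (a made-up "neighbourhood" variable) acts as a proxy
# and the recorded outcomes reflect uneven detection rather than behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                     # hidden from the model
neighbourhood = group + rng.normal(0.0, 0.3, n)   # correlates strongly with group
prior_offences = rng.poisson(1.5, n)              # same distribution for both groups

# Underlying behaviour is identical across groups, but recorded re-arrests are more
# likely where detection is heavier (here, associated with group 1).
true_reoffend = rng.random(n) < 0.25 + 0.05 * np.clip(prior_offences, 0, 5)
detected = rng.random(n) < 0.5 + 0.3 * group
recorded_reoffence = (true_reoffend & detected).astype(int)

# Train without the group label; the model still learns to lean on the proxy.
X = np.column_stack([neighbourhood, prior_offences])
model = LogisticRegression().fit(X, recorded_reoffence)

# Convert probabilities into the one-to-ten score described above.
score = np.ceil(model.predict_proba(X)[:, 1] * 10).clip(1, 10)

for g in (0, 1):
    print(f"group {g}: mean risk score = {score[group == g].mean():.2f}")
```

In this toy setup the underlying reoffending behaviour is identical across groups; the score gap comes entirely from the proxy feature and the skew in what gets recorded, which is exactly the kind of effect external audits are meant to surface.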
ProPublica’s investigation found that such algorithms can unfairly discriminate against black people, though manufacturers dispute this. Northpointe, a company which made criminal justice software used in Florida, said it did not believe the methodology ProPublica used to make its claims “accurately [reflects] the outcomes from the application of that model.”
“Often, the extent to which automated systems are used in government isn’t clear”
Another problem arises when these algorithms are used to make decisions for public servants, rather than to advise them. The recidivism algorithms used by many US courts were not originally intended to guide sentencing, but to determine whether defendants could apply for probation or medical care. But judges were fed a score from a machine whose workings they knew little about and, according to ProPublica, let it influence their sentencing decisions.
“Public servants are in many ways the experts on their own processes — a big part of our proposal is making sure they stay aware of what the systems they use are actually doing,” said Reisman.
Often, the extent to which automated systems are used in government isn’t clear. The AI Now report emphasises the need for agencies to disclose all their uses of automated systems, and to inform those members of the public affected by the decisions the systems make. The authors also urge public servants to question just how much influence these systems have on their own thinking.
Many countries are now grappling with how the use of automation and artificial intelligence in public services can be made accountable. Earlier this year, the British innovation foundation Nesta released a set of recommendations for the use of AI in government, including that the public be made aware of all the inputs that govern how a system reaches a decision or recommendation. Last autumn, the UK announced a Centre for Data Ethics and Innovation to monitor the use of data in public life.
Government is still just beginning to understand how it should deal with the effects of modern technologies. While automation offers the chance to improve services and increase efficiency when budgets are tight, algorithms can have unintended and malign consequences when adopted without proper oversight. Algorithmic impact assessments could be a means to anticipate these before they happen.
(Picture credit: Pixabay)