Where did it algo wrong? The threat and promise of predictive analytics

Attitudes to 'artificial intelligence' and predictive algorithms seem to oscillate between hype and hysteria. The true picture is a good deal more mixed, but as more examples of predictive analytics in government come to light, it's time for some proper oversight.

(With Ali Knott, James Maclaurin and John Zerilli)

Last week, Immigration Minister Iain Lees-Galloway put a hold on the use of a computer-based tool to profile over-stayers. The tool is a ‘predictive analytics’ system: in this case, it learns to predict the likely harms and costs of an overstayer remaining in New Zealand from a set of other facts about that person, using a database of historical cases. Claims that the tool relied on ‘ethnic profiling’ have been denied by Immigration NZ, but its use has still proved highly controversial. We believe this is a good moment to take stock more generally of New Zealand’s use of predictive analytics in government.
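
To make the discussion concrete, the sketch below shows the general technique in Python, using scikit-learn: fit a simple classifier on historical cases, then score a new case. Everything in it is invented for illustration; the features, labels and values are our own assumptions, not details of the actual Immigration NZ system.

```python
# A minimal, illustrative sketch of the general technique: learn a mapping
# from facts about a person to a predicted outcome, using historical cases.
# The features and labels below are invented; they are NOT the actual
# inputs or outputs of the Immigration NZ tool.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical cases: [months_overstayed, prior_appeals, age]
X_train = [
    [3, 0, 34],
    [18, 2, 41],
    [6, 1, 29],
    [24, 0, 52],
]
# Hypothetical label for each case: 1 = later assessed as high cost/harm
y_train = [0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# For a new case, the model outputs a probability, not a verdict; a human
# case worker still has to decide what to do with it.
new_case = [[12, 1, 37]]
print(model.predict_proba(new_case)[0][1])  # estimated risk score
```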

Predictive analytics systems are widely used in government departments around the world. However, the public is often unaware of the existence of these systems, and of how they work. New Zealand is no exception. Last year, there was a minor furore when it emerged that ACC uses a predictive tool to profile its clients. Three years ago, there was a larger controversy around a study proposed by the Ministry of Social Development to help build a tool for predicting children at risk of abuse. Use of predictive analytics by the Inland Revenue was also in the news this week.

In the Artificial Intelligence and Law in New Zealand Project at the University of Otago, we have been studying the use of predictive analytics in government. We are convinced there is a place for such systems. They are an invaluable resource for decision makers tasked with making sense of large amounts of data. Used well, they help us to make decisions that square with the facts.

However, we believe there should be more public oversight of predictive systems used in government, and more transparency about how they work. How many predictive systems are currently in use in New Zealand government agencies? We don’t know. It’s not clear if anyone actually does. In the Immigration NZ case, even the Immigration Minister was in the dark: he has only just become aware of the tool, even though it has been in development for 18 months.

Even for those systems we do know about, we only have partial information about how they work, what data they use, and how accurate they are. We are told, for instance, that the Immigration NZ tool is ‘just an Excel spreadsheet’. But many algorithms can be run in Excel: what algorithm is being run in this case? On what data? With what results? And what margin of error?

These questions are particularly pressing now, in the light of the recent scandal surrounding Facebook’s use (and misuse) of personal data. The algorithms under the spotlight for Facebook are also predictive analytics tools: in this case, tools that predict a Facebook user’s personality from what they have ‘liked’ on the site. There are growing calls (which we fully support) to regulate the use of personal data gathered by social media sites.

However, the process of regulating giants like Facebook is likely to be complex: a matter for lengthy international negotiations. In the meantime, there is no reason why New Zealand should not put its own house in order as regards the use of these same tools in its own government. In fact, scrutiny of these tools is of particular importance, because of the huge impact decisions made by government agencies can have in people’s lives—not only in immigration, but in health, social services, criminal justice and many other contexts.

Of course, the use of algorithmic decision tools in the private sector (by potential employers, banks, insurers and so on) can also have a major impact, and might merit a regulatory response of its own. But the public sector could be a good place to start, modelling best practice and ensuring that public funds are spent on systems that are fit for purpose.

Our proposal is that an agency should be established in New Zealand to oversee the use of predictive analytics by publicly funded bodies. This agency would publish a complete list of the predictive tools used by government departments, and other public institutions such as ACC. For each system, it would also supply some basic information about its design: which variables constitute its input and output, and which techniques are used to learn a mapping from inputs to outputs.
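
As an illustration of the kind of basic design information we have in mind, a published ‘fact sheet’ for each system might look something like the sketch below. The fields and example values are our own suggestions, not an existing standard or any agency’s actual disclosure.

```python
# A sketch of a per-system 'fact sheet' an oversight agency might publish.
# All fields and example values are hypothetical suggestions.
from dataclasses import dataclass

@dataclass
class ModelFactSheet:
    agency: str              # which body runs the system
    purpose: str             # the decision it supports
    input_variables: list    # which variables go into the model
    output_variable: str     # which variable comes out
    learning_technique: str  # how the input-to-output mapping is learned
    last_evaluated: str      # when its performance was last checked

example = ModelFactSheet(
    agency="Example Department",
    purpose="Prioritise cases for manual review",
    input_variables=["months_overstayed", "prior_appeals", "age"],
    output_variable="estimated probability of high cost or harm",
    learning_technique="logistic regression on historical cases",
    last_evaluated="2018-04",
)
print(example)
```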

In addition, it would answer some key questions about the performance of the system, and about its use. Specifically:

  • How well does the system work? (That is, how often are its predictions correct?) We presume that all predictive tools are evaluated. But the results of evaluations are not always easy to obtain. What’s more, evaluation of government systems is currently piecemeal: there are no standard processes for evaluation, and no indication of how frequently evaluations should be carried out. With so much hype around “AI”, there is likely to be a clamour to get on board with it. This may lead to the purchase of systems that aren’t fit for purpose, or that are no better than those we already have. (The sketch after this list shows one simple form of evaluation: accuracy on held-out cases.)
  • Is there any indication the system is biased in relation to particular social groups? Bias is a particular concern for predictive models used in government. In the US criminal justice system, there is evidence that tools for predicting a defendant’s risk of reoffending are biased against black people in the errors that they make. In fact, there is often a trade-off between biased errors of this kind and overall system accuracy. But the public should be aware of this trade-off, and there should be an open debate about how to deal with it. (The sketch after this list includes a simple group-wise error check of this kind.) In its report published earlier this week, the UK’s Lords AI Committee warned that ‘The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.’ This is no less of a concern here.
  • Can the system offer explanations of its decisions? New Zealanders already have a legal right to access and correct information held about them, but there is a concern that decisions made within the ‘black box’ of an algorithm will lack the transparency needed to allow a correction or challenge if a person thinks a decision about them is wrong. The Department of Internal Affairs recently proposed a specific right to challenge decisions made by algorithms, along the lines of a right that exists in European law. For such a right to be effective, though, some kind of explanation would need to be given of how those decisions were made. Such explanations are easier to supply for some predictive systems than for others, which should perhaps be a factor in the choice of system design.
  • How are human decision makers trained to use a predictive system? At the moment, predictive systems in government are used to ‘assist’ human case workers in making decisions: we presume final responsibility always lies with a person. However, when machines take over some part of a human’s job, the resulting human-machine interaction requires careful scrutiny. It’s important to make sure the human doesn’t fall into ‘autopilot mode’, assuming the machine is always right. This is a recognised problem in cars with semi-automated control, and it is a problem in decision-making systems too. The solution is good training of human case workers. There are many areas where human expertise still far exceeds that of computers: understanding language, complex social scenes, and subtle nonverbal cues. Human decision makers must continue to rely on evidence from these sources, and to query a system’s predictions when they run counter to it.
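
To show what the first two questions might look like in practice, here is a short sketch in Python, using scikit-learn on synthetic data. It is illustrative only: the data, the features and the protected attribute are all randomly generated, and a real audit would be considerably more careful.

```python
# A sketch of the first two checks above, on synthetic data: held-out
# accuracy ('how well does it work?') and group-wise error rates (one
# simple probe for bias). All data here is randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                         # hypothetical case features
group = rng.integers(0, 2, size=500)                  # a protected attribute
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # synthetic outcome

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Question 1: how often are its predictions correct on unseen cases?
print("held-out accuracy:", (pred == y_te).mean())

# Question 2: do the errors fall evenly across groups? A large gap in
# false-positive rates between groups is the kind of biased-error
# pattern reported for US reoffending-risk tools.
for g in (0, 1):
    negatives = (g_te == g) & (y_te == 0)
    print(f"group {g} false-positive rate: {pred[negatives].mean():.2f}")
```

The point of the group-wise check is simply that overall accuracy can look acceptable while the errors fall disproportionately on one group; both kinds of figure need to be published for an informed public debate.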

The exact form of the body that oversees these questions is obviously a matter for further discussion. We envisage a body that advises government departments in the procurement or development of predictive systems, as well as their subsequent evaluation. This body could be part of Statistics New Zealand, which already plays an advisory role in many cases, or it could be delivered as part of the ‘Government as a Platform’ project currently under way at the Department of Internal Affairs. It would also be useful to examine frameworks for managing predictive analytics used in industry—in particular, the recent concept of an ‘analytics centre of excellence’, which is becoming widespread in large companies (and has already motivated government initiatives in Australia).

Whatever approach is followed, we have an opportunity to take leadership in the oversight of predictive analytics tools as they’re used by our own government institutions. This oversight will help to allay public concerns about how these important tools are used in government bodies. And it will be a useful first step in the wider project of regulating how these tools are used in our society more generally. 

The authors would like to acknowledge the generous support provided by the NZ Law Foundation for the Artificial Intelligence and Law in New Zealand Project at the University of Otago.