Fair decision making in justice with AI

Theme:
Artificial intelligence

The Amazon recruiting tool made painfully clear that algorithms in AI systems have the tendency to acquire human prejudices. That means that, whenever these tools are used in the criminal justice system, the consequences could be very serious. It puts the principle of equality at risk. TNO is therefore researching what is needed in order to create AI that is fair and transparent.

Human intervention

In the Netherlands, we unfortunately know only too well how things can go wrong. For example, blind adherence to the strict rules of the Tax and Customs Administration resulted in thousands of parents spending many years facing allegations of fraud in relation to childcare allowance. As it turned out, meaningful human intervention in this case was not actually very meaningful.

Artificial intelligence and algorithms really can add value to the work of detecting, detaining, and convicting criminals. In such cases, however, you have to know exactly what these algorithms do. That’s because AI systems based on machine learning do not learn solely from objective data. There is therefore a danger of algorithms imitating human prejudices and consequently reaching conclusions that are blatantly discriminatory, as the Amazon case showed. This problem could be mitigated by giving a human a meaningful role between algorithm and conclusion.
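The mechanism is easy to reproduce in miniature. The sketch below (a hypothetical toy example, not any system discussed here) trains a trivial majority-vote "model" on historical hiring decisions that were skewed against one group; the learned model then reproduces that skew for equally qualified candidates:

```python
# Toy illustration: a model trained on historically biased labels
# reproduces that bias in its predictions. All data is invented.

from collections import Counter

# Hypothetical past decisions: (group, qualified, hired).
# Qualified candidates from group "a" were hired; those from
# group "b" usually were not.
history = [
    ("a", True, True), ("a", True, True), ("a", False, False),
    ("b", True, False), ("b", True, True), ("b", False, False),
    ("b", True, False),
]

def train(history):
    """Learn the majority hiring decision per (group, qualified) pair."""
    votes = {}
    for group, qualified, hired in history:
        votes.setdefault((group, qualified), Counter())[hired] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in votes.items()}

model = train(history)

# Equally qualified candidates now receive different predictions:
print(model[("a", True)])  # True  - a qualified "a" candidate is hired
print(model[("b", True)])  # False - a qualified "b" candidate is not
```

The model never sees an instruction to discriminate; it simply mirrors the pattern in its training data, which is precisely why a meaningful human check between algorithm and conclusion matters.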

Ethical AI that can be easily verified

The big question is: what can we learn from Amazon and the childcare allowance affair? And what still needs to happen if we are to use AI systems safely in the criminal justice system? After all, that development is almost certain to come.

Because of spending cuts in the criminal justice system, the pressures on judges, prosecutors, and prisoner assessors are rising sharply. Artificial intelligence can help reduce these pressures somewhat. For that to happen, though, we need AI that is fair, ethical, and transparent. It should be free from bias and actively bring about meaningful human intervention. In short, humane AI.

AI systems under the microscope

Together with the Public Prosecution Service, the Custodial Institutions Agency, and the Central Judicial Collection Agency, TNO has analysed various AI systems in recent years.

This includes COMPAS, an advanced computer program that American courts use to assess the likelihood of a suspect reoffending. These analyses revealed several different forms of unfairness, which notably sometimes led to contradictory conclusions: making predictions fairer by one criterion can make them less fair by another. Fairness cannot simply be captured in a single formula.

Three challenges

How can we teach AI systems to recognise prejudices and to rectify them accurately and fairly? This is the issue that TNO and its project partners are currently addressing. At the same time, it is important to make the working method used by AI as transparent as possible. The final, and perhaps most important challenge, is to make sure that users do not start to rely too heavily on AI systems.

Get inspired


Eppo Bruins on AI in different government domains

Information type:
Insight
27 September 2022

Michiel van der Meulen, chief geologist for the Geological Survey of the Netherlands (GDN), speaks with Eppo Bruins. Bruins was educated as a nuclear physicist and has spent many years working in the world of science, innovation, and technology. Between 2015 and 2021, he was a Dutch member of parliament for the Christian Union. He was recently appointed chairman of the Advisory Council for Science, Technology and Innovation (AWTI). What will AI mean for the various government domains in the coming years?

Georgette Fijneman on the promise of AI for health insurers

Information type:
Insight
27 September 2022

Hanneke Molema, senior consultant healthy living at TNO, interviews Georgette Fijneman, CEO of health insurer Zilveren Kruis since 2017. Both look at the same topic, health, from a completely different perspective. What is the promise of AI for one of the Netherlands’ largest health insurers?

Bas Haring on AI, science and philosophy

Information type:
Insight
27 September 2022

Michiel van der Meulen, chief geologist for the Geological Survey of the Netherlands (GDN), speaks with Bas Haring. Haring originally studied artificial intelligence, which at the time still fell under the umbrella of philosophy, which is why people started calling him a philosopher. He himself feels more like a ‘folk philosopher’: Haring tries to make science and philosophy accessible to a wider audience. In 2001, he published a children’s book about evolution, Cheese and the Theory of Evolution. What better starting point for a geologist and a philosopher to talk about AI?

Bram Schot on the impact of AI on mobility

Information type:
Insight
27 September 2022

Marieke Martens, science director at TNO and professor of automated vehicles at the Eindhoven University of Technology, talks to Bram Schot. Schot was the CEO of Audi until 2020, having previously held management positions at various car makers, including Mercedes and Volkswagen. Their conversation concerns the influence of AI on mobility. How will AI impact the production process? And what does a future with autonomous vehicles look like?

Arnon Grunberg on AI, creativity and morality

Information type:
Insight
27 September 2022

Peter Werkhoven, chief scientific officer at TNO, talks to Arnon Grunberg from his base in New York. Grunberg made his breakthrough in 1994 with his novel, Blue Mondays. He has since become one of the Netherlands’ best-known authors. The two talked about AI over dinner some years ago. Today, they finally get the chance to continue their conversation. What is Grunberg’s view on creativity? Can it be taught to machines? And how do humans morally relate to machines?