Fair decision making in justice with AI
The Amazon recruiting tool made painfully clear that AI systems tend to absorb human prejudices. Whenever such tools are used in the criminal justice system, the consequences can therefore be serious: the principle of equality is put at risk. TNO is researching what it takes to create AI that is fair and transparent.
In the Netherlands, we unfortunately know only too well how things can go wrong. For example, blind adherence to the strict rules of the Tax and Customs Administration resulted in thousands of parents spending many years facing allegations of fraud in relation to childcare allowance. As it turned out, meaningful human intervention in this case was not actually very meaningful.
Artificial intelligence and algorithms really can add value to the work of detecting, detaining, and convicting criminals. But you have to know exactly what these algorithms do. AI systems based on machine learning do not learn from objective data alone: they are trained on historical decisions, and those decisions reflect human judgment. There is therefore a danger of algorithms imitating human prejudices and reaching conclusions that are blatantly discriminatory, as the Amazon case showed. This problem could be mitigated by a meaningful human role between algorithm and conclusion.
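The mechanism is easy to demonstrate. The sketch below uses made-up, hypothetical hiring data (the groups, counts, and labels are all invented for illustration, not taken from the Amazon case): a trivially simple "model" that predicts the most common past outcome per group inherits the human reviewers' bias wholesale.

```python
from collections import Counter

# Hypothetical past decisions (synthetic numbers): candidates with
# identical qualifications, but human reviewers rejected applicants
# from group "b" far more often than those from group "a".
history = [("a", "hire")] * 8 + [("a", "reject")] * 2 \
        + [("b", "hire")] * 3 + [("b", "reject")] * 7

def majority_label(group):
    # The simplest possible learned model: predict the most common
    # past outcome for this group. No prejudice is programmed in;
    # it is absorbed entirely from the historical labels.
    counts = Counter(label for g, label in history if g == group)
    return counts.most_common(1)[0][0]

print(majority_label("a"))  # hire
print(majority_label("b"))  # reject
```

Real machine-learning models are far more sophisticated, but the failure mode is the same: if the training labels encode prejudice, an accurate model will reproduce that prejudice.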
Ethical AI that can be easily verified
The big question is: what can we learn from Amazon and the childcare allowance affair? And what still needs to happen before we can use AI systems safely in the criminal justice system? After all, that development is almost certain to come.
Because of spending cuts in the criminal justice system, the pressures on judges, prosecutors, and prisoner assessors are rising sharply. Artificial intelligence can help reduce these pressures somewhat. For that to happen, though, we need AI that is fair, ethical, and transparent. It should be free from bias and should actively bring about meaningful human intervention. In short, humane AI.
AI systems under the microscope
Together with the Public Prosecution Service, the Custodial Institutions Agency, and the Central Judicial Collection Agency, TNO has analysed various AI systems in recent years.
These include COMPAS, a computer program that American courts use to assess the likelihood of a defendant reoffending. The analyses revealed different forms of unfairness, which at times led to mutually contradictory conclusions: fairness cannot be captured in a single, simple formula.
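Why fairness resists a single formula can be shown with a small numerical sketch. The numbers below are illustrative and made up (not drawn from COMPAS data): when two groups have different base rates of the predicted outcome, a risk score can treat individuals consistently and yet still flag the groups at different rates, so different fairness definitions pull in different directions.

```python
def rates(y_true, y_pred):
    """Return (flagged rate, false-positive rate) for one group.

    Flagged rate: share of the group predicted high-risk.
    False-positive rate: share of actual non-reoffenders flagged.
    """
    flagged = sum(y_pred) / len(y_pred)
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    fpr = sum(negatives) / len(negatives)
    return flagged, fpr

# Invented example: group A has a higher base rate (4/8) than
# group B (2/8). 1 = reoffends / flagged high-risk, 0 = not.
a_true = [1, 1, 1, 1, 0, 0, 0, 0]
a_pred = [1, 1, 1, 0, 1, 0, 0, 0]
b_true = [1, 1, 0, 0, 0, 0, 0, 0]
b_pred = [1, 0, 0, 0, 0, 0, 0, 0]

print(rates(a_true, a_pred))  # (0.5, 0.25)
print(rates(b_true, b_pred))  # (0.125, 0.0)
```

Group A is flagged four times as often as group B and has a higher false-positive rate, even though the predictor is plausible for both groups separately. Equalising one of these metrics would unbalance another, which mirrors the contradictory conclusions TNO's analyses describe.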
How can we teach AI systems to recognise prejudices and to rectify them accurately and fairly? This is the issue that TNO and its project partners are currently addressing. At the same time, it is important to make the working method used by AI as transparent as possible. The final, and perhaps most important challenge, is to make sure that users do not start to rely too heavily on AI systems.
Christopher Brewster, Senior Scientist
Christopher Brewster is a Senior Scientist in the Data Science group and Professor of the Application of Emerging Technologies in the Institute of Data Science, Maastricht University. His research has focussed on the application of Semantic Technologies, Open and Linked Data, interoperability architectures and Data Governance, mostly to the food and agriculture domains.
Daniël Worm, Senior Consultant
Jok Tang, Deputy Research Manager Data Science
Joris Sijs, function not known
AI Systems Engineering & Lifecycle Management
The AI system for the future. At TNO, we work on AI systems that remain reliable and can handle new functions in the future.
You can read about how AI is educated in Chapter 1. How can we make clear to AI which goals we want to pursue as humans? And how can we ensure intelligent systems will always function in service of society?
Innovation with AI
What does that world look like in concrete terms? Using numerous examples, TNO sketches the future in Chapter 2. Take construction, for example, where AI will be used to check the quality, safety, and energy efficiency of buildings before they are actually built. Or healthcare, where robots will take over some of caregivers' tasks and AI will be able to develop medicines autonomously.
Innovating innovation with AI
How AI will change research itself is explained in Chapter 3. For example, what role will AI be permitted to play in knowledge sharing? And what will happen when we put machines to work on data sets far too large for humans to oversee?
David Deutsch on the development and application of AI
Peter Werkhoven, chief scientific officer at TNO, joins physicist, Oxford professor, and pioneer in the field of quantum computing, David Deutsch, for a virtual discussion. Deutsch set out his vision in 1997 in his book, The Fabric of Reality. Together, they talk about the significance of quantum computing for the development and application of AI. Will AI ever be able to generate explanatory knowledge, or learn about ethics from humans?