Governments are making abundant use of artificial intelligence (AI) to address social issues. This approach is often successful, but not always. Sometimes things go wrong because the use of AI lacks transparency, or simply because governments lack the necessary expertise when it comes to AI systems. That can make such systems difficult to control. It is high time to tackle this problem, but how? By experimenting with AI testing grounds that put transparency and people at the centre.

All too often, AI applications are deployed that do not do what they are supposed to do. For example, some algorithms are known to stigmatise and disadvantage certain groups of people. That suggests that the human dimension is not put sufficiently at the centre during the development phase. In many cases, people do not even know when a government is deploying AI systems in a particular domain. And even if they do, they are unaware of the role that the system plays in government decision-making. They therefore do not know which data form the basis for decisions. Transparency and explainability are lacking. That does not help to build trust in AI.


A problem in the public sector that requires quick action

What is also becoming increasingly clear is that people are not actively involved enough in AI developments. As a result, their interests are not adequately safeguarded. Meanwhile, AI-based decisions are having a growing impact on people’s everyday lives. That too is a problem that requires quick action.

Working towards transparent, human-centred AI

In our view, the solution lies mainly in developing transparent and human-centred AI systems and in better oversight. We envisage inclusive AI systems that take the interests of all stakeholders into account. That will require more than ethical codes of conduct and legislation. What we need, above all, is an ongoing process subject to continuous monitoring and improvement. Our AI experts suggest setting up ‘dynamic testing grounds’, communities in which data scientists, data engineers, policymakers, government officials and members of the public cooperate on developing AI solutions for the public sector.

Are you interested in developing AI solutions for the public sector? Then read our whitepaper 'AI: In search of the human dimension'.
