Governments are making abundant use of artificial intelligence (AI) to address social issues. This approach is often successful, but not always. Sometimes things go wrong because a government's use of AI lacks transparency, or simply because it lacks the necessary expertise in AI systems. That can make such systems difficult to control. It is high time to tackle this problem, but how? By experimenting with AI testing grounds that put transparency and people at the centre.
All too often, AI applications are deployed that do not do what they are supposed to do. For example, some algorithms are known to stigmatise and disadvantage certain groups of people. That suggests that the human dimension is not put sufficiently at the centre during the development phase. In many cases, people do not even know that a government is deploying AI systems in a particular domain. And even if they do, they are unaware of the role the system plays in government decision-making. They therefore do not know which data form the basis for decisions. Transparency and explainability are lacking. That does not help to build trust in AI.
A problem in the public sector that requires quick action
What is also becoming increasingly clear is that people are not sufficiently involved in AI development. As a result, their interests are not adequately safeguarded. Meanwhile, AI-based decisions have a growing impact on people's everyday lives. That, too, is a problem that requires quick action.
Working towards transparent, human-centric AI
In our view, the solution lies mainly in developing transparent and human-centred AI systems and in better oversight. We envisage inclusive AI systems that take the interests of all stakeholders into account. That will require more than ethical codes of conduct and legislation. What we need, above all, is an ongoing process subject to continuous monitoring and improvement. Our AI experts suggest setting up ‘dynamic testing grounds’: communities in which data scientists, data engineers, policymakers, government officials and members of the public cooperate on developing AI solutions for the public sector.
Are you interested in developing AI solutions for the public sector? Then read our whitepaper AI: In search of the human dimension.