Dr. ir. Anne Fleur van Veenstra
- Policy Lab
- Data-driven policy
- Digital governance
- Public sector innovation
During the webinar ‘AI’s role in government decision-making’, we received many viewer questions. Here, you will find answers to the most common topics.
Appl.AI Webinar #1: 'How can AI help the government in transparent and fair decision-making?'
The role of data cannot simply be framed as a 'problem and solution', because data-driven decisions also create new problems of their own. Through data we can now learn about and anticipate the possible consequences of a policy or decision far better than before, and at a scale that was previously impossible. The other side of the coin is that society also has expectations about the use of digital resources: in other words, government has to keep up with the times. People expect to be able to complete a form for the government digitally.
Partly because the government does not have the skills and tools to build an app quickly and in an agile way. And partly because it is increasingly understood worldwide that an app in itself is not a solution and that much more is needed. In other words, reaching for technological solutions before the social problems are properly understood remains one of the biggest pitfalls of digitization.
There is a huge difference between a Western culture like ours and that of South Korea, for example.
There are enormous cultural differences when it comes to privacy and the role of government in protecting personal data. In the Netherlands (and in the Scandinavian countries), trust in the government, including trust that it will use personal data for the right purposes and protect it properly, is relatively high. The trade-off itself is a political one, but it is made within the bounds of the law.
At TNO we have developed a method that we apply as the 'Policy Lab'. In a series of sprints we bring together all the expertise in a project so that everyone really understands each other's language. Even between statisticians and data scientists there are big differences about labels, categories and the weighting of variables. It is a fascinating topic at the interface of data-driven algorithms and knowledge-based systems (something we call 'hybrid AI'). Research on this theme has only just begun!
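To make this a little more concrete, below is a minimal, illustrative sketch of what combining a data-driven component with a knowledge-based rule can look like. All data, the rule and the thresholds are hypothetical; this is not the Policy Lab method itself, only an example of the idea.

```python
# Illustrative sketch of hybrid AI: a data-driven score combined with an
# explicit, human-readable rule. All data and the rule are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: each row describes a case with two indicators,
# the label says whether follow-up turned out to be needed.
X_train = np.array([[0.2, 1.0], [0.8, 3.0], [0.5, 2.0], [0.9, 4.0]])
y_train = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X_train, y_train)

def hybrid_decision(case):
    """Combine the learned score with a rule set by domain experts."""
    score = model.predict_proba([case])[0, 1]  # data-driven component
    if case[1] > 3.5:                          # knowledge-based rule
        return "manual review"                 # the explicit rule takes precedence
    return "follow up" if score > 0.5 else "no action"

print(hybrid_decision([0.7, 2.5]))
```

The point of the sketch is that the statistical score never acts alone: an explicit rule, which experts can read and debate, stays part of the decision.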
Good point. Companies that mainly want to profit from the development of such an app should not be shortlisted in any case. The dominant role of Apple and Google in such apps is worrying in that regard: they effectively take the government's seat and thereby set the rules (without publishing them). In Europe, an open-source protocol for corona tracking apps has been developed that is reviewed, criticized and improved by experts. The question remains why the Dutch government does not join this initiative.
AI is very good at 3 things:
1) Imitating experts (expert systems)
2) Predicting, classifying and clustering (supervised and unsupervised machine learning)
3) Making choices that lead to a specific goal (reinforcement learning)
All of this is valuable and useful, but in itself it is not enough to hand over many decisions without oversight. AI, for example, cannot think creatively or appreciate context. So using AI to predict when an infrastructure object needs maintenance is fine; but for assessing what kind of punishment is appropriate, you do not want AI to act autonomously. Understanding how AI works is important in order to judge for which tasks it can be used and what level of autonomy we allow.
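As a small illustration of the 'predictor' use mentioned above, the sketch below trains a classifier on made-up inspection data to estimate whether an infrastructure object needs maintenance. The features, data and labels are purely hypothetical.

```python
# Illustrative sketch: a classifier estimating whether an infrastructure
# object needs maintenance. Features, data and labels are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per object: age (years), load index, last inspection score.
X = np.array([
    [5, 0.3, 8.0],
    [40, 0.9, 3.5],
    [12, 0.5, 7.0],
    [35, 0.8, 4.0],
])
y = np.array([0, 1, 0, 1])  # 1 = maintenance was needed within a year

model = RandomForestClassifier(random_state=0).fit(X, y)
new_object = np.array([[30, 0.7, 5.0]])
print("maintenance probability:", model.predict_proba(new_object)[0, 1])
# The prediction informs planning; the decision to intervene stays with people.
```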
This is one of the dimensions on which we conduct applied research. AI systems are now often isolated systems that each do one job well. But we expect more: AI systems must be able to work together and cooperate with people. That offers a great deal, but it is not easy to realize. As always, it starts with the question: what purpose does it serve? From there, subsystems and solutions can be sought.
Explainability is essentially a semantic and communicative process. For a human to be able to reason with AI about an AI analysis, the AI's frame of reference must be compatible with the human one: there must be a communicative relationship between human and model. AI often surpasses humans precisely by learning complex relationships from very large amounts of data, which complicates matters.
Transparency and verifiability are prerequisites for the deployment of AI systems. This means that there must be a process in which people can genuinely question government decisions and receive an explanation of how a decision was made.
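One simple way to support such questioning, sketched below with a hypothetical eligibility model and made-up features, is to report how much each input contributed to an individual decision, so the person affected can see what drove the outcome. This is only one of many possible explanation techniques.

```python
# Illustrative sketch: explaining an individual decision by reporting each
# feature's contribution to a linear model's score. Model and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "household_size", "years_registered"]
X = np.array([[20.0, 2, 5], [55.0, 1, 10], [30.0, 4, 2], [70.0, 2, 15]])
y = np.array([1, 0, 1, 0])  # 1 = eligible, in this toy example

model = LogisticRegression(max_iter=1000).fit(X, y)

case = np.array([25.0, 3, 4])
contributions = model.coef_[0] * case  # per-feature contribution to the score
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
decision = model.predict([case])[0]
print("decision:", "eligible" if decision == 1 else "not eligible")
```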
If adequate measures have been taken to remove this bias, it may no longer be an AI problem. But until then, it still is. In both cases the real problem lies with the organization that supplied the data on which the algorithm was trained. At Amazon, the data apparently reflected bias in the offline recruitment process, just as in the tax authority case.
We will have to accept that in a number of cases the relationship between the AI model and human knowledge cannot be properly established. A complex deep learning system that detects cancer is trusted more readily if its output is afterwards confirmed by an additional diagnostic test (performed by humans). When the system is right 99% of the time, explainability matters less. Being able to weigh quality against explainability in this way offers an opportunity to embed models in a human workflow: in some contexts lower explainability and higher accuracy are preferable, while in other situations higher explainability at the cost of somewhat lower accuracy is the better choice.
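The sketch below illustrates this weighing in a very simplified way: it compares a small, easily inspectable decision tree with a larger ensemble on synthetic data, after which the choice between them can be made per context. The data and models are illustrative only.

```python
# Illustrative sketch: weighing explainability against accuracy by comparing
# a small, inspectable tree with a larger ensemble on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "shallow tree (easy to explain)": DecisionTreeClassifier(max_depth=3, random_state=0),
    "random forest (usually stronger)": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.2f}")
# In a high-stakes context, the shallow tree may be preferred even if it scores lower.
```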