
Appl.AI Webinar #1 - Viewer questions

During the webinar ‘AI’s role in government decision-making’, we received many viewer questions. Here you will find answers on the most frequently raised topics.


The webinar consisted of a panel discussion with AI expert Anne Fleur van Veenstra (TNO, right) and government expert Marieke van Putten (BZK).

Rewatch the webinar here

Appl.AI Webinar #1: 'How can AI help the government in transparent and fair decision-making?'

Watch Webinar

1. What specific problem do you solve by using citizen data and what data do you need for that?

The role of data cannot simply be framed as a 'problem and solution', because data-driven decisions also create new problems of their own. What has changed is that we can now learn about and anticipate the possible consequences of a policy or decision through data; this was previously not possible at such a scale. The other side of the coin is that society also has expectations about the use of common digital resources. In other words, you have to keep up with the times: people expect to be able to complete a form for the government digitally.

2. The corona app seems to me to serve a public interest. Why, then, is the solution left to companies?

Because the government does not have the right skills and tools to build an app quickly and in an agile way. Also because there is a growing understanding worldwide that an app in itself is not a solution and that much more is needed. In other words, reaching for technological solutions too quickly, before you understand the social problem, remains one of the biggest pitfalls of digitization.

3. Who weighs the individual interest against the group interest?

There are enormous cultural differences when it comes to privacy and the role of the government in protecting personal data; compare a Western culture like ours with South Korea, for example. In the Netherlands (and in the Scandinavian countries), trust in the government, including trust that it will use personal data for the right purposes and protect it properly, is relatively high. The trade-off is a political one, but it must remain within the bounds of the law.

Anne Fleur van Veenstra, senior researcher at TNO.

4. If you involve multiple disciplines in the design process (such as a lawyer, a policymaker, a citizen, an algorithm developer, perhaps even a philosopher), how do you overcome the barrier that they do not speak each other's language?

At TNO we have developed a method for this that we call the 'Policy Lab'. In a series of sprints we bring all the expertise together so that, within a project, the participants really come to understand each other's language. Even between statisticians and data scientists there are big differences over labels, categories and the weighting of variables. It is a fascinating topic at the interface of data-driven algorithms and knowledge-based systems (something we call 'hybrid AI'). Research on this theme has only just begun!

5. Couldn't you give everyone insight by making the corona app open source? That would allow everyone with the relevant knowledge to participate in its development.

Good point. Companies that mainly want to profit from the development of such an app should not make the shortlist anyway. The dominant role of Apple and Google in such apps is worrying in that regard, as they take the government's seat and thereby set the rules (without disclosing them). In Europe, an open-source protocol for corona tracking apps has been developed and is scrutinized, criticized and improved by experts. The question remains why the Dutch government does not join this initiative.

6. Which government tasks could you easily replace with autonomous AI systems and which ones would you rather not replace?

AI is very good at 3 things:
1) Copying experts (expert systems)
2) Predicting, classifying and clustering (supervised and unsupervised machine learning)
3) Making choices that lead to a specific goal (reinforcement learning)
All of this is valuable and useful, but on its own it is insufficient to hand over many decisions without oversight. For example, AI is not able to think creatively or to appreciate context. Using AI to predict when an infrastructure object needs maintenance is fine; letting AI autonomously assess what punishment is appropriate is not something you want. Understanding how AI works is important in order to assess for which tasks AI can be used and what level of autonomy we allow.
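To make the three capability types above concrete, here is a minimal, illustrative Python sketch. This is our own example, not something discussed in the webinar; the eligibility rule, data and reward values are all hypothetical:

```python
# Illustrative only: toy versions of the three AI capability types above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# 1) Copying experts: a hand-written rule base (expert system).
def expert_system(income: float, debts: float) -> str:
    """A hypothetical eligibility rule an expert might have written down."""
    if debts > 0.5 * income:
        return "refer to a human caseworker"
    return "eligible"

# 2) Predicting/classifying (supervised) and clustering (unsupervised).
rng = np.random.default_rng(0)
X = rng.random((100, 2))                     # synthetic feature data
y = (X[:, 0] + X[:, 1] > 1).astype(int)      # synthetic labels
classifier = DecisionTreeClassifier().fit(X, y)            # supervised
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # unsupervised

# 3) Making choices toward a goal: a single tabular Q-learning update.
q = np.zeros((5, 2))                          # 5 states, 2 actions
state, action, reward, next_state = 0, 1, 1.0, 2
alpha, gamma = 0.1, 0.9                       # learning rate, discount factor
q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
```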

7. How can you monitor the interaction between separate AI systems?

This is one of the dimensions on which we conduct applied research. AI systems are now often isolated systems that do one job well. But we expect more: AI systems must be able to work together with each other and to cooperate with people. That offers a lot, but it is not easy to realize. As always, it starts with the question: what purpose does it serve? From there, subsystems and solutions can be sought.

8. A self-learning algorithm continuously bases its outcome on newly obtained data and correlations, so that each outcome has a different basis. Doesn't this make it very difficult to check and provide insight into a self-learning algorithm?

Explainability is essentially a semantic and communicative process. For a human to be able to reason with AI about an AI analysis, the frame of reference of the AI must be compatible with that of humans: there must be a communicative relationship between human and model. AI often surpasses humans precisely by learning complex connections from very large amounts of data, which complicates the matter.
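One practical way to keep such a moving target inspectable is to re-run a model-agnostic check after every retraining, so each version of the model gets a fresh, human-readable summary of what drives its outcomes. A minimal sketch (our illustration, not a method mentioned in the webinar) using permutation feature importance on a synthetic placeholder model:

```python
# Illustrative sketch: a model-agnostic explanation that can be re-run
# after every retraining of a self-learning model. Data and model are
# synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature hurt the model's score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```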

9. Will it soon become possible as a citizen to ask how a decision of an AI system was made?

Transparency and verifiability are prerequisites for the deployment of AI systems. This means that there must be a process through which people can indeed question government decisions and receive an explanation of how a decision was made.

Government expert Marieke van Putten (BZK).

10. As with the aforementioned discrimination against women at Amazon, we also have a problem close to home: the tax authorities and dual nationality. As long as the bias is known, and is accounted for in retraining to prevent it from being amplified, is there really a problem?

If adequate measures have been taken to remove this bias, it may no longer be an AI problem. But until then, it still is. In both cases, the real problem lies in the organization that provided the data on which the algorithm was trained: at Amazon, the data evidently reflected bias in the offline recruitment process, just as in the tax case.
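Whether such a bias is really gone can be tested rather than assumed. A minimal sketch with hypothetical numbers (our own example, not the actual pipeline of either organization): compare the model's positive-decision rate across the groups concerned before and after retraining; a persistent gap means the bias is still there.

```python
# Illustrative sketch with hypothetical numbers: a demographic-parity
# check on model decisions for two groups.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])            # hypothetical model output
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected attribute

rate_a = predictions[group == "a"].mean()
rate_b = predictions[group == "b"].mean()
print(f"positive-decision rate gap between groups: {abs(rate_a - rate_b):.2f}")
```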

11. AI techniques are becoming more and more advanced, and so explaining them (read: transparency) is becoming increasingly complex. What does the future hold with regard to the so-called 'knowledge gap' between AI techniques (and their explanation) and decision-making about AI?

We will have to accept that in a number of cases the relationship between the AI model and human knowledge cannot be properly established. A complex deep learning system that detects cancer is more readily trusted if its output is afterwards confirmed by an additional diagnostic test (performed by humans). When the system is right 99% of the time, explainability is less important. Being able to calibrate quality and explainability against each other in this way offers an opportunity to include models in a human workflow: in some contexts, lower explainability and higher accuracy are preferable; in other situations, higher explainability at the expense of somewhat lower accuracy is.
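That calibration can be made explicit per use case. A hedged sketch on synthetic data (our own example, not a TNO benchmark): evaluate an easily explainable model next to a typically more accurate black box on the same task, and let the context decide which trade-off is acceptable.

```python
# Illustrative sketch on synthetic data: an explainable model next to a
# typically more accurate black box, so the trade-off can be weighed
# explicitly per use case.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)   # easy to explain
black_box = RandomForestClassifier(n_estimators=200, random_state=0)  # usually more accurate

for name, model in [("shallow tree", interpretable), ("random forest", black_box)]:
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy {accuracy:.3f}")
```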
