During Appl.AI webinar #4, 'Applying responsible AI: how do you do that?', we received many viewer questions. Below, our panel members answer the most common ones.

The webinar featured a panel with Chief Data Scientist Jasper van Vliet (left) and Lead Scientist Artificial Intelligence Cor Veenman (right).

Rewatch the webinar here

Appl.AI Webinar #4: How do you apply responsible AI?

Watch here

1. In which sectors do you foresee difficulties/problems related to AI?

There are examples in the news, from both government and the business community, where the moral compass for AI/data use appears inadequate. All automatic decisions with potentially high impact or long-term personal consequences deserve extra attention. A one-off automatic (but questionable) decision about a traffic fine is less far-reaching than being permanently refused as a customer at a bank.

2. How can you find out where in a company high-risk AI poses a risk (for customers, inspectors and employees)?

Article 13 of the proposed EU AI Regulation concerns the obligation of transparency and the provision of information to users. In some form, you as a user will then be able to find out who uses AI for which application. We will have to wait and see what that obligation will look like in practice.

A proactive example of how something like this can work out can be found on the website 'City of Amsterdam Algorithm Register'.

3. With regard to the bottleneck of the 'AI knowledge level', can we train people in the Netherlands or should we import knowledge from abroad?

Yes, we can train the basic knowledge of users in the Netherlands. For example, the National AI Course is a basic training for anyone who uses an AI product. 'Using an AI product' is very different from making AI products. That knowledge is scarce, and there are good training courses both abroad and in the Netherlands. Increasing the level of knowledge can therefore be done by recruiting both within and outside the Netherlands.

4. What knowledge of ethics is the minimum required, especially at the board level, to make a responsible choice for the deployment of AI?

Some of that knowledge should already be in place by now. AI methods often make existing processes more efficient and effective. AI experts should know the upcoming regulations and the consequences for the tools they use. The translation to the organization does require coordination and proper ethical and legal knowledge. From these disciplines, there is certainly attention for responsible AI.

5. How does AI regulation in Europe compare to China and America? I have the impression that outside Europe they attach much less value to 'rules and ethics'. Do you think we will lose the competition because of this?

You could say that our society pursues at least two goals. First, facilitating a thriving economy. Second, upholding norms and values. Indeed, the balance between these is different in Europe than in the US and China. However, we see a tendency in the US to recognize ethical values more (see, for example, Facebook).

6. The proposition is that gender should not play a role in relation to salary and job selection. But are there situations/jobs where gender is simply a relevant aspect? And that it is therefore valid information to know what the gender is?

Possibly, although it will more often be about certain skills that we associate with gender. When it comes to heavy work, you can specify that as a required skill instead. Nevertheless, if there is a genuine reason to need gender as a characteristic, then the GDPR approach probably applies: make purpose limitation and subsidiarity explicit. Such matters will be addressed in the EU's AI regulations.
