Responsible decision-making between people and machines

Theme: Artificial intelligence

Bias in facial recognition and recruitment systems. Accidents involving self-driving cars. Failures like these show that much remains to be done in the development of AI. The fastest way to move that development forward is for AI and people to work closely together.

There is no denying that artificial intelligence has produced some major success stories. Deep-learning systems, for example, can lip-read or recognise tumours. But AI systems also regularly slip up, and that can have very serious consequences, especially in ethically sensitive applications or in situations where safety is at stake.

It is therefore time for the next step: a closer partnership between AI and people. This will enable us to develop AI systems that can assist us in taking complex decisions, and with which we can work enjoyably and safely.

AI still faces a stiff learning curve

To start with, AI tends to replicate human prejudices unquestioningly. This is a particular problem in socially and ethically sensitive applications, such as recruiting new employees or predicting the likelihood that an offender will reoffend.
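
As a minimal illustration of how such replicated bias can be detected, the sketch below computes two common fairness metrics for a hypothetical hiring model's decisions. The data, the group labels, and the function name are invented for illustration; this is a generic fairness check, not TNO's specific method.

```python
import numpy as np

def fairness_report(y_pred, group):
    """Compare positive-outcome rates (e.g. 'invite to interview')
    between two demographic groups. Illustrative only."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()   # selection rate, group 0
    rate_b = y_pred[group == 1].mean()   # selection rate, group 1
    return {
        "demographic_parity_diff": abs(rate_a - rate_b),
        # Ratio of selection rates; values below 0.8 are often
        # flagged under the informal 'four-fifths rule'.
        "disparate_impact": min(rate_a, rate_b) / max(rate_a, rate_b),
    }

# Hypothetical model decisions (1 = invite) and group membership.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(fairness_report(decisions, groups))
```

Checks like this only reveal a disparity; deciding whether it is acceptable, and correcting it, still requires human judgement.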

Another problem is opaque decision-making. AI systems need to become more transparent, certainly if they are to be used in sensitive contexts such as law enforcement or detecting social security fraud.
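
One widely used way to make a model's behaviour more inspectable is to report which input features actually drive its predictions. The sketch below does this with permutation importance on a synthetic dataset; scikit-learn and the synthetic task are illustrative choices, not a description of TNO's own tooling.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a sensitive decision task.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and
# measure how much the model's accuracy degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```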

AI systems must not be rigid, either. They have to adapt to their users and to changes in society, without losing sight of ethical and legal principles.
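
A rough sketch of what such adaptation could look like, assuming an online human-in-the-loop setup: the model proposes decisions, a human reviewer supplies the correct answer, and the model updates incrementally. The simulated stream and the use of scikit-learn's `partial_fit` are illustrative assumptions, not a description of TNO's systems.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

correct, seen = 0, 0
for step in range(200):
    x = rng.normal(size=(1, 3))
    true_label = int(x[0, 0] > 0)   # hidden ground truth
    if step > 0:                    # needs one update before predicting
        correct += int(model.predict(x)[0] == true_label)
        seen += 1
    # The human reviewer's correction updates the model in place,
    # instead of requiring retraining from scratch.
    model.partial_fit(x, [true_label], classes=classes)

print(f"accuracy on the stream so far: {correct / seen:.2f}")
```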

Finally, the data analysed by AI may be private and confidential. This is especially the case when businesses and organisations take part in a shared decision-making system.

What artificial intelligence still needs

TNO is currently investigating what is needed to achieve a responsible decision-making process between people and machines. There are four points to consider:

  1. Responsible AI: by incorporating ethical and legal principles into AI.
  2. Accountable AI: by enabling different types of user to understand and act upon advice and recommendations given by the system.
  3. Co-learning: by adapting, with the help of people, to a changing world. This should be done in a way that ensures that ethical and legal principles are anchored in the system.
  4. Secure learning: by learning from data without actually sharing them with other parties (see the sketch after this list).
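
Point 4 can be made concrete with federated learning, one established way of learning from data without sharing them. In the minimal sketch below, three hypothetical parties each fit a model locally and exchange only model parameters, never raw records. All names and data are invented; a real deployment would add secure aggregation and further privacy safeguards.

```python
import numpy as np

def local_linear_fit(X, y):
    """Each party fits a least-squares model on its own data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(weight_list, sizes):
    """Aggregate only the parameters, weighted by dataset size."""
    return np.average(weight_list, axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three organisations, each with private data that never leaves home.
parties = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    parties.append((X, y))

local_weights = [local_linear_fit(X, y) for X, y in parties]
global_w = federated_average(local_weights, [len(y) for _, y in parties])
print("federated estimate:", global_w)   # close to [2.0, -1.0]
```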

The aim: reliable and fair AI

TNO is helping to bring about reliable AI systems that demonstrably operate fairly at all times, including in complex and fast-changing environments. We are seeking to develop AI systems that can explain to different types of user why they reach a particular decision.

Delving deep with use cases

In addition to this area of research, we are also working on safe autonomous systems in an open world. In other words, we are exploring solutions to the challenges facing AI today from a range of perspectives. The focus here, however, is very much on responsible decision-making between people and machines. Through a number of use cases, we are trying out our AI technology in practical situations.
