Responsible human-machine interaction
AI still has a long way to go before it avoids mistakes such as bias in facial recognition and accidents involving self-driving cars. Progress is fastest when AI is developed in close cooperation with people, which is why we work out our AI technology in practical use cases.

Responsible decision-making between people and machines
We are currently investigating what is needed for responsible decision-making between humans and machines. There are four focus areas: responsible AI, explainable AI, co-learning and secure learning.
Our latest developments
ObjectivEye: AI-assisted human recruitment
Organisations are facing unprecedented staff shortages and are unable to fill their vacancies. To address this, TNO has developed ethical and explainable AI technology for recruitment.

Large dataset from news organisations for Dutch AI language model GPT-NL

Working on reliable AI

AI model for personalised healthy lifestyle advice

AI in training: FATE develops digital doctor's assistant