Responsible human-machine interaction
AI still has a long way to go before it can avoid mistakes such as bias in facial recognition and accidents with self-driving cars. Progress is fastest when development happens in close cooperation with people. In our use cases, we put our AI technology to work in practical situations.

Responsible decision-making between people and machines
We are currently investigating what responsible decision-making between humans and machines requires. We focus on four areas: responsible AI, explainable AI, co-learning, and secure learning.
Our latest developments
GPT-NL boosts Dutch AI autonomy, knowledge, and technology
Large language models such as ChatGPT offer promising technical opportunities to address societal challenges, but there are also concerns.


The Netherlands starts realisation of GPT-NL, its own open AI language model


New AI Lab for effective and responsible oversight


System integration for robots in greenhouses


AutoAdapt: Self-adaptive machine learning to propel us into the future

