Type of project:
Project
Theme:
Artificial intelligence

AI Oversight Lab

Public organisations that deploy AI in critical domains must stay firmly in control of fairness, transparency and compliance. The AI Oversight Lab helps them do so. We support teams in making responsible decisions about algorithms and AI through hands-on guidance, applied research and a safe environment to learn, test and reflect. The result: faster insight into risks, better-grounded decisions, and AI solutions that demonstrably align with public values and legislation.

AI is growing and so are the challenges

AI adoption in the public sector is accelerating rapidly. Applications range from decision support in council meetings to predicting and preventing traffic accidents. In 2025, the number of generative AI applications within public organisations was eight times higher than the year before. Recent incidents show how quickly unintended effects can escalate into major societal impact. Public organisations therefore need stronger capabilities to assess, monitor and substantiate that their AI systems are reliable, fair and compliant with legislation, such as the AI Act.

At the same time, many public-sector AI initiatives stall in the pilot phase. Limited data availability, skills shortages, and uncertainty about legal obligations and standards prevent scaling. Promising applications fail to move into production, and valuable knowledge often stays locked inside individual organisations. This creates a pressing need for practical support, clear frameworks and concrete tools to develop and scale AI responsibly across the public sector.

How the AI Oversight Lab helps

The AI Oversight Lab (AIOL) was established by TNO to support public organisations in tackling these challenges. Our mission is clear: to enable responsible decision-making about the use of algorithms and AI, from initial idea to deployment and evaluation.

The Lab offers a unique environment to assess AI models for bias, transparency and compliance. We help organisations understand and control risks, safeguard impact, and scale AI solutions responsibly. Acting as a critical mirror, AIOL helps organisations align their AI use with public values and legal requirements.

From IAMA, AIIA and DPIA to the AI Act, GDPR and DSA: how do your algorithms measure up against an increasingly complex landscape of laws, standards and obligations? How do you scale AI across your organisation? How do you make strategic choices and involve all relevant stakeholders and expertise? And which techniques should you use to test the fairness and reliability of your AI systems?

AIOL provides practical answers to these questions:

We conduct joint research, guide development trajectories, and provide a safe space for learning and reflection. Interdisciplinary collaboration is central to our approach, because we believe ethical, organisational and technical aspects must be addressed together to create valuable and responsible AI.

You can work with us on:

  • Evaluating algorithms and (generative) AI models for robustness, reliability and bias
  • Responsible design and development of algorithms and AI systems
  • Structured decision‑making on goals, values, risks and resources using our BVRR method
  • Setting up governance structures and ethical frameworks
  • Applying standards and regulatory frameworks in practice
  • Assessing and strengthening AI maturity within your organisation
  • Tailored training courses and workshops on responsible AI

We conduct interdisciplinary research on cross‑government challenges such as scaling AI, translate scientific insights into practice (and vice versa), and support the translation of scientific and practical insights into policy.

We help with:

  • Developing guidelines and frameworks
  • Trend analyses

By sharing experiences and best practices, we make scientific knowledge more accessible and foster a network built on learning, reflection and collaboration.

Use cases

Since 2021, we have worked every year with multiple partners across different layers of the public sector, including municipalities, executive agencies and ministries. With each of these organisations, we jointly address challenges that arise in the development and implementation of AI. Examples of the issues tackled in these collaborations include:

Immigration and Naturalisation Service (IND): assessing a risk model for bias

The challenge

The Immigration and Naturalisation Service (IND) developed a risk model to identify violations by recognised sponsors more quickly. The organisation wanted to be certain that the model was reliable and free from unwanted bias. It was also important to understand which factors influenced the outcomes and how inspectors could use the model responsibly.

The solution

AIOL held discussions with inspectors, managers and developers to gain insight into work processes and model usage. In joint workshops, we mapped potential forms of bias and carried out technical analyses of how the model handled sensitive variables such as nationality. This combination of practical insight and technical assessment made it possible to clearly identify risks.
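
To give a sense of what such a technical analysis can look like, here is a minimal sketch that compares per-group selection rates and true-positive rates for a binary risk model. The data, column names and group labels are made up for illustration; this is not the IND model or the actual analysis.

```python
import pandas as pd

# Hypothetical data: `flagged` is the model's prediction,
# `violation` the observed outcome. Not real IND data.
df = pd.DataFrame({
    "nationality_group": ["A", "A", "A", "B", "B", "B"],
    "flagged":           [1,   0,   0,   1,   1,   0],
    "violation":         [1,   0,   1,   0,   1,   0],
})

# Selection rate per group: demographic-parity checks compare these.
selection = df.groupby("nationality_group")["flagged"].mean()

# True-positive rate per group: equal-opportunity checks compare these.
tpr = df[df["violation"] == 1].groupby("nationality_group")["flagged"].mean()

print("selection rate per group:", selection.to_dict())
print("true-positive rate per group:", tpr.to_dict())
print("demographic parity gap:", round(selection.max() - selection.min(), 3))
print("equal opportunity gap:", round(tpr.max() - tpr.min(), 3))
```

Large gaps between groups on either statistic would be a signal to investigate how the model uses sensitive variables, directly or through proxies.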

The result

The findings were included in a report shared with the House of Representatives. IND acted on the recommendations and, together with AIOL, conducted follow‑up research into the alignment between the risk model and operational practice.

Municipality of Nissewaard: evaluating a fraud-detection algorithm

The challenge

The municipality of Nissewaard used an algorithm to select potential social‑benefit fraud cases. The key question was whether the model could be used responsibly: did it operate reliably, fairly and transparently? The municipality wanted clarity before continuing to use the model.

The solution

AIOL assessed the model against existing guidelines for responsible AI use. We interviewed municipal staff and the developer to understand how the model was built and applied. The technical evaluation focused on three aspects: whether the model was verifiable, whether it behaved as intended, and whether it fitted within the organisation’s way of working. This made it possible to clearly identify the main risks.

The result

The evaluation showed that the algorithm could not be used responsibly at that time. The municipality therefore decided to stop using it. The study underlined the importance of critical evaluation and careful consideration before deploying AI applications.

State Supervision of Mines (SodM): holistic evaluation of generative AI

The challenge

The State Supervision of Mines (SodM) is exploring the use of generative AI to support inspectors in desk research. It is essential that any systems developed are responsible, reliable and valuable. The challenge was to evaluate the AI system holistically in order to assess whether it adds value and can be deployed responsibly.

The solution

Together with SodM, AIOL developed a holistic evaluation methodology based on the seven principles of responsible AI. Each principle was assessed at different levels to build an overall picture of the system’s performance. An application developed by SodM was evaluated both technically and with users.
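
As a rough illustration of a multi-level assessment, the sketch below averages hypothetical scores per principle across assessment levels and flags principles that fall below a threshold. The principle names follow the widely used EU high-level expert group requirements for trustworthy AI; the levels, scores and threshold are assumptions, not the methodology developed with SodM.

```python
from statistics import mean

# All levels, scores and the threshold below are illustrative assumptions.
LEVELS = ("technical", "process", "user")
THRESHOLD = 2.5  # hypothetical minimum average score on a 0-4 scale

scores = {
    "human agency and oversight":                 {"technical": 3, "process": 2, "user": 3},
    "technical robustness and safety":            {"technical": 2, "process": 3, "user": 2},
    "privacy and data governance":                {"technical": 4, "process": 3, "user": 3},
    "transparency":                               {"technical": 2, "process": 2, "user": 1},
    "diversity, non-discrimination and fairness": {"technical": 3, "process": 2, "user": 2},
    "societal and environmental well-being":      {"technical": 3, "process": 3, "user": 3},
    "accountability":                             {"technical": 2, "process": 4, "user": 3},
}

for principle, by_level in scores.items():
    avg = mean(by_level[level] for level in LEVELS)
    flag = "  <- below threshold, needs attention" if avg < THRESHOLD else ""
    print(f"{principle:45s} avg={avg:.2f}{flag}")
```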

The result

SodM gained insight into how different aspects of responsible AI can be incorporated and assessed during the development of generative AI applications. This enables SodM to design a process for well‑founded decision-making about further development and deployment of AI applications.

SodM: AI maturity and innovation capacity

The challenge

SodM is running several pilots to explore the potential of AI. To enable responsible AI use on a structural basis and to accelerate innovation, organisational AI maturity is essential. Moving from pilot to successful adoption across the organisation is not straightforward. The organisation needs to be prepared at multiple levels, such as AI literacy and AI governance. In addition, AI systems must be and remain compliant with legislation and other (internal) requirements.

The solution

Together with SodM, AIOL developed an AI maturity and innovation capacity scan, providing insight into the various aspects required to enable responsible AI use and accelerate innovation. We also mapped how legal, regulatory and organisational requirements can be monitored across different types of AI applications.
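
The output of such a scan can be as simple as a score per organisational dimension compared against an ambition level, ranked by gap to prioritise investment. A minimal sketch with hypothetical dimensions and scores (not the actual scan developed with SodM):

```python
# Hypothetical dimensions and 1-5 maturity scores; illustrative only.
current = {
    "AI literacy": 2,
    "AI governance": 1,
    "data management": 3,
    "compliance monitoring": 2,
    "innovation capacity": 3,
}
TARGET = 4  # illustrative organisation-wide ambition level

# Rank dimensions by current score so the largest gaps surface first.
for dim, score in sorted(current.items(), key=lambda kv: kv[1]):
    print(f"{dim:22s} current={score}  gap to target={TARGET - score}")
```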

The result

Based on the maturity and innovation capacity scan, SodM can determine where to invest in order to facilitate responsible AI use across the organisation. This allows the organisation to seize AI opportunities more quickly and responsibly, while enabling innovation to progress.

Looking ahead to valuable and responsible AI applications

In the coming years, the AI Oversight Lab will continue to build a strong foundation for responsible AI use in the public sector. We are expanding our expertise into new technologies and domains and developing practical tools and best practices that organisations can apply directly. We are working towards an interdisciplinary and growing community of organisations committed to reliable, human-centred AI. In doing so, we strengthen not only the quality of AI applications, but also trust in the technology.

Want to know more?

The AI Oversight Lab also produces scientific publications, reports and advisory documents.

Would you like to know more about how the AI Oversight Lab can support your organisation?

Get inspired


Appl.AI Splash: Challenges unlocked

Information type: Event
Join us on Monday, 11 May for a sector-focused AI event where recent AI results will be translated to your organisation’s specific needs.
Location: Bar Beton Utrecht Centraal - Stationshal 2-9, 3511 CE Utrecht

Balancing skepticism and blind trust: critical thinking as the key to responsible and effective use of GenAI

Information type: Insight
14 January 2026

From reactive to proactive: How organisations gain control over GenAI governance

Information type: Insight
16 December 2025

How TNO is leading the drive towards sovereign, responsible Dutch AI

Information type: Insight
23 October 2025

TNO’s vision for responsible AI that works

Information type: Article