From reactive to proactive: How organisations gain control over GenAI governance
Generative AI (GenAI) is fundamentally transforming how organisations, private and public, need to handle AI and other emerging transformative technologies. Unlike systems based on data analytics and predictions, GenAI creates human-like content with the potential to reshape the way we think about, interact with and relate to each other and the technology. This makes governance exponentially more complex than previous technological shifts. TNO develops practical pathways that guide organisations through this uncharted governance territory, transforming reactive ‘firefighting’ into proactive control.
The governance maze
‘You cannot expect to introduce a system that profoundly changes your way of working and not expect to fundamentally change your organisation from within as well’, says Ilina Georgieva, specialist scientist in AI Governance and Regulation at TNO. Through her work with large organisations, Ilina witnesses how traditional approaches to technology governance fall short when confronted with GenAI's complexity.
The core difference between what we now call ‘traditional’ AI systems and GenAI is the ‘generative’ part. Based on prompts that differ for every user, GenAI creates new content: text, images or sound. Where traditional governance relies on relatively static frameworks, governing GenAI demands new competences and skills, together with continuously evolving methods. This is why TNO is developing flexible GenAI governance ‘pathways’.
Five critical challenges organisations face
In the process of making sense of the technology’s many applications, organisations struggle to define clear roles and responsibilities for GenAI governance. Who monitors the system, its varied applications and the risks that come with them? Who ensures ethical use? And then there are questions that are completely new with GenAI, Ilina points out: ‘Who checks how reliable the output is? What if the system hallucinates? These challenges didn't exist with “traditional” AI systems.’ The maze of duties, roles and interactions becomes increasingly complex as GenAI affects every department in an organisation differently.
Many organisations are eager to set up their GenAI governance, but resort to ad-hoc approaches that fall short. ‘When organisations embark on their GenAI journey, they're often not aware that they need an overarching AI strategy that reflects their organisational mandate, mission and values’, Ilina explains. ‘You also need AI skills and literacy within your organisation. Everybody needs to understand what this innovation means for their role and way of working, and how their way of working is going to influence the system as well.’
This confusion is compounded by regulatory requirements that can be overwhelming. While the European AI Act provides important boundaries, navigating its requirements for GenAI creates what Ilina describes as ‘AI fatigue’. ‘The AI Act is an important geopolitical statement, an attempt to imprint EU values on the technology’, she says. ‘But it also prescribes such extensive risk classification and mitigation practices that it creates compliance bottlenecks, especially for smaller organisations that do not have resources to spare.’
Ilina recalls a public agency that spent two years developing an AI tool to help employees process inspection signals and manage their workload. ‘It was the newly founded data science department that started it’, she says. ‘But it never came to an organisation-wide implementation, because there was resistance from within the organisation. The inspectors whose work the new system affected were never asked about their wishes or concerns regarding its functionality.’
While this siloed approach to organisational innovation rarely pays off, it becomes even more problematic with GenAI, as the technology’s deployment changes key functions within organisations.
The efficiency gains many organisations seek from a GenAI application often conflict with fundamental requirements for transparency, fairness, autonomy or human oversight. This is true for in-house developed as well as procured technology. Finding the balance requires an active, hands-on approach to governance: an iterative process of documenting potential risks and harms, alongside an impact assessment. It also goes beyond technical solutions and demands a larger conversation with the whole organisation.
Government organisations face a unique challenge: they must regulate the very tech companies they depend on for their AI innovation journeys, companies that increasingly reshape public functions and policy-making. The challenge lies in balancing AI adoption with accountability. Meanwhile, public organisations face stricter transparency requirements: they cannot hide behind ‘black box’ algorithms, as AI results and decision-making processes need to be traceable and explainable.
TNO's pathway approach in three pillars
These challenges show why traditional governance falters, but they don’t mean you have to reinvent the wheel. ‘That’s why we try to develop pathways’, Ilina explains. ‘There are already so many frameworks to tame and control things; the challenge is knowing where to start. We see that pilot projects often get stuck at similar points. We want to help organisations pass those hurdles.’ TNO is working on an Oversight Instrument that gets organisations started and guides them in establishing governance pathways that make sense for them. It is based on three pillars:
Pillar 1: Organisational oversight
This pillar focuses on organisational layers and conditions, examining how these layers interact and depend on each other, which is essential for responsible GenAI implementation. The pillar clarifies, for instance, who monitors what, and shows how internal and external oversight connect. We translate abstract principles into concrete guidance tailored to GenAI's dynamic nature.
Pillar 2: Regulatory oversight
Beyond ticking regulatory boxes, organisations must navigate complex questions about AI ethics, risks and harms, liability, and their interplay. This pillar helps organisations understand the regulatory parameters of GenAI systems, choose the appropriate assessments, and transform them into organisational best practices.
Pillar 3: Organisational and stakeholder alignment
‘Organisational alignment and commitment are key’, Ilina emphasises. ‘Never underestimate it. You need to actively acknowledge which knowledge gaps exist and where. Map the in-house expertise, check what you can handle, what skills or capacities you still need to develop.’ This pillar prevents organisations from wasting resources by facilitating knowledge and expertise sharing across departments and beyond.
Responsible AI that works
TNO is building responsible AI that works: systems that are not only effective but also ethical. Responsible AI empowers people, increases societal impact, and keeps control with the user. European values such as privacy and security make AI more usable and reliable, and ensure broad acceptance in society.
Where to start: practical first steps
While a solid AI governance strategy is important, waiting for perfect conditions means never starting, because perfect conditions don't exist. When asked what organisations can do right now, Ilina offers a few first steps.
The foundation begins with mapping all stakeholders affected by GenAI and initiating cross-departmental conversations from day one. ‘Organisational alignment is often a big problem’, Ilina says. By ensuring everyone understands not just what you're implementing, but also why you're innovating in the first place, and how it impacts their role, organisations can create broader internal buy-in and support.
Next, assess internal expertise across technical and non-technical teams. This assessment covers not just the know-how needed in the inception or deployment phase of your GenAI system, but also its ongoing operation. ‘Organisations often underestimate the costs, both financial and in capacity, of sustainably unfolding a system’s application within the organisation’, Ilina warns.
These steps contribute to a holistic strategy that goes beyond technology performance metrics alone. ‘You need commitment, maintenance, and monitoring’, Ilina stresses. This ensures governance is built-in before any system goes live. ‘Setting it up responsibly takes time, but you're going to benefit greatly from it in the long run. Don't go fast and furious.’
Develop your governance toolkit with TNO
TNO actively seeks partners across public and private sectors to co-create governance instruments and pathways in practice. ‘If organisations are looking for ways to meaningfully structure their AI governance journey, we can help’, Ilina says. ‘You don't have to do this alone.’
Contact us to explore how collaborative expertise and evidence-based tools can transform your governance journey.
TNO's research group GRAIL
TNO's research group GRAIL (Generative Responsible AI League) is building comprehensive toolkits that help organisations responsibly implement GenAI across all facets. We're seeking partners to co-develop practical instruments for:
- governance structures
- critical thinking among users
- evaluation of AI system performance.
Together, we can ensure GenAI becomes a force for positive transformation in your organisation.