Cybersecure AI and Emerging Technology

Theme:
Cybersecurity

The rapid development of AI, together with its inherently unpredictable behaviour, makes this technology particularly challenging to secure against cyberattacks. Embedding AI into existing systems introduces new types of vulnerabilities. How can organisations leverage the benefits of AI while mitigating the risks?

Securing AI throughout its entire lifecycle

Before integrating AI into your digital ecosystem, it’s critical to understand and mitigate the unique risks it introduces. AI evolves rapidly, and new vulnerabilities emerge daily, making security a moving target. TNO provides the expertise and tools to ensure your AI solutions deliver efficiency and speed without exposing you to targeted attacks. Our approach combines deep technical knowledge with proven methodologies to help you deploy AI confidently and securely.

Secure-by-design AI

Addressing security from the very beginning saves substantial time and costs, compared to patching vulnerabilities after they’re discovered. TNO provides technical guidelines, technologies, and advice. This enables developers to design, build, and implement AI models, products, and systems securely from the start. We help build resilience into AI-enabled software to make it more robust against attacks, and develop autonomous responses to adversarial attacks.

TNO ensures that new AI systems can be rolled out as securely as possible. We provide the necessary security controls for vital sectors such as security and defence, as well as for high-tech industry applications and beyond. Our techniques focus on making AI systems more robust against attacks, often applying AI to defend against AI-based attacks. Our recent publication outlines effective defences against five types of attacks that target Machine Learning (ML) models.

AI security assessment and risk management

Currently, there is no single, shared methodology to assess the security of AI systems or to ensure compliance with EU and Dutch regulations. Developers often assess their own models but lack independent verification that those assessments are complete and effective. TNO offers a trustworthy, independent security assessment framework that helps organisations make conscious, risk-based decisions to further protect their AI solutions, and that informs future standards and methodologies leading to clear regulations and guidelines for AI development.

Organisations may not fully grasp the scope and scale of AI security issues, let alone how to defend against them. The non-deterministic nature of AI makes security assessment challenging. TNO helps to identify and prioritise the implementation of relevant AI security controls and ensure both compliance and effective protection.

We developed TAISHA to help organisations navigate the vast and complex landscape of AI security risks and mitigations. This tool helps organisations conduct a high-level evaluation of the security of AI applications. By following an extensive questionnaire, our expert system will gather all the information needed to assess which aspects of an AI application pose possible security risks. It will also supply an actionable output to manage and prioritise these risks. Contact us to learn more about TAISHA and start using it today.
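TAISHA's internals are not public, so the sketch below is purely a hypothetical illustration of the general pattern the paragraph describes: questionnaire answers mapped to weighted risk aspects and returned as a prioritised list. All question identifiers and weights are invented and do not reflect the actual tool.

```python
# Hypothetical sketch of questionnaire-driven AI risk prioritisation.
# Question IDs and weights are invented for illustration only; they do
# not reflect TAISHA's actual questions, scoring, or output format.

RISK_WEIGHTS = {
    "trained_on_sensitive_data": 4,          # data-extraction risk
    "model_exposed_via_public_api": 3,       # larger attack surface
    "accepts_user_supplied_input": 2,        # prompt/evasion risk
    "uses_third_party_pretrained_model": 2,  # supply-chain risk
}

def assess(answers: dict[str, bool]) -> list[tuple[str, int]]:
    """Return the flagged risk aspects, highest weight first."""
    flagged = [(q, w) for q, w in RISK_WEIGHTS.items() if answers.get(q)]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

answers = {
    "trained_on_sensitive_data": True,
    "model_exposed_via_public_api": True,
    "accepts_user_supplied_input": False,
    "uses_third_party_pretrained_model": False,
}
for question, weight in assess(answers):
    print(f"{question}: priority {weight}")
```

The value of such an expert system lies less in the scoring arithmetic than in the curated question set, which encodes expert knowledge of where AI-specific risks tend to hide.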

[Image] TAISHA provides a clear overview of potential security risks in AI applications, and actionable output to mitigate them.

AI red teaming

Once an AI system is developed, it must be thoroughly tested and evaluated for its resilience against attacks. Through AI red teaming, TNO offers the tools and technical guidelines to simulate adversarial attacks on models. This identifies AI-specific flaws and actively tests the robustness and resilience of the system.

Our highly technical red teaming techniques are tailored to the type of AI under test. For example, when testing LLM systems, we feed them adversarial prompts designed to trigger undesirable answers. For computer vision systems, we craft adversarial samples by injecting imperceptible noise into images to disrupt correct model classification. And for models trained on sensitive data, we test whether that data can be extracted from the model. The common denominator is validating whether we can trigger security flaws in the AI system, yielding new insights that help mitigate risks.
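The computer-vision attack mentioned above can be illustrated with a gradient-sign perturbation (in the style of FGSM) against a toy model. This is a didactic sketch, not TNO's red-teaming tooling: the "classifier" is a random linear model and the "image" is random data, chosen so the effect is easy to verify.

```python
import numpy as np

# Illustrative sketch only: a gradient-sign (FGSM-style) adversarial
# perturbation against a toy linear classifier. Real red teaming targets
# deep vision models; the weights and "image" below are randomly generated.

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weights of a toy linear classifier
x = rng.normal(size=64)   # a flattened 8x8 "image"

def predict(sample: np.ndarray) -> int:
    """Classify as 1 if the linear score is positive, else 0."""
    return int(sample @ w > 0)

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so stepping against sign(score) * sign(w) pushes the score past zero.
score = x @ w
eps = abs(score) / np.abs(w).sum() * 1.1  # just enough budget per pixel
x_adv = x - np.sign(score) * eps * np.sign(w)

print(predict(x) != predict(x_adv))  # True: the small perturbation flips the class
```

Against deep networks the same idea applies, but the gradient must be obtained by backpropagation, and the perturbation budget is kept small enough that the noise stays invisible to a human observer.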

Large Language Models (LLMs) are particularly challenging to protect, given their broad application and the vast variety of ways in which they are used. TNO is actively participating in a PCSI joint research project to uncover how to control the access, privileges, and scope of LLMs without hindering their benefits. The research will contribute to an LLM security benchmark using open-source LLM security evaluation tools, enabling more secure LLM deployment.

Connection with Trustworthy AI

The security of AI is a subfield within the broader field of Trustworthy AI. At TNO, we possess expertise in the full range of Trustworthy AI.

We help to align AI applications with our core societal values like transparency, fairness, accountability, privacy, and more. Since the behaviour of AI cannot be made fully predictable, building the necessary protections and processes around it is key. This ensures that systems and platforms are trustworthy enough to combat the errors, benign failures, and unpredictable outcomes that are intrinsic to AI.

TNO also works on Privacy Enhancing Technologies (PETs). Data drives AI innovation, and TNO helps you keep this valuable asset robust and secure. We apply technologies that protect data from attacks during both the training and operational phases, using techniques such as federated learning and Multi-Party Computation (MPC). We encourage you to contact TNO, whatever stage of AI development you are in.
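The core idea behind MPC can be shown with additive secret sharing, one of its basic building blocks: several parties jointly compute the sum of their private values without any party revealing its own input. The party count, inputs, and modulus below are illustrative, not a TNO protocol.

```python
import secrets

# Minimal sketch of additive secret sharing, a building block of
# Multi-Party Computation (MPC): three parties jointly compute the sum
# of their private values without revealing any individual value.
# The inputs and the modulus are illustrative only.

MOD = 2**61 - 1  # all arithmetic is done modulo a public constant

def share(value: int, n_parties: int) -> list[int]:
    """Split a value into n random shares that sum to it modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

private_inputs = [42, 17, 99]                # one secret per party
all_shares = [share(v, 3) for v in private_inputs]

# Each party locally sums the shares it received (one from every input)...
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]
# ...and only the combined result is reconstructed.
total = sum(partial_sums) % MOD
print(total)  # 158: the correct sum, with no individual value disclosed
```

Because each individual share is uniformly random, seeing any strict subset of shares reveals nothing about the underlying value; only the final aggregate becomes known.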

Podcast

In this episode of TNO Insights, we delve into the intersection of cybersecurity and AI. What are the main threats that AI poses to cybersecurity? Where can it actually offer a solution? And how can policymakers prepare for the future?

Security for emerging technologies

Beyond AI, other emerging technologies pose significant security risks. From 6G telecommunications to IoT devices, and from satellite communication to quantum networking, new and complex technologies bring new vulnerabilities. The rapid deployment of AI- or ML-enabled devices, systems, and applications raises new questions about our ability to secure them effectively for the entire lifetime of the product or system.

TNO stands ready to explore this ecosystem and develop cybersecurity solutions that are fit for purpose and remain secure for the lifetime of these unique products and systems. Our secure-by-design and autonomous resilience solutions, combined with our multidisciplinary expertise and regulatory knowledge, support a trustworthy future amid these increasingly complex technologies.

Partnership for a secure future

Are you developing an AI-driven technology or a novel product concept and want to ensure it is secure, robust, and compliant? Interested in evaluating the security of your system or testing it in real-world circumstances to verify its resilience? TNO is ready to develop the cybersecurity solutions you need to ensure that your systems are as secure as they need to be throughout their lifetimes.

Get inspired


Autonomous Cyber Resilience

Information type:
Article
TNO’s methodologies and tools embed autonomous cyber resilience (ACR) into critical digital systems for improved resilience against increasing threats.

Software & System Security

Information type:
Article

Advanced Detection and Threat Management for IT and OT

Information type:
Article

Province Noord-Brabant, TNO and partners join forces on cybersecurity

Information type:
News
10 December 2025

Cybersecurity by design: our vision

Information type:
Article