AI Systems Engineering & Lifecycle Management

Theme:
Lifecycle management

To what extent can an AI system adapt to new laws and user requirements? And is such an AI system flexible enough to quickly respond to societal developments? For engineers, developing future-proof AI systems is a tremendous challenge.

Future-proof AI systems

To start with, engineers must consider the needs and requirements of an AI system’s stakeholders and the environment in which it will be deployed. Furthermore, the right moral, ethical, and legal choices must be made and recorded.

And this doesn’t just need to happen during the development phase; it is most important during an AI system’s actual deployment. After all, the goal is to arrive at AI solutions that can demonstrably adapt quickly and safely to future demands and requirements and to changing environments.

Continuously keeping an AI system up to date requires multidisciplinary engineers working closely with all stakeholders and making both technical and organisational adjustments.

AI learning

A key feature of AI is the fact that it’s a dynamic technology, featuring self-learning algorithms with ever-improving problem-solving capacities.

However, the more advanced AI systems become, the harder it becomes for humans to assess their reliability and capacity to give the right advice and make the right decisions in all foreseen circumstances. For some stakeholders, it even means drastically changing their way of working.

Learning how algorithms work

RDW, the Netherlands Vehicle Authority, is one of many organisations currently in full learning mode to keep up with the myriad implications of AI’s rapid development.

With an increasing number of organisations claiming that self-driving cars will soon be able to operate safely on public roads, RDW inspectors need to understand not only the mechanical parts of cars, but also the algorithms that enable the (semi-)autonomous control of these vehicles.

A key question here is how type approvals should be set up in the future. After all, what’s the value of these approvals when a software update gives vehicles new functionalities that affect their driving characteristics and road safety?
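The tension can be made concrete: an approval is granted for one specific software state, so any update that touches safety-relevant behaviour puts its validity in question. The sketch below is purely illustrative; the class, fields, and rule are assumptions for the sake of the example and do not reflect RDW’s actual approval process.

```python
from dataclasses import dataclass, field

@dataclass
class TypeApproval:
    """Hypothetical model of a type approval bound to one software version."""
    vehicle_model: str
    approved_software_hash: str
    safety_relevant_functions: set = field(default_factory=set)

    def still_valid(self, installed_hash: str, changed_functions: set) -> bool:
        # The approval survives an update only if the installed software is
        # unchanged, or the update touched no safety-relevant function.
        if installed_hash == self.approved_software_hash:
            return True
        return not (changed_functions & self.safety_relevant_functions)

approval = TypeApproval("model-x", "abc123", {"lane_keeping", "emergency_braking"})
print(approval.still_valid("abc123", set()))             # unchanged software: True
print(approval.still_valid("def456", {"infotainment"}))  # non-safety update: True
print(approval.still_valid("def456", {"lane_keeping"}))  # safety function changed: False
```

Even this toy rule shows why per-update re-assessment, rather than a one-off approval, becomes necessary once vehicle behaviour can change over the air.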

And this is just one of many examples of how AI may turn an existing situation on its head.

Four concerns for the lifecycle of autonomous AI

TNO is committed to developing autonomous AI systems that can be deployed safely and reliably throughout their entire lifecycle. To achieve this, we’ve distinguished four areas of attention:

during an AI system’s development, considering the requirements that will ensure its safe and reliable deployability and maintainability.

ensuring that AI algorithms can be reliably evaluated to demonstrate, both during their development phase and while in operation, that they meet all set requirements.

efficiently and effectively determining where and how to perform maintenance on AI algorithms (possibly in response to errors or changing requirements).

implementing and integrating ethical and legal guidelines in AI algorithms to comply with changing frameworks.
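The second area of attention, reliably evaluating algorithms against set requirements both in development and in operation, can be sketched as a small automated gate. The threshold names and values below are illustrative assumptions, not TNO-specified requirements.

```python
# Minimal sketch of a requirements gate that can run in a development
# pipeline and again, periodically, on a deployed system.

REQUIREMENTS = {            # illustrative thresholds, not TNO-specified values
    "min_accuracy": 0.95,   # minimum acceptable accuracy
    "max_latency_ms": 50.0, # maximum acceptable response time
}

def meets_requirements(metrics: dict) -> tuple[bool, list[str]]:
    """Return (ok, violations) for a set of measured metrics."""
    violations = []
    if metrics.get("accuracy", 0.0) < REQUIREMENTS["min_accuracy"]:
        violations.append("accuracy below threshold")
    if metrics.get("latency_ms", float("inf")) > REQUIREMENTS["max_latency_ms"]:
        violations.append("latency above threshold")
    return (not violations, violations)

ok, why = meets_requirements({"accuracy": 0.97, "latency_ms": 30.0})
print(ok)        # True: all requirements met
ok, why = meets_requirements({"accuracy": 0.91, "latency_ms": 30.0})
print(ok, why)   # False, with the accuracy violation named
```

Running the same gate during operation, on live metrics, is what turns a one-time development check into lifecycle-long assurance.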

Get inspired


‘Giant AI goes down the European road’

Information type: Insight
31 March 2023

TNO supports the alarming call of the Future of Life Institute regarding AI. Regulation is urgent and cannot be left solely to the market. Read TNO's response.

Rob de Wijk on the rise of AI in geopolitical context

Information type: Insight
27 September 2022

Anne Fleur van Veenstra, director of science at TNO’s SA&P unit, interviews Rob de Wijk, emeritus professor of international relations in Leiden and founder of The Hague Centre for Strategic Studies. Rob is also a much sought-after expert who appears on radio and television programmes. What does the rise of AI mean geopolitically and in armed conflicts?

Bram Schot on the impact of AI on mobility

Information type: Insight
27 September 2022

Marieke Martens, science director at TNO and professor of automated vehicles at the Eindhoven University of Technology, talks to Bram Schot. Schot was the CEO of Audi until 2020, having previously held management positions at various car makers, including Mercedes and Volkswagen. Their conversation concerns the influence of AI on mobility. How will AI impact the production process? And what does a future with autonomous vehicles look like?

Eppo Bruins on AI in different government domains

Information type: Insight
27 September 2022

Michiel van der Meulen, chief geologist for the Geological Survey of the Netherlands (GDN), speaks with Eppo Bruins. Bruins was educated as a nuclear physicist and has spent many years working in the world of science, innovation, and technology. Between 2015 and 2021, he was a Dutch member of parliament for the Christian Union. He was recently appointed chairman of the Advisory council for science, technology and innovation (AWTI). What will AI mean for the various government domains in the coming years?

Bas Haring on AI, science and philosophy

Information type: Insight
27 September 2022

Michiel van der Meulen, chief geologist for the Geological Survey of the Netherlands (GDN), speaks with Bas Haring. Haring originally studied artificial intelligence, which at the time still fell under the umbrella of philosophy; this is why people started calling him a philosopher. He himself feels more like a ‘folk philosopher’: Haring tries to make science and philosophy accessible to a wider audience. In 2001, he published a children’s book about evolution, Cheese and the Theory of Evolution. What better springboard for a geologist and a philosopher to talk about AI?