AI Systems Engineering & Lifecycle Management
To what extent can an AI system adapt to new laws and user requirements? And is such an AI system flexible enough to quickly respond to societal developments? For engineers, developing future-proof AI systems is a tremendous challenge.
To start with, engineers must consider the needs and requirements of an AI system’s stakeholders and the environment in which it will be deployed. Furthermore, the right moral, ethical, and legal choices must be made and recorded.
And this doesn’t just need to happen during the development phase; it’s most important during an AI system’s actual deployment. After all, the goal is to arrive at AI solutions that can demonstrably adapt quickly and safely to future demands and requirements in changing environments.
Continuously keeping an AI system up to date requires multidisciplinary engineers working closely with all stakeholders and making both technical and organisational adjustments.
A key feature of AI is that it’s a dynamic technology, featuring self-learning algorithms with ever-improving problem-solving capabilities.
However, the more advanced AI systems become, the harder it becomes for humans to assess their reliability and capacity to give the right advice and make the right decisions in all foreseen circumstances. For some stakeholders, it even means drastically changing their way of working.
Learning how algorithms work
RDW, the Netherlands Vehicle Authority, is one of many organisations currently in full learning mode to keep up with the myriad implications of AI’s rapid development.
With an increasing number of organisations claiming that self-driving cars will soon be able to operate safely on public roads, RDW inspectors need to understand not only the mechanical parts of cars, but also the algorithms that enable the (semi-)autonomous control of these vehicles.
A key question here is how type approvals should be set up in the future. After all, what’s the value of these approvals if a software update gives a vehicle new functionality that affects its driving characteristics and road safety?
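One way to make this concrete: a type approval could be pinned to the exact software builds that were assessed, so that any update to a safety-relevant component automatically invalidates the approval until re-assessment. The sketch below illustrates that idea; the component names, build identifiers, and the `approval_still_valid` helper are all hypothetical, not an actual RDW procedure.

```python
import hashlib

# Hypothetical sketch: a type approval pinned to specific software builds.
# Component names and build identifiers are illustrative only.
APPROVED_BUILDS = {
    # component name -> SHA-256 of the approved build
    "lane_keeping_model": hashlib.sha256(b"model-v1.2").hexdigest(),
    "braking_controller": hashlib.sha256(b"ctrl-v3.0").hexdigest(),
}

def approval_still_valid(installed: dict) -> bool:
    """Return True only if every safety-relevant component matches
    the hash recorded at type approval."""
    for name, approved_hash in APPROVED_BUILDS.items():
        blob = installed.get(name)
        if blob is None or hashlib.sha256(blob).hexdigest() != approved_hash:
            return False
    return True

# The approved configuration passes the check:
print(approval_still_valid({
    "lane_keeping_model": b"model-v1.2",
    "braking_controller": b"ctrl-v3.0",
}))  # True

# A software update that swaps the lane-keeping model breaks the match:
print(approval_still_valid({
    "lane_keeping_model": b"model-v1.3",   # updated component
    "braking_controller": b"ctrl-v3.0",
}))  # False
```

The design choice here is that the approval check is purely mechanical: it says nothing about whether the new build is safe, only that it is no longer the build that was assessed.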
And this is just one of many examples of how AI may turn an existing situation on its head.
Four concerns for the lifecycle of autonomous AI
TNO is committed to developing autonomous AI systems that can be deployed safely and reliably throughout their entire lifecycle. To achieve this, we’ve distinguished four areas of attention:
during an AI system’s development, considering the requirements that will ensure its safe and reliable deployability and maintainability.
ensuring that AI algorithms can be reliably evaluated to demonstrate, both during their development phase and while in operation, that they meet all set requirements.
efficiently and effectively determining where and how to perform maintenance on AI algorithms (possibly in response to errors or changing requirements).
implementing and integrating ethical and legal guidelines in AI algorithms to comply with changing frameworks.
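The second and third concerns above can be sketched in miniature: record the requirements an AI system must meet, evaluate the deployed model against them periodically, and treat any violated requirement as a maintenance trigger. The metric names and thresholds below are purely illustrative assumptions, not TNO’s actual methodology.

```python
# Hypothetical sketch: continuous evaluation of a deployed model against
# recorded requirements; any failed requirement flags the system for
# maintenance. Metric names and thresholds are illustrative only.
REQUIREMENTS = {
    # metric name -> minimum acceptable value
    "accuracy": 0.95,
    "recall_on_critical_class": 0.99,
}

def failed_requirements(metrics: dict) -> list:
    """Return the requirements that the current evaluation run violates."""
    return [
        name for name, threshold in REQUIREMENTS.items()
        if metrics.get(name, 0.0) < threshold
    ]

# An in-operation evaluation run: accuracy is fine, but recall on the
# critical class has drifted below its threshold.
run = {"accuracy": 0.97, "recall_on_critical_class": 0.981}
print(failed_requirements(run))  # ['recall_on_critical_class']
```

Keeping the requirements in data rather than code also serves the fourth concern: when ethical or legal frameworks change, the thresholds can be updated and re-evaluated without touching the model itself.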
‘Giant AI goes down the European road’
TNO supports the alarming call of the Future of Life Institute regarding AI. Regulation is urgent and cannot be left solely to the market.
You can read about how AI is educated in Chapter 1. How can we make clear to AI which goals we want to pursue as humans? And how can we ensure intelligent systems will always function in service of society?
Innovation with AI
What does that world look like in concrete terms? Using numerous examples, TNO has created a prognosis for the future in Chapter 2. Regarding construction, for example, in which AI will be used to check the quality, safety, and energy efficiency of buildings before they are actually built. Or healthcare, where robots will partly take over caregivers’ tasks and AI will be able to autonomously develop medicines.
Innovating innovation with AI
How AI will change research itself is explained in Chapter 3. For example, what role will AI be permitted to play in knowledge sharing? And what will happen when we set machines to work on data sets too large for humans to oversee?
David Deutsch on the development and application of AI
Peter Werkhoven, chief scientific officer at TNO, joins physicist, Oxford professor, and pioneer in the field of quantum computing, David Deutsch, for a virtual discussion. Deutsch set out his vision in 1997 in the book, The Fabric of Reality. Together, they talk about the significance of quantum computing for the development and application of AI. Will AI ever be able to generate ‘explained knowledge’ or learn about ethics from humans?