‘Giant AI goes down the European road’

Theme:
Artificial intelligence
31 March 2023

In an Open Letter in 2017, the Future of Life Institute called for research to ‘make AI systems do what we want them to do’. The same institute’s latest, alarming Open Letter shows that events have overtaken that call: the Big Five have invested 223 billion US dollars¹ in developing AI, but these systems are not yet doing ‘what we want’. Urgent action is needed.

The inner workings of recently launched chat applications such as ChatGPT (based on GPT-4) and Bard, built on large language models (‘Giant AI’), are largely not understood; these systems exhibit unexpected ‘intelligent’ behaviour and now hold society in their grip.

Because of potentially disruptive ‘economic and political’ effects, the letter proposes a six-month pause in AI development, in order to establish proper regulation and intensify AI safety research on transparency, societal values, human oversight, and certification, among other issues.

TNO supports this urgent call for reflection. The signatories’ letter effectively proposes the ‘European way’: developing Responsible AI, as proposed by the EU High-Level Expert Group on AI.

It works, but we don’t know why

It is inconceivable that drugs could enter the market without being tested and certified for their effects and side effects.

But things are often a little different with digital technology, such as AI. Anne Fleur van Veenstra (TNO Science Director for ICT, Strategy & Policy) says: ‘Digital technologies are usually introduced first and it’s often only afterwards that the impact of their application on society becomes clear. As a result, there’s little adjustment in advance and little democratic control over these developments.’

Giant AI became publicly available quite suddenly, with unprecedented scaling up in use and little understanding of how it operates or of its impact on society. Stephan Raaijmakers (TNO senior scientist and professor of language modelling; author of the book ‘Deep Learning for Natural Language Processing’) says of how Giant AI operates: ‘While we understand the architecture of these models, we don’t understand the complex relationship between quantities of data, parameters, and computational power on the one hand, and the behaviour that emerges on the other.

That makes all this confusing, and it means these models are not immediately reliable for a great many applications (such as decision support). So far, classic explainability methods fall short here, and it’s highly questionable whether we will ever fully understand these models.’
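One way to make the scale side of that relationship concrete: empirically fitted ‘scaling laws’, such as those reported by Hoffmann et al. (2022), describe how a model’s training loss L falls smoothly with parameter count N and the number of training tokens D,

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},

where E is an irreducible loss and A, B, α, and β are constants fitted per model family. The formula is only a sketch of the general finding, but it illustrates the paradox Raaijmakers points to: the loss curve is remarkably predictable, while which capabilities ‘emerge’ at a given point on that curve is not.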

"AI is already disrupting our scientific and educational systems, as well as our democratic processes"

Leon Kester

TNO researcher

Tempting but risky

The actual or potential practical value of Giant AI is substantial. Giant AI provides interactive, accessible, and articulate access to an unprecedented amount of knowledge and information which, unlike a search engine’s results, is summarised and explained in a personalised manner.

It generates content and serves as an inspiration for creative processes. But those who use Giant AI notice that it still lacks self-knowledge, frequently reasons inconsistently, and ‘hallucinates’, because it does not know what it does not know and is programmed to give an answer to every question.

Its use is also vulnerable to deliberately misleading knowledge and information. Responsible use requires mastering ‘critical thinking’ and applying multiple methods and sources.
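As a minimal, hypothetical sketch of what ‘applying multiple methods and sources’ can look like in practice (ask_model and all other names below are illustrative stand-ins, not an existing API): sample the same question several times and only accept an answer that a clear majority of samples agrees on, flagging the rest for human verification.

# Illustrative sketch: self-consistency check before trusting a chat model's answer.
# `ask_model` is a hypothetical stand-in for any chat-API call.
from collections import Counter

def ask_model(question: str, seed: int) -> str:
    """Hypothetical wrapper around a chat API; returns one answer string."""
    raise NotImplementedError("plug in your own model client here")

def cross_checked_answer(question: str, n_samples: int = 5, quorum: float = 0.6):
    """Sample the model several times; accept an answer only if a clear
    majority of samples agrees, otherwise flag it for human verification."""
    answers = [ask_model(question, seed=i) for i in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count / n_samples >= quorum:
        return best
    return None  # inconsistent answers: consult independent sources instead

Agreement between samples is, of course, no guarantee of truth; it only filters out the most unstable answers, which is why independent methods and sources remain essential.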

Leon Kester (TNO researcher on AI safety and the European AI standardisation committee) says of the effects on society: ‘AI is overestimated as well as underestimated. It is overestimated in the sense that Artificial General Intelligence (the level of human intelligence in a moral and scientific sense) is thought to be just around the corner. It is underestimated how much AI compromises the integrity of knowledge and the functioning of science. AI is already disrupting our scientific and educational systems, as well as our democratic processes.’

Societal system transition

If Giant AI is to develop into a public knowledge infrastructure, it must be public and accountable, not the product of an uncontrolled ‘arms race’ between international technology companies. Such a knowledge infrastructure is critical to society and affects Dutch and European sovereignty.

The pause in Giant AI development proposed in the Open Letter appears to ignore the geopolitical nature of an arms race. It is precisely now that knowledge about this subject must remain at the forefront. And that knowledge is not only technological in nature, but also societal, economic, ethical, and legal.

It is a transdisciplinary challenge for our society! Jasper van der Waa (AI researcher at TNO and Delft University of Technology, winner of the YER 2022 prize) says on this subject: ‘Giant AI concerns a system transition in which technological, societal, and institutional innovation must go hand in hand.

Industry self-regulation is rarely effective. This requires a socially transparent dialogue between governments and companies, supported by the right expertise based on scientific understanding.’

AI safety research is top priority

The Open Letter calls for ‘robust’ funding accompanied by an intensification and acceleration of AI safety research. TNO strongly agrees that there is an existential need for such support; it feels a responsibility in this matter and is prioritising it.

On this subject, Tjark Tjin-A-Tsoi (CEO of TNO) says: ‘I take the urgent call in the Future of Life Institute’s Open Letter very seriously. AI safety is at the heart of our mission to innovate for a safe and digital society. We therefore not only call on our partners and government to invest quickly and intensively in this, but we are also prioritising it ourselves.’

For the roll-out of AI products and services, TNO previously developed a practical translation of proposed ethical frameworks into specific measures and agreements related to responsibilities.

Ways of creating such responsible AI include enabling democratic control, a proper system of oversight, enforcing transparency and accountability of technical systems, developing mechanisms that enable citizens to invoke their rights, and setting up facilities to test technology before it is applied outside the lab.

The TNO AI Oversight Lab is shaping the embedding of AI in government organisations, such as the involvement and responsibilities of management and implementing bodies in the development, implementation, and use of AI. It thereby anticipates the European AI Act and its recommendation to create nationwide ‘regulatory sandboxes’.

For high-risk applications of AI, TNO has developed a methodology (the socio-technological feedback loop) for the responsible monitoring, design, and operation of ‘autonomous’ AI systems in the security domain.
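The published feedback loop is a methodology, not code, but its core idea can be sketched in a few hypothetical lines (all names below are illustrative assumptions, not TNO software): every decision of an ‘autonomous’ system is checked against an explicitly agreed safety envelope, and anything outside it is escalated to a human.

# Illustrative sketch of human oversight over an 'autonomous' system;
# not TNO's socio-technological feedback loop, only the underlying idea.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class SafetyEnvelope:
    max_risk_score: float  # bound agreed on in advance by the oversight body

def operate(decisions: Iterable[dict],
            envelope: SafetyEnvelope,
            escalate: Callable[[dict], None]) -> None:
    """Execute decisions inside the envelope; escalate the rest to a human."""
    for decision in decisions:
        if decision["risk_score"] > envelope.max_risk_score:
            escalate(decision)  # human-in-the-loop: machine may not proceed
        else:
            print("executing:", decision["action"])  # within agreed bounds

The point of the loop is that the envelope itself is also revisited: monitoring results feed back into the societal and institutional agreements that set its bounds.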

A basis has therefore been created, but the practical operationalisation of ethical frameworks and regulatory concepts needs to be accelerated now that Giant AI has arrived. Given the disruptive nature of the coming generations of Giant AI, which are expected to arrive soon, it is essential for the business community and government to be involved.

In conclusion: The alarm sounded by the Open Letter should be taken very seriously. Giant AI has unprecedented – but also poorly understood – potential for society. Regulation is urgent and cannot be left to the market alone. The Dutch government is in a position to give direction here and make the necessary investments, supported by public knowledge institutions.

1: The Economist, 26 March 2023, ‘Big Tech and the pursuit of AI dominance’.
