Rob de Wijk on the rise of AI in geopolitical context
Anne Fleur van Veenstra, director of science at TNO’s SA&P unit, interviews Rob de Wijk, emeritus professor of international relations in Leiden and founder of The Hague Centre for Strategic Studies. Rob is also a much sought-after expert who appears on radio and television programmes. What does the rise of AI mean geopolitically and in armed conflicts?
A conversation with Rob de Wijk
The Scientific Council for Government Policy said last year that we really need to see AI as a system technology that will affect all domains and sectors. Would you like to say something about that?
‘Artificial intelligence is part of the new industrial revolution, the so-called data revolution. We must recognise that whoever wins this data revolution will be able to set the rules of the international game. This makes it fundamental to the future of the world. You can compare it to what happened with the British in the 19th century. At that time, the United Kingdom became the leading player in the industrial revolution. This allowed them to set the rules of the game. In the 20th century, that was the United States. The big question now is: “Who will it be next?” It could be China. But the United States will try to prevent that. As a result, we’re now seeing that because of that data revolution – a new industrial revolution – a trade war has erupted. First under Trump, and now, in a higher gear, under Biden.’
If you had to give your opinion, who would you say is winning the race?
‘I think it’s 1-1 at the moment. China’s size allows it to invest an unprecedented amount of money in the development of that data revolution. And the struggle for influence in the world is an area in which the big tech companies play a huge role. So it’s not only about the military system; it concerns all sectors. The military system is just one of them. It’s about autonomous driving. It’s about the future of logistics. It’s about new products. Data are simply a new raw material that allow us to develop new products. For the most part, we don’t yet know what these products will be.’
2022 is something of a turning point. There are an incredible number of Data Acts and AI regulations coming, from Europe in particular, aimed at regulating and influencing this development. What is your view on this?
‘They present a significant problem in connection with the race that is currently going on with China. China doesn’t have all these regulations. What is currently happening in China is quite remarkable. Technicians have complete freedom and are permitted to make mistakes. They have access to huge budgets and quantities of data, much larger than in Europe. This will ultimately have major economic benefits and put China in a better position.’
Still, we see that Europe has succeeded to some extent in using its consumer power, through privacy legislation, to make large manufacturers worldwide adopt European standards. This is sometimes referred to as Europe’s ‘soft power’. How do you see this AI Act? Could it trigger something similar?
‘No, I don’t think so. What matters is that you have access to data. It seems to me that it would be incredibly difficult, also for Europe, to counteract this completely. It’s true that countries that want access to Europe’s internal market have to abide by certain rules and standards. Those standards are currently gaining a global reach. Big tech companies in the United States must, for example, also adhere to the privacy standards that Europe imposes. But in my opinion, this is a different matter than having access to data streams. And China simply has fewer issues with that; if it can’t get access through the front door, it tries the back door.’
‘My focus is on international relations. How is the European Union acting? The European Union as a ‘rule-creating superpower’ is much more powerful than many people think. And also much more successful. And at the same time, it is competing against a country that takes a very different view. Because it’s an autocracy, a state capitalist country, in which economics is politics. In all honesty, we simply don’t know how this will work out. Until now, countries have been adopting European standards. But that doesn’t mean that it will stay that way. Especially if China starts developing its own internal market and uses the “Belt and Road” initiative to bind countries to it.’
What more could we as the Netherlands do if we consider this important, given our economic position?
‘This really needs to be regulated at a European level. The Netherlands is simply too small to be able to do anything in this area.’
Go or no-go: autonomous weapons systems.
AI is extremely advanced in the military domain. TNO is also doing a lot of research into this. An incredibly hot topic is that of autonomous weapons systems. For many politicians, it’s an absolute no-go.
‘We don’t decide; our opponent decides. In principle, I would be in favour of banning them. We could try to ban them as much as possible, but it never works in the military field. The whole loop from observation to action can be shortened tremendously using AI. Whoever shortens this loop will gain huge advantages on the battlefield. Of course, we can respond, “We shouldn’t be doing that”. But that too is a function of the changing balance of power. Like I said, the country that wins an industrial revolution sets the rules. China will also be setting these rules, meaning there will be no rules. We can say, “We will not participate, because we are principled”. That’s all very ethical. But warfare isn’t ethical. Look at Ukraine and what Russia is doing. I think the situation there is an eye-opener. I think that what’s happening in Ukraine, the way Russia is going about things there, is going to affect this kind of discussion. These are discussions that are fantastic in peacetime, when there’s no threat. But as soon as things go wrong and there’s a war, you meet your own limitations.’
‘Thinking that you can set the rules of the game because you have ethics and morality on your side is something you can forget about in the new world order. I hate it too, but that’s how it works.’
But how can we embed some of the ethical values we hold here in Europe in weapons systems?
‘We can’t. I’m incredibly pessimistic about this topic. When a country’s vital interests are at stake, or are believed to be, ethics will no longer be an issue. Ethics and morality simply go out the window. We shouldn’t be naive about the realities of international relations.’
And what does that mean for the role of researchers, when they say, for example, that they have found ways of doing this ethically and responsibly?
‘If researchers have really good solutions for this, we should embrace them. I’m not arguing against ethics and morality. I think they are extremely important. One of the discussions concerns the ‘OODA loop’, the cycle from an initial observation to a decision, which you try to complete as quickly as possible. We believe there should always be a person in the loop. But take out the human, and the time a decision-making process takes is reduced from seconds to nanoseconds. In such a case, having a human in the loop would present an unprecedented disadvantage. If science were able to come up with a way of integrating ethics and morality at the front end, before the human in the loop, that could be a major breakthrough.’
‘For a long time, we’ve thought the future of warfare would mainly be cyber-warfare. But it’s part of a total package, containing a multitude of parts. And many different types of battles. AI will also play an increasingly important role in conventional combat. And in cyber-warfare. Everywhere. But I really think we need to look at it differently. And look at what our opponents do. Because that’s what we will need to find an answer to. And try to build in ethics and morality at the front end. But when things truly get going, you need to have things in order.’
What message would you like to give TNO? What should we focus on?
‘What is really crucial is understanding what our opponent thinks, does, and is capable of. This means not only seeing things from our own perspective. The ethical discussion is an important one, but what the opponent does is more crucial. And so is understanding what the geopolitical relations look like now.’
Download vision paper ‘Towards Digital Life: A vision of AI in 2032’