The wise use of scientific knowledge can improve our lives enormously; the problem is that in practice we make far too little use of it. TNO is researching how we can best employ this valuable knowledge, for example to solve challenging problems such as climate change, with Artificial Intelligence (AI) playing a central role.
Want to know more?
Would you like to talk with us about Artificial Intelligence and its applications? Get in touch with Leon Kester.
We already have answers to many societal problems, says cognitive psychologist Josephine Sassen: “Take climate change. If we really scaled down our meat consumption, we could greatly reduce the detrimental effects of livestock farming on our environment. This is something we could do tomorrow. But we don’t.”
So what is holding us back? “Our biological intelligence is limited and biased,” explains psychophysiologist Hans Korteling. “We have great difficulty assessing where our individual and collective interests lie, and our brains are not equipped to grasp the complexity of some technological developments, including those in AI. We also think too much in terms of short-term solutions. Just like our Stone Age ancestors did, really. When there was plenty of food, they ate as much as they could. It was a sensible strategy at the time, but now it’s just unhealthy.”
Desirable and undesirable consequences
Climate change is an example of the consequences of the rapid development of science and technology, says Sassen: “There have been both desirable and undesirable consequences. Plastic is useful, but not as plastic soup in the ocean. Knowledge of genetics can heal people, but it can also help create designer babies.”
“Knowledge of genetics can heal people, but it can also help create designer babies”
Scientific knowledge therefore improves our lives but also leads to new problems. Fortunately, says Korteling, science can also help to solve those problems: “But we find it hard to acknowledge and accept those solutions.”
Ethical and cognitive intelligence
This has serious consequences, says Sassen: “Scientists have no overview of the full societal significance of their findings. Citizens demand better policy, for instance in governing the development of AI, but politicians have no overview of the potential consequences of these policies. Ethicists tell us that in order to make policy, you first have to define where you want to go. And psychologists point out that, as primitive thinkers, we tend to stubbornly maintain certain misconceptions when deciding what is best for us. So we need wisdom; not just among scientists or politicians, but in the whole of society.” Korteling adds: “So what is ‘wisdom’? I think it would be wise to know what our goals should be, and how to achieve those goals in the most effective way. AI can help on both fronts.”
AI is both a problem and a solution
Seen in this light, AI is both a problem and part of the solution. To start with, AI systems can analyse huge amounts of complex data and learn from them. AI can also recommend courses of action, and use simulations to make the consequences of those actions clear.
“We will have to prevent our own irrational thought patterns, for instance about supposed differences between certain people, from being built into the algorithms”
We will have to ensure that AI does not share our biases, warns AI expert and ethicist Leon Kester: “We will have to prevent our own irrational thought patterns, for instance about supposed differences between certain people, from being built into the algorithms. And we must also make sure that AI does not gloss over actual differences, for instance between men and women, but makes us aware of them and forces us to respond to them ethically.” Korteling: “This doesn’t mean that AI sets our ultimate goals; we humans will always have to do that.”
Ethical goal functions
The recommendations given by AI systems need to be wise: that is to say, they should help us to move effectively towards ethically responsible goals. This is no small task, but it is nonetheless feasible, says Kester: “The challenge is to make ethics objectifiable and measurable, for instance by using AI to identify what unifies us and to weigh up the effects of our possible choices. Then, when this knowledge is fed back into the AI system, AI can help us to make sensible, broadly-supported choices. We call these the ethical goal functions of AI.”
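To make the idea of an “ethical goal function” more concrete, here is a minimal sketch of what weighing up the effects of possible choices could look like in code. All goal names, weights, and candidate policies below are invented for illustration; TNO’s actual methodology is not specified in the article.

```python
# Hypothetical sketch: an ethical goal function scored as a weighted sum of
# measurable outcomes. Goals, weights, and policies are invented examples.

def goal_function(outcome, weights):
    """Score a predicted outcome against weighted ethical goals."""
    return sum(weights[goal] * value for goal, value in outcome.items())

# Predicted outcomes of two candidate policies (scores normalised to 0..1).
choices = {
    "policy_a": {"co2_reduction": 0.8, "fairness": 0.4, "cost_efficiency": 0.6},
    "policy_b": {"co2_reduction": 0.5, "fairness": 0.9, "cost_efficiency": 0.7},
}

# The weights express what society agrees to value; feeding that agreement
# back into the system is the step the article describes.
weights = {"co2_reduction": 0.5, "fairness": 0.3, "cost_efficiency": 0.2}

best = max(choices, key=lambda c: goal_function(choices[c], weights))
print(best)  # the policy that best serves the weighted goals
```

The point of the sketch is only that once goals are made measurable, choices become comparable; the hard part, as Kester notes, is agreeing on the goals and weights in the first place.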
Bringing people together
TNO is working hard on the development of these ethical goal functions. Sassen: “Our challenge is to develop a scientifically-based methodology to support our quest for a society worth pursuing, and to make better-informed choices on that basis.”
To this end, two important conditions need to be met. Firstly there has to be a model of reality that offers an AI system adequate guidance in drawing up its recommendations. Secondly there has to be a reasonable degree of agreement between those confronted with the consequences of these recommendations. Kester: “Taking account of biases and bridging cultural differences are a particular challenge.”
Kester foresees a circular development process: “Knowledge about human biases and limitations helps us to better formulate ethical goal functions. Moreover, AI is increasingly able to model the world, including the effects of innovations such as AI itself as well as human behaviour. This means that AI can develop policy recommendations that better support ethical goal functions. We can experience the results of this policy through simulations that allow us to understand our choices, and this means we can then formulate improved ethical goal functions. Altogether the result is an upward spiral, with society becoming ever wiser.”
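The circular process Kester describes can be sketched as a simple loop: the system recommends a policy from the current goal function, a simulation lets people experience its effects, and that experience is used to refine the goal function before the next turn of the spiral. Every function below is a stand-in, not a real model.

```python
# Hypothetical sketch of the circular process: recommend -> simulate ->
# refine the goal function -> repeat. All functions are stand-ins.

def recommend(goal_weights):
    # Stand-in: AI proposes the goal currently weighted most heavily.
    return max(goal_weights, key=goal_weights.get)

def simulate(policy):
    # Stand-in: the simulation reveals a side effect of pursuing the policy.
    return {"policy": policy, "side_effect": 0.1}

def refine(goal_weights, experience):
    # Stand-in: people adjust the weights after experiencing the simulation.
    updated = dict(goal_weights)
    updated[experience["policy"]] -= experience["side_effect"]
    return updated

weights = {"co2_reduction": 0.6, "fairness": 0.4}
for _ in range(3):  # each pass is one turn of the "upward spiral"
    policy = recommend(weights)
    experience = simulate(policy)
    weights = refine(weights, experience)
print(weights)
```

The loop structure, not the arithmetic, is the point: each cycle produces goal functions that are informed by the simulated consequences of the previous recommendation.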
This sums up the theory underlying this highly innovative application of AI. TNO is now working on the next step: putting it into practice. Sassen considers this the most ambitious research project she has ever worked on: “It demands a broad approach from a project team that brings together system engineers, AI experts, psychologists, philosophers, mathematicians, neuroscientists, sociologists, political scientists and anthropologists. We have to work beyond the borders of our own professional fields – not just within our own team, but also in collaboration with universities and other knowledge institutes.”
“We are starting at the district level, researching how we can help to get energy transition off the ground”
Despite its far-reaching academic ambitions, the project is taking a practical approach. Kester: “We are currently looking at district-level energy transition, and researching how ethical goal functions can be formulated for this purpose. The next step is to develop a simulation environment in which AI can calculate future scenarios that meet these goal functions as closely as possible. AI takes the entire complexity of such an energy transition into account: the interests of everyone involved, the short-term and long-term effects, and the errors of thought that we as humans tend to make. When people experience simulations of the resulting future scenarios, they gain a deeper understanding of the effects of their choices; that experience allows them to change their behaviour and even fine-tune the ethical goal functions.” Korteling: “It’s a tough challenge, but we’re going to take it on. Doing nothing is not an option.”