How do we harness the benefits of Artificial Intelligence (AI) while respecting our laws and ethical principles? TNO is working on different ways to achieve this: "We are a global leader in this field.”
There are stories circulating about AI that have little to do with reality, says TNO's Leon Kester, who deals with the relationship between AI and ethics: "Many discussions about AI get bogged down in linguistic confusion or emotions. You will also come across stories about AI systems that operate completely without human intervention and the question of whether AI should be assigned a status comparable to that of humans.”
More intelligent than people
Erroneous or exaggerated, says Kester: "I see AI as a mirror that confronts us with our limitations as human beings. This forces us to think about what we want to achieve together with AI and about the corresponding limits we want to set.”
“Moreover, there are applications that we in Europe find downright undesirable, such as the control of citizens by the government.”
There are, of course, threats: "The intelligence of AI systems does not yet exceed the overall intelligence of people, but most experts believe that this will happen in the not too distant future. In order to keep this development manageable, more action is urgently needed, particularly in the area of ethics. There are applications that we in Europe find downright undesirable, such as the control of citizens by the government.”
Concerns about existing applications
We have to make sure that AI does what we people want it to do, preferably in the interests of society as a whole: "You cannot leave that to specialists who only oversee their particular field or to stakeholders, such as companies that earn money from AI. It requires a structural approach and that is a matter for all of us.”
TNO's Albert Huizing works in the same field as Kester, on short-term applications: "AI is nowhere near as versatile as people. It is a technology that will soon become very good in certain areas, such as speech recognition and self-driving cars. At the moment, therefore, we should be particularly concerned about existing AI applications, for example in social media. After all, they already exert a strong influence on our lives today. As users, we often do not know enough about the ethical standards that are applied in the process.”
There are four ways to apply AI ethically, say Kester and Huizing. The first is that a person always supervises the AI. Unfortunately, this is often not workable, because people cannot handle the large flow of information and think too slowly. Self-regulation by AI manufacturers, the second way, does not work either. This is partly because commercial interests often obscure their view of the ethics of an application.
“A security robot can be told not to injure people, but that is an obstacle if it has to prevent an attack.”
The third way is a design that incorporates ethics into the AI system. That works better, says Kester: "But you don't take into account exceptions that may occur in practice. A security robot can be told not to injure people. But that's an obstacle if the robot has to prevent an attack.”
Model of the world
This leaves the fourth possibility: the ethical goal function. "An AI system is given the values that society considers important to take into account in a recommendation or decision," says Kester. "This way, you can use the computer's processing power to calculate the best result and thereby guarantee ethically responsible results. A drone used for military purposes can then make the best possible choice between two, in principle, undesirable alternatives.”
“This way, you can use the computer's processing power to calculate the best result and thereby guarantee ethically responsible results.”
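The principle can be sketched in a few lines of Python. This is a simplified illustration, not TNO's actual model: the values, weights and drone scenario below are assumptions chosen purely to show how weighted societal values let a system pick the least-bad of two undesirable options.

```python
# Hypothetical ethical goal function: each candidate action is scored against
# weighted societal values, and the system picks the option with the best
# (least-bad) overall score. All values, weights and options are illustrative.

VALUE_WEIGHTS = {
    "civilian_safety": 0.6,   # protecting bystanders weighs heaviest
    "mission_success": 0.3,
    "property_damage": 0.1,
}

def ethical_score(option):
    """Weighted sum of how well an option satisfies each value (0..1)."""
    return sum(VALUE_WEIGHTS[v] * option["scores"][v] for v in VALUE_WEIGHTS)

def choose(options):
    """Return the option with the highest ethical score."""
    return max(options, key=ethical_score)

# Two in-principle undesirable alternatives for the drone example:
options = [
    {"name": "abort mission", "scores": {"civilian_safety": 0.9,
                                         "mission_success": 0.0,
                                         "property_damage": 1.0}},
    {"name": "engage target", "scores": {"civilian_safety": 0.4,
                                         "mission_success": 1.0,
                                         "property_damage": 0.3}},
]

print(choose(options)["name"])  # → abort mission (score 0.64 vs 0.57)
```

The hard part, of course, is not this arithmetic but agreeing on the values and weights themselves, which is exactly the challenge Kester and Huizing describe.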
Huizing adds a couple of further comments: "Ethical goal functions are still in the design phase, and the necessary algorithms are incomplete. In addition, we must agree on the ethical values that we share. Quite a challenge. And AI must also take the consequences for a complete system into consideration. Travel advice, for example: if everyone follows the same advice, this might lead to long rail delays.”
In order to see all the consequences, an AI system must therefore have a model of such a system, including itself. Huizing: "With the knowledge that is incorporated in such a model, AI can take all the relevant factors into account. TNO is working hard on the realisation of ethical goal functions and the associated models.”
Our privacy is at stake
And what about privacy protection? This is something that TNO's Cor Veenman specialises in: "Privacy is primarily about making data anonymous. But that's not enough. For example, if you have data about a person's movements, you will soon find a unique pattern that corresponds to that person. Combine that with other data and you can find out much more. TNO has developed an approach that allows you to pre-process data in such a way that individuals remain unrecognisable while targeted searches for group characteristics are still possible.”
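A minimal sketch of that idea, assuming nothing about TNO's actual method: coarsening quasi-identifiers such as location and time so that individual movement patterns are no longer unique, while group-level statistics remain queryable. The coordinates and bucket sizes below are invented for the example.

```python
# Illustrative pre-processing (not TNO's method): generalise location and time
# into broad buckets so no single record is uniquely recognisable, while
# group questions ("how many people were in this area at this time of day")
# can still be answered.

from collections import Counter

records = [
    {"person": "A", "cell": (52.081, 4.324), "hour": 8},
    {"person": "B", "cell": (52.083, 4.327), "hour": 8},
    {"person": "C", "cell": (52.082, 4.321), "hour": 9},
]

def generalise(rec, grid=0.01, time_block=4):
    """Drop the identity and coarsen coordinates/time into buckets."""
    lat, lon = rec["cell"]
    return (round(lat // grid * grid, 2),
            round(lon // grid * grid, 2),
            rec["hour"] // time_block)

anonymised = [generalise(r) for r in records]

# The raw movement patterns are unique per person...
assert len({(r["cell"], r["hour"]) for r in records}) == 3
# ...but after generalisation all three fall into the same bucket, so a
# group count is possible without singling anyone out.
print(Counter(anonymised))
```

Real re-identification attacks combine far more data than this, which is precisely Veenman's point: simple anonymisation alone is not enough.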
“Bias is the situation where AI draws unwanted conclusions from past data.”
Is AI objective?
Unwanted biases can also have negative consequences for individuals. Veenman: "Bias is the situation in which AI draws unwanted conclusions from past data. TNO develops fair AI, which, for example, no longer allows ethnicity to play a role in predicting suitability for a job.”
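One common way to make such bias visible, sketched here as an assumption rather than TNO's fair-AI technique, is to compare a model's acceptance rate across groups (the "demographic parity" gap). The predictions and group labels below are hypothetical.

```python
# Illustrative bias check (not TNO's specific method): if past hiring data is
# skewed, a naive model reproduces the skew. Comparing positive-prediction
# rates per group exposes the gap.

def selection_rate(predictions, group_labels, group):
    """Fraction of candidates in `group` that the model accepts."""
    picks = [p for p, g in zip(predictions, group_labels) if g == group]
    return sum(picks) / len(picks)

# Hypothetical predictions (1 = judged suitable) for groups X and Y.
preds  = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]

gap = selection_rate(preds, groups, "X") - selection_rate(preds, groups, "Y")
print(f"demographic parity gap: {gap:.2f}")  # → 0.75, a clear warning sign
```

A fair-AI approach of the kind Veenman describes would constrain or correct the model until such gaps, where they reflect attributes like ethnicity, disappear.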
Veenman also mentions an example of fair and privacy-friendly research: "Research into tax fraud, for example. With our approach you use legally permitted group characteristics only, and individuals cannot be traced. In this way, TNO is working on ways to utilise the benefits of AI without undesirable side effects. With this approach, we are also leading the way worldwide.”