The discussion about artificial intelligence (AI) ranges from the desire for smart machines that could be useful to us to the fear that we could lose our privacy and control over our lives. But what exactly is AI, and why do we need it? And what needs to happen before it can truly be applied on a large scale in a safe and responsible manner?
What is AI?
Human intelligence is the capacity to learn, understand, reason, plan, and solve problems. Computer scientists have been working on machines that also have these abilities for some time. We refer to this as artificial intelligence, or AI.
Developments in machine learning, and deep learning in particular, have been revolutionary. Running on fast computers, deep learning simulates a network that resembles our brain. From large quantities of data, such a network learns to recognize patterns and connections, sometimes even better than people can. This technology allows your smartphone to translate between Chinese and Dutch, moles to be assessed for signs of malignancy more accurately than a specialist can manage, and cameras at Schiphol Airport to recognize your face. By learning from enormous numbers of games, machines from IBM and Google have now defeated the world champions at chess, Go and Jeopardy. This year, a machine taking part in a debate with top-level human debaters did not do badly at all.
“Deep-learning machines cannot reason and therefore cannot explain why they have made a particular choice”
Deep-learning machines are able to recognize patterns to a high level of accuracy and then connect them to courses of action without knowing the underlying causal links. They cannot reason and therefore cannot explain why they have made a particular choice.
Ability to reason
Nonetheless, autonomous intelligent machines, like driverless cars, care robots, and virtual doctors or lawyers, have to make carefully considered choices about the world around them, including about legal and ethical parameters. They also have to be able to reason about themselves and explain their behaviour. A symbolic reasoning ability is needed for that – an important second branch of AI.
We refer to the combination of deep learning and reasoning capacity as hybrid artificial intelligence. This combination could ultimately lead to a level of intelligence that is as high as, if not greater than, that of people.
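The hybrid idea can be made concrete with a toy example. In the sketch below, a learned scorer (a stand-in for a deep network) proposes a decision, and a separate symbolic rule layer produces a human-readable explanation for it. All names, weights, thresholds and rules here are illustrative assumptions, not a real diagnostic model.

```python
def learned_scorer(features):
    # Stand-in for a deep network: a weighted sum over input features.
    # The weights are illustrative, as if learned from data.
    weights = {"size_mm": 0.4, "asymmetry": 0.5, "border_irregularity": 0.6}
    return sum(weights[k] * v for k, v in features.items())

# Symbolic layer: explicit, human-readable rules the system can cite.
RULES = [
    ("size_mm", lambda v: v > 6, "diameter above 6 mm"),
    ("asymmetry", lambda v: v > 0.5, "marked asymmetry"),
]

def hybrid_decision(features, threshold=3.0):
    # The learned part decides; the symbolic part explains the decision
    # in terms of rules a person can inspect and contest.
    flagged = learned_scorer(features) > threshold
    reasons = [msg for key, test, msg in RULES if test(features[key])]
    return flagged, reasons

flagged, reasons = hybrid_decision(
    {"size_mm": 8.0, "asymmetry": 0.7, "border_irregularity": 0.4})
```

The point of the separation is that the learned component can be retrained freely, while the explanatory rules remain stable and auditable.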
Desire for AI
AI is set to change almost every social and economic aspect of our lives. The World Economic Forum refers to it as the Fourth Industrial Revolution, in which physical, digital and biological systems largely coalesce. The United Nations' AI for Good Global Summit identified a multiplicity of safe and reliable AI applications that are improving the quality and sustainability of life on our planet. Examples include targeted disaster response using satellite images, smart cities centred around their residents and their needs, and basic universal healthcare.
“In healthcare, using care robots and AI diagnoses to make up for the shortage of staff and expertise is a matter of sheer necessity”
‘Smart’ is therefore becoming more and more commonplace – in healthcare, mobility, cities and industries. And this is more than just a matter of applying a technological innovation for its own sake. In healthcare, for example, using care robots and AI diagnoses to make up for the shortage of staff and expertise is a matter of sheer necessity, as is making treatments more personal and less costly. The use of AI is also needed in the energy sector to manage supply, demand, conversion and storage in energy networks in order to create a sustainable energy system. Similarly, organizing our logistics, mobility and maintenance of bridges and dikes as effectively and efficiently as possible can only be done with the help of intelligent systems. All of which means that AI provides us with a deeper understanding of complexity, greater personalisation of services, and a better alignment of supply and demand.
And then there is the ‘smart human’ who, aided by AI, has enhanced senses and can even see without eyes, whose exoskeleton gives extra power to the limbs, and whose e-coach improves reasoning and ethical skills.
Fear of AI
Although AI has now surpassed people in the accuracy of certain medical diagnoses, such as for various types of cancer, giving reasoned recommendations for treating patients remains a challenge, as does the acceptance of those recommendations. AI has almost no capacity to explain clearly to people how and why it makes its choices. The same applies to demonstrable compliance with laws and regulations.
This hinders acceptance. Uber, for example, suspended its experiments with driverless taxis after one of its vehicles was involved in a collision with a human-driven car that the control system was unable to explain. The ability to explain choices, and controllability within legal and ethical parameters, are important preconditions for trusting and accepting AI; together they are referred to as ‘meaningful control’.
AI may also infringe privacy. On the one hand, we desire more personalised healthcare, transport, security and entertainment, for which AI systems have to become familiar with a very large quantity of personal information (big data). On the other, we fear big data being used to manipulate our opinions. One example is the influencing of the American presidential elections by Cambridge Analytica.
“Without meaningful control of AI and without protecting privacy, we will not be able to harness the potential offered by AI”
Without meaningful control of AI and without protecting privacy, we will not be able to harness the potential offered by AI. The public and private sectors both have important roles to play in solving this. There are big differences here between the United States, the country at the forefront of these innovations, driven as it is by economic interests; China, where development is strongly government-directed; and Europe, which is seeking to make AI ‘responsible’. The European Group on Ethics in Science and New Technologies recommends that human dignity, human rights and ethical principles should be the guidelines governing the development of AI applications.
How can AI learn from big data while protecting privacy?
The EU has introduced privacy legislation for data in the form of the GDPR. The importance of such legislation cannot be overstated. That same GDPR, however, can also impede the development of AI, because big data is the raw material for AI. It is for this reason that we must invest in solutions that help us learn from big data while protecting privacy. Alongside regulation, such solutions lie in new business models and in technology that makes privacy protection possible. TNO is expending considerable effort in developing the technology needed for this.
Examples of such technologies are ‘privacy-preserving federated computing’ (comparing data that remains encrypted) and ‘responsible AI’ (systems that are able to weigh up ethical considerations regarding the use of data, and to explain them). New business models for sharing data and learning from it while protecting privacy have been developed in the Smart Dairy Farming public-private innovation programme, among others, and for personalised advice systems for lifestyle-related illnesses such as type 2 diabetes (Health Data Cooperative).
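One principle behind privacy-preserving computation can be shown in a few lines: parties mask their private values with pairwise random numbers that cancel out in the total, so the sum can be computed without any single value ever being revealed. This is only a sketch of the masking idea; the real systems named above use full cryptographic protocols, and the numbers here are illustrative.

```python
import random

def secure_sum(private_values, seed=0):
    rng = random.Random(seed)
    n = len(private_values)
    # Pairwise masks: party i adds masks[i][j] and subtracts masks[j][i].
    masks = [[rng.randint(0, 10**6) for _ in range(n)] for _ in range(n)]
    shares = []
    for i, v in enumerate(private_values):
        masked = (v + sum(masks[i][j] for j in range(n))
                    - sum(masks[j][i] for j in range(n)))
        shares.append(masked)  # this masked value is all a party reveals
    # Every mask appears once with a plus and once with a minus sign,
    # so the masks cancel and the total equals the true sum.
    return sum(shares)

total = secure_sum([120, 85, 230])  # e.g. daily yields from three farms
```

Each individual share looks like random noise, yet the aggregate is exact, which is what lets parties learn from shared data without exposing it.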
How do you control AI within legal and ethical frameworks (meaningful control)?
Like people, autonomous AI systems must operate within legal and ethical frameworks. Due to individual and cultural differences in these frameworks, this cannot be left to designers, suppliers or owners.
This is the duty of society’s representatives, that is, government. However, if government prescribes exactly what AI may and must do, its potential will be limited to what people are already capable of. This can be tackled by working with ethical goal functions, which separate the intelligence of an AI system from the ethical frame within which it acts. The system is then free to find more intelligent solutions to a problem than people would, but must achieve them within the legal and ethical framework defined by humans.
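The separation described above can be sketched as follows: the “intelligence” is a utility function the system optimizes, while the ethical and legal frame is a distinct set of hard constraints defined by humans. The constraint names, limits and candidate actions below are illustrative assumptions.

```python
# Human-defined frame: hard constraints the system may never violate,
# kept separate from whatever the system is optimizing.
ETHICAL_CONSTRAINTS = [
    lambda a: a["risk_to_bystanders"] <= 0.01,  # illustrative limit
    lambda a: a["lawful"],
]

def choose_action(candidates, utility):
    # First filter by the human-defined frame, then optimize freely within it.
    permitted = [a for a in candidates
                 if all(ok(a) for ok in ETHICAL_CONSTRAINTS)]
    return max(permitted, key=utility) if permitted else None

actions = [
    {"name": "fast_route", "time_saved": 10,
     "risk_to_bystanders": 0.05, "lawful": True},
    {"name": "safe_route", "time_saved": 4,
     "risk_to_bystanders": 0.001, "lawful": True},
]
best = choose_action(actions, utility=lambda a: a["time_saved"])
```

Because the constraints live outside the utility function, government can tighten or relax the frame without touching, or even understanding, the optimization inside it.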
“Intelligent cameras will be able to weigh up the necessary considerations between security risks and privacy, for example in the case of revealing the identity of a person who has fallen ill in a public place”
In this way, intelligent cameras will be able to weigh identifying security risks against safeguarding privacy, for example when deciding whether to reveal the identity, or the medical history, of a person who has fallen ill in a public place to the emergency services. Driverless cars will be able to make considered choices between avoiding a pedestrian and the risks to their passengers. The latter case would require a quantitative valuation of the outcomes of the ethical choices that AI makes. Is the life of a passenger worth more than that of a pedestrian? And how do human drivers decide in such situations?
The development of systems that can reason is a challenge for science and industry. Specifying targets and quantifying utility – what is the value of the various outcomes? – is a government task. Indeed, Europe and the Netherlands are well positioned, in terms of systems design, multidisciplinary approach and innovations, to arrive at meaningful control and thereby harness the full potential of AI.
Working in partnership
Together with public-private partners, TNO is developing privacy solutions and meaningful control in order to open up the potential of AI. We apply AI in the healthcare sector (personalised health), the energy sector (smart grids), mobility (cooperative driving), security (dark-web interventions), infrastructure (predictive maintenance), industry (smart industries) and the circular economy (life-cycle complexity). TNO stands out in the international arena on account of its combination of multidisciplinarity and systems thinking.
Would you like to know more?
If you would like to discuss artificial intelligence and its applications, please contact Peter Werkhoven.