It’s an enticing vision of the future – Artificial Intelligence (AI) will make our lives more pleasant and easier. Nevertheless, many people are sceptical. What do the algorithms behind AI actually do, and who keeps an eye on this? TNO is exploring ways of boosting people’s confidence in AI: “It all starts with transparency, explainability and convincing results.”
“What’s the best way to deal with illness?” asks Heleen Wortelboer, a TNO researcher who holds the post of Integrator Healthy Living. “Do you combat the symptoms or do you tackle the cause?” She answers her own question: “In acute cases, you choose the first option. If you want to tackle the cause, especially in chronic lifestyle-related disorders, then you have to find out what is going on right now and what happened in the past. This requires both an understanding of the problem and an extensive dataset, which you can analyse in conjunction with one another. That’s something for which AI is perfectly suited.”
Diabetes and burnout
Wortelboer mainly focuses on chronic stress, excess weight, and type 2 diabetes: “We already know a lot about these disorders, and there is plenty of physiological data available. When it comes to social factors, such as a person’s domestic circumstances or working environment, things are less straightforward. Nevertheless, we need to take these factors into account when researching the possible causes of these disorders, and any potential treatments. Take the case of someone who is returning to work after a burnout. If nothing has changed in the workplace, you can be quite sure that another burnout is on the cards.”
“We can take the data that TNO obtains through various cooperative ventures and use it as input for AI systems that are designed to deliver personalized health advice”
Huge amounts of relevant data are required to provide targeted health advice. Wortelboer explains that “TNO collects this data in ‘field labs’, which are cooperative ventures with companies, treatment centres or patient associations. Any data held there can be shared securely and anonymously. We can use this data as input for AI systems that deliver personalized health advice. For the time being, that advice still needs to be assessed in consultation with healthcare professionals.”
Individuals retain ownership of their own data
Wortelboer points out that gathering data and the acceptance of AI are both a matter of trust. She says that “This is why people need to determine how their own data is handled. National initiatives to address issues such as sharing personal health data while maintaining privacy can be very helpful in this regard. In addition, we must continue to ask ourselves whether we are collecting more data than is strictly necessary, and whether the advice given actually yields useful results.”
Secure Multi-Party Computation is a great step in the right direction. This innovative technique enables multiple parties to jointly compute results over their combined data, without any of them revealing their individual inputs to the others.
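The principle can be illustrated with additive secret sharing, one of the basic building blocks of Secure Multi-Party Computation. The sketch below is purely illustrative (the clinics and their numbers are invented, and this is not TNO’s implementation): each party splits its private value into random-looking shares, and only the combined total is ever reconstructed.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split a secret value into n additive shares; each share on its
    own is statistically indistinguishable from random noise."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(rows_of_shares):
    """Party j sums the j-th share of every input; combining those
    partial sums reveals only the total, never any single input."""
    partials = [sum(col) % PRIME for col in zip(*rows_of_shares)]
    return sum(partials) % PRIME

# Three (hypothetical) clinics each hold a private patient count.
private_counts = [120, 85, 42]
rows = [share(v, n_parties=3) for v in private_counts]
print(secure_sum(rows))  # 247: the total, with no individual count revealed
```

Because every share is drawn uniformly at random, no party learns anything about another party’s input from the shares it holds, yet the aggregate statistic is exact.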
“To build trust, there must be transparency concerning the way in which AI systems reach conclusions or generate advice”, says Stephan Raaijmakers, who specializes in the technical explainability of AI. He cites various ways in which explainability can be used to cultivate trust: “If an AI system corroborates an analysis by referring to the sample data used to train it, then you will have a better understanding of how it reached its conclusions. You can also show which individual aspects of the data were given special emphasis by the AI system as it formulated its conclusions or recommendations.”
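The second technique Raaijmakers describes, showing which aspects of the input weighed most heavily, can be sketched with a simple occlusion-style attribution. Everything below is a hypothetical illustration: the feature names, weights and “model” are invented, not a real clinical model. Each feature is replaced by a neutral baseline in turn, and the resulting change in the prediction is reported as that feature’s contribution.

```python
def risk_model(features):
    """Stand-in for a trained model that maps patient features to a
    risk score (illustrative weights only, not a real clinical model)."""
    weights = {"bmi": 0.5, "stress": 0.3, "sleep_hours": -0.2}
    return sum(weights[k] * v for k, v in features.items())

def attribute(model, features, baseline=0.0):
    """Occlusion-style attribution: replace each feature with a neutral
    baseline and record how much the model's prediction changes."""
    full_score = model(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        contributions[name] = full_score - model(perturbed)
    return contributions

patient = {"bmi": 28.0, "stress": 7.0, "sleep_hours": 6.0}
scores = attribute(risk_model, patient)
# Each value shows how strongly that feature pushed the score up or down.
print({k: round(v, 2) for k, v in scores.items()})
```

An explanation of this kind lets a healthcare professional check whether the system is emphasizing clinically plausible factors before acting on its advice.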
“AIs are rather like clever co-workers who cannot fully explain how they do what they do. You eventually come to trust them, provided they work accurately, consistently and ethically”
Explainability is a key factor in shaping trust and acceptance, says Raaijmakers. Other factors such as accuracy, repeatability and ethics are also involved. “You could compare certain types of AI to clever co-workers who cannot fully explain to you how they do what they do. You eventually come to trust them, provided they work accurately, consistently and ethically.” In many situations, there is still a need for explanations. “In healthcare, for example, or when interpreting the analyses generated by algorithms in criminal cases. Moreover, it must be clear where the power of decision ultimately resides – with the person or with the system.”
AI confers clarity
Explainable AI is high on the international research agenda, says Raaijmakers. “It is first and foremost a question of technology – how do we open the ‘black box’ of an AI algorithm? Also, we still have a lot to learn about the intended recipients of these explanations – people. What factors make explanations understandable? Do you need personalized explanations before you can accept an AI’s conclusions?”
TNO will conduct experiments to determine what constitutes a good and effective explanation in various areas, including the field of medicine, says Wortelboer. “The focus here is personalized lifestyle advice for people with an unhealthy lifestyle and type 2 diabetes.”
Always think for yourself!
Eventually, there will be close cooperation between AIs and people. Raaijmakers explains that “AI will help us by providing targeted advice, while we, in turn, will improve the system by providing useful data and feedback. In the long term, AIs will become reliable co-workers for people. Indeed, with a little imagination, this alliance could ultimately become a symbiotic relationship.”
“Always think for yourself, and contradict the system where necessary. That way, both sides can learn useful lessons”
This also means that users do not have to blindly accept advice from an AI system. Raaijmakers’ advice is to “Always think for yourself, and contradict the system where necessary. That way, both sides can learn useful lessons.”
Legislation on the move
Despite the advantages of this learning process, accidents can still happen. Last March, for instance, one of Uber’s self-driving vehicles in the United States was involved in an accident. “In the Netherlands, liability would, in theory, rest with the driver”, says Somayeh Djafari, one of TNO’s lawyers. “However, if a system malfunction was involved, then liability could also rest with the manufacturer. Ultimately, it is up to judges or legislators to provide clarity in such cases.”
Incidentally, it was recently confirmed that in the fatal accident involving Uber’s self-driving vehicle, the system itself was not at fault. The supervisor travelling in the car, whose job it was to intervene if the system failed, was watching a video at the time and was not paying attention. In addition, the engineers had disabled the emergency stop function. As a result, even though the system had been ‘watching’ the cyclist concerned for a full eight seconds, it was unable to intervene.
What cases like this will mean for AI is still unclear, but legislation is evolving. Djafari says that “One example is the ‘Draft Bill Governing the Experimental Use of Self-Driving Vehicles’, which makes it possible to use driverless cars for experimental purposes. That bill has already been passed by the Dutch Senate, so all that now remains is for it to be signed and to enter into force.”
Legislators are proceeding very cautiously. Djafari points out that “Our ultimate goal is to contribute to the social debate about the opportunities, risks, limitations and conditions associated with the use of AI. Experimental legislation can be a useful tool for finding a balance between the interests of innovation and other public interests such as legal protection, equality, privacy and safety.”
Would you like to know more?
Would you like to discuss Artificial Intelligence and our applications in greater detail? If so, please contact Stephan Raaijmakers.